
Bill Description: Senate Bill 1297 would create a state regulatory framework for what the bill defines as “conversational AI services.”
Rating: -2
Does it give government any new, additional, or expanded power to prohibit, restrict, or regulate activities in the free market? Conversely, does it eliminate or reduce government intervention in the market?
Senate Bill 1297 would create Chapter 21, Title 48, Idaho Code, titled the “Conversational AI Safety Act.” The bill would create definitions, impose regulations, and specify penalties and enforcement.
The bill would define a “conversational AI service” as “an artificial intelligence software application, web interface, or computer program that is accessible to the general public and that primarily simulates human conversation and interaction through textual, visual, or aural communications.”
The bill would provide that, “if reasonable persons would be misled to believe that they are interacting with a human, an operator shall clearly and conspicuously disclose that the conversational AI service is artificial intelligence.”
The bill would also require anyone “who develops and makes available a conversational AI service to the public” to “adopt a protocol for the conversational AI service to respond to user prompts regarding suicidal ideation that includes but is not limited to making reasonable efforts to provide a response to users that refers them to crisis service providers such as a suicide hotline, crisis text line, or other appropriate crisis services.”
It would also be prohibited to “cause or program a conversational AI service to make any representation or statement that explicitly indicates that the conversational AI service is designed to provide professional mental or behavioral health care.”
The bill would impose additional regulations on conversational AI services used by account holders under age 18, with further requirements for users under age 13. The bill refers to a service operator who “knows or has reasonable certainty that an account holder is a minor,” but it does not specify how the AI service is supposed to determine a user’s age.
Among the regulations applicable to minors, the AI service would be required to “clearly and conspicuously disclose to minor account holders that they are interacting with artificial intelligence,” either “as a persistent visible disclaimer” or both “at the beginning of each session” and “appearing at least every three (3) hours in a continuous conversational AI service interaction.”
It would also require that the AI service “offer tools for minor account holders and, where such account holders are under thirteen (13) years of age, their parents or guardians, to manage the account holder's privacy and account settings” and “offer related tools to the parents or guardians of minor account holders thirteen (13) years of age and older, as appropriate based on relevant risks.”
(-1)
Does it directly or indirectly create or increase penalties for victimless crimes or non-restorative penalties for non-violent crimes? Conversely, does it eliminate or decrease penalties for victimless crimes or non-restorative penalties for non-violent crimes?
A violation of the regulations contained in this bill would subject an operator to an injunction and to liability for “civil penalties of one thousand dollars ($1,000) per violation, not to exceed five hundred thousand dollars ($500,000) per operator, or actual damages, whichever is greater.”
These civil penalties would be sought by the attorney general, and the bill would not create a private right of action.
(-1)


