Health Care – As introduced, prohibits a person from developing or deploying an artificial intelligence system that advertises or represents to the public that such system is or is able to act as a qualified mental health professional.
Amends TCA Titles 33, 47, and 63.
This bill would make it unlawful for anyone who develops or deploys an artificial intelligence system to advertise or represent to the public that the system is, or is able to act as, a qualified mental health professional. Any violation is treated as an unfair or deceptive practice under the Tennessee Consumer Protection Act, punishable by a civil penalty of $5,000 per offense. The bill takes effect July 1, 2026.
At its core, the measure aims to protect patients from relying on AI tools that could mislead them into believing they’re receiving care from a licensed counselor, psychologist, or psychiatrist. Supporters argue that AI chatbots and virtual therapists are evolving fast, and patient safety demands clear guardrails around who—or what—can promise legitimate mental-health expertise. Critics might say it burdens AI innovators with extra compliance costs and chills investment in promising new technology.
From a conservative, original-intent standpoint, states have long exercised their police power to curb fraud and protect public welfare, and this measure fits squarely within that tradition. Tennessee isn't claiming any new federal authority or directing AI policy nationwide; it's simply saying that, within its borders, you may not falsely market an AI system as a licensed mental health professional. That said, the bill does add a specific regulatory layer to the fast-moving world of AI, and the consumer-protection gains should be weighed against the risk of overregulating a sector that thrives on experimentation.
In practical terms, enforcement falls under the existing consumer-protection framework, so there’s no new bureaucracy—just a tailored penalty for a clear-cut misrepresentation. The fiscal note finds no significant impact on state budgets or agencies. Ultimately, the bill balances the state’s duty to protect citizens from deceptive health claims with the desire not to unduly hamper emerging AI technologies.
Make your support known to members of the committee and ask them to vote YES on this bill.
Click on the button to Take Action now.