On June 22, 2025, the Governor of Texas signed the Texas Responsible Artificial Intelligence Governance Act (TRAIGA) into law. The Act prohibits the development of AI systems for certain purposes, establishes consumer protections, and creates a regulatory sandbox program. The new law takes effect on January 1, 2026 and includes numerous notable provisions for developers of AI systems, which are discussed below.
Who does the Act apply to?
TRAIGA applies to any person who:
- Promotes, advertises, or conducts business in Texas;
- Produces a product or service used by residents of Texas; or
- Develops or deploys an Artificial Intelligence system in Texas.
It is important to note that the Act does not apply only to those developing and selling AI systems in Texas; it also imposes obligations on those doing business in the State who want to use AI systems developed or sold by others.
How does the Act define “Artificial Intelligence system”?
Since TRAIGA can apply to anyone using, developing, or deploying an AI system in Texas, it’s important to know how the Act defines these systems. The Act defines an Artificial Intelligence system as “any machine-based system that, for any explicit or implicit objective, infers from the inputs the system receives how to generate the outputs, including content, decisions, predictions, or recommendations, that can influence physical or virtual environments.”
Disclosure requirements
The Act requires governmental agencies and providers of healthcare services or treatment that make an AI system available for interaction with consumers to notify those consumers that they will be interacting with an AI. The notice must be provided before or at the time of the interaction, must be clear and conspicuous and written in plain language, and must not use any dark patterns.
Prohibitions
The Act prohibits the development or deployment of certain types of AI systems in the State of Texas. The first set of prohibitions applies to anyone who creates or develops an AI system. Developers may not create an AI system that encourages a person to commit physical self-harm, to harm another person, or to engage in criminal activity. Developers also may not develop, and individuals may not deploy, an AI system with the intent to unlawfully discriminate against a protected class in violation of state or federal law, or to violate a person's rights under the United States Constitution. Lastly, persons may not develop or distribute an AI system that produces visual material, deepfake videos or images, or text-based conversations that depict or simulate sexual conduct while impersonating a child under the age of 18.
The second set of prohibitions involves the development or deployment of AI systems by governmental entities. Governmental entities are prohibited from developing or deploying an AI system for the purpose of uniquely identifying an individual using biometric data, or for gathering such data from public sources, and may not develop or deploy an AI system for social scoring purposes.
Enforcement
TRAIGA will be enforced by the Texas Attorney General and, as such, does not provide a private right of action for individuals to sue businesses for violations. Upon receiving a complaint, the Attorney General may issue a civil investigative demand requesting information from the developer or deployer of the AI system. For curable violations, meaning violations that can be fixed, the Attorney General may impose fines of not less than $10,000 and not more than $12,000 per violation. If a violation continues and is not cured, the Attorney General may impose additional fines of $2,000 to $40,000 per day, per violation. For violations that cannot be cured, the Attorney General may impose fines of not less than $80,000 and not more than $200,000 per violation.
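To put these penalty ranges in perspective, the sketch below (in Python) works through a rough, back-of-the-envelope exposure calculation using only the dollar figures quoted above. It is illustrative only and simplifies the statute considerably: it assumes each violation and each day of non-cure is counted independently and does not account for cure periods or other statutory details.

```python
# Illustrative only: rough fine-exposure arithmetic using the ranges
# quoted above. This simplifies the statute (e.g., it assumes each
# violation and each day of non-cure is counted independently).

def curable_fine_range(violations: int) -> tuple[int, int]:
    """$10,000 to $12,000 per curable violation."""
    return violations * 10_000, violations * 12_000

def uncured_daily_fine_range(violations: int, days: int) -> tuple[int, int]:
    """Additional $2,000 to $40,000 per day, per uncured violation."""
    return violations * days * 2_000, violations * days * 40_000

def uncurable_fine_range(violations: int) -> tuple[int, int]:
    """$80,000 to $200,000 per uncurable violation."""
    return violations * 80_000, violations * 200_000

# Example: a single curable violation left uncured for 30 days.
lo, hi = curable_fine_range(1)
d_lo, d_hi = uncured_daily_fine_range(1, days=30)
print(f"Initial fine: ${lo:,} to ${hi:,}")          # $10,000 to $12,000
print(f"Continuing fines: ${d_lo:,} to ${d_hi:,}")  # $60,000 to $1,200,000
```

Even under these simplified assumptions, a single uncured violation can accumulate seven-figure exposure within a month, which is why cure procedures matter as much as the headline per-violation figures.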
With the potential for such high fines, any person or business planning to develop or deploy an AI system in Texas should carefully consider how that system may be used. It is also important to evaluate an AI system for its potential uses, not just its anticipated uses, and to put guardrails in place to ensure the system does not violate the prohibitions listed above.