Credit: Zeke Barbaro / Getty Images

On Jan. 16, a political attack ad paid for by Texas Attorney General Ken Paxton’s campaign aired, featuring Sen. John Cornyn (R-Texas) and Rep. Jasmine Crockett (D-Texas) dancing together as partners in the “Senate Swing” and the “Washington Waltz.” Yet this attack ad differs from its predecessors in one remarkable way: The images aren’t real.

The video clips of Crockett and Cornyn tapping boots in a Texas dance hall and waltzing in front of the White House were generated by artificial intelligence. AI is rapidly changing the way we interact with technology and has revolutionized toolkits across almost every industry – including political advertising. On Jan. 1, Texas enacted HB 149, the Texas Responsible Artificial Intelligence Governance Act (TRAIGA), to “protect public safety, individual rights, and privacy while encouraging the safe advancement of AI technology in Texas.”

TRAIGA prohibits developers and deployers of AI systems from engaging in behavioral manipulation, using biometric data for surveillance, or performing any function that violates existing constitutional or civil rights law, among other things. Additionally, the law explicitly states that AI cannot be used “in a manner that intentionally results in political viewpoint discrimination” (Sec. 551.056) and excludes deepfake videos (like the “Partners” ad) from existing statutory protections. However, “developers” and “deployers” are narrowly defined, meaning only platforms, vendors, and government agencies are subject to penalties. As attorney general, Paxton is barred from using state authority and funding to generate AI deepfakes – but his campaign office is an end user, not a deployer. Making matters more complex, authority is concentrated at the state level, and only one office has the power to enforce TRAIGA: the Texas attorney general.

Additionally, HB 149 relies on an intent-based liability standard, rather than the risk- or impact-based frameworks recently adopted in Colorado’s and Utah’s AI laws. Violations require proof of malicious intent, regardless of whether the AI system has caused demonstrable harm. By prioritizing intent, TRAIGA reduces regulatory exposure for companies developing AI in the Lone Star State. Tech giants like Tesla, OpenAI, and Samsung have flocked to Texas for its cheap land and permissive liability framework. As of this writing, Texas hosts almost 400 AI data centers – all of which require large amounts of water and power.

“When you look at the data center power usage globally, the expected use value will go all the way from 400 TWh [terawatt-hours] in 2024 to 1600 TWh by 2030,” said Vivek Mohindra, special adviser to the vice chair and COO at Dell Technologies, at the 2026 State of AI in Austin event on Jan. 27. Everything is bigger in Texas, including our infrastructure burden: A Jan. 28 white paper from the Houston Advanced Research Center reports that in 2025 Texas data centers drew more than 9,500 megawatts of electricity and consumed an estimated 25 billion gallons of water – a figure projected to reach 161 billion gallons by 2030 at the industry’s current rate of growth.

Yet, HB 149 explicitly preempts cities and counties from adopting their own AI regulations, even as they navigate long-term utility service agreements. Margaret Cook, Ph.D., VP of water and community resilience at HARC, told the Chronicle that tensions are already emerging as industrial growth and water scarcity clash. She cited Corpus Christi as an example.

“They’ve been under strict water restrictions, but they give industry a variance for a small fee,” Cook said. “The community recently rejected a desalination plant for a variety of reasons, one of which was that residents didn’t want to pay higher rates to socialize the cost of water. … This is a warning sign for other communities that want to welcome large water users with low water rates and their current supply, knowing they are trading their community’s long-term water supply for a short-term win.” 

With preemption in place, municipalities have been left to manage the infrastructural consequences of rapid growth without authority over the technology that drives it. That tension is particularly evident in Austin, where AI leaders like Tesla, Meta, Google, Amazon, and Microsoft have taken up residence.


“Austin already has deep roots in the semiconductor industry,” said Sean Bauld, executive director at Austin AI Alliance, at the Jan. 27 event. “We’ve been doing more around smart manufacturing. There’s a huge opportunity, a huge impact surrounding what the physical world can do, and AI is at the very beginning of that. … The winners of the AI race may be those who adapt to change the best.”

Barred from directly regulating AI, Austin is adapting by using internal standards as its primary tools. On Dec. 23, a week before TRAIGA went into effect, the city released a memorandum that tasked the city manager with creating internal AI governance standards – including guidelines for human oversight, workforce protections (“no displacement without consultation”), and analysis of environmental and utility impacts. Additionally, it requires formal consultation with AFSCME Local 1624 – the union that represents all city of Austin employees – before AI tools are deployed in ways that could alter job duties or working conditions.

“The AI systems used at the city so far are used to assist workers and public safety: real-time wildfire detection, AI precheck to speed up development plans, and past use to provide COVID-19 information to the public,” said Todd Kiluk, manager for union representatives and data analytics at AFSCME Local 1624. “The AI resolution allows AFSCME to bring up concerns and work with city management as AI is integrated into the city’s workforce.”

In response to HB 149’s preemption, Austin has shifted from formal policy to internal regulation to maintain limited authority. The city now faces the challenge of balancing economic growth in the Silicon Hills with the long-term demands placed on its water, power, workforce, and public trust.

In a survey conducted by the Austin AI Alliance, Austinites praised AI for 24/7 access, data analysis, and decision-making support, yet 63% of respondents cited deepfakes and misinformation as a primary concern. The findings highlight a central contradiction of TRAIGA: While the law regulates intent, public anxiety is driven by impact. In a state where political reality can be generated with a few keystrokes, the question is no longer whether AI causes harm, but whether a legal framework focused on intent can address downstream consequences.

A note to readers: Bold and uncensored, The Austin Chronicle has been Austin’s independent news source for over 40 years, expressing the community’s political and environmental concerns and supporting its active cultural scene. Now more than ever, we need your support to continue supplying Austin with independent, free press. If real news is important to you, please consider making a donation of $5, $10 or whatever you can afford, to help keep our journalism on stands.