Press "Enter" to skip to content

Cato Institute CEO on Regulating AI’s Future, Lightly

The U.S. is taking a more interventionist approach to regulating artificial intelligence than it took with past technological revolutions, asserts Peter Goettler, president and CEO of the libertarian think tank Cato Institute, in an emailed Q&A with The AI Innovator.

He believes this approach is misguided, drawing on lessons from the early days of the internet to make the case for a lighter-touch framework that preserves innovation, civil liberties and U.S. leadership.

For Goettler, the stakes are not only abstract but deeply personal. He speaks of a bright AI-shaped future for his grandchildren — and his hope that AI could one day unlock treatments for his wife’s multiple sclerosis so she can walk again.

What follows is a lightly edited version of the Q&A.

The AI Innovator: If permission-first AI regulation risks hampering future innovation, what framework would you recommend that would strike a better balance between consumer interests and AI advancements? 

Peter Goettler: Past successes of U.S. technology regulation can be celebrated — and replicated in the case of artificial intelligence. With the growth of the internet, the United States adopted a light-touch regulatory approach. As a result, the American internet ecosystem thrived, and U.S. technology companies came to dominate the global industry.

This approach contrasts with the heavier-handed approach taken in Europe, rendering European technology providers also-rans whose most significant tech export is those annoying cookie consent pop-ups you see whenever you visit a website. (These are largely the result of two EU regulations, 2002’s ePrivacy Directive and 2018’s General Data Protection Regulation.)

Unlike the pro-innovation policies that led to a fast-growing internet a generation ago, there now seems to be an emerging and concerning predisposition even in the United States that treats artificial intelligence as an inherently dangerous technology that needs to be boxed in to prevent harm. This raises the specter of burdensome regulation that might inhibit the technology and undermine U.S. leadership. Instead, we should seek to repeat the success of the U.S. approach to internet regulation.

To that end, we should avoid creating unnecessary layers of duplicative laws and regulations for conduct that is already regulated, prohibited, or illegal. It’s likely that artificial intelligence will further empower scammers, fraud artists, and other malicious actors. But we already have laws against fraud, theft, discrimination, and the like. These rules should be applied when AI is used for such purposes to bring the perpetrators to justice. We don’t need regulations that fecklessly attempt to foresee and circumvent every way AI could be applied for a nefarious purpose and risk hobbling the industry in the process.

Another reason we shouldn’t attempt to foresee and forestall every way in which artificial intelligence might be used malignantly is that it can inhibit consumer adaptation, whereby norms and practices emerge organically, empowering consumers in the safe application of new technologies. Regulation aside, education and experience have an important role to play in protecting consumers from malicious actors now armed with artificial intelligence.

Again, our experience with the internet is instructive. Regulators didn’t try to predict and attempt to broadly prohibit use of the technology due to the potential downsides, but instead responded to these specific concerns more narrowly. And with internet privacy, we’ve seen a steady increase in awareness of these issues, with consumers becoming more adept at leveraging the options available to protect themselves — including the emergence of market solutions and products to assist them.

With so much focus on how AI might empower malign actors, there has not been enough attention paid to the risks posed by AI in the hands of the government itself. Like many other emergent technologies, AI can provide the government with powerful new capabilities in surveillance and law enforcement, for example, that can pose significant threats to civil liberties. We need to be at least as diligent in assessing and addressing these risks as we are with others posed by this technology and ensure that civil liberties are protected without villainizing technology more generally.

Finally, a federal regulatory framework is necessary to forestall and preempt a potential metastasizing of state and local laws that could similarly hamstring AI. With slow progress toward a viable federal AI framework, states have moved quickly to fill the void. In 2025, all 50 states considered some form of AI-focused legislation, with more than 1,000 AI-related bills introduced and upward of 100 signed into law. While we normally favor local decision-making, the need for federal standards in AI regulation is an example of what motivated the Constitutional Convention and the grant of power to Congress to regulate commerce among the states. A recent Trump executive order attempts to address this, but I’m skeptical it’s something that can legally be accomplished by executive fiat. Congress needs to act.

Some policymakers say strong regulation is necessary because AI can lead to more societal harms than past technologies. What do you think of this argument, and are there lessons from earlier tech revolutions that can be instructive? 

One of the reasons AI seems different is simply that we’ve forgotten the fears that were stoked by once revolutionary technologies that are now commonplace. The lessons of history are numerous and consistent, which is why I tend to be relatively sanguine about the risks of new technology and enthusiastic about the possible benefits. Has there been a transformational technology in history that didn’t generate great fear as it emerged? Warnings of threats to health, safety, or human socialization and interaction have accompanied all the technological revolutions of which I can think — the printing press, railroads, electricity, telephony, the automobile, television. And in every case, ruinous consequences didn’t materialize, and humankind enjoyed enormous benefits.

There are, of course, more than a few very smart people worried about the risks posed by AI. And even a techno-optimist must acknowledge there’s some possibility — even if small — that the technology could produce a catastrophic outcome. I’m not a scientist or technologist, so I’m not going to pretend that I can precisely assess the risks — although I doubt any scientist or technologist could. But I take great comfort in the compelling pattern of history.

With advancements in AI proceeding at a breakneck pace, do we really believe that government bureaucrats or the practitioners of political sausage-making have a clue how to regulate it effectively? Could regulators, however well intended, really protect us from the risks while preserving the upsides?

And maybe there’s another lesson of history: if we knew for sure there was a nontrivial probability that AI would produce a ruinous outcome for humankind, could we stop it even if we wanted to? It’s hard to believe we can flip some kind of switch and completely arrest all innovation. With important national security considerations creating pressure for nations to stay on the cutting edge of AI, an effort to slow down the technology’s advances is unlikely to succeed in anything except disadvantaging the nations that make a serious attempt to do so.

Admittedly, the risk of calamitous outcomes isn’t the only fear associated with AI. There’s potential for AI to empower scamming, fraud, discrimination, and other criminal or immoral behaviors. But it’s important to remember that we already have a criminal code that prohibits these activities and enables the punishment of offenders. The right answer here is to deal with transgressions when they occur by whatever means, rather than undermining the technology as a whole — and its potential benefits — through burdensome regulation that seeks to disarm it ahead of time.

You’ve warned that the U.S. could lose ground to countries with more flexible AI policies. In practical terms, what does ‘falling behind’ look like? 

Yes, I do believe it’s important for the United States to maintain its leadership in AI innovation. And at the outset I should note that I usually try to avoid seeing the U.S. as being “in competition” with other nations, because such framing can reflect the grip of zero-sum thinking. I’m thankful that the United States is such a center of technological creativity, as evidenced by the fact that our country is responsible for a disproportionate share of the world’s innovation.

But if other countries were contributing more in this regard, thus diluting our share, it would be good for them, the world, and ultimately for us as well. Similarly, the United States is a very wealthy country, but that does not mean I don’t want to see other countries become richer, even if that means they narrow their gap with us. This benefits everyone. There may be reasons to be wary of China as a potential geopolitical adversary, but for the past few decades I’ve been encouraged by the fact that so many Chinese citizens have moved out of poverty and into better lives.

In the context of artificial intelligence, however, the calculus is somewhat different. The pace of change is so rapid that the capabilities gap between those at the cutting edge and everyone else can be substantial, at least for a time. And this gap will not apply only to the things that make us more productive, more creative, and more prosperous; it will also apply to areas with direct national security implications, such as defense technology and cybersecurity.

For these reasons, I do think it matters that the United States remains the world leader in artificial intelligence. In this case, the language of competition is not merely zero-sum thinking but may reflect a genuine concern about security and resilience.

That competition, however, should not be understood narrowly as a bilateral race between the United States and China. It will also play out through the technology stacks adopted by our allies and partners. The standards, infrastructure, platforms, and governance frameworks that prevail globally will shape not only security outcomes but the character of the digital world itself.

If democratic countries are dependent on technologies designed within more restrictive or authoritarian systems, the consequences could extend well beyond defense or intelligence concerns. They could influence how information flows, how ideas are explored, and how individuals engage with knowledge and with one another.

In that sense, leadership in AI is also about whether the technologies that diffuse globally reflect the values of open societies — free expression, pluralism, and the expansion of human thought — rather than more centralized, controlled, or surveillance-oriented models. Ensuring that our allies have access to, and confidence in, technology ecosystems rooted in those values is itself a strategic objective.

This brings me to what falling behind might look like. One risk involves domestic obstacles that could impede our ability to fully exploit AI. For decades, overregulation and misguided regulation have undermined the robustness of electricity supply in the United States and our ability to expand capacity when needed. With the growth of AI use, that capacity is needed now. While there is growing recognition of the problem, I remain skeptical that regulatory frameworks built over many decades can adapt quickly enough to prevent serious supply constraints.

Another risk is the paradox of technology adoption lagging technological leadership. As Google’s Kent Walker has noted, despite the U.S. lead in AI innovation, we appear to be falling behind in the adoption and deployment of AI applications. It’s not entirely clear what that means over the long term. Perhaps there’s an argument that pushing the envelope on applications creates a virtuous circle that reinforces innovation itself. But what is clear is that slow adoption leaves enormous amounts of opportunity, productivity, and wealth unrealized.

Finally, it’s worth emphasizing that maintaining leadership abroad requires coherence at home. If the United States wants AI technologies associated with open societies to prevail globally, it will need to remain committed to those same principles domestically — particularly free expression and openness — when governing these technologies. Leadership in AI is not just about technical capacity; it’s also about whether the systems we build and export credibly embody the values we claim to defend.

President Trump is supporting a more hands-off approach to regulating AI. What do you think of his approach, and what other actions would you recommend to the administration? 

The Trump administration’s posture seems focused on encouraging AI innovation and development, as well as maintaining American leadership in the technology. This is a positive shift from the Biden administration, which was preoccupied with the potential downsides and risks of the technology and with using regulation to mitigate those risks, particularly at such an early stage, when one might be skeptical that the risks can even be properly identified, much less mitigated without compromising the development of the technology.

To the extent the Trump administration is committed to removing obstacles to AI innovation and avoiding an overly burdensome regulatory approach, I view this as a step in the right direction. But there are some elements of their strategy that could be cause for concern.

My preference is for government to stay out of the way and let the technology and its adoption develop, dealing with issues that merit regulation as they arise. History and experience suggest that skepticism about the government’s ability to mitigate downsides without causing negative unintended consequences is always warranted. But we should be equally skeptical about the government’s ability to intervene in ways that benefit the technology.

The administration’s AI Action Plan suggests a federal government role in AI education, worker training, and a variety of potential investments and support across a wide range of the AI ecosystem. And I suppose it’s not just skepticism about whether such interventions can be efficacious — rather, I don’t see this as a necessary or legitimate role for the federal government.

Another concern is that the administration’s AI Action Plan, as well as its AI-related executive orders, express a commitment to act against ideological bias in AI models or output. As we’ve seen, such bias may be a legitimate issue, but it should not be the basis for federal action. The government shouldn’t be intervening in what amounts to private sector speech and expression in one direction or another.

Even in situations where I might agree with a particular administration’s point of view, we can’t step onto the slippery slope of successive administrations claiming private-sector bias in one direction or another and each taking action to push it back the other way. To the extent private-sector bias exists and is perceived to be a problem, the way to deal with it is to allow offsetting private action rather than pursue government intervention or coercion.

How has AI affected your life, personally? 

I get the greatest satisfaction out of AI chatbots when I’m using them to spark creativity. Despite rapid advancement and improvement, I still find them to be unreliable when sourcing factual data and information. But as a sounding board to assist in idea generation, they can be invaluable. For example, when writing, we now always have a partner off whom to bounce ideas — not one that produces the text — meaning writer’s block may be a thing of the past.

And whether at work or home, chatbots are fantastic portals through which to explore new topics about which one knows virtually nothing. You can readily surface the best source material on all sides of an issue and learn an incredible amount very quickly. Asking the AI to teach you about a topic can still be fraught, but getting to the best articles or books quickly is a huge advantage.

In our work at Cato, our application of AI runs the gamut from process automation and research support to multimedia content creation, job candidate screening, marketing insight, and much more. And since the time and expense of software development have collapsed so dramatically, we are beginning to generate savings by quickly producing software that in the past would have required us to purchase or lease an off-the-shelf product.

Since the functionality is focused and tailored to our needs, the software is simpler, easier to use, and less clunky than a commercial product that must address every need of a wide and diverse base of users. In fact, with coding now hyper-efficient, there’s a growing incidence of disposable, one-and-done software: low cost can justify creating a one-use program for a single task that’s unlikely to be repeated.

Of course, when I’m asked how something affects me personally, my family naturally comes to mind — and regarding the future I always think of my grandchildren. Here I’ll admit to a bit of trepidation, but not because I believe it likely that artificial intelligence poses an existential threat that could destroy their world. To the contrary, I’m betting my grandchildren are going to inhabit a wondrous world that will allow them to live incredible lives — thanks to AI and other innovations and technological progress.

But we are at an inflection point that raises a question: What kind of education does today’s five-year-old need to flourish and thrive in the amazing world that’s coming? I’m confident we’ll figure that out over time, but the children of today must be educated now — before we are likely to have figured this out.

And when I consider AI in the context of my family, my wife is also foremost in my mind. She is afflicted with progressive multiple sclerosis that confines her to a wheelchair. One of my dreams is that AI can accelerate scientific advances making restoration of the nervous system — and my wife’s ability to walk — a reality. When one thinks of all the health afflictions AI might hold the key to reversing, it makes it all the more important we think very hard before erecting obstacles to the rapid development of the technology.
