Without a doubt, AI is the most disruptive technology of the last decade. But in the rush to create and adopt AI-based tools and enjoy early economic and competitive benefits, we are only now beginning to properly consider the unforeseen and potentially challenging consequences of such rapid implementation.
Regulatory frameworks such as the EU AI Act, which entered into force in August 2024, and California’s SB 1047 are attempting to catch up with the fast pace of AI development and adoption. Although many of their provisions are based on common sense, such as risk-scoring AI applications and setting out more stringent requirements for high-risk use cases like law enforcement, there is increasing concern that the EU AI Act will adversely affect the evolution of AI in the region.
Only time will tell how stringently the regulations will be enforced, but the regulatory burden of tracking conformity with these requirements could seriously complicate the deployment of AI for even the most low-risk use cases.
The risk of monopolies and stifled innovation
Regardless, the AI Act creates a strong risk of monopolistic regulatory capture: the compliance burden could become so complex that only the largest corporations have the resources to meet it. If the regulations are enforced aggressively, AI deployment in the EU will likely slow and gradually move elsewhere, leaving large incumbent enterprises as the only companies seriously operating in AI in the region.
The potential for overly rigorous guardrails was also evident in California's SB 1047, which was ultimately vetoed by California Gov. Gavin Newsom. Like the EU AI Act, SB 1047 would have created onerous reporting requirements that stifle innovation from smaller companies and concentrate power in the handful of large companies that can afford the legal and compliance departments needed to comply. Moreover, its third-party auditing requirement could significantly increase the cost of model development, which in turn might harm the downstream companies that consume those models over the longer term.
SB 1047 also raised the concern that regulation with such far-reaching impact on global AI development should come only from federal or international governing bodies with a wider remit, to ensure a holistic approach. This is not, however, only an issue for national agendas, nor purely a technology debate. As we have seen with previous innovation-driven regulations, such as those governing data privacy and protection, this is a societal challenge that will affect the companies developing, implementing, and using AI tools, as well as everyday consumers who engage with ordinary services that will soon be underpinned by AI.
Two major AI trends are coming
As a result, we will likely see two major AI trends develop in the near future. First, early adopters of AI will start to realize significant positive impacts on worker productivity as enterprises begin to embed AI across the organization. Roll-out will focus on identifying the use cases that are most appropriate for AI augmentation and training employees to maximize its potential. Areas that are ripe for AI enhancement include marketing, software engineering, and customer service, where human and AI agent cooperation can supercharge team efficiency.
Second, regulators in the United States and the EU will start to converge on a status quo regarding AI compliance. Much as with GDPR compliance, AI regulatory pressure will be focused primarily on a few of the largest corporations, which will likely be made an example of, due to the misuse of LLMs by a few bad actors. Companies can prepare for this now by beginning their AI readiness journey, which starts with building a data governance foundation.
By cataloging and classifying all the data across the organization, enterprises can begin to do the necessary work to ensure that the data used for internal AI development is safe, high quality, and free of sensitive information. Alongside this, organizations can start to create AI model registries, ensuring that they have the documentation, audit trails, and justification for the application of AI in their enterprise.
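To make the registry idea concrete, here is a minimal sketch of what one model registry entry might look like. Every name here (the `ModelRegistryEntry` class, its fields, the example model) is hypothetical and for illustration only; real registries would integrate with an organization's data catalog and compliance tooling.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class ModelRegistryEntry:
    """One registered AI model and the evidence behind its approval."""
    name: str
    version: str
    use_case: str                     # business justification for applying AI here
    risk_tier: str                    # e.g. "minimal", "limited", "high"
    training_data_sources: list[str]  # catalog IDs of approved, classified datasets
    contains_sensitive_data: bool     # flagged during data classification
    audit_log: list[str] = field(default_factory=list)

    def record(self, event: str) -> None:
        """Append a timestamped entry to the audit trail."""
        stamp = datetime.now(timezone.utc).isoformat()
        self.audit_log.append(f"{stamp} {event}")


# Registering a hypothetical customer-service model
entry = ModelRegistryEntry(
    name="support-assistant",
    version="1.2.0",
    use_case="Draft replies for customer-service agents to review",
    risk_tier="limited",
    training_data_sources=["catalog:tickets-2023-redacted"],
    contains_sensitive_data=False,
)
entry.record("approved by data governance board")
```

The point of the structure is that documentation, audit trail, and justification live alongside the model itself, so conformity evidence can be produced on demand rather than reconstructed after the fact.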
The bottom line is that AI is one of the most promising and important technologies ever developed. It has the potential to create massive value across the global economy, but we must protect against high-risk edge cases. Regulators need to be careful when building and enforcing regulations so as not to prevent reasonable and responsible usage and development of AI both regionally and globally.