Sovereign AI and the Clash of Regulatory Models
My insights on the clash of the digital empires' regulatory models, AI's impact on the job market and much more from a recent panel discussion organised by The Lighthouse Budapest.
I recently took part in a panel discussion titled "AI: Ally or Adversary?" at a private club meeting organised by a group of very talented young professionals under the aegis of The Lighthouse Budapest. The conversation covered a broad spectrum of topics, including the EU's digital sovereignty, the global race for AI leadership, AI's impact on the job market, and much more. Below is a summary of my responses from the discussion.
Geopolitical & Macro Perspective
Should Europe and smaller markets push for ‘sovereign AI,’ or is it naive to think they can go against Big Tech dependency?
I think this is the smart way to go, as it would reduce Europe's reliance on foreign technologies and tech stacks.
That said, there are clear challenges. Most leading AI models are currently owned by US or Chinese companies. However, just as the GDPR demonstrated the EU’s commitment to protecting citizens’ data, a principle rooted in the Union’s founding values, the push for AI sovereignty aligns with Europe’s broader strategic goals. Building large-scale AI infrastructures, improving data accessibility (a core objective of the EU’s data-related legislation), and enhancing AI literacy and upskilling across the bloc will all be essential steps.
It will, however, require enormous financial investment if the EU is to compete with the US and China. Moreover, Europe may need to bolster its competitiveness by revisiting or easing certain regulatory constraints - an adjustment that, in some respects, appears to be already underway.
China is betting on state-driven AI, the US on corporate-driven AI, and Europe leans on regulation. Which model is most likely to dominate?
Regulation is not an end in itself, but a means to an end. It is therefore necessary to look at the objectives each regulatory model seeks to achieve when comparing them.
China's state-driven model is designed to secure an advantage in the global AI race. This competition is mainly about who can build and control the biggest, best and most widely used tech stack. The United States has already demonstrated similar dominance in areas such as the Internet, the domain name system and the international financial system - platforms that much of the world relies on today. This reliance has granted the US immense influence and soft power across the globe.
China, seeking to replicate the United States' earlier success, is now investing heavily in open AI models that are easier and cheaper for other countries to adopt, customize and deploy, particularly in markets with lighter regulatory burdens than the EU. If these nations become dependent on China's technological infrastructure, it could serve as the ideal springboard for Beijing to export not only its technology but also its governance models, including the use of digital tools for surveillance and social control, often bundled with burdensome loans. This emerging "digital Belt and Road Initiative" could become a powerful instrument for China to gain an edge over the US in the global technology race and extend its soft power. Moreover, the defining feature of China's technological approach (the consolidation of state control and the preservation of the Chinese Communist Party's monopoly on power) may hold strong appeal for other authoritarian regimes.
The United States' market-driven model focuses on protecting and incentivizing innovation with minimal legal regulation, which is viewed as a necessary but undesirable intervention. This hands-off approach has enabled US tech firms to achieve global dominance. However, regulating technology poses a significant challenge: intervening too early, without fully understanding its legal and societal impacts, risks stifling innovation and disrupting free markets; yet regulation that comes too late can struggle to mitigate harmful effects. This dilemma is evident in the EU's recent efforts to curb the global dominance of major platforms like Google, Amazon, and Meta through comprehensive regulations such as the Digital Markets Act and the Digital Services Act.
During the Biden administration, there appeared to be a shift towards a more European-style regulatory model with stronger oversight of tech firms. However, these initiatives were quickly rolled back under the Trump administration, which advocates deregulation to maintain and bolster the dominant positions of US firms worldwide, pursuant to its AI Action Plan. This Trump-era approach frames the EU's regulatory actions as barriers to innovation and economic competitiveness, emphasizing free-market principles and opposing what it sees as excessive regulatory interference.
The US and the EU are at odds over tech regulation due to their different priorities. The EU's rights-driven model is based on the protection of fundamental rights and the safety of its citizens, which, according to the EU, is best achieved by regulating the technological sector. However, US-based companies have frequently disregarded these rules. For instance, Facebook's latest "consent or pay" model appeared designed to pay only lip service to the EU's data protection regime without genuine compliance. Another recent example is US providers training their large language models on vast amounts of content scraped from the internet, including copyrighted works by European authors and publishers, without proper licensing schemes in place and in breach of EU copyright rules.
In the meantime, the EU has emerged as a global regulatory superpower. Through what is often termed the "Brussels effect", its regulations, such as the GDPR and the AI Act, have been adopted as blueprints by a host of other countries. It remains to be seen whether the EU will hold firm to its regulatory stance or accommodate US demands to water down its digital regulations, a concession US firms regard as critical to thriving in the European market and standing a chance in the global AI race. The recent withdrawal of the AI Liability Directive, reportedly under US pressure, appears to be a bad omen. This directive would have been crucial for protecting individuals harmed by the improper use of generative AI tools.

Industry & Future
Many AI startups build on APIs from the same 2–3 giants. Is that true innovation, or just repackaging someone else’s power?
This is how the AI ecosystem works. The core innovation lies in creating and developing the AI models that serve as the powerful engines of AI systems. To truly unlock their potential, however, you need robust AI systems featuring intuitive user interfaces, great user experience, well-designed workflows, automation and assistants.
Innovation therefore occurs both at the foundational model level and at the application layer, where these models are integrated into user-centric solutions. This creates an interdependent ecosystem rather than one of simple dependence: model providers supply capabilities that startups leverage creatively to address specific problems and deliver value.
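To make the application layer concrete, here is a minimal sketch of the kind of domain-specific wrapper a legal-tech startup might build on a provider's API. It assumes the OpenAI Python SDK; the model name, the prompts and the summarise_contract_clause helper are purely illustrative, not any specific product's implementation.

```python
# Minimal sketch: application-layer value on top of a foundation-model API.
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY
# environment variable; the model name and prompts are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarise_contract_clause(clause: str) -> str:
    """Wrap a generic model with domain framing: the startup's innovation
    is the workflow and constraints, not the underlying model itself."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical choice of provider model
        temperature=0.2,      # keep outputs conservative for legal review
        messages=[
            {"role": "system",
             "content": ("You are a legal drafting assistant. Summarise the "
                         "clause in plain English and flag any unusual "
                         "liability or indemnity terms.")},
            {"role": "user", "content": clause},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(summarise_contract_clause(
        "The Supplier's aggregate liability shall in no event exceed EUR 100."
    ))
```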
Everyone thought copywriters would disappear, yet now there’s a $300k job for a ‘ChatGPT content strategist.’ What does that reveal about the gap between hype and reality?
When discussing AI's impact on the job market, two narratives often emerge. The first claims that AI will cost many people their jobs, but on the whole this is exaggerated. AI primarily replaces and/or augments tasks rather than people and entire jobs. Whether someone's job is at risk depends largely on how repetitive or "automatable" their tasks are. In some industries, like legal services, junior roles and tasks face greater and more immediate automation risks, but as models advance, even higher-level tasks might be impacted.
On the other hand, AI is creating numerous new job and career opportunities. In the legal business, for instance, roles such as AI officers, legal prompt engineers and workflow designers are emerging, and similar trends are expected in other sectors as well. Managing this transition effectively to minimize disruption and enable workforce adaptability will be essential to harness AI's full potential.
Can any company honestly say: ‘Our data is safe’ when using AI?
There are always going to be risks involved. Just as people didn't confidently claim commercial air travel was safe until it had built a long track record of rigorous safety improvements, AI data security will require time, transparency and continuous development to earn the same level of trust. Risks range from data poisoning and adversarial attacks to model inversion and insecure APIs, which attackers are exploiting in increasingly sophisticated ways.
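As a toy illustration of one of these risks, the sketch below shows label-flipping data poisoning: corrupting a fraction of the training labels measurably degrades a simple classifier. This is a deliberately simplified demonstration using scikit-learn on synthetic data, not a realistic attack or defence.

```python
# Toy demonstration of label-flipping data poisoning on synthetic data.
# Requires numpy and scikit-learn; all numbers here are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

def accuracy_with_poisoning(flip_fraction: float) -> float:
    """Flip a fraction of training labels, then measure test accuracy."""
    rng = np.random.default_rng(0)
    y_poisoned = y_tr.copy()
    n_flip = int(flip_fraction * len(y_poisoned))
    idx = rng.choice(len(y_poisoned), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # flip the chosen binary labels
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)
    return model.score(X_te, y_te)

for frac in (0.0, 0.2, 0.4):
    print(f"{frac:.0%} labels flipped -> test accuracy "
          f"{accuracy_with_poisoning(frac):.3f}")
```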
The tolerance or appetite for these risks also varies by industry. In the legal sector, for instance, the confidentiality of client data and GDPR compliance are paramount and non-negotiable. This is why most legal tech providers promise to comply with these requirements, though whether these protections will prove effective in practice remains to be seen. Building trust in AI data security depends on ongoing vigilance, robust governance, transparency and real accountability.
What’s the one AI promise you think will definitely fail in the next five years?
The idea that AI will wipe out consultants, lawyers and doctors. While AI will surely change business models and automate many routine tasks, it will not replace entire professions.
I believe that expertise, experience and accountability will remain essential, especially because humans and legal entities can be held liable - something AI, lacking legal personality, cannot.
Human involvement will continue to be crucial for important, costly, strategic or high-risk decisions. No one will rely solely on generative AI outputs when it comes to critical health, financial or legal matters. Instead, AI, if used responsibly and carefully, can serve as a powerful tool to augment professional judgment, not substitute for it. I like to think that the future lies in effective human-AI collaboration, where AI handles repetitive or data-intensive work, while humans provide oversight, critical thinking, interpretation and ethical responsibility.