As AI tools are increasingly embedded in people’s daily lives and the global economy, policymakers worldwide have declared their pursuit of “AI sovereignty”—the idea that governments should invest in and secure control over domestic AI systems, training data, and cloud computing infrastructure. Freedom House’s recent Freedom on the Net report (which we co-authored) found that such government investments in AI may enable censorship and surveillance, particularly in authoritarian states. Policymakers and technology firms in democratic countries should work with civil society organizations to ensure that safeguards are embedded in such systems to protect people’s rights. Technology firms that contribute to the AI stack in authoritarian countries should conduct due diligence to ensure their products are not facilitating human rights violations.
Sovereignty Across the Stack
The AI sovereignty framework is used flexibly to describe control over different components of the AI stack—the varied digital and physical infrastructure that powers AI systems. AI sovereignty initiatives comprise efforts to develop chatbots and other local application-layer AI services, to cultivate a country’s domestic foundation models and large language models (LLMs), or to ensure that energy and computing infrastructure are accessible to domestic companies.
Leaders pursuing these initiatives often state concerns about U.S. or Chinese dominance in the AI industry alongside their calls for sovereign AI investments. Despite the “sovereign” moniker, many of these efforts nevertheless rely on equipment or services from U.S.- or China-based companies. These AI companies are often contracted to supply cloud computing power, semiconductors, and other forms of support that fuel sovereign AI initiatives. Researchers were quick to coin phrases like “sovereignty-as-a-service” and “sovereignty washing” to describe global technology firms’ rush to rebrand product offerings as facilitating sovereign AI.
At present, truly sovereign control over the entire AI stack remains out of reach. The complex supply chains for critical minerals, chipmaking technology, and other components necessary to power an AI system are fundamentally transnational. For instance, Taiwan-based companies, led by Taiwan Semiconductor Manufacturing Company, account for roughly 90 percent of advanced chip manufacturing. At the same time, the Netherlands-based ASML held roughly 90 percent of the market for the lithography machines on which chipmakers rely as of the end of 2025. Governments with their sights set on establishing AI sovereignty will inevitably need technology produced outside of their borders.
The focus on AI sovereignty dovetails with a broader movement towards technological sovereignty, which governments have pursued with varying levels of intensity. More democratic governments have expressed a desire to reduce their overreliance on technology products from foreign companies, predominantly based in the United States and China. For instance, the European Union’s commitment to “digital sovereignty” has grown more dogged in recent years amid increased tensions with the second Trump administration and U.S.-based technology firms.
Meanwhile, authoritarian governments have adopted a more restrictive framework to wall off the domestic internet from the global network. In 2010, the Chinese government announced its pursuit of cyber sovereignty; other authoritarian leaders, most notably Vladimir Putin, have since followed suit. The drive for AI sovereignty is accelerating the pursuit of digital sovereignty of all kinds, and in authoritarian contexts it threatens to amplify efforts to isolate entire countries from the global internet.
Human Rights in the “War of Chips and Algorithms”
In authoritarian countries, sovereign AI initiatives could compound existing threats to fundamental rights, such as freedom of expression, or reinforce the repression of marginalized minority groups. Models developed under government oversight may build in censorship of content restricted under domestic law, such as criticism of the authorities. While these concerns may also be present in democracies, they are especially acute in authoritarian environments where civil society has extremely limited capacity to hold the government accountable.
In Thailand, for instance, the National Science and Technology Development Agency rolled out Pathumma LLM, a model trained to “understand Thai context and culture,” in early 2025. The agency also announced a cash infusion from an Nvidia subsidiary in June with the aim of further developing a native Thai LLM. Criticism of the monarchy is heavily censored in Thailand, and authorities have enshrined widespread restrictions on expression about sensitive political and social issues into law, in violation of Thailand’s obligations under the International Covenant on Civil and Political Rights. These restrictions will likely be incorporated into AI systems developed by companies that comply with Thai government demands. Vietnam, which routinely censors content critical of the government and imprisons people who post it, has also seen a glut of sovereign AI investment over the course of 2025, including a $400 million investment in AI infrastructure by premier Vietnamese technology firm FPT in April and the launch of a national data fund in June. A Vietnamese Politburo member characterized government investments in AI as a means of affirming the country’s “historical sovereignty … in the digital realm.”
While developing AI systems that reflect local culture is sensible, there is a clear risk that in some cases the promotion of local culture will be used as a justification for censorship. Previous Freedom House research found that AI governance frameworks in China and Vietnam—both one-party states ranked Not Free in Freedom House’s Freedom on the Net and Freedom in the World reports—require generative AI chatbots, whether developed by foreign or domestic firms, to toe the Communist Party line on sensitive topics. A November 2025 experiment from Gazzetta, a media resilience research lab, found that China-based DeepSeek’s chatbot is more knowledgeable about Chinese labor organizing issues than Gemini and ChatGPT, but more aggressively suppresses information about them.
AI investments may also facilitate government surveillance, especially in authoritarian countries that lack strong privacy safeguards. The Persian Gulf monarchies have emerged as hubs for AI investment, particularly the United Arab Emirates (UAE), which announced a massive AI infrastructure project in May 2025 in partnership with the United States. UAE officials have billed the effort as essential for data sovereignty in the AI boom. The company at the forefront of the UAE’s AI industry, G42, has been scrutinized by members of the U.S. Congress and the Biden administration for its apparent ties to Emirati and Chinese companies that specialize in surveillance and spyware technology. More broadly, the Emirati government has a long history of deploying such tools for mass monitoring and targeted surveillance of human rights defenders.
In these authoritarian environments, rapid AI investment may increase the efficacy and reach of repressive surveillance methods. For instance, Chinese law enforcement agencies have used AI-powered facial recognition systems and other movement-tracing technologies to track protests and gatherings. In November 2025, The New York Times reported on a conference in Beijing where companies advertised a range of AI tools that could facilitate surveillance, including speech recognition software geared towards the country’s ethnic minorities.
Sovereign AI development in authoritarian contexts is likely to advance ongoing efforts to disconnect from the global internet. In June 2025, the Russian and Belarusian governments launched a joint plan to develop AI built on “fundamental and traditional values,” invoking the same justifications that these regimes cite when restricting access to the global internet. These initiatives have been matched by efforts to direct internet users toward domestic, government-friendly platforms and websites. Russian authorities have enacted legislation to establish a sovereign internet, restricted access to a wide array of social media platforms and messaging applications, and introduced new blocking technology to isolate the country’s network from the global internet. Ahead of Belarus’s sham January 2025 election, the government ordered the blocking of all websites hosted outside of the country.
Iran’s vice president for science similarly announced a national AI platform in March 2025, framing it as part of a “war of chips and algorithms.” A senior scientist working on the project noted that the platform was designed to function even if Iran were completely disconnected from the global internet, possibly a nod to the effect of international sanctions on the country or to Tehran’s long-term effort to cut Iranians off from the global network, as it did in response to mass protests beginning in January 2026.
Balancing AI Sovereignty and AI Governance
In many cases, government investment in AI development reflects an understandable desire to ensure that this new wave of technology serves local interests and generates economic benefits outside of the richest and most developed countries. Some initiatives advanced under the banner of AI sovereignty offer legitimate benefits; for example, states have a genuine interest in regulating the transit and processing of sensitive government data when integrating AI systems into decision-making. However, these uses require thoughtful decision-making from policymakers about when to integrate AI systems into the daily work of government, and commensurate investment in rules for how they are developed and deployed.
The world’s democracies have already launched efforts to cultivate sovereignty over some parts of the AI stack. France’s President Emmanuel Macron urged European policymakers to embed sovereignty in the selection process for AI hubs equipped with supercomputers, dubbed “gigafactories,” commenting that U.S. and Chinese officials had taken similar approaches. In South Korea, authorities announced they would commit extensive government support, in the form of access to computing power and personnel, to five South Korean consortia that aim to cultivate a Korean foundation model that can compete with the world’s leading AI services.
Civil society organizations in some of the world’s largest democracies have sought to push policymakers to invest in new standards for AI governance to accompany these investments. In July 2024, the Brazilian government announced a $4 billion investment plan for sovereign AI to give Brazilians and their institutions more control over the development of AI in the country. Some $1 billion of the funding was allocated to state-owned technology firms. Brazilian civil society organizations have urged policymakers to embed fundamental rights safeguards as they develop the country’s rules for AI governance. Similarly, Indian academics and organizations have called on authorities to put rights-respecting governance measures in place as the government moves forward with AI projects, such as research into open-source LLMs that can operate amid the country’s linguistic complexity, which was announced in January 2025.
Mitigating Harms in the Sprint Toward Sovereignty
Those at the forefront of AI innovation, whether in government or the private sector, should ensure that sovereign AI initiatives do not exacerbate digital repression. The right way forward on AI sovereignty requires governance frameworks that enable democratic oversight, enshrine safeguards for fundamental rights, and fuel innovation tailored to local needs.
As governments deploy AI services and develop their own systems, they should conduct human rights impact assessments to ensure that their products do not curtail freedom of expression and other fundamental rights, and commit to openness and transparency to ensure democratic accountability is possible. These initiatives should include identifying key human rights risks from across the full lifecycle of design and deployment, empowering impacted communities to comment on potential human rights impacts, developing and implementing mitigation plans, and evaluating the success or shortcomings of those plans. In addition, governments should limit data collection for AI systems to strictly what is needed to provide a necessary service, and work to maintain interoperability and data portability when such systems are deployed.
Companies contracted by governments to develop or deploy AI tools in the name of sovereign efforts should conduct human rights due diligence as well. Localization and customization should never be prioritized over human rights principles, particularly when working with governments that have a track record of digital repression, such as those ranked Not Free by Freedom on the Net.
Across the board, those advancing sovereign AI as a framework should consult with local civil society organizations about how to ensure such initiatives present genuine benefits without harming human rights and implement policies that make explicit commitments to upholding human rights principles. These basic due diligence measures are essential to ensure that the sprint toward AI sovereignty does not leave people’s rights in the dust.