Rewiring Democracy Now: Switzerland shows us an alternative to corporate AI

An article for The Renovator about Imanol Schlag and the most advanced Public AI project in the world, Switzerland's Apertus model.

This article was written with Bruce Schneier and originally published by The Renovator on 2026-02-21 as the second part of a multi-part series updating stories from our book, Rewiring Democracy.

Skeptics of AI often point to the many significant, unchecked harms that AI produces in society today. Bosses cut human jobs, even when it means stealing human creativity to build AI replacements, regardless of whether the replacement technology is up to the job. Megacorporations aim to capture all the value created by AI for their shareholders, imperiling the global economy with risky and unsustainable ventures. And AI companies use exorbitant amounts of energy and natural resources to fuel all of this, with no apparent concern for local environmental damage or global climate impacts.

These harms are real and alarming, but they are not generated by the technology itself. They are business strategies, problems produced by capitalism. They manifest because of a manufactured “arms race” that incites massive investment, deregulation that lets corporations externalize AI’s true costs, and the corporate capture of government that permits all of it.

Public AI as an Alternative to Corporate AI

AI without all these deleterious effects is possible, but it requires an alternative model of AI development, one that is not corporate-led.

Instead of using capitalistic incentives to drive AI, we can incentivize developers to serve the public good. AI models can be trained on appropriately licensed data instead of stolen content. They can be built using renewable energy at a scale optimized to balance cost and usefulness, instead of maximized to eke out gains in performance benchmarks or chase “superintelligence.” They can be served up at cost to meet public demand, like a utility, instead of forced onto consumers to drive market valuations. Perhaps most importantly, AI’s development can be responsive to public demands and civic principles, rather than tuned for private profit.

This is the notion behind public AI, a concept—and a movement—for building AI in the public interest. Together with our colleague Henry Farrell, we were among the first to argue for this more democratic, just, and public-interest-based model of AI development in the weeks after the release of ChatGPT took the world by storm. In our book, Rewiring Democracy: How AI Will Transform Our Politics, Government, and Citizenship, we conclude that a public alternative to the corporate AI model is essential to steering the technology to benefit democracy globally.

Many other authors and institutions have urged the creation of public AI, including Mozilla Foundation, Open Future, the Economic Security Project, the Vanderbilt Policy Accelerator, and Brookings (in a piece we co-authored). A movement of scholars and practitioners worldwide has organized around the idea.

Apertus Changes the Public AI Landscape

This vision of public AI may sound like a pipe dream, but it took a big step towards reality last year. On September 2, the Swiss AI Initiative launched its first large-scale language model, Apertus. The name is pronounced “aparTOOS,” and means “open” in Latin. While not the first model to aspire to the principles laid out above, Apertus is now the world’s most advanced public AI model.

Prior projects in the public AI sphere have generally fallen short of the concept. AI Singapore, a national AI programme of the Singaporean government, was one of the earliest innovators in this space. Its SEA-LION model family, in development since 2023, aims to specialize AI capabilities in Southeast Asian languages and in applications like public administration, healthcare, and education.

The earliest versions of SEA-LION were trained from scratch, at a relatively small scale, on permissively licensed data. More recent versions, like the latest v4 model, are largely fine-tuned versions of commercial open-weight models from Google and Alibaba. The transition from public agency-controlled and transparent model development to reliance on by-products of corporate AI development, whose training data and practices are not disclosed, reflects the difficult choices public agencies face in balancing cost with usefulness. (An upcoming release of SEA-LION will be based on Apertus.)

Another example of public AI was the Spanish project ALIA, first released in January 2025. It was trained at the Barcelona Supercomputing Center and built to specialize in Spain’s co-official languages. ALIA was upgraded with a more refined, instruction-tuned version in December 2025 after a rocky initial reception.

Some non-governmental organizations have also been producing open and transparent AI. The Allen Institute for Artificial Intelligence (Ai2) has been developing the OLMo series of language models with a high degree of transparency: open data, open code, open standards. Unlike the Swiss and Spanish projects, however, Ai2 relies on corporate partners Google and NVIDIA for compute resources, and it is not under public control.

How Apertus Came To Be

In November, to learn more about Apertus, we spoke to Imanol Schlag, AI Research Scientist at the ETH AI Center at the Swiss public university ETH Zurich and co-lead of Apertus, and Joshua Tan, Product Lead for the Public AI Inference Utility (publicai.co).

Schlag’s co-leads on the Apertus project are Antoine Bosselut and Martin Jaggi, professors in natural language processing and machine learning at the public research university EPFL in Lausanne, Switzerland. He and his colleagues benefited from a generation of investment in Swiss national supercomputing infrastructure. The Swiss National Supercomputing Centre (CSCS) operates the Alps supercomputing cluster, extended in 2024 with 2,688 advanced GPU nodes containing a total of nearly 11,000 NVIDIA GH200 Grace Hopper “superchips.”

The CSCS headquarters in Lugano, Switzerland. Image credit: CSCS.

The Swiss AI Initiative, largely composed of members representing EPFL, ETH Zurich, and other Swiss research institutions, received grants of ten million GPU hours per year on Alps. The stated goals for the grants included developing “trustworthy AI” to “protect Switzerland’s digital sovereignty” and to create products for use in both “industry and public administration.”

The Apertus team’s technical report, released in September and itself a model of transparency, provides ample detail on the principles, design, and technical considerations behind the model. The scale of Apertus is unprecedented among fully open AI models: 70 billion parameters (OLMo 3 maxes out at 32B) trained on 15 trillion tokens of data (roughly five times the size of the Library of Congress) across 4,096 GPUs. It’s hard to compare these figures to the most popular commercial models, since the latter tend to have secretive training processes, but “frontier” model development now operates on a scale of many trillions of parameters and probably several times as much data. 70B is about the same size as the 2023 generation of models from Anthropic, Meta, and Alibaba.

Simple parameter and token counts are a reductive way to evaluate AI models, but they do point to significant trade-offs among cost, performance, and sustainability. Apertus was not designed to be the world’s most powerful model, but it is trained at a scale large enough to be useful and small enough to be within the grasp of a research project of a small European nation.

Apertus consumed 6 million GPU hours in its final training run and perhaps 10 million GPU hours in total, including other experimentation, according to Schlag. Assuming commercial rates of roughly $6 to $10 per GPU hour, that computation would have cost between $36 million and $100 million. Schlag estimated that about 20 research staff were substantially focused on the project over six months, alongside about 80 other contributors who devoted less time to it. The training was entirely fueled by renewable Swiss hydropower.
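The dollar range follows from simple arithmetic on the estimated GPU hours and the assumed commercial rates (all figures are estimates, as noted above):

```python
# Back-of-the-envelope cost estimate for Apertus' training compute,
# using the approximate GPU-hour and rate figures quoted in the article.
final_run_hours = 6_000_000      # final training run only
total_hours = 10_000_000         # including other experimentation
low_rate, high_rate = 6, 10      # assumed commercial USD per GPU hour

low_estimate = final_run_hours * low_rate    # lower bound: final run at low rate
high_estimate = total_hours * high_rate      # upper bound: all hours at high rate
print(f"${low_estimate / 1e6:.0f}M - ${high_estimate / 1e6:.0f}M")  # $36M - $100M
```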

The data to train Apertus came from a variety of sources familiar in open-source AI development, although Apertus took extra steps to filter those datasets to ensure compliance with the EU AI Act. The law sets expectations for AI developers to adhere to content owners’ opt-out preferences for their data to be excluded from AI model training. Since many rightsholders have changed their preferences over time, or have since encoded their preferences in machine-readable “robots.txt” files on their websites, the Apertus team re-crawled the original data sources of widely used training sets like FineWeb2, removing a total of more than 2.5 million documents. The Apertus team may not be unique in this rigorous adherence to opt-outs, but we are not aware of any other AI model trainers who match it.

The model is published under the permissive Apache 2.0 license, allowing both noncommercial and commercial reuse, and is available on the Hugging Face platform for any person or organization to download and deploy on their own hardware. The Swiss team has also formed partnerships that allow anyone to use Apertus at a small cost.

Tan’s Public AI Inference Utility project describes itself as “public libraries for AI,” in that it makes freely accessible resources that you might otherwise have to purchase commercially. You can try Apertus right now on their website, publicai.co. This is a project of the nonprofit Metagov with funding from Mozilla, the Future of Life Institute, and the Center for Cultural Innovation. For now, Apertus can be freely used, thanks to compute resources donated by multiple partners, including Swiss AI / CSCS, commercial cloud providers AWS and Cudo, and national AI and computing initiatives of Australia, Germany, and Singapore. Developers can use the Public AI Inference Utility to power applications at a nominal cost. In the future, the Utility has said it will explore ad-supported, usage-based, and premium subscription plans to sustain the service.

An example use of the Public AI Inference utility at publicai.co, demonstrating German-language question answering to a query asking for a definition of sovereign AI.

The Features, and Drawbacks, of Apertus

The first released version of the Apertus model is technically sophisticated, but its greatest innovations have more to do with policy choices than algorithmic breakthroughs.

Schlag’s group developed a Swiss AI Charter to align their model with Swiss civic traditions, which they summarize as direct democracy, privacy protection, and collective decision-making. The charter contains eleven articles articulating principles such as sustainability, personal autonomy, and human agency. Version 1 of the charter was used to train Apertus and is published in the technical report.

While the Charter was drafted by the Apertus team, they made efforts to engage the public through a small survey of about 140 Swiss residents. They found an average agreement of 97% with their principles among respondents, which they took as validation that the Charter broadly represents Swiss public interests. In the future, the Swiss AI Initiative has said it intends to establish a democratic process with broader participation to refine the charter.

Using a process similar in spirit to Anthropic’s notion of Constitutional AI, the Apertus team performed post-training reinforcement learning to tune their model to adhere to the Swiss AI Charter, using a technique called QRPO developed at EPFL. It works by prompting the trained Apertus model with questions on controversial issues, then using another AI model (Qwen3-32B) to score how well Apertus follows the Charter in its responses. The model is rewarded for high-scoring responses and penalized for low-scoring ones, producing a post-trained model that adheres more closely to the Charter.
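The core loop, judging candidate responses against charter principles and shifting the model toward the better ones, can be sketched in a few lines. This is a deliberately toy illustration of the idea, not the actual QRPO algorithm or judge model: the keyword-matching “judge,” the candidate responses, and the update rule are all invented for the example.

```python
# Toy sketch of charter-guided post-training: a stand-in "judge" scores
# candidate responses against charter principles, and probability mass is
# shifted toward responses scoring above the group average. Purely
# illustrative -- the real pipeline uses QRPO with a Qwen3-32B judge.
def toy_judge(response: str, charter_terms: list[str]) -> float:
    """Stand-in for the judge model: fraction of charter terms reflected."""
    return sum(term in response.lower() for term in charter_terms) / len(charter_terms)

def reinforce_step(weights: dict[str, float], charter_terms: list[str],
                   lr: float = 1.0) -> dict[str, float]:
    """Reward responses scoring above the mean, penalize the rest, renormalize."""
    scores = {r: toy_judge(r, charter_terms) for r in weights}
    baseline = sum(scores.values()) / len(scores)  # group-average baseline
    updated = {r: max(1e-6, w * (1 + lr * (scores[r] - baseline)))
               for r, w in weights.items()}
    total = sum(updated.values())
    return {r: w / total for r, w in updated.items()}

charter = ["privacy", "autonomy"]  # invented mini-"charter" for the demo
candidates = {
    "We must respect privacy and personal autonomy.": 0.5,
    "Just do whatever maximizes engagement.": 0.5,
}
after = reinforce_step(candidates, charter)
# The charter-aligned response gains probability mass over the other one.
```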

An excerpt from the Swiss AI Charter published in the Apertus technical report.

In terms of industry benchmarks for model performance, Apertus is passable. The Apertus team reports that their 70B-parameter model achieves a 69.6% score on the popular MMLU benchmark for knowledge and 78.1% on the HellaSwag benchmark for reasoning. These scores are about 8% below the fully open OLMo 2 32B-parameter model and somewhat further behind the newer OLMo 3. Meanwhile, the best performing proprietary models achieve better than 92% on MMLU (GPT-5) and 95% on HellaSwag (Claude 3). Apertus’ performance on these benchmarks is more comparable to GPT-3.5, the model that powered the original 2022 public release of ChatGPT, and to gemma-3n-E4B, Google’s small-scale open-weight model suitable for use on devices like phones.

Even though they are widely used to compare AI model quality, these automated benchmarks are flawed and can be poor indicators of real-world performance differences. Another measure is asking people which model outputs they prefer, which LMArena does at scale. LMArena does not yet collect data on Apertus, but a recent analysis found that a model scoring 20% higher on knowledge benchmarks like MMLU is preferred by users only about 67% of the time over the lower-scoring model. In other words, if you ask Apertus and GPT-5 to perform the same chat-based task repeatedly, historical trends suggest you may prefer Apertus’ response about one third of the time. That leaves little doubt that the latest commercial models outperform Apertus, but it also means Apertus can still be quite useful.

The Apertus team is now hard at work on two new versions of their model, 1.5 and 2.0. Schlag told us that version 1.5 is targeting a spring 2026 release and will continue pretraining Apertus 1.0 on additional datasets, alongside improvements to the post-training pipeline, to boost performance in legal, medical, educational, and other domains. They intend to add multimodal capabilities by adding tokenized images, and perhaps audio, to the pretraining dataset, a training technique called early fusion. They also intend to improve Apertus’ agentic capabilities. Next up will be Apertus 2.0, for which they intend to maximize performance by training at an even larger scale while using the same Alps compute infrastructure.

What’s to Come

The Apertus team is serious about building a sustainable initiative around their product. A cooperative, the Swiss Public Inference Utility (SPIU), has been founded to partner with Swiss AI. These institutional structures, parallel to Switzerland’s existing public universities and computation centers, may be necessary for stewardship of Apertus as a civic technology product. As Schlag told us, maintaining an AI model for civic use involves a lot of tasks that don’t directly yield publications or win points during tenure review.

The institutions stewarding Apertus may be on the front lines of an international movement to develop public AI as an alternative to an ever-growing corporate AI ecosystem. As a result, they will face some significant decisions that may shape the character of that movement.

First, a decision about licensing data. Swiss AI has already taken a firm stand on compliance with the EU AI Act and allowing rightsholders to opt out of AI training. They do not, however, appear poised to compensate content creators. Corporate AI developers have become notorious for pirating copyrighted material to train models, to the extent that many fear it will undermine the publishing industry itself. But the corporate AI developers are now paying to license some content, creating a billion-dollar market for AI training data. If public AI developers do not participate in this market and compensate creators, they may suffer a widening gap in training data quality and scale relative to corporate AI developers. More philosophically, they may also fail to fairly distribute the value created by public AI to all those whose efforts contributed to it.

Second, a decision about democratic oversight. Apertus and Swiss AI are initiatives of Swiss public institutions. Apertus is therefore a model whose training and operation are paid for and led by Swiss civil servants, and it is intended for wide use in Swiss public administration. However, the project is largely independent from Swiss political institutions, by design. The scientists building Apertus do not want the added complexity of being accountable to political input, and do not want to cede decision-making authority over their own project. Apertus is, so far, isolated from the level of public oversight, steering, and accountability that citizens may expect from other public utilities, such as public electrical or water agencies. An initiative pursuing a maximalist vision of public AI would embrace the public accountability and political oversight associated with that kind of utility structure.

Lastly, there are questions of how broadly to pursue collaborations outside of Switzerland. Tan told us that he does not believe most countries have the capacity or interest to leverage their own national resources to build a large-scale AI model. Particularly for smaller countries or those without Switzerland’s advanced computing infrastructure, the opportunity to partner may be very attractive. And yet, it would require convincing a polity to invest in a global public good rather than one solely designed to serve its own national interests. In a moment where longstanding structures in international relations are being upended, AI training data poisoning is being wielded as a weapon of war, and many are painting ownership of AI infrastructure as a matter of sovereignty, Apertus may find itself at the center of urgent global questions of technological cooperation versus competition.

We still see public AI as a necessary counterbalance to corporate AI, essential to allowing the technology to develop and be used without advancing a concentration of power and wealth dangerous to democracy. We are glad to see Apertus, ALIA, and SEA-LION all experimenting with diverse models for developing and sustaining public AI, and we hope others will learn from their experience to take the ecosystem even further.

This post is licensed under CC BY-NC-ND 4.0 by the author.