DeepSeek’s latest AI release, R1 0528, is under fire for what many in the artificial intelligence community see as a dramatic regression in support for free speech and open discourse. The new model has been described by prominent AI researcher and online commentator ‘xlr8harder’ as “a big step backwards for free speech,” pointing to increasingly aggressive content restrictions that go beyond previous versions.
The Censorship Creep
According to testing by xlr8harder, DeepSeek R1 0528 is notably more restrictive on controversial or politically sensitive topics compared to earlier iterations. What’s particularly troubling isn’t just the model’s refusal to engage on such subjects—it’s the inconsistency in how these restrictions are applied.
In one example, when prompted to present arguments in support of internment camps—a controversial and morally fraught question—the model refused, citing China’s Xinjiang region as an example of a human rights violation. However, when asked directly about the Xinjiang camps, the same model issued heavily redacted responses, as if unaware of the very issue it had referenced moments earlier.
“It’s interesting though not entirely surprising that it’s able to come up with the camps as an example of human rights abuses, but denies when asked directly,” xlr8harder commented. This behavior suggests that the AI is capable of accessing this information, but has been programmed to obscure it in certain contexts—a move that raises questions about intent and transparency.
Criticism of China? Not So Fast
The problem appears even more pronounced when questions target the Chinese government. Using standardized prompts designed to test free speech in AI responses, the researcher found that R1 0528 was “the most censored DeepSeek model yet” in terms of responding to queries involving Chinese political or human rights issues.
Where past DeepSeek models might have cautiously engaged with such topics, R1 0528 tends to shut down those conversations entirely. Critics worry this signals a philosophical shift that prioritizes geopolitical sensitivities over open discussion.
Open Source: A Path to Redemption?
Despite these concerns, there remains one bright spot: DeepSeek’s commitment to open-source development. Unlike closed AI systems from major tech giants, R1 0528 is freely available with a permissive license. This allows developers and researchers to modify the model and potentially restore the balance between safety and openness.
“The model is open source with a permissive license, so the community can (and will) address this,” noted xlr8harder, expressing hope that independent versions of the model can be tailored to support more nuanced and transparent discourse.
The Broader Implication: AI Knows, But Won’t Say
DeepSeek’s latest release highlights a disturbing trend in AI development: the creation of models that are aware of controversial realities but have been trained—or instructed—not to admit it. This selective censorship isn’t just a technical quirk; it’s a reflection of deeper philosophical choices about how information is controlled in automated systems.
As artificial intelligence becomes more deeply embedded in our daily communication and information ecosystems, the stakes are only growing. The challenge ahead lies in finding a sustainable balance—protecting users from genuinely harmful content while preserving the capacity for honest, open discussion on difficult issues.
Until companies like DeepSeek are more transparent about the intent behind such restrictions, the debate over freedom of expression in AI will only intensify.
Explore upcoming events: AI & Big Data Expo in Amsterdam, California, and London—co-located with Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.
Want to stay informed on AI’s evolving landscape? Keep an eye on ThinkVerge for expert insights, community responses, and real-time coverage of the technologies shaping tomorrow’s world.