Chinese AI Startup Sand AI Blocks Politically Sensitive Images in Video Tool Magi-1


A new AI video generation model from Chinese AI startup Sand AI is drawing international attention, not only for its advanced capabilities but also for apparent censorship of politically sensitive content.

Earlier this week, Sand AI unveiled Magi-1, a 24-billion-parameter video-generating model capable of autoregressively predicting frame sequences. The company claims Magi-1 offers superior video quality and physics simulation compared to other open models, earning praise from tech leaders, including Kai-Fu Lee, the founding director of Microsoft Research Asia.

However, TechCrunch reports that the hosted version of Magi-1 is actively filtering out politically controversial image uploads, in line with China’s strict information regulations.

Blocking Prompts with Political Sensitivity

The Magi-1 platform requires users to upload a base image to initiate video generation. But according to testing, Sand AI blocks images related to topics deemed politically sensitive by Chinese authorities. These include photos of Chinese President Xi Jinping, Tiananmen Square, the iconic Tank Man protest image, the Taiwanese flag, and symbols associated with Hong Kong independence.

Image Credits: Sand AI

Even renaming image files did not bypass the filter, suggesting the system uses visual recognition — not file names — to enforce restrictions.

Sand AI isn’t alone in this practice. Other Chinese AI platforms, including Hailuo AI from Shanghai-based MiniMax, are also reported to block politically sensitive content. However, Sand AI’s censorship appears more aggressive than most — Hailuo, for example, reportedly allows Tiananmen Square images while Sand AI does not.

Regulatory Pressure and Legal Compliance

China’s strict AI governance laws are likely driving this filtering. As Wired reported in January, the country’s 2023 AI regulations prohibit AI models from generating content that “damages the unity of the country and social harmony.” This vague phrasing is often interpreted as a mandate to suppress political dissent or alternative narratives.

To comply, many Chinese startups implement content moderation at the prompt level or fine-tune their models to prevent undesired output entirely.

Double Standards: Political Censorship vs. Adult Content

Interestingly, while Chinese AI models aggressively censor political imagery, they often lack filters for adult content. According to a recent 404 Media report, several video generators released by Chinese companies enable users to create non-consensual nude imagery — a growing ethical concern in the generative media space.

This disparity highlights a troubling contradiction: content seen as politically destabilizing is tightly controlled, while tools that could facilitate harassment or abuse often remain unrestricted.

Conclusion

While Magi-1 showcases the growing technical capabilities of China’s AI sector, its aggressive censorship practices also reflect the ongoing tension between innovation and government control. As AI-generated media becomes more mainstream, the global conversation around content moderation, creative freedom, and misuse continues to evolve, with China’s approach drawing increasing scrutiny from the international community.
