Introduction

  • TL;DR: Chinese open-source AI models such as Qwen, DeepSeek and GLM are spreading rapidly across Southeast Asia, the Middle East, Latin America and beyond, helped by low cost, flexible deployment and strong multilingual capabilities. In contrast, many U.S. and European executives still favor proprietary frontier models such as OpenAI's and Anthropic's flagships and Google's Gemini 3 for their benchmark performance and mature safety tooling, but this "build for perfection" mindset may limit their reach in cost-sensitive markets.
  • Outside the U.S. and Europe, enterprises increasingly prioritize control over cost and data, which pushes them toward open-source Chinese models instead of top-tier proprietary systems.
  • Alibaba’s Qwen series and other Chinese LLMs now power hundreds of thousands of derivative models, making them de facto infrastructure for local AI applications.
  • Google’s Gemini 3 leads in reasoning, multimodal capabilities and agentic workflows, but its closed nature makes it harder for some regions to align with local data sovereignty and budget constraints.
  • “Bridge powers” in Asia and other middle-income regions are exploring a mixed approach: combine Chinese open-source stacks with U.S. proprietary models to avoid technological dependence on either side.

US Perfection vs Chinese Diffusion

At the 2025 Fortune Innovation Forum in Kuala Lumpur, investors and operators repeatedly contrasted U.S. AI firms that “build for perfection” with Chinese players that “build for diffusion.” U.S. and European executives tend to value even an 8% performance edge on coding or reasoning benchmarks, arguing that such margins can decide whether an AI system clears the bar for large-scale deployment.

Chinese labs instead focus on releasing many “good enough” models in open-source form, so developers and startups can fine-tune and deploy them freely across different environments. This shift in emphasis—from absolute performance to reach and adaptability—helps Chinese models spread quickly wherever budgets and infrastructure are constrained.

Why it matters:

  • The global AI race is no longer only about who has the single best model, but also about who can seed the largest, most adaptable ecosystem of “good enough” models.
  • Regions that cannot afford top-priced proprietary APIs may still compete effectively by standardizing on open-source stacks they can fully control.

Who is Actually Using Chinese Open-Source Models?

In Asia, the dominant concerns are control over data and costs rather than squeezing out a few extra benchmark points. SiliconFlow, a leading Chinese AI cloud provider, offers a marketplace of open-source LLMs and image models, including Qwen, DeepSeek, GLM, Yi, Mistral and Llama 3, and optimizes them for low-cost inference so that switching between models or providers is relatively painless.

Executives from firms like Dyna.AI in Singapore say certain Chinese open models already perform better in local languages used in Southeast Asia, making them more attractive for customer-facing applications than English-first U.S. systems. Investors such as Vertex Ventures’ Chan Yip Pang advise AI-native startups that if an application is core to their competitive moat, they should build it on open-source foundations to keep tight control over their technology stack and margins.
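
Much of this "painless switching" comes down to OpenAI-compatible APIs, which many open-model hosting platforms expose. The sketch below illustrates the pattern only; the base URLs, model names and API-key environment variable are placeholders, not the actual values of SiliconFlow or any specific provider.

```python
# Provider-agnostic chat calls over an OpenAI-compatible endpoint.
# The base URLs, model names and LLM_API_KEY variable are illustrative placeholders.
import os
from openai import OpenAI

def ask(base_url: str, model: str, prompt: str) -> str:
    """Send one chat prompt to an OpenAI-compatible endpoint and return the reply text."""
    client = OpenAI(base_url=base_url, api_key=os.environ["LLM_API_KEY"])
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# Switching providers or models is a one-line change per call site:
print(ask("https://open-model-host.example/v1", "qwen2.5-72b-instruct", "Summarize our Q3 results."))
print(ask("https://another-host.example/v1", "deepseek-chat", "Summarize our Q3 results."))
```

Because every backend speaks the same request format, the switching cost the executives describe is mostly a matter of changing a URL and a model string rather than rewriting application code.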

Why it matters:

  • Adoption patterns show that in real markets, “control” often beats “raw performance,” especially in regulated sectors like finance and government.
  • This dynamic could limit the global reach of U.S. proprietary models if they remain expensive and tightly locked down.

Chinese Open-Source LLM Ecosystem in 2025

By late 2025, China’s open-source LLM landscape is both broad and deep. Key pillars include:

  • Alibaba Qwen (2.5, 3, Coder variants):

    • Model sizes from sub-billion to tens of billions of parameters, targeting general chat, coding, tool use and multilingual tasks.
    • Hundreds of millions of downloads and more than 100,000 derivative models reported on platforms like Hugging Face, making Qwen one of the most widely adopted open-source LLM families.
  • DeepSeek:

    • Emphasizes “thinking modes” and careful reasoning, with open models that compete on complex math and coding tasks.
  • GLM, Yi, Kimi, Wu Dao 3.0:

    • Provide bilingual or multilingual support with numerous lightweight variants suitable for on-device or edge deployments.

Many of these model families ship under permissive licenses that allow commercial use without onerous royalties, which is crucial for startups and SMEs in emerging markets.
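
As a concrete illustration of what self-hosting an open model looks like, here is a minimal sketch of pulling an open Qwen checkpoint for local inference with the Hugging Face transformers library. The Qwen/Qwen2.5-7B-Instruct model ID is publicly listed on Hugging Face; the hardware settings are assumptions you would tune for your own GPU, and smaller or quantized variants exist for edge deployments.

```python
# Minimal local-inference sketch with Hugging Face transformers.
# Assumes a GPU with enough memory for a 7B model (and the accelerate package for device_map).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-7B-Instruct"  # openly published checkpoint on Hugging Face
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Explain data sovereignty in two sentences."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Because the weights sit on local hardware, the same pattern works inside a national cloud or an air-gapped environment, which is exactly the sovereignty argument driving adoption.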

Why it matters:

  • A rich catalog of open models lowers the barrier for countries and companies to build their own AI stacks rather than importing fully managed solutions.
  • The more derivative models and fine-tuned variants exist, the harder it becomes for any single proprietary model to dominate globally.

Gemini 3: The Proprietary Benchmark

Google’s Gemini 3 is positioned as its most intelligent AI model, featuring state-of-the-art reasoning, multimodal understanding, and strong support for agentic workflows across text, images, video, audio and code. Google is integrating Gemini 3 deeply into Search (AI Mode, AI Overviews), the Gemini app, Vertex AI and its new Antigravity development platform, so developers can build end-to-end agents that plan, code and validate applications through browser-based computer use.

Gemini 3 is also framed as Google’s most secure model so far, with extensive safety evaluations to reduce prompt injection, harmful content and misuse for cyberattacks, while an enhanced Deep Think mode is being rolled out cautiously to safety testers and premium subscribers. For enterprises prioritizing robust governance, compliance and integrated tooling, such a tightly managed proprietary model remains extremely attractive.
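
In practice, that tightly managed experience shows up in how the model is consumed: through a vendor SDK and API keys rather than downloadable weights. The snippet below is a hedged sketch using Google's google-genai Python SDK; the Gemini 3 model identifier shown is a placeholder, since exact model names and availability vary by tier and region.

```python
# Sketch of calling a managed Gemini model through the google-genai SDK.
# The model name below is a placeholder; check Google's model list for the
# identifier available to your account, tier and region.
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")  # Vertex AI credentials can be configured instead

response = client.models.generate_content(
    model="gemini-3-pro-preview",  # placeholder identifier
    contents="Outline an agent that plans, codes and validates a small web app.",
)
print(response.text)
```

The trade-off is visible in the code itself: no weights to manage and first-class vendor tooling, but inference runs in Google's cloud under usage-based pricing.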

Why it matters:

  • Gemini 3 raises the technical bar, but its proprietary nature and pricing may make it less accessible for organizations that prioritize sovereignty and cost control.
  • The contrast with Chinese open-source stacks illustrates a trade-off between vertically integrated, high-performance platforms and more modular, remixable ecosystems.

Comparison: Gemini 3 vs Chinese Open-Source Stacks

| Aspect | Gemini 3 (Google) | Chinese Open-Source LLMs (Qwen, DeepSeek, GLM, etc.) |
| --- | --- | --- |
| Model type | Proprietary frontier LLM | Open-source families with code and weights released |
| Performance focus | State-of-the-art reasoning, multimodal and agentic workflows | "Good enough" performance across many sizes and domains |
| Cost & control | Usage-based API pricing, limited infra control | Can self-host or use low-cost platforms like SiliconFlow |
| Ecosystem | Tight integration with Google Search, Workspace, Vertex AI | Large number of forks and derivatives on Hugging Face and local clouds |
| Data sovereignty | Data largely processed in the vendor's cloud | Can keep data on local infra / national clouds |
| Multilingual / local | Strong, but tuned primarily for global mainstream markets | Often optimized for Chinese and Asian languages and dialects |

Why it matters:

  • For many global developers, the practical question is less “which is best overall?” and more “which combination fits budget, data rules and product roadmap?”
  • Hybrid stacks—Gemini 3 for some tasks, Chinese open-source for others—are increasingly realistic and may become the norm in multi-polar AI ecosystems.

Infrastructure Build-Out and “Bridge Powers”

The Malaysian state of Johor plans around 5.8 GW of data center projects over the coming years, almost matching its current electricity generation, signaling an ambitious push to become a regional AI and data hub for Singapore and wider Southeast Asia. At the same time, concerns are rising over electricity bills and water consumption, leading officials to pause new water-cooled facilities until at least 2027.

Parallel to this, a coalition of experts from Mila, Oxford Martin and other institutions has called on middle-income “bridge powers” to cooperate on AI infrastructure, models and capacity, so they can remain independent from both U.S. and Chinese tech spheres. Many governments in Southeast Asia, MENA and Latin America are exploring a delicate balance: use technology from both sides without becoming “servants” to either.

Why it matters:

  • Infrastructure and geopolitics strongly shape which models win in each region; cheap, self-hostable open-source models align naturally with countries building their own AI data centers.
  • Policy choices today—around energy, water and data localization—will constrain or enable future AI strategies for decades.

Practical Playbook for Builders

For startups and enterprises deciding between Gemini 3 and Chinese open-source stacks, several pragmatic patterns are emerging:

  • Use open-source for core differentiation:

    • For mission-critical features that define your product’s edge, favor open-source models you can self-host and fine-tune, preserving IP and margin.
  • Use proprietary APIs for non-core or premium features:

    • Leverage Gemini 3 or similar frontier models for advanced reasoning, multimodal content or premium tiers where higher unit cost is acceptable.
  • Design for multi-model routing:

    • Implement infrastructure so that different workloads are automatically routed to the most cost-effective or capable model (e.g., Qwen for routine tasks, DeepSeek for heavier reasoning, Gemini 3 for complex multimodal queries), as sketched in the router example after this list.

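A minimal sketch of such a router follows. The task categories, model names and dispatch stubs are illustrative assumptions, not a prescribed architecture; in production the routing signal might come from a classifier, request metadata or a cost budget.

```python
# Illustrative multi-model router: pick a backend per request type.
# Model names and the dispatch table are assumptions for this sketch,
# not recommendations for specific model versions.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Route:
    model: str
    call: Callable[[str, str], str]  # (model, prompt) -> completion text

def call_self_hosted(model: str, prompt: str) -> str:
    # Stub: in production this would hit a self-hosted endpoint serving open weights.
    return f"[self-hosted {model}] {prompt[:40]}..."

def call_frontier_api(model: str, prompt: str) -> str:
    # Stub: in production this would wrap a managed frontier-model SDK.
    return f"[managed {model}] {prompt[:40]}..."

ROUTES: dict[str, Route] = {
    "routine":    Route("qwen2.5-7b-instruct", call_self_hosted),
    "reasoning":  Route("deepseek-r1",         call_self_hosted),
    "multimodal": Route("gemini-3-pro",        call_frontier_api),
}

def route(task_type: str, prompt: str) -> str:
    r = ROUTES.get(task_type, ROUTES["routine"])  # default to the cheapest path
    return r.call(r.model, prompt)

print(route("routine", "Classify this support ticket."))
print(route("multimodal", "Describe the attached product photo."))
```

In a real deployment the stub functions would wrap actual clients, for example an OpenAI-compatible client for the self-hosted pool and the vendor SDK for the frontier tier, so that adding or swapping a backend only touches the routing table.
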
Why it matters:

  • This “best of both worlds” approach maximizes flexibility and minimizes vendor lock-in while still exploiting frontier capabilities where they add clear business value.
  • Teams that architect for multi-model, multi-cloud from day one will adapt faster as new models and providers emerge.

Conclusion

  • Chinese open-source AI models are quietly becoming the default choice across many non-Western markets by aligning with local constraints around cost, data sovereignty and language.
  • At the same time, models like Gemini 3 continue to push the frontier on reasoning and multimodal intelligence, ensuring that proprietary platforms retain a strong role at the high end of the market.
  • The most resilient strategies for companies and countries alike will likely combine both worlds, using open-source for control and reach while selectively tapping proprietary systems for cutting-edge performance.

Summary

  • Chinese open-source AI models (Qwen, DeepSeek, GLM) are gaining rapid adoption across Asia due to cost efficiency and data sovereignty benefits.
  • Google’s Gemini 3 leads in reasoning and multimodal capabilities but faces adoption barriers in cost-sensitive markets.
  • “Bridge powers” in Asia are exploring hybrid approaches combining both ecosystems.

#chinaAI #opensourceLLM #Qwen #DeepSeek #Gemini3 #AsiaAI #datasovereignty #AIinfrastructure #bridgepowers #AIstrategy
