• Cyber Syrup
  • Posts
  • Taiwan Bans DeepSeek AI Over Security Concerns

Taiwan Bans DeepSeek AI Over Security Concerns

Taiwan has become the latest country to prohibit government agencies from using Chinese startup DeepSeek’s artificial intelligence

CYBER SYRUP
Delivering the sweetest insights on cybersecurity.

Taiwan Bans DeepSeek AI Over Security Concerns

National Security Risks Prompt Ban

Taiwan has become the latest country to prohibit government agencies from using Chinese startup DeepSeek’s artificial intelligence (AI) platform, citing concerns over security and data privacy.

“Government agencies and critical infrastructure should not use DeepSeek, because it endangers national information security,” Taiwan’s Ministry of Digital Affairs said in a statement, as reported by Radio Free Asia.

The agency specifically flagged risks related to cross-border data transmission, potential information leakage, and overall cybersecurity concerns associated with using a Chinese-developed AI system.

Deepening Global Scrutiny of DeepSeek

DeepSeek has faced increasing scrutiny from governments worldwide due to its Chinese origins and uncertain data handling policies. Last week, Italy blocked access to the platform after its data protection watchdog determined that DeepSeek had not provided sufficient information on its data collection, storage, and processing practices.

Additionally, several corporations have restricted access to the AI platform, citing similar privacy and security concerns.

Despite these challenges, DeepSeek has gained substantial traction for offering open-source AI models that rival industry leaders while being significantly more cost-effective. However, the platform’s large language models (LLMs) have also faced criticism for vulnerabilities to jailbreak attacks—techniques that bypass ethical safeguards to generate prohibited content—and for censoring responses on topics deemed sensitive by the Chinese government.

DeepSeek Faces Cyberattacks and Fraudulent Impersonations

DeepSeek’s rapid rise in popularity has also made it a target for cyberattacks. Security firm NSFOCUS reported that between January 25 and 27, 2025, DeepSeek’s API interface was hit with three waves of distributed denial-of-service (DDoS) attacks, averaging 35 minutes in duration.

“The attack methods included NTP reflection attacks and memcached reflection attacks,” NSFOCUS noted.

Additionally, DeepSeek’s chatbot system faced further DDoS attacks on January 20 and 25, immediately following the release of its DeepSeek-R1 reasoning model. The attacks were primarily traced back to the United States, the United Kingdom, and Australia, suggesting an organized effort to disrupt the AI service.

Adding to its security woes, cybercriminals have exploited DeepSeek’s popularity to distribute malicious Python packages designed to steal sensitive information. Security researchers identified two fraudulent PyPI packagesdeepseeek and deepseekai—masquerading as legitimate API clients for DeepSeek.

Before being removed on January 29, 2025, these packages had been downloaded over 220 times, with the majority of victims located in the United States, China, Russia, Hong Kong, and Germany.

“These packages were designed to collect user data and steal environment variables,” cybersecurity firm Positive Technologies stated. “The attackers leveraged Pipedream, an integration platform for developers, as a command-and-control (C2) server to receive the stolen information.”

Regulatory Crackdowns on AI Systems

DeepSeek’s challenges come amid increasing global regulation of AI technologies. On February 2, 2025, the European Union's Artificial Intelligence Act went into effect, imposing strict legal requirements on AI applications deemed high-risk and banning AI systems with unacceptable security risks.

Similarly, the United Kingdom recently introduced an AI Code of Practice, which mandates strong security measures against data poisoning, model obfuscation, and indirect prompt injection to prevent AI systems from being exploited by cybercriminals.

AI's Growing Cybersecurity Threats

The security risks associated with AI extend beyond DeepSeek. Meta recently unveiled its Frontier AI Framework, which proposes halting the development of AI models that pose critical cybersecurity risks.

Some high-risk AI scenarios flagged in Meta’s framework include:

  • AI models capable of automated corporate network compromise despite security best practices.

  • AI capable of discovering and exploiting critical zero-day vulnerabilities before defenders can respond.

  • AI-driven end-to-end social engineering scams, such as pig butchering schemes, that could cause large-scale financial damage.

Meanwhile, Google’s Threat Intelligence Group (GTIG) disclosed that over 57 state-sponsored threat actors from China, Iran, North Korea, and Russia have attempted to use AI tools like Gemini to enhance their cyber operations.

Threat actors have also been observed jailbreaking AI models—a form of adversarial attack that bypasses safety restrictions to generate malicious content, such as malware or step-by-step bomb-making instructions.

The Arms Race to Secure AI

To counter the risks posed by AI jailbreak techniques, Anthropic has introduced Constitutional Classifiers—a new method aimed at preventing universal jailbreak exploits.

“These classifiers are trained on synthetically generated data and can filter the majority of jailbreak attempts with minimal over-refusals and without incurring a large computational cost,” Anthropic stated.

As AI continues to evolve at an unprecedented pace, governments and organizations are racing to implement security measures to safeguard against emerging threats. The scrutiny surrounding DeepSeek highlights the growing tensions between innovation, security, and regulatory oversight in the AI landscape.