OpenAI Tightens Security Amid Espionage Concerns Over Chinese AI Firm DeepSeek

The San Francisco-based startup is now enforcing stricter access controls, vetting processes, and physical security.

OpenAI has significantly tightened its security protocols to guard against corporate espionage, following concerns that Chinese AI firm DeepSeek may have copied its models using distillation techniques, the Financial Times reported.

The AI company behind ChatGPT is now enforcing stricter access controls, vetting processes, and physical security. Measures include fingerprint-only room access, restricted algorithm visibility through “information tenting,” and a deny-by-default egress policy blocking internet access unless specifically approved.
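A deny-by-default egress policy inverts the usual firewall posture: all outbound traffic is blocked unless a rule explicitly allows it. As an illustrative sketch only (OpenAI has not published its configuration, and the destination address here is a documentation placeholder), such a policy might look like this with Linux iptables:

```shell
# Deny-by-default egress: drop all outbound traffic unless explicitly allowed.
iptables -P OUTPUT DROP

# Permit loopback traffic and replies to already-approved connections.
iptables -A OUTPUT -o lo -j ACCEPT
iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# Allow HTTPS only to a specifically approved host (placeholder address).
iptables -A OUTPUT -d 203.0.113.10 -p tcp --dport 443 -j ACCEPT
```

Anything not matched by an explicit ACCEPT rule, including general internet access, is silently dropped by the default policy.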

According to FT, these changes follow reports that Microsoft researchers suspect data linked to OpenAI may have been exfiltrated via its API by individuals connected to DeepSeek.

OpenAI confirmed it has seen “some evidence of distillation” of its models. It also now mandates government ID verification for developers seeking advanced model access.
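Distillation, in its classic form, trains a smaller "student" model to match the softened output distribution of a larger "teacher" model. The sketch below illustrates the core objective only; it is a minimal textbook example, not a description of how OpenAI's models were allegedly distilled. The function names and temperature value are illustrative choices.

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax: higher temperature yields a
    # softer (more uniform) probability distribution.
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # Cross-entropy of the student's soft predictions against the
    # teacher's soft targets -- the core distillation objective.
    p = softmax(teacher_logits, temperature)  # teacher's soft targets
    q = softmax(student_logits, temperature)  # student's predictions
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))
```

The loss is minimized when the student reproduces the teacher's distribution exactly, which is why access to a model's detailed outputs (e.g. via an API) is enough raw material to train an imitator.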

DeepSeek, backed by Chinese hedge fund High-Flyer, gained attention with its open-source R1 reasoning model, drawing comparisons to OpenAI’s o1 model while costing far less to train.

Last month, DeepSeek released an updated version of its reasoning model R1, which demonstrated strong performance on math and coding benchmarks.

While the company has not disclosed its training data sources, some researchers suspect the model may have been trained using outputs from Google’s Gemini AI.