
OpenAI activates military-grade security to shield AI advancements
What's the story
OpenAI has overhauled its security measures to guard against corporate espionage, the Financial Times reported. The move follows Chinese start-up DeepSeek's launch of a competing model in January, after which OpenAI accused DeepSeek of copying its models through "distillation" techniques. The company has since accelerated an existing security clampdown, introducing new policies and controls to protect sensitive information and technology.
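Distillation, in general terms, trains a smaller "student" model to imitate a larger "teacher" model's output probabilities rather than learning from raw data alone. A minimal sketch of the standard technique follows; it is illustrative only, not OpenAI's or DeepSeek's actual pipeline, and all logits shown are made-up numbers:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    # A temperature above 1 softens the distribution, exposing the
    # teacher's relative confidence across non-top answers.
    z = logits / temperature
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # KL divergence between the softened teacher and student
    # distributions; the student is trained to minimize this.
    p = softmax(teacher_logits, temperature)  # teacher's soft targets
    q = softmax(student_logits, temperature)
    return float(np.sum(p * (np.log(p) - np.log(q))))

teacher = np.array([4.0, 1.0, 0.5])  # hypothetical teacher logits
student = np.array([3.5, 1.2, 0.3])  # hypothetical student logits
loss = distillation_loss(student, teacher)
```

A student whose outputs exactly match the teacher's yields a loss of zero, which is why repeatedly querying a rival's model can, in principle, transfer much of its capability.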
Enhanced protocols
'Information tenting' policies restrict employee access to sensitive algorithms
OpenAI's revamped security includes "information tenting" policies that restrict employee access to sensitive algorithms and new products. For instance, during the development of OpenAI's o1 model, only vetted team members who had been briefed on the project could discuss it in shared office spaces. The company has also begun isolating proprietary technology on offline computer systems and has introduced biometric access controls, such as fingerprint scans, for office areas.
Security upgrades
Changes part of wider concern about foreign adversaries
Alongside these measures, OpenAI has implemented a "deny-by-default" internet policy that blocks outbound connections unless they are explicitly approved. The company has also stepped up physical security at its data centers and expanded its cybersecurity team. The changes are said to reflect wider concern about foreign adversaries attempting to steal OpenAI's intellectual property.
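A "deny-by-default" egress policy inverts the usual blocklist approach: every outbound connection is refused unless it matches an explicit allowlist. A minimal sketch of the idea, with a hypothetical allowlist (the FT report does not describe OpenAI's implementation):

```python
# Hypothetical set of approved external endpoints.
ALLOWED_HOSTS = {"pypi.org", "github.com"}

def egress_allowed(host: str) -> bool:
    # Deny-by-default: a connection passes only if the host appears on
    # the explicit allowlist; anything unlisted is rejected.
    return host in ALLOWED_HOSTS

print(egress_allowed("pypi.org"))      # approved endpoint -> True
print(egress_allowed("example.com"))   # not listed, so denied -> False
```

The design choice matters because a blocklist fails open (a new exfiltration endpoint is allowed until someone blocks it), whereas an allowlist fails closed.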