
Speed, safety, and secrecy: Ex-OpenAI engineer reveals company's work culture
What's the story
Calvin French-Owen, a former engineer who worked on OpenAI's coding agent Codex, has shared his experience of working at the company for a year. He left the firm to return to start-up founding. French-Owen had co-founded the customer data platform Segment, which Twilio acquired in 2020 for $3.2 billion. He joined OpenAI in May 2024 and left the ChatGPT maker in June 2025.
Expansion woes
OpenAI's workforce grew from 1,000 to 3,000 during his tenure
French-Owen revealed that OpenAI's workforce grew from 1,000 to 3,000 in the year he was there. He said the rapid expansion caused chaos across communication, reporting structures, product shipping, people management, organization, and hiring processes. Despite these challenges, he noted that employees still have the freedom to act on their ideas with little to no red tape.
Skill disparity
Central code repository 'a bit of a dumping ground'
French-Owen highlighted a wide range of coding skills among employees, from seasoned Google engineers to fresh PhDs. He described the central code repository as "a bit of a dumping ground," where code frequently breaks or takes excessive time to run. However, he said top engineering managers are aware of these issues and are working on improvements.
Company culture
Culture of secrecy prevents public leaks
French-Owen compared OpenAI to early Meta, saying the company has not yet realized how big it has become and runs everything on Slack. He recalled how his senior team built and launched Codex in just seven weeks with almost no sleep. Despite being under heavy public scrutiny, OpenAI maintains a culture of secrecy to prevent leaks. The firm also keeps an eye on X for viral posts that may require a response.
Safety concerns
OpenAI is more focused on practical safety issues
French-Owen addressed the misconception that OpenAI doesn't prioritize safety as much as it should. He said the company focuses on practical safety issues such as hate speech, abuse, political bias manipulation, bio-weapon crafting, self-harm, and prompt injection. He also noted that researchers are studying the long-term impacts of their work, and that the company is mindful that hundreds of millions of people already use its LLMs for a wide range of purposes.