LOADING...
New ChatGPT features help you avoid data leaks
Lockdown mode restricts ChatGPT's external interactions

New ChatGPT features help you avoid data leaks

Feb 17, 2026
01:33 pm

What's the story

OpenAI has introduced two new security features, Lockdown Mode and Elevated Risk labels, in its AI chatbot, ChatGPT. The company says these tools are meant to warn users about potentially risky features and limit external connections. This way, the chances of data leaks from prompt injection attacks can be minimized.

Security risk

What is prompt injection attack?

Prompt injection is a technique where hackers embed malicious instructions in web pages or files to manipulate an AI system into revealing confidential information or performing unintended actions. As millions use AI chatbots like ChatGPT for tasks such as document reading, web browsing, and app connections, the threat from these tricks becomes more pronounced. OpenAI's new tools aim to shield users from these risks.

Feature details

Lockdown mode restricts ChatGPT's external interactions

Lockdown Mode is an optional feature that tightly controls how ChatGPT interacts with external systems. When activated, it can restrict or disable certain tools and connections like live web browsing or integrations sending/receiving data from third-party services. The idea is to reduce these outside interactions and minimize the "attack surface" hackers could exploit.

Advertisement

User focus

Who should use Lockdown Mode?

OpenAI says Lockdown Mode isn't needed by most everyday users. It's mainly targeted at those who deal with highly sensitive information or think they might be at an elevated risk, such as journalists, executives, researchers, or security professionals. The company hopes this feature will give these high-risk users more control over their data and privacy while using ChatGPT.

Advertisement

Label introduction

Elevated risk labels for added transparency

Along with Lockdown Mode, OpenAI is also introducing Elevated Risk labels in ChatGPT. These labels will appear next to tools or features that expose more outside content or systems. For instance, if a feature connects to external content or provides broader system access, ChatGPT will show a clear warning about the potential risks. This way, users can decide whether to proceed with caution or keep things private.

Advertisement