OpenAI, the company behind ChatGPT, has introduced a new ‘Advanced Account Security’ feature aimed at users who face a higher risk of digital attacks, including journalists, elected officials, political dissidents, researchers, and other security-conscious individuals.
Announced on April 30, the new feature allows users to secure their ChatGPT accounts using passkeys and physical hardware security keys, making it significantly harder for hackers to gain access through phishing attacks or stolen passwords.
The company said the feature “brings together a set of heightened security measures that help safeguard against account takeover while making those protections easier to activate in one place.”
Once enabled, the protection also extends to Codex, OpenAI’s AI-powered coding assistant and software engineering tool that helps developers write, debug, and manage code.
Why OpenAI is introducing the feature
The move comes as AI tools increasingly become embedded in people’s personal and professional lives.
OpenAI said in February that more than 900 million people now visit ChatGPT weekly, underlining how quickly the chatbot has evolved from an experimental AI product launched in late 2022 into a major digital platform used for work, education, research, and even emotional support.
Many people — including younger users — are turning to AI chatbots not just for productivity tasks, but also for deeply personal conversations and advice. Reports of users developing emotional or even romantic attachments to AI chatbots have also become more widespread.
In workplaces, AI systems are increasingly being used to analyse highly sensitive information, including financial statements, legal documents, corporate strategy materials, research data, and government records.
Public officials and policymakers are also beginning to rely on AI tools for drafting speeches, press releases, reports, and policy documents. Last week, TechMedia Africa reported that South Africa’s National Artificial Intelligence Policy came under scrutiny after reports suggested AI tools were used during drafting, resulting in fabricated and unverifiable citations appearing in the document.
“People are turning to AI for deeply personal questions and increasingly high-stakes work,” OpenAI said in its statement.
“Over time, a ChatGPT account can hold sensitive personal and professional context, and sit at the center of connected tools and workflows.”
That growing concentration of sensitive information inside AI accounts means they are becoming more attractive targets for hackers, phishing campaigns, and state-backed cyberattacks.
How Advanced Account Security works
According to OpenAI, the new system is designed to strengthen account protection by improving sign-in security, tightening account recovery, reducing exposure from compromised devices, and giving users more visibility into account activity.
The feature can be activated through the Security section of a user’s ChatGPT account on the web.
1. Passwords are replaced with stronger login methods
Under the new system, users can sign in using passkeys or physical hardware security keys instead of traditional passwords.
This means attackers can no longer simply steal or guess a password to access an account.
Hardware security keys — such as USB or NFC-enabled devices — work by physically verifying that the real account owner is present during login. Because they rely on cryptographic authentication, they are considered one of the strongest defences against phishing attacks.
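The phishing resistance described above comes from challenge-response authentication: the key signs a server-issued challenge bound to the website's origin, so a signature captured on a look-alike phishing domain is useless on the real site. A minimal sketch of that idea, with HMAC standing in for the asymmetric public-key signatures real FIDO keys use (all names and the origin-binding layout here are illustrative, not OpenAI's implementation):

```python
import hashlib
import hmac
import secrets

# Illustrative stand-in: a real security key holds an asymmetric private
# key that never leaves the hardware; HMAC keeps this sketch self-contained.
DEVICE_SECRET = secrets.token_bytes(32)

def key_sign(challenge: bytes, origin: str) -> bytes:
    """The security key signs the challenge BOUND to the origin it sees."""
    return hmac.new(DEVICE_SECRET, challenge + origin.encode(), hashlib.sha256).digest()

def server_verify(challenge: bytes, signature: bytes) -> bool:
    """The server only accepts signatures made over ITS own origin."""
    expected = hmac.new(
        DEVICE_SECRET, challenge + b"https://chatgpt.com", hashlib.sha256
    ).digest()
    return hmac.compare_digest(expected, signature)

challenge = secrets.token_bytes(16)

# Legitimate login: the browser reports the real origin, so verification passes.
assert server_verify(challenge, key_sign(challenge, "https://chatgpt.com"))

# Phishing attempt: the key signs the attacker's origin, so the relayed
# signature fails on the real site.
assert not server_verify(challenge, key_sign(challenge, "https://chatgpt-login.example"))
```

Because the signature covers the origin, even a user who is tricked into authenticating on a fake page hands the attacker nothing reusable.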
2. Account recovery becomes stricter
One of the biggest weaknesses in many online accounts is password recovery through email or SMS.
If a hacker compromises a user’s email account or SIM card, they can often reset passwords and take over connected services.
To reduce that risk, OpenAI said Advanced Account Security disables account recovery through email and SMS entirely.
Instead, users must rely on stronger recovery methods such as:
- Backup passkeys
- Physical security keys
- Recovery keys
However, this also comes with a trade-off: users who lose access to all their recovery methods may permanently lose access to their accounts, because OpenAI Support will not be able to bypass the system.
3. Sessions are shortened and monitored
The company said login sessions will now expire sooner, reducing the time an attacker can remain logged into a compromised account.
Users will also receive alerts whenever a new login occurs and will be able to review and manage active sessions across all devices connected to their accounts.
4. Chats are automatically excluded from AI training
OpenAI also said that users enrolled in Advanced Account Security will automatically have their conversations excluded from model training.
Ordinarily, users who do not want their conversations used to improve AI systems must opt out manually. Under the new security feature, that exclusion is applied automatically.
The move appears targeted at professionals and organisations working with highly confidential material.
Hardware keys and the push for stronger security
As part of the rollout, OpenAI has partnered with Yubico to make hardware-based authentication more accessible.
Hardware security keys — such as YubiKeys — are small physical devices that users plug into their computers or connect via NFC to verify their identity. They are widely considered one of the most effective defences against phishing attacks because they cannot be easily duplicated or intercepted.
Through the partnership, users will be able to access discounted bundles of security keys, including options designed for everyday use and backup across multiple devices.
However, OpenAI said users are not limited to these devices: any FIDO-compliant security key, or a software-based passkey, can be used with the system.
OpenAI also signalled that stronger security will soon become mandatory for certain users.
From June 1, 2026, individuals enrolled in its Trusted Access for Cyber programme — which provides access to more powerful AI tools — will be required to enable Advanced Account Security.