ChatGPT health gets over half emergency cases wrong: Study
A new study found that ChatGPT Health, OpenAI's medical chatbot launched in January, got more than half of emergency cases wrong, often directing people with serious conditions such as diabetic ketoacidosis or respiratory failure to seek non-urgent care.
While it handled clear-cut emergencies such as strokes reasonably well, it struggled with more complex cases.
ChatGPT is already used by more than 40 million people
OpenAI's ChatGPT is already used by more than 40 million people worldwide, especially for late-night health questions.
It's faced criticism and even lawsuits over how it handles mental health crises.
Even so, the tool continues to spread through healthcare, aided by favorable regulations.
OpenAI says it is working on updates and improvements; for now, anyone who wants access will have to join a waitlist while the company tries to make the tool safer.