OpenAI tightens ChatGPT usage policy, bans medical and legal advice
GH News Media

OpenAI has updated its ChatGPT usage policy, explicitly prohibiting the use of its AI models to provide medical, legal, or other advice that requires a professional license.
The revision, announced on October 29, comes amid increasing concern over the growing reliance on AI chatbots for expert guidance — particularly in healthcare. While ChatGPT’s accessibility and speed have made it a popular tool for quick answers, experts warn that depending on AI for professional consultations poses serious ethical and legal risks.
Under the updated Usage Policies, ChatGPT is now barred from being used for:
Providing consultations that require certification, such as medical or legal advice;
Conducting facial or other personal recognition without consent;
Making critical decisions in areas like finance, education, housing, migration, or employment without human oversight;
Engaging in academic dishonesty or manipulating evaluation outcomes.
OpenAI stated that the update aims to “enhance user safety and prevent potential harm” by ensuring the technology is used within its intended scope. Analysts interpret the move as a proactive effort to limit legal exposure in a largely unregulated space where AI-generated professional advice could lead to liability.
The revision also addresses user attempts to bypass restrictions through “hypothetical” prompts — a tactic the company’s improved safety filters now detect and block more effectively.
In a related move, OpenAI announced that its latest model includes enhanced safeguards to better support users in moments of emotional or mental distress. These improvements focus on detecting and responding appropriately to issues involving psychosis, mania, self-harm, and emotional dependence on AI systems.
According to the company, future safety evaluations will include “emotional reliance and non-suicidal mental health emergencies” as part of its standard testing framework, reinforcing OpenAI’s broader commitment to responsible AI development.



