After ChatGPT Health, Anthropic Adds Healthcare Features to Claude
The capabilities are now available to U.S. subscribers on Claude’s Pro and Max plans.
Artificial intelligence company Anthropic has introduced new healthcare and life sciences features to its flagship chatbot, Claude, allowing users to share medical records and fitness data to better understand their health.
With the update, users can connect official medical records and data from fitness platforms such as Apple Health on iOS, enabling more personalised health-related conversations. The move comes shortly after rival OpenAI launched ChatGPT Health, highlighting growing competition among AI firms to tap healthcare as a major growth area.
“When connected, Claude can summarise users’ medical history, explain test results in plain language, detect patterns across fitness and health metrics, and prepare questions for appointments,” Anthropic said in a blog post. “The aim is to make patients’ conversations with doctors more productive, and to help users stay well-informed about their health.”
Anthropic’s head of life sciences, Eric Kauderer-Abrams, said the tools are designed to help patients better understand complex medical information, while stressing they are not intended for diagnosis or treatment decisions.
The startup emphasised privacy safeguards, noting that medical data shared with Claude will not be stored or used to train future models and that users can disconnect their records at any time.
Beyond patients, Anthropic is expanding its Claude for Life Sciences offering for healthcare providers, adding what it described as a “HIPAA-ready infrastructure” to support tasks such as prior authorisation requests and insurance appeals.
Despite the promise, the launch is likely to draw scrutiny over the role of AI in healthcare. Anthropic cautioned that Claude can make mistakes and should not replace professional medical advice.
A healthcare industry executive has warned that handing over sensitive patient data to AI platforms, even with strong safeguards, raises unresolved concerns around long-term data control, consent, and accountability if systems fail or are misused.
He told The Left Shift, "In the case of medical, it's okay to consult, but that should be it; it's like asking your friend about it over a cup of coffee, not more."