Google Pulls Some AI Health Summaries After Probe Finds Risk of Harmful Misinformation

The investigation found that some AI Overviews delivered incorrect or overly simplified health data.

Google has removed several AI-generated health summaries from its search results after an investigation found that inaccurate information could put users at risk of serious harm, according to The Guardian.

The move follows concerns that Google’s AI Overviews, which appear prominently at the top of search results, were providing misleading medical guidance despite the company’s claims that the feature is “helpful” and “reliable”.

The probe found that some AI Overviews delivered inaccurate or oversimplified health information. In one case described by experts as “dangerous” and “alarming”, Google presented misleading information about liver function blood tests.

Searches such as “what is the normal range for liver blood tests” returned extensive numerical data without sufficient context, failing to account for differences in age, sex, ethnicity, or nationality.

Experts warned this could lead people with serious liver disease to believe their results were normal and delay seeking medical care.

Following the report, Google removed AI Overviews for specific queries related to liver function tests. “We do not comment on individual removals within Search. In cases where AI Overviews miss some context, we work to make broad improvements, and we also take action under our policies where appropriate,” a Google spokesperson said.

Health groups welcomed the change but cautioned that risks remain. Vanessa Hebditch of the British Liver Trust told The Guardian, “This is excellent news, and we’re pleased to see the removal of the Google AI Overviews in these instances,” but warned that similar queries could still trigger misleading summaries.

Google said it is reviewing further examples and reiterated that AI Overviews appear only when the system has high confidence in the quality of the response, though critics argue that stronger safeguards are needed for health-related searches.