Researchers at University College London, UK, have developed a new tool designed to identify misleading diet and nutrition information online, according to a study published in Scientific Reports. Crucially, the authors suggest that it can help judge just how dangerous that misinformation might be.
Most fact-checking tools currently available simply give a thumbs-up or a thumbs-down for true or false. But the UCL team recognised that much harmful health advice isn’t necessarily false. It might be technically accurate in some parts, but it omits important warnings, distorts the information, or targets particularly vulnerable people. That kind of subtle, misleading content often slips past standard fact-checkers undetected.
Their tool, called Diet-MisRAT, scores content on a sliding risk scale (green, amber, or red) based on three key danger signs: factual inaccuracies, important omissions, and manipulative framing. It also weighs context: who is likely to read the content, and how likely they are to act on it.
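The paper describes the instrument itself; as a way of picturing the idea, the traffic-light logic could be sketched roughly as below. The signal names, weights, and thresholds here are illustrative assumptions for this sketch, not the authors' published scoring rules.

```python
# Toy illustration of a traffic-light misinformation risk score.
# All signals, weights, and cut-offs are hypothetical, chosen only to
# show the shape of the idea: danger signs scaled by audience context.

from dataclasses import dataclass

@dataclass
class ContentSignals:
    factual_inaccuracy: float    # 0.0-1.0, degree of factual error
    important_omission: float    # 0.0-1.0, missing warnings or caveats
    manipulative_framing: float  # 0.0-1.0, distortion or targeting
    audience_reach: float        # 0.0-1.0, chance vulnerable readers see it
    actionability: float         # 0.0-1.0, chance readers act on it

def risk_label(s: ContentSignals) -> str:
    """Take the worst danger sign, then amplify it by context."""
    danger = max(s.factual_inaccuracy, s.important_omission,
                 s.manipulative_framing)
    context = (s.audience_reach + s.actionability) / 2
    score = danger * (0.5 + 0.5 * context)  # context amplifies, never erases
    if score >= 0.6:
        return "red"
    if score >= 0.3:
        return "amber"
    return "green"

# A post that is mostly accurate but omits a serious warning and is
# aimed at a receptive audience still scores as high risk.
post = ContentSignals(0.1, 0.8, 0.3, 0.9, 0.7)
print(risk_label(post))  # -> red
```

Note how taking the maximum over the danger signs mirrors the article's point: content can be "technically accurate in some parts" yet still land in the red zone because of what it omits or how it frames the facts.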
To build it, the team drew on the World Health Organization’s methods for assessing physical health hazards, adapting that framework for the digital world. They then tested and refined the tool across several rounds of checks, including input from 60 experts in dietetics, nutrition, and public health.
The WHO considers health misinformation a major public health threat. Dangerous dietary trends, ranging from extreme fasting to misuse of supplements, can cause serious harm. Supplement misuse alone accounts for around one in five drug-related liver injuries in the United States. Recent examples include a man who developed skin lesions after following a carnivore diet promoted heavily on social media, and another person who was hospitalised after AI-generated advice suggested replacing ordinary salt with sodium bromide, a toxic compound with no role in the human diet.
The researchers hope the tool will help governments, health regulators, and social media platforms prioritise their responses and focus on the most dangerous content, rather than treating all misinformation equally.
They also argue it has a role in AI safety. As chatbots become a go-to source of health advice, the ability to measure how misleading and potentially harmful a response is could help developers build better safeguards into these systems before people get hurt.
Ruani, A., Reiss, M.J. & Kalea, A.Z. Development and validation of a tool for detecting misinformation risk in diet, nutrition, and health content (Diet-MisRAT). Sci Rep 16, 9207 (2026). https://doi.org/10.1038/s41598-026-40534-2