ChatGPT and other large language models could pose a medical malpractice risk if physicians rely on incomplete information. “The risk arises when an LLM recommends something nonstandard and you choose to follow it,” says Nigam Shah, MBBS, PhD, chief data scientist for Stanford Health Care in Stanford, California.
As more people turn to chat-based AIs for medical advice, it remains to be seen how these tools stack up against, or could complement, human doctors.
What is race norming? The league announced in June that it would stop using the practice, following a public outcry that race norming discriminated against Black players suffering from dementia, costing them hundreds of thousands of dollars.