ChatGPT and other large language models could pose a medical malpractice risk if physicians rely on incomplete information. “The risk arises when an LLM recommends something nonstandard and you choose to follow it,” says Nigam Shah, MBBS, PhD, chief data scientist for Stanford Health Care in Stanford, California.