It has been a year and a half since the first Large Language Model (LLM) AI app was introduced to the public in November of 2022, with the release of ChatGPT, developed by OpenAI. Google, Elon Musk, and many others have since developed or are in the process of developing their own versions of these AI programs, but after 18 months, the #1 problem for these LLM AI programs remains the fact that they still lie and make things up when asked questions too difficult for them to answer. This is called "hallucination" in the tech world, and while there was great hope when OpenAI introduced the first version of this class of AI back in 2022 that it would soon render accurate results, that accuracy remains elusive, as these programs continue to "hallucinate." Many are beginning to understand this limitation in LLM AI and are realizing that there are no real solutions to this problem, because it is an inherent limitation of artificial computer-based "intelligence."
Those of us who have been educated in Western Culture have been taught to believe that the center of a person's consciousness and thoughts is the brain. Just as a computer contains a microprocessor that allows it to accomplish all of its various tasks, the idea that a human being works like a computer, with a brain that functions like a microprocessor, is something that almost all people raised in Western Culture assume is a fact. But it is not a fact. It is a belief system. And as a belief system, would it surprise you to learn that it is a fairly modern one, and that historically it has not been the brain that has been considered the driving force that allows humans to think and reason, but rather the heart that has been credited with human consciousness, including our mind, speech, and thoughts? When we look at the ancient literature written in non-Western cultures, the idea that the brain controls human thought and consciousness is nowhere to be found.