We need to examine AI functions closely: trace their provenance, observe their state and status, watch their behavior, and scrutinize the validity of the decisions they make.
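As a rough illustration of that idea, the sketch below wraps an AI-backed function so each call records provenance, inputs and outputs, latency, and an optional validity check. Everything here is hypothetical: `observed`, `risk_score`, and the provenance fields are illustrative names, not any real library's API, and the scorer is a stand-in for an actual model call.

```python
import functools
import json
import time

def observed(provenance, validate=None):
    """Hypothetical wrapper: log provenance, I/O, latency, and a
    validity verdict for every call to an AI-backed function."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.time()
            result = fn(*args, **kwargs)
            record = {
                "provenance": provenance,   # e.g. model name and version
                "function": fn.__name__,
                "inputs": repr((args, kwargs)),
                "output": repr(result),
                "latency_s": round(time.time() - start, 4),
                # scrutinize the decision: does it pass a sanity check?
                "valid": validate(result) if validate else None,
            }
            print(json.dumps(record))       # in practice: ship to an audit log
            return result
        return wrapper
    return decorator

# Illustrative AI-backed scorer; the body stands in for a real model call.
@observed(provenance={"model": "example-model", "version": "1.0"},
          validate=lambda score: 0.0 <= score <= 1.0)
def risk_score(text):
    return min(len(text) / 100.0, 1.0)
```

The point is the shape, not the specifics: every decision leaves behind a record naming the model that produced it and whether it passed a basic validity check.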
Chinese regulators approved a total of 14 large language models (LLMs) for public use last week, the Chinese state-backed Securities Times reported. It marks the fourth batch of approvals China has granted, with recipients including Xiaomi Corp, 4Paradigm and 01.AI.
Despite its spectacular potential, generative AI isn’t without its shortcomings. In fact, you could argue that its arrival has rather made a mess of things for now.
The widespread use of generative AI raises concerns about bias, particularly because models are trained on human-generated content. Biases inherent in the data used to train large language models (LLMs) can influence their outputs, shaping interpretations and opinions. Issues such as access restrictions, delayed deployment in specific languages, and intellectual property disputes also contribute to bias. Responsible AI requires transparency in how products and processes are developed. Lawmakers can play a role in promoting that transparency, but global cooperation and harmonized rules are crucial for addressing these challenges in the AI landscape.