Artificial intelligence powers algorithms used in medicine, banking and other major industries. But as it has proliferated, studies have shown that AI can be biased against people of color.
A new bill would require AI developers to evaluate privacy risks and assess the potential for discriminatory decisions, and the state’s Department of Technology would need to approve the software before it could be used in the public sector.
Programmers, Lawmakers Want AI to Eliminate Bias, Not Promote It
Community activist Ashton P. Woods checks his phone in a Houston neighborhood. A Dallas-area entrepreneur has developed a housing assistance app that he hopes will use artificial intelligence to connect renters and help them avoid discriminatory practices. Artificial intelligence can provide racially biased results, but some states are considering legislation to address the problem.
David J. Phillip / The Associated Press
DALLAS – When software engineer Bejoy Narayana was developing Bob.ai, an application to help automate Dallas-Fort Worth’s Section 8 voucher program, he stopped and asked himself, “Could this system be used to help some people more than others?”