Bias in Artificial Intelligence: Can AI be Trusted?

Artificial intelligence is more artificial than intelligent.

In June 2022, Microsoft released the Microsoft Responsible AI Standard, v2 (PDF). Its stated purpose is to “define product development requirements for responsible AI”. Perhaps surprisingly, the document contains only one mention of bias in artificial intelligence (AI): algorithm developers need to be aware of the potential for users to over-rely on AI outputs (known as ‘automation bias’).

In short, Microsoft seems more concerned with bias from users aimed at its products than with bias from within its products adversely affecting users. This is good commercial responsibility (say nothing negative about our products) but poor social responsibility (there are many documented examples of algorithmic bias harming individuals or groups).

Bias is one of three primary concerns about artificial intelligence in business that remain unsolved: hidden bias producing false results; the potential for misuse (by users) and abuse (by attackers); and algorithms returning so many false positives that they are ineffective as part of automation.

Academic concerns

When AI was first introduced into cybersecurity products, it was described as a defensive silver bullet. There is no doubt that it has some value, but there is a growing reaction against faulty algorithms, hidden bias, false positives, abuse of privacy, and the potential for abuse by criminals, law enforcement and intelligence agencies.

According to Gary Marcus, professor of psychology and neural science at New York University (writing in Scientific American, June 6, 2022), the problem lies in the commercialization of a still-developing science:

“The subplot here is that the biggest teams of researchers in AI are no longer to be found in the academy, where peer review used to be coin of the realm, but in corporations. And corporations, unlike universities, have no incentive to play fair.”
