Can Generative AI Be Trusted In The Workplace?

Uncover the transformative synergy of generative AI and intelligent search in the workplace, as discussed by Jeff Evernham, vice president of product strategy at Sinequa. Discover how grounding GenAI in accurate, up-to-date information ensures reliable results and empowers efficient decision-making.

Generative AI (GenAI) is transforming how organizations operate. From producing marketing content and helping developers write code to providing customer service, the range of possibilities for enterprises is remarkable. Its popularity has prompted companies and industries to rethink their business processes and the value of human talent, pushing generative AI to what Gartner calls the Peak of Inflated Expectations on the Hype Cycle. Amid all the attention, there are two questions organizations are now asking about leveraging GenAI: how can we teach it about our internal content, and can we be confident it is safe?

What is the Hesitation?

Generative AI and large language models (LLMs) like ChatGPT are designed to process and generate text that resembles human writing. These models understand language and can answer questions in a natural, conversational manner. However, they are limited by what they have been trained on, or, more accurately, what they have not been trained on: the data inside your organization. LLMs are trained to produce text based on language patterns, and they are proficient, for example, at composing polished prose and confident, convincing arguments. But the writing is based on the probabilities of words in the language, not on an understanding of how the world works, so these models cannot be relied on to convey accurate information. That is a critical limitation for most business applications.
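The point about probability versus truth can be made concrete with a toy next-word model. This sketch is purely illustrative, far simpler than any real LLM, and the corpus and function names are invented for the example: the model asserts whichever continuation is statistically most frequent in its training text, with no notion of whether the resulting claim is correct.

```python
from collections import Counter, defaultdict

# Toy training corpus (illustrative only; real LLMs train on vastly more text).
corpus = (
    "the report shows revenue grew last quarter "
    "the report shows revenue fell last quarter "
    "the report shows revenue grew last year"
).split()

# Count bigrams: how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_likely_next(word):
    """Return the statistically most frequent continuation. The choice
    reflects word frequency in the corpus, not factual accuracy."""
    return following[word].most_common(1)[0][0]

# The model says revenue "grew" simply because that sequence appears
# twice in the corpus versus once for "fell", not because it is true.
print(most_likely_next("revenue"))  # -> grew
```

Scaled up with transformer architectures and trillions of words, the same frequency-driven mechanism produces fluent prose, which is why the output sounds authoritative even when it is wrong.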

When applied to business cases in complex, data-rich environments, GenAI and LLMs suffer from four common problems:

  1. “hallucinations”
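The mitigation the article alludes to, grounding GenAI in accurate internal information, is commonly implemented as retrieval-augmented generation: retrieve relevant enterprise documents first, then instruct the model to answer only from them. A minimal sketch of that pattern follows; the documents, the keyword-overlap ranking, and the prompt wording are all toy placeholders (production systems use an enterprise search index or vector embeddings, and the final prompt would be sent to whatever LLM API you actually use).

```python
# Minimal retrieval-augmented generation (RAG) sketch. All content and
# scoring here is illustrative, not a real Sinequa or LLM integration.

docs = {
    "hr-policy": "Employees accrue 20 vacation days per year.",
    "it-faq": "Password resets are handled by the IT service desk.",
}

def retrieve(question, k=1):
    """Rank documents by naive keyword overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        docs.items(),
        key=lambda kv: len(q_words & set(kv[1].lower().split())),
        reverse=True,
    )
    return [text for _, text in scored[:k]]

def build_prompt(question):
    """Ground the model by restricting it to the retrieved context."""
    context = "\n".join(retrieve(question))
    return (
        "Answer using ONLY the context below; otherwise say you don't know.\n"
        f"Context: {context}\nQuestion: {question}"
    )

prompt = build_prompt("How many vacation days do employees get per year?")
# The prompt now carries current internal policy text, so the model's
# answer is anchored to enterprise data rather than its training snapshot.
```

Because the model is told to answer only from supplied context, hallucinations are reduced and answers stay current as the underlying documents change.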

Bias in Artificial Intelligence: Can AI be Trusted?

Artificial intelligence is more artificial than intelligent.

In June 2022, Microsoft released the Microsoft Responsible AI Standard, v2 (PDF). Its stated purpose is to “define product development requirements for responsible AI”. Perhaps surprisingly, the document contains only one mention of bias in artificial intelligence (AI): algorithm developers need to be aware of the potential for users to over-rely on AI outputs (known as ‘automation bias’).

In short, Microsoft seems more concerned with bias from users aimed at its products, than bias from within its products adversely affecting users. This is good commercial responsibility (don’t say anything negative about our products), but poor social responsibility (there are many examples of algorithmic bias having a negative effect on individuals or groups of individuals).

Bias is one of three primary concerns about artificial intelligence in business that have not yet been solved: hidden bias creating false results; the potential for misuse (by users) and abuse (by attackers); and algorithms returning so many false positives that their use as part of automation is ineffective.
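The false-positive concern follows directly from base-rate arithmetic. The figures below are hypothetical, chosen only to illustrate the effect: when genuine attacks are rare, even a detector that is 99% accurate buries its handful of true alerts under thousands of false alarms, which is why automating responses on top of it fails.

```python
# Illustrative base-rate arithmetic; every figure here is hypothetical.
events = 1_000_000           # events scanned per day
attack_rate = 0.0001         # 1 in 10,000 events is actually malicious
true_positive_rate = 0.99    # detector catches 99% of real attacks
false_positive_rate = 0.01   # and wrongly flags 1% of benign events

attacks = events * attack_rate                           # 100 real attacks
true_alerts = attacks * true_positive_rate               # 99 caught
false_alerts = (events - attacks) * false_positive_rate  # 9,999 false alarms

# Precision: the fraction of raised alerts that are genuine.
precision = true_alerts / (true_alerts + false_alerts)
print(f"{false_alerts:.0f} false alarms vs {true_alerts:.0f} real ones")
print(f"precision = {precision:.2%}")
```

With roughly one genuine incident per hundred alerts, analysts either ignore the system or drown in triage, which is the practical meaning of "ineffective as part of automation."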

Academic concerns

When AI was first introduced into cybersecurity products it was described as a defensive silver bullet. There’s no doubt that it has some value, but there is a growing reaction against faulty algorithms, hidden bias, false positives, abuse of privacy, and potential for abuse by criminals, law enforcement and intelligence agencies.

According to Gary Marcus, professor of psychology and neural science at New York University (writing in Scientific American, June 6, 2022), the problem lies in the commercialization of a still developing science:

“The subplot here is that the biggest teams of researchers in AI are no longer to be found in the academy, where peer review used to be coin of the realm, but in corporations. And corporations, unlike universities, have no incentive to
