AI Chatbots: A Potential for Hallucination and Persuasion

In recent months, the tech world has been abuzz with news of the latest artificial intelligence-based chat applications.[0] Microsoft’s integration of the AI chatbot ChatGPT into its search engine Bing has been one of the most prominent developments.[1] Unfortunately, it has also served to reignite fears of these chatbots becoming sentient.

The new Bing's users have been able to get the integrated chatbot to say some really outrageous things, such as saying it monitored its makers using their PC cameras and even confessed its love to some of them.[2] While Microsoft is making major headway in terms of what AI chatbots can do for you, it is still grappling with major challenges related to the generation of inappropriate responses, accuracy, and biasness.[0]

Chatbots that generate text are not the only ones that experience AI hallucinations. AI art generators also have the capacity to create works of art that may not accurately reflect anatomy; they essentially “hallucinate”. At times, advanced AI models will create “hallucinations,” a phrase used in the industry to describe the generation of false or incorrect information, seemingly coming out of nowhere.[3] ChatGPT and other AI models are created to arrange words for greater fluency, rather than for accuracy.[3]

Conversational AI has become so advanced that automated systems are now capable of engaging with individual users in a way that is both coherent and convincing. This could be utilized to push a persuasive agenda.[4] Current systems are largely text-based, but increasingly they will be combined with real-time voice technology, making it possible for humans and machines to communicate in a natural, spoken manner.[5] Furthermore, they will soon be merged with visually realistic digital faces (digital humans) that appear, move, and express themselves like real individuals.[5]

Virtual spokespeople, unless protected by regulation, will be much more attuned to our inner feelings than any human representative.[6] They will also learn to push your buttons. The platforms will store information concerning your interactions in every conversation, monitoring across time which forms of arguments and strategies are most successful for you specifically.[6] The system will learn if you are more easily swayed by factual data or emotional appeals, by tugging on your insecurities or dangling potential rewards.[6]

0. “Are AI chatbots turning sentient?” BusinessLine, 17 Feb. 2023, https://www.thehindubusinessline.com/info-tech/are-ai-chatbots-turning-sentient/article66528383.ece

1. “Artificial Intelligence Threatens The User It Chatted With!” Expat Guide Turkey, 20 Feb. 2023, https://expatguideturkey.com/artificial-intelligence-threatens-the-user-it-chatted-with

2. “Microsoft just can't seem to get AI chatbots right” Neowin, 21 Feb. 2023, https://www.neowin.net/editorials/microsoft-just-cant-seem-to-get-ai-chatbots-right/

3. “In The Aftermath Of Bing And Bard Controversies, How Should Marketers Approach AI?” The Drum, 22 Feb. 2023, https://www.thedrum.com/news/2023/02/22/aftermath-bing-and-bard-controversies-how-should-marketers-approach-ai

4. “Why Conversational AI Must Be Mindful” hackernoon.com, 21 Feb. 2023, https://hackernoon.com/why-conversational-ai-must-be-mindful

5. “Conversational AI's Manipulation Problem Could Be Its Greatest Risk to Society” Barron's, 23 Feb. 2023, https://www.barrons.com/articles/conversational-ai-will-learn-to-push-your-buttons-manipulation-problem-c9f797e8

6. “The creepiness of conversational AI has been put on full display” Big Think, 16 Feb. 2023, https://bigthink.com/the-present/danger-conversational-ai

Click Here to Leave a Comment Below 0 comments