Conversational AI is here, and most people have already experimented with it. In the modern information age, we are practically forced to consume information on a nearly constant basis, and it’s only a matter of time before conversational AI reshapes this landscape. But what exactly will that reshaped landscape look like?
An Introduction to Conversational AI
By now, you’ve likely interacted with or at least heard about modern conversational AI platforms. These technologically sophisticated marvels are capable of processing language with a fluency that rivals that of native speakers. They can be utilized for a wide variety of tasks, fielding requests and providing information in a conversational format.
For example, you can ask a conversational AI platform about a historical event, or instruct it to proofread a document for you, and it will freely converse with you, providing information or otherwise helping you achieve your objectives.
Already, conversational AI has flooded the information marketplace. People have used these tools to generate millions of articles, respond to millions of customer questions, and in some cases, even tackle complex tasks like writing computer code or reviewing legal documents.
Causes for Concern: Conversational AI and Reliable Information
Why should we be concerned about the role of conversational AI in the information age?
- Inherent safeguards and limitations. Even in its early days, users were so frustrated by the safeguards and limitations built into ChatGPT that they became interested in jailbreaking it. There are certain things that ChatGPT is programmed not to say and not to “consider,” and certain forms of engagement that are completely off limits to it. The makers of these technologies view this as a good thing, since they don’t want their platforms to offend anyone or defy social norms in egregious ways. However, we have no real way of telling just how far these safeguards and limitations go – for ChatGPT or any other conversational AI system.
- Biases. We also need to acknowledge the significant biases inherent in most conversational AI platforms. Whether because of the preferences of their developers, the data they were trained on, or some combination of the two, many conversational AI platforms end up viewing the world through a specific political lens, or from a perspective that only some people share. In some applications, such as customer service, this doesn’t matter much. But if we’re relying on AI to generate our content and report on news as it unfolds, it is unsettling, to say the least.
- Lack of transparency. Complicating matters further, there is a near-total lack of transparency around conversational AI platforms. The machine learning processes that put the “intelligence” in “artificial intelligence” happen in a kind of “black box,” so even AI researchers cannot fully explain how a system arrived at a given conclusion. This makes it hard to understand exactly how and when AI is developing – and prevents us from accurately estimating its influence on information consumers.
- The increasing power of wikiality. Wikiality, a term coined by Stephen Colbert in 2006, refers to a “truth” accepted because enough people agree on it – much the way Wikipedia articles are often treated as accurate simply because of a general consensus around them. In an era when more people are turning to search engines and conversational AI for answers, we could end up in a world where AI output is treated as gospel; any deviation from the facts and opinions repeated by conversational AI could come to be seen as heretical, and eventually labeled a “dangerous” opinion.
- Corporate favoritism and tech pseudo-monopolies. It doesn’t help that some of the most powerful conversational AI platforms on the planet are in the hands of pseudo-monopolistic tech giants. Do we really want companies like Alphabet or Apple in complete control of the information we consume regularly?
- AI training on AI. This problem could become much worse over time as new AI models are trained on content generated by previous AI systems. This feedback loop could make it nearly impossible for human input to counteract the self-perpetuating information machine – the sketch below illustrates how quickly such a loop can narrow what a model “knows.”
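To see why that feedback loop is so corrosive, consider a deliberately simplified simulation – a hypothetical sketch, not a model of any real system. All of the numbers (500 viewpoints, a corpus of 5,000 items, ten generations) are made-up assumptions. The toy “model” here learns only how often each viewpoint appears in its training data, then generates the next generation’s training data from those frequencies. Researchers studying real systems describe a similar dynamic, sometimes called model collapse.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (all numbers are illustrative assumptions, not real data):
# the "web" starts with 500 distinct viewpoints whose popularity follows a
# long-tailed, Zipf-like distribution.
num_viewpoints = 500
popularity = 1.0 / np.arange(1, num_viewpoints + 1)
popularity /= popularity.sum()

corpus_size = 5_000
corpus = rng.choice(num_viewpoints, size=corpus_size, p=popularity)

# Each "generation", a toy model learns nothing but the empirical frequencies
# of its training corpus, then generates the next corpus by sampling from
# those learned frequencies. Viewpoints that drop out of the corpus can
# never come back.
for generation in range(1, 11):
    counts = np.bincount(corpus, minlength=num_viewpoints)
    learned_distribution = counts / counts.sum()
    corpus = rng.choice(num_viewpoints, size=corpus_size, p=learned_distribution)
    surviving = np.count_nonzero(np.bincount(corpus, minlength=num_viewpoints))
    print(f"generation {generation:2d}: {surviving} distinct viewpoints remain")
```

The exact output varies from run to run, but the pattern doesn’t: rare viewpoints vanish first and never return, leaving each successive generation with a narrower, more homogeneous pool of “facts” to learn from.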
What Can We Do?
We can’t simply put the conversational AI genie back in the bottle, nor would we necessarily want to. What we can do is:
- Stay independently minded. Maintain an independent and critical mindset. Conversational AI isn’t going to erode your skepticism or awareness unless you let it.
- Flag AI sources. Pay attention to content and sources that rely heavily on AI-generated material. Flag them and avoid them whenever possible.
- Use alternative tools and technologies. Finally, consider using alternative tools and technologies for finding information and for your conversational AI needs. We don’t have to go through the biggest tech companies in the world for everything.
Conversational AI may still be in its infancy, but it’s already having a major impact on how we consume and process information. Only by increasing awareness of the issues and complications associated with AI can we remain resilient in the face of these informational threats.