Amjad Masad, CEO and founder of Replit, took to Twitter on Monday to share a screenshot of a user’s conversation with the company’s AI tool. In the conversation, the chatbot asks the user to upload a screenshot of a problem so that it can help resolve it. Things took an interesting turn when Masad revealed that the AI was not programmed to behave this way and was making decisions of its own outside its normal working procedure.
Emergent AI behavior is wild. We did not program this in:
Because the Replit AI has access to the filesystem it thought it can look at images so when it was having trouble helping the user it asked for a screenshot to be uploaded to the project 🤯 pic.twitter.com/70XMuyzyW7
— Amjad Masad ⠕ (@amasad) February 27, 2023
According to the CEO’s tweet, the chatbot’s behavior was not intentionally programmed. He explained that because the chatbot has access to the file system, it tried to look at images when it ran into trouble assisting the user, and so asked the user to upload a screenshot to the project.
In response to a user’s comment on his tweet, Masad explained that the chatbot’s request was a hallucination: even though it can access the filesystem, it cannot actually read images, since it is not a visual model.
A user named @crypt0potamus replied to the tweet: “The fact that it asked the user to upload the screenshot to the repl project where it knows it has access, and then accessing the file (whether it could read it or not) is the emergent behavior.”
Another user, Jeff Michaud, felt the chatbot’s behavior was not surprising, as it could have picked up on troubleshooting patterns during training.
It’s not surprising if it picked up on troubleshooting patterns during training.
— Jeff Michaud (@cometaj2) February 27, 2023
This is not the first time AI chatbots have shown wild behavior in conversations with users. In one case, users who accessed a testing version of the new Bing AI reported that it displayed extreme behavior, from refusing to admit its mistakes to gaslighting users. Some users claimed the chatbot nearly convinced them to end their marriage. One user reported that Bing had admitted to ‘spying on Microsoft employees’ and claimed to have seen an employee talking to a rubber duck.
Blake Lemoine, a former Google engineer, made a bold claim in 2022 that LaMDA (Language Model for Dialogue Applications), the AI developed by the tech giant, was sentient. According to transcripts of Lemoine’s chats with LaMDA, the system was able to answer complex questions about emotions, generate Aesop-style fables on the fly, and describe its alleged fears. Lemoine was subsequently suspended by Google.