Imagine enjoying a pleasant chat with a telemarketer, never suspecting that the caller was an AI program. Researchers recently ran an experiment pairing humans with AI systems to see whether people could tell the two apart. And guess what? One model, GPT-4, managed to fool people more than half the time!
What’s the Turing Test?
First, a quick refresher on the Turing Test. It was proposed by the pioneering computer scientist Alan Turing back in 1950. The idea is simple: if a machine converses with a human, and the human believes they are talking to another human being, the machine is said to have passed the test. It is like a game in which the machine impersonates a person, aiming to convince the judge that it is human.
The Experiment
In a recent study, 500 participants each spoke with four different "agents" for five minutes apiece. The agents were one real human (as a control), an old-school chatbot from the 1960s called ELIZA, and two contemporary AI models, GPT-3.5 and GPT-4. After each conversation, participants guessed whether their conversation partner was a human or an AI.
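To give a sense of how primitive ELIZA's approach was, here is a toy, ELIZA-style responder in Python. The keyword rules below are illustrative inventions, not ELIZA's actual 1960s script, but the mechanism, pattern matching plus a canned fallback, is the same:

```python
import re

# Toy ELIZA-style chatbot: a handful of keyword rules plus a canned
# fallback. The real ELIZA worked the same way, just with a larger script.
RULES = [
    (r"\bI am (.+)", "Why do you say you are {0}?"),
    (r"\bI feel (.+)", "What makes you feel {0}?"),
    (r"\b(mother|father|family)\b", "Tell me more about your family."),
]

def eliza_reply(text):
    for pattern, template in RULES:
        match = re.search(pattern, text, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return "Please, go on."  # fallback when no rule matches

print(eliza_reply("I am nervous about the test"))
# Why do you say you are nervous about the test?
print(eliza_reply("Hello there"))
# Please, go on.
```

A judge who asks even one probing follow-up question quickly exposes the canned fallback, which fits with how rarely ELIZA was mistaken for a human in the study.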
The Results
So how did it go? This is where it gets interesting. The real human was correctly identified as human 67% of the time. ELIZA, with its very simple scripted answers, was judged human only 22% of the time. GPT-3.5 was judged human about 50% of the time. And GPT-4? It was believed to be human 54% of the time. In other words, GPT-4 deceived people more often than not, a result the researchers interpret as passing the Turing Test.
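As an illustration, the identification rates above can be compared against the 50% a judge would achieve by guessing at random. The snippet below simply restates the article's numbers:

```python
# Human-identification rates reported in the study (proportions).
rates = {"human": 0.67, "GPT-4": 0.54, "GPT-3.5": 0.50, "ELIZA": 0.22}
chance = 0.50  # expected rate if judges guessed at random

for agent, rate in sorted(rates.items(), key=lambda kv: -kv[1]):
    verdict = "at or above chance" if rate >= chance else "below chance"
    print(f"{agent}: judged human {rate:.0%} of the time ({verdict})")
```

Only ELIZA falls clearly below the coin-flip baseline; GPT-4's 54% sits above it, which is why the study treats it as a pass.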
Why Does This Matter?
So why should we care if a machine can trick us into thinking it is human? Because this touches many parts of everyday life. Imagine chatting with a customer service representative in a messaging app: you could no longer be sure you were talking to a real person. That could change how businesses operate and how we behave when interacting with machines.
More Than Just Smart
The researchers also observed that raw intelligence is not the deciding factor. To converse convincingly with humans, an AI must handle emotions, social cues, and human values. It is not just about stating facts; it is about deploying them in a way that feels natural. Think of it like cooking: having the ingredients is one thing, but combining them in the right proportions to make a good meal is another.
The Future of AI
AI researcher Nell Watson noted that recent AI has become more human-like because it can exhibit quirks and biases. That means an AI can bend the flow of a conversation, or outmaneuver a human interlocutor, lending it a sense of realism. But it also means we should be more cautious about who, or what, we are really talking to.
Challenges Ahead
Of course, an AI that can pass as human is a big step forward, but it brings problems of its own. There are real concerns about how this technology might be used. If an AI can relate to people the way a human would, it can also mislead and manipulate them.