OpenAI working towards reducing biases in ChatGPT

OpenAI, the research and development giant behind the highly popular AI platform ChatGPT, has said that it plans to invest in reducing ‘glaring and subtle biases’ in how its AI responds to different inputs.

Source: The Economic Times

There has been a global push to ensure algorithmic accountability, with the Indian government mentioning on multiple occasions that the country will establish a framework to prevent misuse of AI. A few weeks ago, the Department of Telecommunications released a draft standards paper on maintaining fairness in artificial intelligence. At Bhashini, an initiative of the Ministry of Electronics and IT (MeitY), a small team is currently developing a chatbot for WhatsApp that uses data from ChatGPT to return pertinent answers to questions. Moreover, since some people, especially farmers in rural areas, may not always want to type out their questions, the chatbot also accepts voice notes.

Essentially, voice memos could be used to make requests to the chatbot, which would then reply with a voice message generated from ChatGPT's answer.
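The voice flow described above amounts to a three-stage pipeline: speech-to-text, a ChatGPT query, and text-to-speech. A minimal sketch of that flow, where `transcribe`, `ask_chatgpt`, and `synthesise` are hypothetical stand-ins (stubbed here) for real speech and model APIs, none of which are named in the article:

```python
def transcribe(voice_note: bytes) -> str:
    """Hypothetical speech-to-text stage."""
    # A real implementation would call a speech-recognition API here.
    return voice_note.decode("utf-8")  # stub: pretend the audio is plain text

def ask_chatgpt(question: str) -> str:
    """Hypothetical ChatGPT query stage."""
    # A real implementation would call the model's chat API here.
    return f"Answer to: {question}"  # stub

def synthesise(answer: str) -> bytes:
    """Hypothetical text-to-speech stage."""
    # A real implementation would render the answer as audio.
    return answer.encode("utf-8")  # stub

def handle_voice_note(voice_note: bytes) -> bytes:
    """Voice note in, voice reply out: transcribe -> query -> synthesise."""
    question = transcribe(voice_note)
    answer = ask_chatgpt(question)
    return synthesise(answer)
```

Each stage can be swapped independently, which is why such chatbots can bolt voice support onto an existing text pipeline.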

OpenAI said in a blog post on February 16, “We are investing in research and engineering to reduce both glaring and subtle biases in how ChatGPT responds to different inputs. In some cases, ChatGPT currently refuses outputs that it shouldn’t, and in some cases, it doesn’t refuse when it should. We believe that improvement in both respects is possible.”

OpenAI also said that it is working on an upgrade to ChatGPT that will enable users to easily customise its behaviour to suit their preferences. “This will mean allowing system outputs that other people (ourselves included) may strongly disagree with,” OpenAI said.

The company said it will be important to strike a balance in the AI, as the risks of customisation include “enabling malicious use of our technology and sycophantic AIs that mindlessly amplify people’s existing beliefs”.

OpenAI is also weighing public feedback when making decisions on ChatGPT’s hard bounds.

It further added, “As a starting point, we’ve sought external input on our technology in the form of red teaming. We also recently began soliciting public input on AI in education (one particularly important context in which our technology is being deployed)”.

ChatGPT’s model is initially ‘pre-trained’: it learns to predict the next word in a sentence by being exposed to a vast array of text from the internet. Next, OpenAI ‘fine-tunes’ the model to narrow down system behaviour.
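The next-word-prediction objective behind pre-training can be illustrated with a toy bigram model: count which word follows which in a corpus, then predict the most frequent successor. This sketch is illustrative only and vastly simpler than the large-scale transformer training OpenAI actually performs:

```python
from collections import Counter, defaultdict

def train_bigram(corpus: str) -> dict:
    """Count, for each word, how often each next word follows it."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        counts[current][nxt] += 1
    return counts

def predict_next(counts: dict, word: str):
    """Return the most frequently observed word after `word`, if any."""
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

# Tiny toy corpus standing in for "a vast array of text on the internet".
corpus = "the model learns to predict the next word in the sentence"
model = train_bigram(corpus)
```

Real pre-training replaces these frequency counts with a neural network trained on the same objective at internet scale; fine-tuning then adjusts that network on curated examples to shape its behaviour.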