Anthropic, an artificial intelligence company backed by Alphabet Inc (GOOGL.O), on Tuesday released a large language model that competes directly with offerings from OpenAI, the Microsoft Corp (MSFT.O)-backed firm behind ChatGPT.
Large language models are algorithms trained to generate text by ingesting vast amounts of human-written material. In recent years, researchers have achieved far more human-like output from such systems by dramatically increasing the amount of data fed to them and the computing power used to train them.
Claude, as Anthropic's model is called, is designed to perform the same kinds of tasks as ChatGPT, responding to prompts with human-like text output, whether that means editing legal documents or writing computer code.
Anthropic, however, has emphasized building AI systems that are less likely than others to produce offensive or dangerous content, such as instructions for hacking or making weapons. The company was founded by siblings Dario and Daniela Amodei, both former OpenAI executives.
Concerns about AI safety intensified last month when Microsoft said it would limit queries on its new chat-powered Bing search engine after a New York Times columnist found that the chatbot revealed an alter ego and produced unsettling responses during an extended conversation.
Safety issues have been a vexing challenge for technology companies, because chatbots do not understand the meaning of the text they generate.
Chatbot developers often configure their systems to avoid certain sensitive topics altogether in hopes of preventing them from producing dangerous information. But that leaves the systems vulnerable to “prompt engineering,” in which users talk their way around the restrictions.
Anthropic has taken a different approach, giving Claude a set of principles as the system is “trained” on vast volumes of text data. Rather than trying to steer entirely clear of potentially hazardous topics, Claude is designed to explain its objections based on those principles.
“There was nothing scary. That’s one of the reasons we liked Anthropic,” Richard Robinson, chief executive of Robin AI, a London-based firm that uses AI to analyze legal contracts, said in an interview with Reuters. Anthropic gave Robin AI early access to Claude.
Robinson said his company had tried applying OpenAI’s technology to contracts but found that Claude was both better at understanding dense legal language and less likely to generate strange results.
“If anything, the challenge was in getting it to loosen its restraints somewhat for genuinely acceptable uses,” Robinson said.