Amazon has announced a new cloud service called Bedrock that lets developers build generative AI capabilities, such as human-like text generation, into their software. The move marks the company’s entry into a rapidly growing field in which Google and Microsoft are already active. Large language models, AI programs trained on vast amounts of data, let developers generate human-like responses to prompts from users. Bedrock gives developers access to Amazon’s own family of language models, called Titan, as well as models from the startups AI21 Labs and Anthropic and a text-to-image model from Stability AI. The Titan models can generate text for documents, emails, and blog posts, and can also assist with personalization and search.
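For illustration only, the sketch below shows roughly what invoking a hosted Titan text model through Bedrock could look like from Python using the AWS SDK (boto3). The client name, model identifier, and request and response fields are assumptions modeled on AWS’s usual SDK conventions, since Amazon had not published API details at the time of the announcement.

```python
# Hypothetical sketch of calling a Titan text model via Amazon Bedrock.
# The "bedrock-runtime" client name, the model ID, and the request/response
# shapes below are assumptions, not confirmed details from Amazon.
import json

import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

request_body = json.dumps({
    "inputText": "Write a short product description for a reusable water bottle.",
    "textGenerationConfig": {"maxTokenCount": 200, "temperature": 0.5},
})

response = client.invoke_model(
    modelId="amazon.titan-text-express-v1",  # assumed Titan model identifier
    contentType="application/json",
    accept="application/json",
    body=request_body,
)

# The response body is a stream; parse it and print the generated text.
result = json.loads(response["body"].read())
print(result["results"][0]["outputText"])
```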
Speaking on CNBC’s “Squawk Box,” Amazon CEO Andy Jassy said that most companies want to use large language models but are deterred by the high cost and long development timelines involved in building them. Bedrock offers a cost-effective alternative by letting companies start from a pre-trained foundation model and customize it to their specific needs. The introduction of Bedrock comes on the heels of OpenAI’s announcement of GPT-4, a large language model that powers the ChatGPT chatbot, which went viral after its launch in November. Microsoft is Amazon’s primary competitor in this area, having invested heavily in OpenAI and provided the startup with computing power through its Azure cloud.
Personalize
According to a report, users of ChatGPT and of Microsoft’s Bing chatbot, which is based on OpenAI language models, have sometimes received inaccurate information because of a behavior known as hallucination, in which a model produces output that appears plausible but is not grounded in its training data. Amazon, by contrast, is focused on ensuring the accuracy of its Titan models and producing high-quality responses, according to Bratin Saha, a vice president at Amazon Web Services (AWS). Customers will be able to personalize Titan models with their own data, but that data will never be used to train the underlying Titan models, which ensures that other customers, including competitors, do not benefit from it, another vice president said.
AWS vice president Swami Sivasubramanian and Saha declined to disclose the size of the Titan models or the data Amazon used to train them, and Saha did not detail how Amazon removed problematic material from that training data. Amazon has not yet disclosed pricing for Bedrock, which is currently in a limited preview; customers can join a waiting list, according to a spokesperson. Microsoft and OpenAI, by comparison, have published pricing for the use of GPT-4, which starts at a few cents per 1,000 “tokens,” with each token representing about four characters of English text. Google has not yet made pricing public for its PaLM language model.
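As a rough illustration of how that token-based pricing works, the short sketch below estimates the cost of processing a block of text. The roughly-four-characters-per-token figure comes from the description above; the per-1,000-token rate is a placeholder standing in for “a few cents,” not a published price.

```python
# Back-of-the-envelope estimate of token-based pricing.
# CHARS_PER_TOKEN reflects the ~4 characters per English token noted above;
# PRICE_PER_1K_TOKENS is a hypothetical rate, not an actual published price.

CHARS_PER_TOKEN = 4
PRICE_PER_1K_TOKENS = 0.03  # USD, placeholder for "a few cents"

def estimate_cost(text: str) -> float:
    """Return an approximate cost in USD for processing `text`."""
    approx_tokens = len(text) / CHARS_PER_TOKEN
    return approx_tokens / 1000 * PRICE_PER_1K_TOKENS

if __name__ == "__main__":
    prompt = "Draft a short welcome email for new customers. " * 20
    print(f"~{len(prompt) / CHARS_PER_TOKEN:.0f} tokens, "
          f"estimated cost ${estimate_cost(prompt):.4f}")
```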
Capabilities
In an interview, Sivasubramanian said Amazon has been developing AI for more than two decades and has accumulated more than 100,000 AI customers. He also noted that Amazon already uses a refined version of its Titan model to deliver search results on its own website. Despite Amazon’s long history with AI, the company is just one of many big players racing to add generative AI capabilities since the success of ChatGPT; Expedia, HubSpot, Paylocity, and Spotify, notably, have all committed to integrating OpenAI technology.
However, AI is expected to become a larger part of cloud spending, with Google and Microsoft the biggest beneficiaries, Morgan Stanley analysts wrote in a note citing a February survey of chief information officers. “We always actually launch when things are ready, and all these technologies are super early,” Sivasubramanian said. He added that Amazon aims to make Bedrock easy to use and affordable, in part by relying on its custom AI processors.