Amazon is joining rivals Microsoft and Google in the race for an upper hand in generative artificial intelligence (AI), announcing technology aimed at its cloud customers as well as a marketplace for AI tools from other companies.
The e-commerce giant’s Amazon Web Services (AWS) unit announced two of its own large language models: one designed to generate text, and another that could help power web search personalisation, among other things.
The firm announced plans to release a chatbot like the ones Microsoft and Google have debuted to mixed reviews.
Its large language models, called Titan, were trained on vast amounts of text to summarise content, draft blog posts, or engage in open-ended question-and-answer sessions. They’ll be made available on an AWS service called Bedrock, where developers can also tap into models built by other companies working on generative AI, including AI21 Labs, Anthropic, and Stability AI.
In an interview with Bloomberg Television, AWS chief Adam Selipsky said customers asked, “What can you do to help us with generative AI?” While conceding that the technology is at an early stage, he said the company’s in-house chips mean Amazon can provide cost-effective solutions and performance.
Generative AI, software that can create text, images, or videos based on prompts from a user, has captured the imagination of Silicon Valley and set off fierce competition to capitalise on the technology. Proponents of chatbots like ChatGPT and image-generation tools such as Dall-E believe generative AI will revolutionise the kinds of tasks performed by software.
Amazon shares rose 4.6 per cent to US$102.31 at 3.42 pm in New York.
The AI race among tech giants is intensifying.
Microsoft, through a partnership with ChatGPT maker OpenAI, has integrated generative AI technology into its Bing Internet search service and plans to deploy those tools across the software maker’s products. Alphabet’s Google is racing to make similar moves. Meta Platforms has released its own large language model and said similar work will expand across the company.
AWS, which sells on-demand computing power and software tools, including a suite of machine-learning applications, had already partnered with artificial intelligence companies including Hugging Face and Stability AI, which builds the image generator Stable Diffusion. But the company hadn’t previously revealed plans to release a homegrown large language model.
Swami Sivasubramanian, AWS’s vice president of databases, machine learning, and analytics, said Amazon had long worked on large language models. They’re already used to help shoppers find products on Amazon’s retail website and to power elements of the Alexa voice assistant, among other applications.
“Amazon has been investing in this space for quite a while,” Sivasubramanian said in an interview.
During a preview period that begins on Thursday, AWS customers can apply to use the models. Sivasubramanian said the company hadn’t settled on pricing to access the tools but said homegrown chips built by AWS, including Inferentia2 and Trainium, could help customers keep costs low as they do their own machine-learning work.
The Seattle-based company also announced on Thursday that CodeWhisperer, which uses predictive tools to proactively suggest code as developers type it, would be free for individual developers.
“I don’t believe there is going to be one model that will rule the world,” Sivasubramanian said.