AWS Drives Generative AI Evolution with New Offerings

By Greg Tavarez, TMCnet Editor  |  October 06, 2023

We have witnessed an extraordinary surge in the field of generative AI over the past year, and several factors have contributed to it.

First, the general proliferation of data has provided a valuable resource for training and fine-tuning generative AI models. Second, the availability of scalable compute resources, especially through cloud computing platforms, has democratized access to the high-performance hardware needed to train complex AI models. This accessibility has lowered the barrier to entry for researchers, startups and businesses looking to harness the power of generative AI. Finally, there are the advancements in machine learning techniques that have produced sophisticated generative models such as GPT-3 and GPT-4.

I expect, as many others do, that this convergence of abundant data, scalable compute and cutting-edge ML techniques will continue to ignite a wave of creativity and innovation.

While on the topic of generative AI and innovation, AWS announced new offerings that are set to accelerate generative AI innovation. These offerings include Amazon Bedrock, Amazon Titan Embeddings, Llama 2, customizations for Amazon CodeWhisperer, and generative business intelligence capabilities in Amazon QuickSight.

Amazon Bedrock is a fully managed service designed to simplify the adoption of generative AI for businesses. It offers a variety of high-performing foundation models, or FMs, from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Stability AI and Amazon. These FMs can be customized with a customer's proprietary data while preserving privacy and security. This customization capability is crucial because it allows businesses to create unique generative AI applications tailored to their specific needs, from content creation to drug discovery.

One of the key challenges in adopting generative AI is finding the right FMs and integrating them seamlessly into applications. Amazon Bedrock addresses this by providing an easy way to access and experiment with FMs, eliminating the need to manage complex infrastructure. It also enables businesses to create managed agents that perform various tasks without the need for extensive coding. Amazon Bedrock prioritizes data security and privacy, offering features like AWS PrivateLink to establish secure connections and ensuring compliance with regulations like HIPAA and GDPR.
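To make the access model concrete, here is a minimal sketch of how an application might invoke a model through Amazon Bedrock using the AWS SDK for Python (boto3). The model ID and request-body fields shown are illustrative; each model provider on Bedrock defines its own request format, so check the provider's documentation before relying on these names.

```python
import json

def build_request(prompt: str, max_tokens: int = 256) -> str:
    # Illustrative request body for a text-generation model. The "inputText"
    # shape follows Amazon's Titan text models; other providers differ.
    body = {
        "inputText": prompt,
        "textGenerationConfig": {"maxTokenCount": max_tokens},
    }
    return json.dumps(body)

def invoke(prompt: str, model_id: str = "amazon.titan-text-express-v1") -> dict:
    """Send the request via the bedrock-runtime client.

    Requires AWS credentials and Bedrock model access; boto3 is imported
    here so the rest of this sketch runs without the AWS SDK installed.
    """
    import boto3
    client = boto3.client("bedrock-runtime")
    response = client.invoke_model(
        modelId=model_id,
        contentType="application/json",
        accept="application/json",
        body=build_request(prompt),
    )
    return json.loads(response["body"].read())
```

Because Bedrock is serverless, this is essentially all the integration code an application needs; there is no cluster to provision or model server to operate.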

Overall, Amazon Bedrock offers a comprehensive solution for businesses looking to harness the power of generative AI while addressing common adoption challenges.

Amazon Bedrock continues to broaden its selection of FMs with two new additions: Amazon Titan Embeddings and Llama 2.

Amazon Titan FMs are a family of pre-trained models by AWS, with the first model being Amazon Titan Embeddings, a large language model, or LLM. Titan Embeddings converts text into numerical embeddings, enabling powerful applications in search, personalization, and retrieval-augmented generation, or RAG. While FMs are versatile, they have limitations in responding to complex questions or data requiring real-time or proprietary information. To enhance FM responses, organizations often turn to RAG, where the FM connects to a knowledge source for added context.

Amazon Titan Embeddings also simplifies the RAG process by providing pre-built embeddings models, eliminating the need for customers to create their own, which demands vast data, resources and expertise. This accessibility makes RAG more achievable for a broader range of organizations. Amazon Titan Embeddings supports over 25 languages, handles various data lengths, and delivers high-accuracy output vectors with low-latency, cost-effective results.

The Llama 2 models represent an upgrade from the original Llama models. They have been trained on 40% more data and boast an extended context length of 4,000 tokens, enabling them to handle larger documents effectively. These models are optimized for rapid response times when deployed on AWS infrastructure, making them particularly suitable for dialogue-based applications. Customers can leverage the power of the 13 billion and 70 billion parameter Llama 2 models through Amazon Bedrock without the hassle of setting up or maintaining any infrastructure, simplifying the development of generative AI applications.

Amazon CodeWhisperer is an AI-powered coding assistant designed to enhance developer productivity. It has the capability to be customized with an organization's internal, private codebase, such as APIs, libraries and classes, which are not part of its standard training data. This customization process enables CodeWhisperer to provide tailored code recommendations that align with the organization's specific requirements, saving developers valuable time and improving code relevancy across various tasks.

Administrators can securely connect their private code repository, use customization techniques to enhance real-time suggestions, and centrally manage these customizations via the AWS Console. This ensures that code suggestions are aligned with the organization's quality and security standards while maintaining the privacy of proprietary code. The customization capability will be available soon as part of the CodeWhisperer Enterprise Tier, offering enhanced security and privacy features to protect customers' intellectual property.

AWS prioritizes security and privacy, and CodeWhisperer's customization feature is built with enterprise-grade security measures. The underlying FM powering CodeWhisperer does not use customer customizations for training, ensuring the protection of sensitive information.

Additionally, AWS does not store or log customer content when handling requests from developers using the Amazon CodeWhisperer Professional Tier or Enterprise Tier, further safeguarding customer data. Overall, this customization capability empowers organizations to make the most of generative AI-powered coding while maintaining strict control over their internal codebase and ensuring data privacy and security.

Amazon QuickSight is a cloud-based Business Intelligence service offering dashboards, reports, embedded analytics and natural-language querying through QuickSight Q. Business analysts often spend significant time manually crafting visualizations and calculations using traditional BI tools.

QuickSight introduces Generative BI authoring capabilities to streamline this process. Analysts can describe their desired outcome in plain language, and QuickSight generates customizable visuals with a single click.

For instance, analysts can request a "monthly trend for sneaker sales in 2022 and 2023," and QuickSight automatically selects data and chart types, providing clarifications when needed. This accelerates the creation of compelling visuals and empowers analysts to deliver data-driven insights more efficiently.

“With enterprise-grade security and privacy, a choice of leading FMs, a data-first approach, and our high-performance, cost-effective infrastructure, organizations trust AWS to power their businesses with generative AI solutions at every layer of the stack,” said Swami Sivasubramanian, Vice President of Data and AI at AWS. “With powerful, new innovations, AWS is bringing greater security, choice, and performance to customers, while also helping them to tightly align their data strategy across their organization, so they can make the most of the transformative potential of generative AI.”

Organizations across industries aim to embrace generative AI for operational transformation and problem-solving but often face security and privacy concerns. They seek flexibility in choosing foundation models, the ability to customize models for unique experiences, and tools for rapid market deployment. Leading companies like adidas, BMW Group and Intuit continue to choose AWS for generative AI solutions to address these challenges and drive innovation.

Be part of the discussion about the latest trends and developments in the Generative AI space at Generative AI Expo, taking place February 13-15, 2024, in Fort Lauderdale, Florida. Generative AI Expo discusses the evolution of GenAI and features conversations focused on the potential for GenAI across industries and how the technology is already being used to create new opportunities for businesses to improve operations, enhance customer experiences, and create new growth opportunities.

Edited by Alex Passett