Generative AI & ML: AWS Security in Q4 2023 – Part 6

We hope the previous posts in our AWS security series offered valuable insights into the latest releases of AWS security features and services. Closing out the series, Part 6 covers the Generative AI and ML features in AWS that were released in the last quarter.

A technological advantage keeps your business scalable and agile in the face of shifting market conditions. Integrating the generative AI and machine learning models that Amazon provides can create significant impact. Here’s an overview of how Amazon is helping organizations with AI models.

Amazon Bedrock is now generally available


Amazon Bedrock, the easiest way to build and scale generative AI applications with foundation models (FMs), is now generally available. It is a fully managed service that offers a choice of high-performing FMs from leading AI companies, along with a broad set of capabilities that simplify development while maintaining privacy and security.

Amazon Bedrock is the first service to offer Llama 2, Meta’s large language model (LLM), in fine-tuned 13B and 70B parameter versions, exposed as a fully managed API. Llama 2 models are ideal for dialogue use cases.
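To give a sense of what the fully managed API looks like in practice, here is a minimal sketch that invokes the Llama 2 13B chat model through the boto3 bedrock-runtime client. The region, prompt, and generation parameters are illustrative assumptions, not values from the announcement.

```python
import json
import boto3

# Bedrock runtime client (assumes credentials and a region where Bedrock is available)
bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# Request body for Meta's Llama 2 13B chat model; field names follow the
# Llama 2 schema exposed through Bedrock (prompt, max_gen_len, temperature, top_p).
body = json.dumps({
    "prompt": "Summarize the benefits of managed foundation models in two sentences.",
    "max_gen_len": 256,
    "temperature": 0.5,
    "top_p": 0.9,
})

response = bedrock_runtime.invoke_model(
    modelId="meta.llama2-13b-chat-v1",
    contentType="application/json",
    accept="application/json",
    body=body,
)

# The response body is a stream of JSON; "generation" holds the model output.
result = json.loads(response["body"].read())
print(result["generation"])
```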

To help you accelerate deploying generative AI into production, provisioned throughput is now available in Amazon Bedrock. It gives you the flexibility and control to reserve throughput (input/output tokens per minute) and maintain a consistent user experience even during peak traffic.
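As a rough sketch of what reserving throughput might look like with the boto3 bedrock control-plane client; the model ID, provisioned model name, unit count, and commitment term below are placeholder assumptions you would size to your own workload.

```python
import boto3

# Bedrock control-plane client
bedrock = boto3.client("bedrock", region_name="us-east-1")

# Reserve provisioned throughput for a model. The name, model ID, model units,
# and commitment term are illustrative placeholders.
response = bedrock.create_provisioned_model_throughput(
    provisionedModelName="my-app-llama2-throughput",  # hypothetical name
    modelId="meta.llama2-13b-chat-v1",
    modelUnits=1,
    commitmentDuration="OneMonth",                    # or "SixMonths"
)

print("Provisioned model ARN:", response["provisionedModelArn"])

# Once the provisioned model is in service, pass its ARN as the modelId
# when calling invoke_model to route traffic through the reserved capacity.
```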

For customers building in highly regulated industries, Amazon Bedrock has achieved HIPAA eligibility and GDPR compliance. Additionally, Amazon Bedrock is integrated with Amazon CloudWatch to help you track usage metrics and build customized dashboards for audit purposes. With AWS CloudTrail, you can monitor and troubleshoot API activity as you integrate other systems into your generative AI applications.

Amazon SageMaker Model Registry announces support for private model repositories


Amazon SageMaker Model Registry now allows you to register machine learning (ML) models that are stored in private Docker repositories. This capability lets you track all your ML models across multiple private AWS and non-AWS model repositories in one central service, simplifying ML operations (MLOps) and ML governance at scale.

Amazon SageMaker Model Registry is a purpose-built metadata store to manage the entire lifecycle of ML models from training to inference. Whether you store your model artifacts in AWS or outside of AWS in a third-party Docker repository, you can now track them all in Amazon SageMaker Model Registry.

You also have the flexibility to register a model without read/write permissions to the associated container image. If you want to track an ML model in a private repository, set the optional ‘SkipModelValidation’ parameter to ‘All’ at the time of registration. Later you can also deploy these models for inference in Amazon SageMaker.
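The following sketch registers a model package whose container image lives in a private repository, skipping image validation so SageMaker does not need pull access at registration time. It uses the boto3 create_model_package call; the model package group name and image URI are hypothetical placeholders.

```python
import boto3

sagemaker = boto3.client("sagemaker")

# Register a model whose container image lives in a private (possibly non-AWS)
# Docker repository. The group name and image URI below are placeholders.
response = sagemaker.create_model_package(
    ModelPackageGroupName="my-model-group",
    ModelPackageDescription="Model stored in a private Docker repository",
    InferenceSpecification={
        "Containers": [
            {
                # Private repository image; no read/write access is required
                # at registration time when validation is skipped.
                "Image": "registry.example.com/team/churn-model:1.0",
            }
        ],
        "SupportedContentTypes": ["application/json"],
        "SupportedResponseMIMETypes": ["application/json"],
    },
    # Skip image validation so registration succeeds without pull permissions.
    SkipModelValidation="All",
)

print("Model package ARN:", response["ModelPackageArn"])
```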

Safeguard generative AI applications with Guardrails for Amazon Bedrock


Guardrails for Amazon Bedrock, now in preview, enables customers to implement safeguards across foundation models (FMs) based on their use cases and responsible AI policies. Customers can create multiple guardrails tailored to different use cases and apply them across multiple FMs, providing a consistent user experience and standardizing safety controls across generative AI applications.

Customers need to safeguard their generative AI applications to deliver a relevant and safe user experience. Many FMs have built-in protections to filter undesirable and harmful content, but customers may want to further tailor interactions to their specific use cases and responsible AI policies.

For example, a bank might want to configure its online assistant to refrain from providing investment advice and limit harmful content. With guardrails, customers can define a set of denied topics that are undesirable within the context of their application. Then they can configure thresholds to filter harmful content across categories such as hate, insults, sexual, and violence.

Guardrails evaluate user queries and FM responses against the denied topics and content filters, helping to prevent content that falls into restricted categories. This allows customers to closely manage user experiences based on application-specific requirements and policies.
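As an illustration of the bank-assistant scenario above, here is a hedged sketch of creating such a guardrail with the boto3 bedrock client. Guardrails was in preview at the time of this announcement, so the exact API may differ; the guardrail name, topic definition, filter strengths, and blocked messages are illustrative assumptions.

```python
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

# Create a guardrail that denies an investment-advice topic and filters harmful
# content. Names, definitions, and strengths are illustrative placeholders.
response = bedrock.create_guardrail(
    name="bank-assistant-guardrail",
    description="Blocks investment advice and filters harmful content",
    topicPolicyConfig={
        "topicsConfig": [
            {
                "name": "Investment advice",
                "definition": "Recommendations or guidance on investing money, "
                              "including specific securities or strategies.",
                "type": "DENY",
            }
        ]
    },
    contentPolicyConfig={
        "filtersConfig": [
            {"type": "HATE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
            {"type": "INSULTS", "inputStrength": "HIGH", "outputStrength": "HIGH"},
            {"type": "SEXUAL", "inputStrength": "HIGH", "outputStrength": "HIGH"},
            {"type": "VIOLENCE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
        ]
    },
    # Messages returned to the user when input or output is blocked.
    blockedInputMessaging="Sorry, I can't help with that request.",
    blockedOutputsMessaging="Sorry, I can't provide that information.",
)

print("Guardrail ID:", response["guardrailId"])
```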

Guardrails are supported for English content across text-based FMs and fine-tuned models on Amazon Bedrock, as well as Agents for Amazon Bedrock.
