
AWS Machine Learning Blog

Improve LLM application robustness with Amazon Bedrock Guardrails and Amazon Bedrock Agents

In this post, we demonstrate how Amazon Bedrock Guardrails can improve the robustness of an agent framework. We stop our chatbot from responding to irrelevant queries and protect our customers' personal information, ultimately hardening our agentic implementation with Amazon Bedrock Agents. As expected, our retail chatbot declines to answer invalid queries because they are unrelated to its purpose in our use case.

Cost considerations

The following are important cost considerations:

- There are no separate charges for building resources using Amazon Bedrock Agents.
- You will incur charges for embedding model and text model invocation on Amazon Bedrock. Generation of text and embeddings with the agent in Amazon Bedrock incurs charges according to the cost of each FM. For more details, refer to Amazon Bedrock pricing.
- You will incur charges for Amazon Bedrock Guardrails. For more details, see Amazon Bedrock pricing.
- You will incur charges for storing files in Amazon Simple Storage Service (Amazon S3). For more details, see Amazon S3 pricing.
- You will incur charges for your SageMaker instance, Lambda function, and AWS CloudFormation usage. For more details, see Amazon SageMaker pricing, AWS Lambda pricing, and AWS CloudFormation pricing.

Clean up

For the Part 1b and Part 1c notebooks, the implementation automatically cleans up resources after a complete run of the notebook to avoid recurring costs. The Clean-up Resources section of each notebook explains how to skip the automatic cleanup so you can experiment with different prompts. The order of cleanup is as follows:

1. Disable the action group.
2. Delete the action group.
3. Delete the alias.
4. Delete the agent.
5. Delete the Lambda function.
6. Empty the S3 bucket.
7. Delete the S3 bucket.
8. Delete IAM roles and policies.

You can delete guardrails from the Amazon Bedrock console or API.
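The teardown order above matters because some resources depend on others: the action group must be disabled before it can be deleted, and the S3 bucket must be emptied before it can be removed. A minimal sketch of that ordering is below; the boto3 operation names in the comments are assumptions based on the AWS SDK, and every resource identifier is a hypothetical placeholder, not a value from the post.

```python
# Sketch of the notebook's teardown order (Part 1b/1c). The boto3 operation
# names noted per step are assumptions based on the AWS SDK; resource IDs
# would come from your own deployment.
CLEANUP_STEPS = [
    ("disable action group",        "bedrock-agent: update_agent_action_group (actionGroupState=DISABLED)"),
    ("delete action group",         "bedrock-agent: delete_agent_action_group"),
    ("delete alias",                "bedrock-agent: delete_agent_alias"),
    ("delete agent",                "bedrock-agent: delete_agent"),
    ("delete Lambda function",      "lambda: delete_function"),
    ("empty S3 bucket",             "s3: delete objects for every key in the bucket"),
    ("delete S3 bucket",            "s3: delete_bucket"),
    ("delete IAM roles/policies",   "iam: detach_role_policy, delete_policy, delete_role"),
]

def run_cleanup(executor):
    """Run each step in dependency order; `executor` performs the API call."""
    done = []
    for name, api_hint in CLEANUP_STEPS:
        executor(name, api_hint)  # in a real run, this would issue the boto3 call
        done.append(name)
    return done

# Dry run: print the plan instead of calling AWS.
plan = run_cleanup(lambda name, hint: print(f"{name:26s} -> {hint}"))
```

Keeping the order as data makes the dependency explicit and lets you dry-run the plan before pointing it at real resources.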
Guardrails in this demo incur charges only when they are invoked through the agent. For more details, see Delete a guardrail.

Conclusion

In this post, we demonstrated how Amazon Bedrock Guardrails can improve the robustness of an agent framework. We stopped our chatbot from responding to irrelevant queries and protected our customers' personal information, ultimately hardening our agentic implementation with Amazon Bedrock Agents. In general, the preprocessing stage of Amazon Bedrock Agents can intercept and reject adversarial inputs, but guardrails help block prompts specific to a topic or use case (such as PII or HIPAA rules) that the LLM hasn't seen before, without having to fine-tune the LLM.

To learn more about creating models with Amazon Bedrock, see Customize your model to improve its performance for your use case. To learn more about using agents to orchestrate workflows, see Automate tasks in your application using conversational agents. For details about using guardrails to safeguard your generative AI applications, refer to Stop harmful content in models using Amazon Bedrock Guardrails.

Acknowledgements

The author thanks all the reviewers for their valuable feedback.

About the Author

Shayan Ray is an Applied Scientist at Amazon Web Services. His area of research is all things natural language (such as NLP, NLU, and NLG). His work has focused on conversational AI, task-oriented dialogue systems, and LLM-based agents. His research publications cover natural language processing, personalization, and reinforcement learning.
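The two behaviors the post attributes to the guardrail, declining off-topic queries and filtering personal information, can be illustrated with a toy stand-in. This is emphatically not the Amazon Bedrock Guardrails API; it is a local sketch of the policy a denied-topics rule plus a sensitive-information filter enforce, with a made-up topic list and a simple email pattern as the PII example.

```python
import re

# Toy stand-in for two guardrail behaviors: deny off-topic queries and mask
# PII. NOT the Bedrock Guardrails API -- topic list and regex are invented
# here purely for illustration.
RETAIL_TOPICS = ("order", "shoe", "return", "refund", "shipping")
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def apply_guardrail(user_query: str) -> str:
    # Denied-topic check: decline anything unrelated to the retail use case.
    if not any(topic in user_query.lower() for topic in RETAIL_TOPICS):
        return "Sorry, I can only help with questions about our retail store."
    # Sensitive-information filter: mask email addresses so they never reach
    # the model or appear in a response.
    return EMAIL_RE.sub("{EMAIL}", user_query)

print(apply_guardrail("What is the capital of France?"))
print(apply_guardrail("Where is the order I placed with jane@example.com?"))
```

A real guardrail attaches these policies at the service level, so they apply before and after every model invocation rather than inside your application code.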

Published: 2024-10-11T19:09:40











