AWS Machine Learning Blog

Generate synthetic counterparty risk (CR) data with generative AI using Amazon Bedrock LLMs and RAG

In this post, we explore how you can use LLMs with advanced Retrieval Augmented Generation (RAG) to generate high-quality synthetic data for a finance domain use case. You can apply the same technique to generate synthetic data for other business domains as well. Specifically, we demonstrate how to generate counterparty risk (CR) data, which is particularly relevant for over-the-counter (OTC) derivatives that are traded directly between two parties, without going through a formal exchange.
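
As a rough illustration of the pattern, the sketch below retrieves reference material (stubbed here in place of a real vector store), grounds the prompt with it, and asks a Bedrock model for synthetic records via the Converse API. The model ID, schema fields, and prompt wording are illustrative assumptions, not the pipeline from the post.

```python
import boto3

# Assumptions: AWS credentials are configured and the model ID below is
# enabled in your account; the retrieval step is stubbed for illustration.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")
MODEL_ID = "anthropic.claude-3-sonnet-20240229-v1:0"  # placeholder model

def retrieve_reference_docs(query: str) -> list[str]:
    """Stand-in for the RAG retrieval step (e.g., a vector store lookup)."""
    return [
        "OTC derivative trades settle bilaterally; key fields include "
        "counterparty ID, notional, currency, trade date, and netting set."
    ]

def generate_synthetic_cr_records(n: int = 5) -> str:
    context = "\n".join(retrieve_reference_docs("counterparty risk schema"))
    prompt = (
        f"Using only the reference material below, generate {n} rows of "
        "synthetic counterparty risk data for OTC derivatives as JSON, "
        "with plausible but entirely fictitious values.\n\n"
        f"Reference material:\n{context}"
    )
    response = bedrock.converse(
        modelId=MODEL_ID,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
        inferenceConfig={"maxTokens": 1024, "temperature": 0.7},
    )
    return response["output"]["message"]["content"][0]["text"]

print(generate_synthetic_cr_records())
```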

Turbocharging premium audit capabilities with the power of generative AI: Verisk’s journey toward a sophisticated conversational chat platform to enhance customer support

Verisk’s Premium Audit Advisory Service (PAAS) is the leading source of technical information and training for premium auditors and underwriters. Conversational AI assistants are rapidly transforming customer and employee support. In this post, we describe how we incorporated generative AI into the PAAS customer support process, covering the data, the architecture, and the evaluation of the results.

Best practices for Amazon SageMaker HyperPod task governance

In this post, we provide best practices to maximize the value of SageMaker HyperPod task governance and make the administration and data science experiences seamless. We also discuss common governance scenarios when administering and running generative AI development tasks.

How Formula 1® uses generative AI to accelerate race-day issue resolution

In this post, we explain how F1 and AWS have developed a root cause analysis (RCA) assistant powered by Amazon Bedrock to reduce manual intervention and accelerate the resolution of recurrent operational issues during races from weeks to minutes. The RCA assistant enables the F1 team to spend more time on innovation and improving its services, ultimately delivering an exceptional experience for fans and partners. The successful collaboration between F1 and AWS showcases the transformative potential of generative AI in empowering teams to accomplish more in less time.

Using Amazon Rekognition to improve bicycle safety

To better protect themselves, many cyclists are starting to ride with cameras mounted to the front or back of their bicycles. In this blog post, I demonstrate a machine learning solution that cyclists can use to better identify close calls. The solution uses Amazon Rekognition to detect vehicles in recorded bike ride videos, then analyzes the footage to determine whether any vehicles passed closer to the cyclist than the 3-foot safe distance required by law. It automatically generates video clips of these dangerous passing events, which can then be shared with authorities to help improve cyclist safety.
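
The sketch below shows one way such a check could look with the Rekognition API: detect vehicle and bicycle instances in a video frame and flag frames where their bounding boxes overlap. The overlap test is a crude stand-in for the post's actual distance estimation, the label set is an illustrative assumption, and extracting JPEG frames from the video is assumed to happen upstream.

```python
import boto3

# A minimal sketch, not the post's full pipeline: frame extraction from the
# ride video (e.g., with OpenCV or ffmpeg) is assumed to have already happened.
rekognition = boto3.client("rekognition", region_name="us-east-1")

VEHICLE_LABELS = {"Car", "Truck", "Bus", "Vehicle"}  # illustrative subset

def boxes_overlap(a: dict, b: dict) -> bool:
    """True if two Rekognition bounding boxes (ratios of frame size) overlap."""
    return not (
        a["Left"] + a["Width"] < b["Left"]
        or b["Left"] + b["Width"] < a["Left"]
        or a["Top"] + a["Height"] < b["Top"]
        or b["Top"] + b["Height"] < a["Top"]
    )

def is_close_pass(frame_jpeg: bytes) -> bool:
    """Flag a frame where a detected vehicle's box overlaps the bicycle's box."""
    labels = rekognition.detect_labels(
        Image={"Bytes": frame_jpeg}, MinConfidence=70
    )["Labels"]
    vehicles, bikes = [], []
    for label in labels:
        for inst in label.get("Instances", []):
            if label["Name"] in VEHICLE_LABELS:
                vehicles.append(inst["BoundingBox"])
            elif label["Name"] == "Bicycle":
                bikes.append(inst["BoundingBox"])
    # Overlapping boxes are a crude proxy for "within 3 feet"; a real
    # solution would calibrate pixel distance against known object sizes.
    return any(boxes_overlap(v, b) for v in vehicles for b in bikes)
```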

Use language embeddings for zero-shot classification and semantic search with Amazon Bedrock

In this post, we explore what language embeddings are and how they can be used to enhance your application. We show how, by using the properties of embeddings, we can implement a real-time zero-shot classifier and add powerful features such as semantic search.
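
To make the idea concrete, here is a minimal sketch of an embedding-based zero-shot classifier: embed each candidate label once, embed the incoming text, and pick the label with the highest cosine similarity. The Titan model ID and label set are assumptions for illustration; semantic search works the same way, ranking documents instead of labels against the query embedding.

```python
import json
import boto3
import numpy as np

# Assumes access to an Amazon Titan embedding model in your account;
# the model ID and labels below are illustrative.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")
EMBED_MODEL_ID = "amazon.titan-embed-text-v2:0"

def embed(text: str) -> np.ndarray:
    body = json.dumps({"inputText": text})
    resp = bedrock.invoke_model(modelId=EMBED_MODEL_ID, body=body)
    return np.array(json.loads(resp["body"].read())["embedding"])

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Zero-shot classification: the label whose embedding is closest wins.
labels = ["billing question", "technical issue", "account cancellation"]
label_vecs = {label: embed(label) for label in labels}

def classify(text: str) -> str:
    vec = embed(text)
    return max(labels, key=lambda label: cosine(vec, label_vecs[label]))

print(classify("My invoice shows a charge I don't recognize"))
```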

Fine-tune LLMs with synthetic data for context-based Q&A using Amazon Bedrock

In this post, we explore how to use Amazon Bedrock to generate synthetic training data to fine-tune an LLM. Additionally, we provide concrete evaluation results that showcase the power of synthetic data in fine-tuning when data is scarce.
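
As a hedged sketch of the generation step, the snippet below asks a Bedrock model to produce a question-answer pair grounded in a context passage and writes the result as a JSONL training record. The model ID and the prompt/completion record format are assumptions; check the Bedrock model customization documentation for the format your target model expects.

```python
import json
import boto3

# Illustrative only: model ID and training-record format are assumptions.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")
MODEL_ID = "anthropic.claude-3-sonnet-20240229-v1:0"  # placeholder model

def synthesize_qa(context: str) -> dict:
    """Ask a teacher model for one question-answer pair grounded in context."""
    prompt = (
        "Write one question a user might ask about the passage below, then "
        "answer it using only the passage. Reply as JSON with keys "
        f'"question" and "answer".\n\nPassage:\n{context}'
    )
    resp = bedrock.converse(
        modelId=MODEL_ID,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
        inferenceConfig={"maxTokens": 512, "temperature": 0.8},
    )
    # Production code would validate and retry on malformed JSON output.
    return json.loads(resp["output"]["message"]["content"][0]["text"])

contexts = ["Amazon Bedrock is a fully managed service for foundation models."]
with open("train.jsonl", "w") as f:
    for ctx in contexts:
        qa = synthesize_qa(ctx)
        record = {"prompt": f"{ctx}\n\n{qa['question']}", "completion": qa["answer"]}
        f.write(json.dumps(record) + "\n")
```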

Achieve ~2x speed-up in LLM inference with Medusa-1 on Amazon SageMaker AI

Researchers developed Medusa, a framework that speeds up LLM inference by adding extra decoding heads that predict multiple tokens simultaneously. This post demonstrates how to use Medusa-1, the first version of the framework, to speed up an LLM by fine-tuning it on Amazon SageMaker AI, and confirms the speedup with a deployment and a simple load test. Medusa-1 achieves an inference speedup of around 2x without sacrificing model quality, with the exact improvement varying with model size and the data used; on our sample dataset, we observed a 1.8x speedup.
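
For intuition, here is a toy, self-contained sketch of the draft-and-verify idea behind Medusa-style decoding (not the actual framework): cheap extra heads guess several future tokens, and a single verification pass of the base model keeps only the prefix it agrees with, so output quality is preserved while fewer expensive passes are needed.

```python
# Toy sketch of Medusa-style draft-and-verify decoding. Everything here is
# illustrative: the "base model" is a deterministic stub, and the "heads"
# are simulated so that their last guess is usually wrong.

def base_model_next(tokens: list[int]) -> int:
    """Stand-in for one expensive base-model decoding step."""
    return (tokens[-1] * 31 + 7) % 100

def medusa_heads_draft(tokens: list[int], k: int = 3) -> list[int]:
    """Stand-in for the extra heads: cheaply guess the next k tokens."""
    draft, ctx = [], list(tokens)
    for i in range(k):
        # Simulate well-trained heads that guess right except at the end.
        guess = base_model_next(ctx) if i < k - 1 else 0
        draft.append(guess)
        ctx.append(guess)
    return draft

def verify_draft(tokens: list[int], draft: list[int]) -> list[int]:
    """Stand-in for ONE base-model forward pass that scores every drafted
    position in parallel and accepts tokens up to the first mismatch."""
    accepted, ctx = [], list(tokens)
    for guess in draft:
        actual = base_model_next(ctx)
        accepted.append(actual)  # the verified (true) token is always kept
        ctx.append(actual)
        if actual != guess:
            break  # heads diverged: discard the rest of the draft
    return accepted

def generate(prompt: list[int], n_tokens: int) -> tuple[list[int], int]:
    tokens, forward_passes = list(prompt), 0
    while len(tokens) - len(prompt) < n_tokens:
        draft = medusa_heads_draft(tokens)
        forward_passes += 1  # one pass verifies the whole draft at once
        tokens += verify_draft(tokens, draft)
    return tokens[len(prompt):len(prompt) + n_tokens], forward_passes

out, passes = generate([1, 2, 3], 12)
# Plain decoding would need 12 passes; drafting accepts several per pass.
print(f"generated {len(out)} tokens in {passes} forward passes")
```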