The Developing Generative AI Applications on AWS course introduces participants to building generative AI applications with AWS services such as Amazon SageMaker, AWS Lambda, and other AI-focused tools for delivering real-time solutions.
This course is designed to introduce generative artificial intelligence (AI) to software developers interested in using large language models (LLMs) without fine-tuning.
The course provides an overview of generative AI, guidance on planning a generative AI project, an introduction to Amazon Bedrock, the foundations of prompt engineering, and the architecture patterns for building generative AI applications with Amazon Bedrock and LangChain.
In this course, you will learn to:
• Describe generative AI and how it aligns with machine learning
• Define the importance of generative AI and explain its potential risks and benefits
• Identify business value from generative AI use cases
• Discuss the technical foundations and key terminology for generative AI
• Explain the steps for planning a generative AI project
• Identify some of the risks and mitigations when using generative AI
• Understand how Amazon Bedrock works
• Familiarize yourself with basic concepts of Amazon Bedrock
• Recognize the benefits of Amazon Bedrock
• List typical use cases for Amazon Bedrock
• Describe the typical architecture associated with an Amazon Bedrock solution
• Understand the cost structure of Amazon Bedrock
• Implement a demonstration of Amazon Bedrock in the AWS Management Console
• Define prompt engineering and apply general best practices when interacting with foundation models (FMs)
• Identify the basic types of prompt techniques, including zero-shot and few-shot learning (a minimal Bedrock prompt sketch follows this list)
• Apply advanced prompt techniques when necessary for your use case
• Identify which prompt techniques are best suited for specific models
• Identify potential prompt misuses
• Analyze potential bias in FM responses and design prompts that mitigate that bias
• Identify the components of a generative AI application and how to customize an FM
• Describe Amazon Bedrock foundation models, inference parameters, and key Amazon Bedrock APIs
• Identify Amazon Web Services (AWS) offerings that help with monitoring, securing, and governing your Amazon Bedrock applications
• Describe how to integrate LangChain with LLMs, prompt templates, chains, chat models, text embedding models, document loaders, retrievers, and Agents for Amazon Bedrock (a minimal LangChain sketch follows this list)
• Describe architecture patterns that you can implement with Amazon Bedrock for building generative AI applications
• Apply the concepts to build and test sample use cases that use the various Amazon Bedrock models, LangChain, and the Retrieval Augmented Generation (RAG) approach (a toy RAG sketch follows this list)
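To make the prompt-engineering and Bedrock API objectives above more concrete, here is a minimal sketch of a few-shot prompt sent through the Amazon Bedrock Converse API with explicit inference parameters. It is an illustration rather than course material, and it assumes a recent boto3, configured AWS credentials, and model access already granted in the Bedrock console; the model ID, region, and example prompt are placeholders.

```python
# Minimal sketch: few-shot prompting with the Amazon Bedrock Converse API.
# Assumes a recent boto3, configured AWS credentials, and model access
# already granted in the Bedrock console; the model ID below is a placeholder.
import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# A few-shot prompt: two worked examples followed by the real question.
few_shot_prompt = (
    "Classify the sentiment of each review as positive or negative.\n\n"
    "Review: The checkout process was quick and painless.\nSentiment: positive\n\n"
    "Review: My order arrived two weeks late and damaged.\nSentiment: negative\n\n"
    "Review: Support resolved my issue within minutes.\nSentiment:"
)

response = bedrock_runtime.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # placeholder model ID
    messages=[{"role": "user", "content": [{"text": few_shot_prompt}]}],
    # Inference parameters control randomness and response length.
    inferenceConfig={"temperature": 0.2, "maxTokens": 50, "topP": 0.9},
)

print(response["output"]["message"]["content"][0]["text"])
```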
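The LangChain integration objective can be previewed with a similarly small sketch that chains a prompt template to a Bedrock chat model. It assumes the langchain-aws and langchain-core packages and the same Bedrock model access as above; the model ID and prompt text are illustrative placeholders.

```python
# Minimal sketch: a LangChain prompt template chained to an Amazon Bedrock
# chat model. Assumes the langchain-aws and langchain-core packages are
# installed and Bedrock model access is configured; IDs are placeholders.
from langchain_aws import ChatBedrock
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

llm = ChatBedrock(
    model_id="anthropic.claude-3-haiku-20240307-v1:0",  # placeholder model ID
    model_kwargs={"temperature": 0.2, "max_tokens": 200},
)

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a concise assistant for AWS documentation questions."),
        ("human", "Summarize what {service} is used for in two sentences."),
    ]
)

# Compose template -> model -> plain-string output into one chain.
chain = prompt | llm | StrOutputParser()
print(chain.invoke({"service": "Amazon Bedrock"}))
```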
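Finally, the Retrieval Augmented Generation objective can be sketched as a toy in-memory flow: embed a few documents, retrieve the most relevant ones, and stuff them into the prompt. This assumes the langchain-aws, langchain-core, langchain-community, and faiss-cpu packages plus Bedrock access to a Titan embeddings model and a Claude chat model; all model IDs and documents are placeholders.

```python
# Minimal sketch: toy Retrieval Augmented Generation (RAG) with Bedrock
# embeddings, an in-memory FAISS index, and a Bedrock chat model. Assumes
# langchain-aws, langchain-core, langchain-community, and faiss-cpu are
# installed; model IDs and the tiny document set are placeholders.
from langchain_aws import BedrockEmbeddings, ChatBedrock
from langchain_community.vectorstores import FAISS
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

# 1. Embed a handful of documents and index them for similarity search.
embeddings = BedrockEmbeddings(model_id="amazon.titan-embed-text-v2:0")
docs = [
    "Amazon Bedrock provides API access to foundation models from several providers.",
    "Inference parameters such as temperature and top-p control model randomness.",
    "Retrieval Augmented Generation grounds model answers in retrieved documents.",
]
retriever = FAISS.from_texts(docs, embeddings).as_retriever(search_kwargs={"k": 2})

# 2. Build a prompt that injects the retrieved context alongside the question.
prompt = ChatPromptTemplate.from_template(
    "Answer the question using only the context.\n\nContext:\n{context}\n\nQuestion: {question}"
)
llm = ChatBedrock(model_id="anthropic.claude-3-haiku-20240307-v1:0")

# 3. Retrieve, stuff the context into the prompt, and generate an answer.
question = "What does RAG add to a generative AI application?"
context = "\n".join(d.page_content for d in retriever.invoke(question))
answer = (prompt | llm | StrOutputParser()).invoke(
    {"context": context, "question": question}
)
print(answer)
```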
Course outline:
1) Introduction to Generative AI: Art of the Possible
2) Planning a Generative AI Project
3) Getting Started with Amazon Bedrock
4) Foundations of Prompt Engineering
5) Amazon Bedrock Application Components
6) Amazon Bedrock Foundation Models
7) LangChain
8) Architecture Patterns