In the era of artificial intelligence, the ability to create intelligent, context-aware text interactions is essential. Large Language Model (LLM)–powered conversational agents are revolutionizing how we interact with technology and access information. This intensive course guides you “A to Z” through developing robust text conversational agents using two of today’s most powerful tools: LangChain and the OpenAI API (with coverage of additional LLMs). All hands-on work is in Python.

Across four theory-practice sessions and a final intensive workshop, you will explore LLM fundamentals, understand how LangChain orchestrates complex components (memory, chains, tools), and integrate the OpenAI API to bring your agents to life. You will learn to give your agents memory of past conversations, access to external knowledge (RAG), and the ability to act through external tools. The course is highly practical, with code examples and exercises designed to reinforce your learning at each stage.

The course culminates in a 3-hour workshop challenge where you apply everything you have learned to build a complete conversational agent under the instructor’s guidance. A certificate is available for attendees who complete all the challenges, with a score from 0 to 100 points. If you are a developer, data scientist, or simply someone interested in building intelligent text applications with the latest tools, this course will give you the practical skills and knowledge you need.
You will learn what text-based conversational agents are, how they interact using natural language, and how they transform customer service by automating processes and delivering fast responses.
You will discover what LangChain is, its main components, and how this Python library makes it easy to integrate language models and external tools to build powerful conversational agents.
You will explore how the OpenAI API works, GPT’s text-generation capabilities, key parameters, and how to use it to get intelligent responses in your applications.
Step by step, you will create your first basic text conversational agent using LangChain and the OpenAI API, learning the key structure and how to interact with users via text.
You will learn essential techniques for designing effective prompts, improving the accuracy and quality of your agents’ responses through clear, specific instructions.
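One habit the course drills is assembling prompts from explicit parts (role, task, constraints) rather than a vague one-liner. A stdlib-only sketch of the idea, with hypothetical names:

```python
# Building a specific, structured prompt from its parts (stdlib only).
def build_prompt(role: str, task: str, constraints: list[str]) -> str:
    """Assemble a clear prompt: who the model is, what to do, and how."""
    lines = [f"You are {role}.", f"Task: {task}", "Constraints:"]
    lines += [f"- {c}" for c in constraints]
    return "\n".join(lines)


prompt = build_prompt(
    role="a customer-support agent for an online store",
    task="Explain the return policy for unworn shoes",
    constraints=["Answer in at most 3 sentences", "Use a friendly tone"],
)
```

Compare the result with the vague alternative ("Tell me about returns") and the effect on answer quality is usually immediate.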
You will discover how to manage context and memory in LangChain, enabling your agent to maintain coherent, relevant conversations by remembering key information across interactions.
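Under the hood, conversational memory is just the growing message history replayed before each new turn. A stdlib-only sketch of a rolling buffer (LangChain's memory classes wrap the same idea):

```python
# A rolling conversation buffer: keep the last k exchanges and replay
# them before each new user message (stdlib only).
from collections import deque


class BufferMemory:
    """Keep the most recent messages and prepend them to each new prompt."""

    def __init__(self, k: int = 5):
        self.turns = deque(maxlen=2 * k)  # each exchange = user + assistant msg

    def add(self, role: str, content: str):
        self.turns.append({"role": role, "content": content})

    def as_messages(self, new_user_input: str):
        # History first, then the fresh user message, ready for a chat API call
        return list(self.turns) + [{"role": "user", "content": new_user_input}]


memory = BufferMemory(k=2)
memory.add("user", "My name is Ada.")
memory.add("assistant", "Nice to meet you, Ada!")
msgs = memory.as_messages("What is my name?")
```

Because the buffer contains "My name is Ada.", the model can answer the follow-up question coherently; with no memory it could not.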
You will explore how to build and connect Chains in LangChain, chaining multiple components to perform complex tasks in a structured, scalable, and efficient way in your agents.
You will learn about the LangChain Tools system: how to wrap APIs, databases, or Python functions as actions the agent can invoke dynamically, allowing it to reason and execute complex tasks.
You will learn to combine embeddings, semantic search, and text generation to implement RAG (Retrieval-Augmented Generation): retrieving relevant snippets and feeding them to GPT to deliver precise, cited, up-to-date answers.
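The whole RAG loop fits in a toy sketch: embed, retrieve the nearest snippet by cosine similarity, and stuff it into the prompt. Here the vectors are hand-made stand-ins; a real system would produce them with an embedding model and store them in a vector database.

```python
# Toy end-to-end RAG skeleton (stdlib only): retrieve by cosine
# similarity, then build a context-grounded prompt. Vectors are
# hand-made stand-ins for real embeddings.
import math


def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)


docs = [
    ("Refunds are processed within 5 business days.", [0.9, 0.1, 0.0]),
    ("Our office is open Monday to Friday.",          [0.1, 0.8, 0.1]),
]
query_vec = [0.85, 0.15, 0.0]  # pretend embedding of "How long do refunds take?"

# Retrieval: pick the document whose embedding is closest to the query
best_text, _ = max(docs, key=lambda d: cosine(query_vec, d[1]))

# Augmented generation: ground the model in the retrieved snippet
prompt = (
    f"Answer using only this context:\n{best_text}\n\n"
    "Question: How long do refunds take?"
)
```

The refund document wins the similarity comparison, so the prompt handed to GPT already contains the fact needed for a precise, up-to-date answer.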
You will deepen your understanding of ChromaDB: bulk loading, vector indexing, metadata, filters, and persistence modes. You will configure efficient collections to scale queries over millions of documents in your agents.
You will explore how to inject representative examples into prompts via Few-Shot: dynamic case selection, style control, and guiding the model’s output with just a handful of examples instead of additional training.
You will learn to use OutputParser and StructuredOutputParser to transform GPT’s free-form response into JSON, Pydantic, or other formats, ensuring reliable, typed, easy-to-consume integrations.
You will dive into embedding generation and selection, model parameters (temperature, top-p), and iterative prompt-tuning to achieve more precise, coherent, and controlled responses.
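What temperature actually does is rescale the logits before the softmax, so low values sharpen the distribution (near-deterministic output) and high values flatten it (more diverse sampling). A stdlib-only sketch:

```python
# How temperature reshapes the token distribution (stdlib only).
import math


def softmax(logits, temperature=1.0):
    """Softmax over temperature-scaled logits."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]


logits = [2.0, 1.0, 0.1]  # raw model scores for three candidate tokens
cold = softmax(logits, temperature=0.2)  # sharp: top token dominates
hot = softmax(logits, temperature=2.0)   # flat: more diverse sampling
```

Top-p works on the resulting probabilities instead, keeping only the smallest set of tokens whose cumulative probability exceeds p; the two parameters are usually tuned together.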
You will review patterns like hierarchical agents, routing, automatic tool-calling, streaming, custom callbacks, and error handling to boost your agents’ robustness and capabilities.
We will integrate several additional LLMs (Mistral AI, Claude, Gemini, DeepSeek, and Grok) and cover how to run models locally with Ollama.
We will review approaches to monitoring and fine-tuning our agents, as well as best practices for testing them.
We will explain different deployment options—web, Slack, WhatsApp, and Facebook Messenger—and discuss architectures for production-level agent availability.
Copyright © 2025 IA Chile. All Rights Reserved.