Cover

LLM DESIGN PATTERNS

PACKT PUBLISHING
05 / 2025
9781836207030
English

Synopsis

Explore reusable design patterns for LLM application development, including data-centric approaches, model development, model fine-tuning, RAG, and advanced prompting techniques.

Key Features:
- Learn comprehensive LLM development, including data prep, training pipelines, and optimization
- Explore advanced prompting techniques, such as chain-of-thought, tree-of-thought, RAG, and AI agents
- Implement evaluation metrics, interpretability, and bias detection for fair, reliable models
- Print or Kindle purchase includes a free PDF eBook

Book Description:
This practical guide for AI professionals enables you to build on the power of design patterns to develop robust, scalable, and efficient large language models (LLMs). Written by a global AI expert and popular author driving standards and innovation in Generative AI, security, and strategy, this book covers the end-to-end lifecycle of LLM development and introduces reusable architectural and engineering solutions to common challenges in data handling, model training, evaluation, and deployment.

You'll learn to clean, augment, and annotate large-scale datasets, architect modular training pipelines, and optimize models using hyperparameter tuning, pruning, and quantization. The chapters help you explore regularization, checkpointing, fine-tuning, and advanced prompting methods, such as reason-and-act, as well as implement reflection, multi-step reasoning, and tool use for intelligent task completion. The book also highlights Retrieval-Augmented Generation (RAG), graph-based retrieval, interpretability, fairness, and RLHF, culminating in the creation of agentic LLM systems.

By the end of this book, you'll be equipped with the knowledge and tools to build next-generation LLMs that are adaptable, efficient, safe, and aligned with human values.

What You Will Learn:
- Implement efficient data prep techniques, including cleaning and augmentation
- Design scalable training pipelines with tuning, regularization, and checkpointing
- Optimize LLMs via pruning, quantization, and fine-tuning
- Evaluate models with metrics, cross-validation, and interpretability
- Understand fairness and detect bias in outputs
- Develop RLHF strategies to build secure, agentic AI systems

Who this book is for:
This book is essential for AI engineers, architects, data scientists, and software engineers responsible for developing and deploying AI systems powered by large language models. A basic understanding of machine learning concepts and experience in Python programming are a must.

Table of Contents:
- Introduction to LLM Design Patterns
- Data Cleaning for LLM Training
- Data Augmentation
- Handling Large Datasets for LLM Training
- Data Versioning
- Dataset Annotation and Labeling
- Training Pipeline
- Hyperparameter Tuning
- Regularization
- Checkpointing and Recovery
- Fine-Tuning
- Model Pruning
- Quantization
- Evaluation Metrics
- Cross-Validation
- Interpretability
- Fairness and Bias Detection
- Adversarial Robustness
- Reinforcement Learning from Human Feedback
- Chain-of-Thought Prompting
- Tree-of-Thoughts Prompting
- Reasoning and Acting
- Reasoning WithOut Observation
- Reflection Techniques
- Automatic Multi-Step Reasoning and Tool Use
- Retrieval-Augmented Generation
- Graph-Based RAG
- Advanced RAG
- Evaluating RAG Systems
- Agentic Patterns