€72.99
incl. VAT
Free shipping*
Ships in 1-2 weeks
  • Paperback

Product Description
Minimize AI hallucinations and build accurate, custom generative AI pipelines with RAG using embedded vector databases and integrated human feedback. Purchase of the print or Kindle book includes a free eBook in PDF format.

Key Features:
- Implement RAG's traceable outputs, linking each response to its source document to build reliable multimodal conversational agents
- Deliver accurate generative AI models in pipelines integrating RAG, real-time human feedback improvements, and knowledge graphs
- Balance cost and performance between dynamic retrieval datasets and fine-tuning static data

Book Description:
RAG-Driven Generative AI provides a roadmap for building effective LLM, computer vision, and generative AI systems that balance performance and cost. This book offers a detailed exploration of RAG and how to design, manage, and control multimodal AI pipelines. By connecting outputs to traceable source documents, RAG improves output accuracy and contextual relevance, offering a dynamic approach to managing large volumes of information.

This AI book shows you how to build a RAG framework, providing practical knowledge on vector stores, chunking, indexing, and ranking. You'll discover techniques to optimize your project's performance and better understand your data, including using adaptive RAG and human feedback to refine retrieval accuracy, balancing RAG with fine-tuning, implementing dynamic RAG to enhance real-time decision-making, and visualizing complex data with knowledge graphs. You'll work with a hands-on blend of frameworks like LlamaIndex and Deep Lake, vector databases such as Pinecone and Chroma, and models from Hugging Face and OpenAI. By the end of this book, you will have acquired the skills to implement intelligent solutions, keeping you competitive in fields from production to customer service across any project.
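To give a flavor of the retrieval step the description centers on, here is a minimal sketch of chunking, embedding, and similarity ranking in plain Python. The letter-frequency "embedding" is a toy stand-in for a real model (such as one from Hugging Face or OpenAI), and the plain list stands in for a vector store like Pinecone or Deep Lake; the function names are illustrative, not from the book.

```python
import math

def chunk(text, size=40):
    # Split a document into fixed-size word chunks (a simple chunking strategy).
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text):
    # Toy "embedding": a 26-dim letter-frequency vector (stand-in for a real model).
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a, b):
    # Cosine similarity between two vectors; 0.0 if either is all zeros.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=1):
    # Rank chunks by similarity to the query and return the top k.
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]
```

In a production pipeline, each chunk would also carry an identifier for its source document, which is what makes RAG outputs traceable.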
What You Will Learn:
- Scale RAG pipelines to handle large datasets efficiently
- Employ techniques that minimize hallucinations and ensure accurate responses
- Implement indexing techniques to improve AI accuracy with traceable and transparent outputs
- Customize and scale RAG-driven generative AI systems across domains
- Find out how to use Deep Lake and Pinecone for efficient and fast data retrieval
- Control and build robust generative AI systems grounded in real-world data
- Combine text and image data for richer, more informative AI responses

Who this book is for:
This book is ideal for data scientists, AI engineers, machine learning engineers, and MLOps engineers. If you are a solutions architect, software developer, product manager, or project manager looking to enhance the decision-making process of building RAG applications, then you'll find this book useful.

Table of Contents:
- Why Retrieval Augmented Generation (RAG)?
- RAG Embeddings Vector Stores with Activeloop and OpenAI
- Index-based RAG with LlamaIndex and LangChain
- Multimodal Modular RAG with Pinecone
- Boosting RAG Performance with Expert Human Feedback
- All in One with Meta RAG
- Organizing RAG with LlamaIndex Knowledge Graphs
- Exploring the Scaling Limits of RAG
- Empowering AI Models: Fine-tuning RAG Data and Human Feedback
- Building the RAG Pipeline from Data Collection to Generative AI
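The traceable, transparent outputs listed above come from grounding the generation step in cited sources. As a minimal sketch (the prompt format and helper name are hypothetical, not taken from the book), retrieved chunks can be assembled into a prompt that forces the model to cite its sources by id:

```python
def build_prompt(query, retrieved):
    # Assemble a grounded prompt: each retrieved chunk is labeled with its
    # source id so the generated answer can be traced back to its documents.
    context = "\n".join(f"[{src}] {text}" for src, text in retrieved)
    return (
        "Answer using only the sources below and cite them by id.\n"
        f"Sources:\n{context}\n"
        f"Question: {query}\n"
    )
```

The resulting string would be sent to a generative model (e.g. via the OpenAI or Hugging Face APIs); the source ids in the response are what link each answer back to its documents.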
Note: This item can only be shipped to a delivery address in Germany.
About the Author
Denis Rothman graduated from Sorbonne University and Paris-Diderot University, and as a student, he wrote and registered a patent for one of the earliest word2vector embedding and word piece tokenization solutions. He started a company focused on deploying AI and went on to author one of the first AI cognitive NLP chatbots, applied as a language teaching tool for clients including Moët et Chandon (part of LVMH). Denis rapidly became an expert in explainable AI, incorporating interpretable, acceptance-based explanation data and interfaces into solutions implemented for major corporate projects in the aerospace, apparel, and supply chain sectors. His core belief is that you only really know something once you have taught somebody how to do it.