Building Natural Language and LLM Pipelines

Build production-grade RAG, tool contracts, and context engineering with Haystack and LangGraph...

Laura Funderburk

$90.51 (list price $112.95)

Paperback

Language:   English
Publisher:   Packt Publishing Limited
Publication Date:   30 December 2025
Stop LLM applications from breaking in production. Build deterministic pipelines, enforce strict tool contracts, engineer high-signal context for RAG, and orchestrate resilient multi-agent workflows using two foundational frameworks: Haystack for pipelines and LangGraph for low-level agent orchestration.

Free with your book: DRM-free PDF version + access to Packt's next-gen Reader*

Key Features

- Design reproducible LLM pipelines using typed components and strict tool contracts
- Build resilient multi-agent systems with LangGraph and modular microservices
- Evaluate and monitor pipeline performance with Ragas and Weights & Biases

Book Description

Modern LLM applications often break in production due to brittle pipelines, loose tool definitions, and noisy context. This book shows you how to build production-ready, context-aware systems using Haystack and LangGraph. You’ll learn to design deterministic pipelines with strict tool contracts and deploy them as microservices. Through structured context engineering, you’ll orchestrate reliable agent workflows and move beyond simple prompt-based interactions.
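
As a hedged illustration of what a strict tool contract can mean in practice, here is a minimal sketch in plain Python using Pydantic; the WebSearchArgs schema and run_web_search function are hypothetical names, not the book's own code, and the sketch assumes Pydantic is installed.

# Illustrative sketch only: a "strict tool contract" expressed as a typed schema.
# WebSearchArgs and run_web_search are hypothetical names, not taken from the book.
from pydantic import BaseModel, Field, ValidationError


class WebSearchArgs(BaseModel):
    """Arguments the model must supply to call the (hypothetical) search tool."""
    query: str = Field(min_length=3, description="Plain-text search query")
    max_results: int = Field(default=5, ge=1, le=20)


def run_web_search(raw_args: dict) -> list[str]:
    """Validate model-produced arguments at the tool boundary; reject bad calls early."""
    try:
        args = WebSearchArgs(**raw_args)
    except ValidationError as err:
        # Surface a structured, inspectable error instead of failing deep inside the tool.
        raise ValueError(f"Tool call rejected: {err}") from err
    # A real implementation would call a search backend here.
    return [f"placeholder result for {args.query!r}"] * args.max_results

Validating arguments at the tool boundary like this is one way to make failures deterministic and observable rather than silent.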

You'll start by understanding LLM behavior—tokens, embeddings, and transformer models—and see how prompt engineering has evolved into a full context engineering discipline. Then, you'll build retrieval-augmented generation (RAG) pipelines with retrievers, rankers, and custom components using Haystack’s graph-based architecture. You’ll also create knowledge graphs, synthesize unstructured data, and evaluate system behavior using Ragas and Weights & Biases. In LangGraph, you’ll orchestrate agents with supervisor-worker patterns, typed state machines, retries, fallbacks, and safety guardrails.
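
To give a flavour of the graph-based pipeline style described above, here is a rough sketch of a minimal retrieval pipeline. It assumes Haystack 2.x component names (InMemoryDocumentStore, InMemoryBM25Retriever, PromptBuilder); the toy document, template, and question are invented for illustration and are not from the book.

# Rough sketch of a graph-style retrieval pipeline, assuming Haystack 2.x APIs.
from haystack import Document, Pipeline
from haystack.components.builders import PromptBuilder
from haystack.components.retrievers.in_memory import InMemoryBM25Retriever
from haystack.document_stores.in_memory import InMemoryDocumentStore

# A toy document store; a real system would index a full corpus.
store = InMemoryDocumentStore()
store.write_documents([Document(content="Haystack pipelines are directed graphs of typed components.")])

template = """Answer using only the context below.
{% for doc in documents %}{{ doc.content }}
{% endfor %}
Question: {{ question }}"""

pipe = Pipeline()
pipe.add_component("retriever", InMemoryBM25Retriever(document_store=store))
pipe.add_component("prompt_builder", PromptBuilder(template=template))
pipe.connect("retriever.documents", "prompt_builder.documents")  # explicit, named edge

question = "What is a Haystack pipeline?"
result = pipe.run({"retriever": {"query": question}, "prompt_builder": {"question": question}})
print(result["prompt_builder"]["prompt"])  # a generator component would normally consume this

In a full RAG pipeline a generator component would be connected after the prompt builder; the explicit, named connections between typed components are what make the pipeline reproducible and testable.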

By the end of the book, you’ll have the skills to design scalable, testable LLM pipelines and multi-agent systems that remain robust as the AI ecosystem evolves.
*Email sign-up and proof of purchase required

What you will learn

- Build structured retrieval pipelines with Haystack
- Apply context engineering to improve agent performance
- Serve pipelines as LangGraph-compatible microservices
- Use LangGraph to orchestrate multi-agent workflows (see the sketch after this list)
- Deploy REST APIs using FastAPI and Hayhooks
- Track cost and quality with Ragas and Weights & Biases
- Implement retries, circuit breakers, and observability
- Design sovereign agents for high-volume local execution
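
As a hedged sketch of what LangGraph orchestration over a typed state looks like, the example below assumes the langgraph package's StateGraph, START, and END API; the AgentState fields and the worker node are hypothetical names, not the book's code.

# Minimal sketch of a typed LangGraph state machine (illustrative, not from the book).
from typing import TypedDict

from langgraph.graph import END, START, StateGraph


class AgentState(TypedDict):
    """Typed state shared by every node in the graph (hypothetical fields)."""
    question: str
    answer: str


def worker(state: AgentState) -> dict:
    # A real worker node would call an LLM or a tool; this one just echoes the question.
    return {"answer": f"draft answer to: {state['question']}"}


builder = StateGraph(AgentState)
builder.add_node("worker", worker)
builder.add_edge(START, "worker")
builder.add_edge("worker", END)
graph = builder.compile()

print(graph.invoke({"question": "How do I keep pipelines deterministic?"}))

Real workflows would add more nodes (a supervisor, tool-calling workers), conditional edges, and retry or fallback logic around each node, but the typed state and explicit edges are what keep the control flow inspectable.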

Who this book is for

This book is for LLM engineers, NLP developers, and data scientists looking to build production-grade pipelines, agentic workflows, or RAG systems. It is also ideal for tech leads aiming to move beyond prototypes to scalable, testable solutions, and for teams modernizing legacy NLP pipelines into orchestration-ready microservices. Proficiency in Python and familiarity with core NLP concepts are recommended.
By:   Laura Funderburk
Imprint:   Packt Publishing Limited
Country of Publication:   United Kingdom
Dimensions:   Height: 235mm, Width: 191mm
ISBN:   9781835467992
ISBN 10:   1835467997
Pages:   338
Publication Date:   30 December 2025
Audience:   General/trade, ELT Advanced
Format:   Paperback
Publisher's Status:   Active

Laura Funderburk is a leading figure in AI and data science, specializing in LLM applications, RAG systems, and agentic workflows. She serves as the developer relations and community lead at AI Makerspace, where she empowers engineers to build production-ready AI through open-source initiatives. With a background as a data scientist and DevOps engineer, Laura brings her skills as a Python developer into her work as an author. She holds a Bachelor of Mathematics from Simon Fraser University, where she was awarded the Terry Fox Gold Medal for courage in adversity. A dedicated mentor, Laura remains committed to teaching and outreach, helping the next generation of engineers master machine learning and AI operations.
