Probabilistic Graphical Models: Principles and Techniques

Hardback | $300

A general framework for constructing and using probabilistic models of complex systems that would enable a computer to use available information for making decisions.

Most tasks require a person or an automated system to reason: to reach conclusions based on available information. The framework of probabilistic graphical models, presented in this book, provides a general approach for this task. The approach is model-based, allowing interpretable models to be constructed and then manipulated by reasoning algorithms. These models can also be learned automatically from data, allowing the approach to be used in cases where manually constructing a model is difficult or even impossible. Because uncertainty is an inescapable aspect of most real-world applications, the book focuses on probabilistic models, which make the uncertainty explicit and provide models that are more faithful to reality.
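
To make that idea concrete, here is a minimal sketch (not drawn from the book; all probabilities are made up) of the classic Rain/Sprinkler/WetGrass Bayesian network. The joint distribution factorizes along a small directed graph, and reasoning about a query such as "how likely is rain, given that the grass is wet?" is just arithmetic over that factorization:

```python
# Toy Bayesian network Rain -> WetGrass <- Sprinkler, with illustrative
# (invented) probabilities. The joint distribution factorizes as
# P(R, S, W) = P(R) * P(S) * P(W | R, S).
from itertools import product

P_rain = {True: 0.2, False: 0.8}           # P(Rain)
P_sprinkler = {True: 0.1, False: 0.9}      # P(Sprinkler)
P_wet = {                                  # P(WetGrass=true | Rain, Sprinkler)
    (True, True): 0.99, (True, False): 0.9,
    (False, True): 0.8, (False, False): 0.05,
}

def joint(rain, sprinkler, wet):
    """P(rain, sprinkler, wet) via the network's factorization."""
    p_w = P_wet[(rain, sprinkler)]
    return P_rain[rain] * P_sprinkler[sprinkler] * (p_w if wet else 1 - p_w)

# Query: P(Rain | WetGrass = true), by summing out the Sprinkler variable.
num = sum(joint(True, s, True) for s in (True, False))
den = sum(joint(r, s, True) for r, s in product((True, False), repeat=2))
print(f"P(Rain | WetGrass) = {num / den:.3f}")
```

Running the sketch prints P(Rain | WetGrass) ≈ 0.645, up from the prior of 0.2: observing wet grass makes rain substantially more likely, exactly the kind of conclusion-from-evidence reasoning the blurb describes.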

Probabilistic Graphical Models discusses a variety of models, spanning Bayesian networks, undirected Markov networks, discrete and continuous models, and extensions to deal with dynamical systems and relational data. For each class of models, the text describes the three fundamental cornerstones: representation, inference, and learning, presenting both basic concepts and advanced techniques. Finally, the book considers the use of the proposed framework for causal reasoning and decision making under uncertainty. The main text in each chapter provides the detailed technical development of the key ideas. Most chapters also include boxes with additional material: skill boxes, which describe techniques; case study boxes, which discuss empirical cases related to the approach described in the text, including applications in computer vision, robotics, natural language understanding, and computational biology; and concept boxes, which present significant concepts drawn from the material in the chapter. Instructors (and readers) can group chapters in various combinations, from core topics to more technically advanced material, to suit their particular needs.
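
As a companion sketch for the learning cornerstone (again illustrative rather than the book's own code, with an invented toy dataset), maximum-likelihood estimation of a Bayesian network's parameters from fully observed data reduces to counting how often each child value co-occurs with each assignment to its parents:

```python
# Maximum-likelihood parameter estimation for the two-node network
# Rain -> WetGrass from fully observed (invented) data: the MLE of each
# conditional probability is just a ratio of counts.
from collections import Counter

# Each record is a (rain, wet_grass) observation.
data = [(True, True), (True, True), (True, False),
        (False, False), (False, False), (False, True), (False, False)]

parent_counts = Counter(rain for rain, _ in data)
joint_counts = Counter(data)

p_rain = parent_counts[True] / len(data)   # MLE for P(Rain)
p_wet_given_rain = {                       # MLE for P(WetGrass=true | Rain)
    r: joint_counts[(r, True)] / parent_counts[r] for r in (True, False)
}
print(f"P(Rain) = {p_rain:.2f}")
print(f"P(WetGrass | Rain=T) = {p_wet_given_rain[True]:.2f}, "
      f"P(WetGrass | Rain=F) = {p_wet_given_rain[False]:.2f}")
```

The same count-and-normalize pattern, suitably generalized, underlies the parameter-estimation material in Part III; the book then builds up to the harder cases of partially observed data and structure learning.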

By:   Daphne Koller, Nir Friedman
Series edited by:  
Imprint:   MIT Press
Country of Publication:   United States
Dimensions:   Height: 229mm,  Width: 203mm,  Spine: 43mm
Weight:   2.132kg
ISBN:   9780262013192
ISBN 10:   0262013193
Series:   Adaptive Computation and Machine Learning series
Pages:   1270
Publication Date:   31 July 2009
Language:   English
Recommended Age:   From 18 years
Audience:   College/higher education (primary)
Format:   Hardback
Publisher's Status:   Active
ABRIDGED CONTENTS: Part I: Representation / Part II: Inference / Part III: Learning / Part IV: Actions and Decisions

COMPLETE TABLE OF CONTENTS:
Acknowledgments / List of Figures / List of Algorithms / List of Boxes
1 Introduction: 1.1 Motivation / 1.2 Structured Probabilistic Models / 1.3 Overview and Roadmap / 1.4 Historical Notes
2 Foundations: 2.1 Probability Theory / 2.2 Graphs / 2.3 Relevant Literature / 2.4 Exercises
Part I: Representation
3 The Bayesian Network Representation: 3.1 Exploiting Independence Properties / 3.2 Bayesian Networks / 3.3 Independencies in Graphs / 3.4 From Distributions to Graphs / 3.5 Summary / 3.6 Relevant Literature / 3.7 Exercises
4 Undirected Graphical Models: 4.1 The Misconception Example / 4.2 Parameterization / 4.3 Markov Network Independencies / 4.4 Parameterization Revisited / 4.5 Bayesian Networks and Markov Networks / 4.6 Partially Directed Models / 4.7 Summary and Discussion / 4.8 Relevant Literature / 4.9 Exercises
5 Local Probabilistic Models: 5.1 Tabular CPDs / 5.2 Deterministic CPDs / 5.3 Context-Specific CPDs / 5.4 Independence of Causal Influence / 5.5 Continuous Variables / 5.6 Conditional Bayesian Networks / 5.7 Summary / 5.8 Relevant Literature / 5.9 Exercises
6 Template-Based Representations: 6.1 Introduction / 6.2 Temporal Models / 6.3 Template Variables and Template Factors / 6.4 Directed Probabilistic Models for Object-Relational Domains / 6.5 Undirected Representation / 6.6 Structural Uncertainty / 6.7 Summary / 6.8 Relevant Literature / 6.9 Exercises
7 Gaussian Network Models: 7.1 Multivariate Gaussians / 7.2 Gaussian Bayesian Networks / 7.3 Gaussian Markov Random Fields / 7.4 Summary / 7.5 Relevant Literature / 7.6 Exercises
8 The Exponential Family: 8.1 Introduction / 8.2 Exponential Families / 8.3 Factored Exponential Families / 8.4 Entropy and Relative Entropy / 8.5 Projections / 8.6 Summary / 8.7 Relevant Literature / 8.8 Exercises
Part II: Inference
9 Exact Inference: Variable Elimination: 9.1 Analysis of Complexity / 9.2 Variable Elimination: The Basic Ideas / 9.3 Variable Elimination / 9.4 Complexity and Graph Structure: Variable Elimination / 9.5 Conditioning / 9.6 Inference with Structured CPDs / 9.7 Summary and Discussion / 9.8 Relevant Literature / 9.9 Exercises
10 Exact Inference: Clique Trees: 10.1 Variable Elimination and Clique Trees / 10.2 Message Passing: Sum Product / 10.3 Message Passing: Belief Update / 10.4 Constructing a Clique Tree / 10.5 Summary / 10.6 Relevant Literature / 10.7 Exercises
11 Inference as Optimization: 11.1 Introduction / 11.2 Exact Inference as Optimization / 11.3 Propagation-Based Approximation / 11.4 Propagation with Approximate Messages / 11.5 Structured Variational Approximations / 11.6 Summary and Discussion / 11.7 Relevant Literature / 11.8 Exercises
12 Particle-Based Approximate Inference: 12.1 Forward Sampling / 12.2 Likelihood Weighting and Importance Sampling / 12.3 Markov Chain Monte Carlo Methods / 12.4 Collapsed Particles / 12.5 Deterministic Search Methods / 12.6 Summary / 12.7 Relevant Literature / 12.8 Exercises
13 MAP Inference: 13.1 Overview / 13.2 Variable Elimination for (Marginal) MAP / 13.3 Max-Product in Clique Trees / 13.4 Max-Product Belief Propagation in Loopy Cluster Graphs / 13.5 MAP as a Linear Optimization Problem / 13.6 Using Graph Cuts for MAP / 13.7 Local Search Algorithms / 13.8 Summary / 13.9 Relevant Literature / 13.10 Exercises
14 Inference in Hybrid Networks: 14.1 Introduction / 14.2 Variable Elimination in Gaussian Networks / 14.3 Hybrid Networks / 14.4 Nonlinear Dependencies / 14.5 Particle-Based Approximation Methods / 14.6 Summary and Discussion / 14.7 Relevant Literature / 14.8 Exercises
15 Inference in Temporal Models: 15.1 Inference Tasks / 15.2 Exact Inference / 15.3 Approximate Inference / 15.4 Hybrid DBNs / 15.5 Summary / 15.6 Relevant Literature / 15.7 Exercises
Part III: Learning
16 Learning Graphical Models: Overview: 16.1 Motivation / 16.2 Goals of Learning / 16.3 Learning as Optimization / 16.4 Learning Tasks / 16.5 Relevant Literature
17 Parameter Estimation: 17.1 Maximum Likelihood Estimation / 17.2 MLE for Bayesian Networks / 17.3 Bayesian Parameter Estimation / 17.4 Bayesian Parameter Estimation in Bayesian Networks / 17.5 Learning Models with Shared Parameters / 17.6 Generalization Analysis / 17.7 Summary / 17.8 Relevant Literature / 17.9 Exercises
18 Structure Learning in Bayesian Networks: 18.1 Introduction / 18.2 Constraint-Based Approaches / 18.3 Structure Scores / 18.4 Structure Search / 18.5 Bayesian Model Averaging / 18.6 Learning Models with Additional Structure / 18.7 Summary and Discussion / 18.8 Relevant Literature / 18.9 Exercises
19 Partially Observed Data: 19.1 Foundations / 19.2 Parameter Estimation / 19.3 Bayesian Learning with Incomplete Data / 19.4 Structure Learning / 19.5 Learning Models with Hidden Variables / 19.6 Summary / 19.7 Relevant Literature / 19.8 Exercises
20 Learning Undirected Models: 20.1 Overview / 20.2 The Likelihood Function / 20.3 Maximum (Conditional) Likelihood Parameter Estimation / 20.4 Parameter Priors and Regularization / 20.5 Learning with Approximate Inference / 20.6 Alternative Objectives / 20.7 Structure Learning / 20.8 Summary / 20.9 Relevant Literature / 20.10 Exercises
Part IV: Actions and Decisions
21 Causality: 21.1 Motivation and Overview / 21.2 Causal Models / 21.3 Structural Causal Identifiability / 21.4 Mechanisms and Response Variables / 21.5 Partial Identifiability in Functional Causal Models / 21.6 Counterfactual Queries / 21.7 Learning Causal Models / 21.8 Summary / 21.9 Relevant Literature / 21.10 Exercises
22 Utilities and Decisions: 22.1 Foundations: Maximizing Expected Utility / 22.2 Utility Curves / 22.3 Utility Elicitation / 22.4 Utilities of Complex Outcomes / 22.5 Summary / 22.6 Relevant Literature / 22.7 Exercises
23 Structured Decision Problems: 23.1 Decision Trees / 23.2 Influence Diagrams / 23.3 Backward Induction in Influence Diagrams / 23.4 Computing Expected Utilities / 23.5 Optimization in Influence Diagrams / 23.6 Ignoring Irrelevant Information / 23.7 Value of Information / 23.8 Summary / 23.9 Relevant Literature / 23.10 Exercises
24 Epilogue
A Background Material: A.1 Information Theory / A.2 Convergence Bounds / A.3 Algorithms and Algorithmic Complexity / A.4 Combinatorial Optimization and Search / A.5 Continuous Optimization
Bibliography / Notation Index / Subject Index
(The complete table of contents, the acknowledgments, chapter 1, and the notation index are available from the publisher as sample PDF downloads.)

Daphne Koller is Professor in the Department of Computer Science at Stanford University. Nir Friedman is Professor in the Department of Computer Science and Engineering at Hebrew University.

Reviews for Probabilistic Graphical Models: Principles and Techniques

This landmark book provides a very extensive coverage of the field, ranging from basic representational issues to the latest techniques for approximate inference and learning. As such, it is likely to become a definitive reference for all those who work in this area. Detailed worked examples and case studies also make the book accessible to students. --Kevin Murphy, Department of Computer Science, University of British Columbia

