
Why Programs Fail: A Guide to Systematic Debugging

Andreas Zeller (Saarland University, Saarbruecken, Germany)

$78.00

Paperback


Elsevier
12 June 2009
Software testing & verification
Why Programs Fail: A Guide to Systematic Debugging is proof that debugging has graduated from a black art to a systematic discipline. It demystifies one of the toughest aspects of software programming, showing clearly how to discover what caused software failures, and fix them with minimal muss and fuss.

The fully updated second edition includes 100+ pages of new material, including new chapters on Verifying Code, Predicting Errors, and Preventing Errors. Cutting-edge tools such as FindBugs and Agitar are explained, techniques from integrated environments like Jazz.net are highlighted, and all-new demos with ESC/Java and Spec#, Eclipse, and Mozilla are included.

This complete and pragmatic overview of debugging is authored by Andreas Zeller, the talented researcher who developed the GNU Data Display Debugger (DDD), a tool that over 250,000 professionals use to visualize the data structures of programs while they are running. Unlike other books on debugging, Zeller's text is product-agnostic, appropriate for all programming languages and skill levels.

The book explains best practices ranging from systematically tracking error reports, to observing symptoms, reproducing errors, and correcting defects. It covers a wide range of tools and techniques from hands-on observation to fully automated diagnoses, and also explores the author's innovative techniques for isolating minimal input to reproduce an error and for tracking cause and effect through a program. It even includes instructions on how to create automated debugging tools.
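
To give a flavor of the minimal-input idea, the following short Python sketch shows the chunk-removal strategy behind delta debugging: repeatedly split the failing input into chunks, drop one chunk at a time, and keep any smaller input that still triggers the failure. The simplify function, the crashes predicate, and the sample HTML string are hypothetical illustrations, not code from the book.

    # Sketch of input simplification in the spirit of delta debugging.
    # The test predicate and sample input below are hypothetical.
    def simplify(failing_input, test):
        """Shrink failing_input while test() still reports the failure."""
        n = 2  # number of chunks to split the current input into
        while len(failing_input) >= 2:
            chunk_len = max(len(failing_input) // n, 1)
            reduced = False
            # Try removing one chunk at a time, keeping the rest.
            for start in range(0, len(failing_input), chunk_len):
                candidate = failing_input[:start] + failing_input[start + chunk_len:]
                if test(candidate):
                    failing_input = candidate  # smaller input, failure persists
                    n = max(n - 1, 2)          # coarsen again after a success
                    reduced = True
                    break
            if not reduced:
                if n >= len(failing_input):
                    break                      # already down to single characters
                n = min(n * 2, len(failing_input))  # otherwise refine granularity
        return failing_input

    # Hypothetical failure condition: the program chokes on any <SELECT ...> tag.
    def crashes(html):
        return "<SELECT" in html

    page = "<HTML><BODY bgcolor=white><SELECT multiple size=7></SELECT></BODY></HTML>"
    print(simplify(page, crashes))  # prints a much smaller input that still fails

Run against this toy predicate, the sketch whittles the page down to little more than the offending tag, which is the same effect the book's simplification techniques aim for on real failures.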

The text includes exercises and extensive references for further study, and a companion website with source code for all examples and additional debugging resources is available.
By:   Andreas Zeller (Saarland University, Saarbruecken, Germany)
Imprint:   Elsevier
Country of Publication:   United States
Edition:   2nd edition
Dimensions:   Height: 235mm, Width: 191mm, Spine: 23mm
Weight:   710g
ISBN:   9780123745156
ISBN 10:   0123745152
Pages:   424
Publication Date:   12 June 2009
Audience:   Professional and scholarly, Undergraduate
Format:   Paperback
Publisher's Status:   Active
Amended Table of Contents

The * denotes additions/changes for the proposed second edition. For brevity, second-level sections are omitted from the list. Please note that there are also recurring end-of-chapter sections: Concepts, Tools, Further Reading, and Exercises.

Table of Contents
* Include a list of How To's as indicated in appropriate chapters
About the Author
Preface
* What's new in the second edition

1 How Failures Come to Be
1.1 My Program Does Not Work!
* New section Facts on Bugs - highlighting recent empirical findings
1.2 From Defects to Failures
1.3 Lost in Time and Space
1.4 From Failures to Fixes
1.5 Automated Debugging Techniques
1.6 Bugs, Faults, or Defects?
* New section Learning From Mistakes - pointing to the later chapter

2 Tracking Problems
2.1 Oh! All These Problems
2.2 Reporting Problems
2.3 Managing Problems
2.4 Classifying Problems
2.5 Processing Problems
2.6 Managing Problem Tracking
2.7 Requirements as Problems
2.8 Managing Duplicates
* New section Collecting Problem Data - laying the foundation for later investigation
2.9 Relating Problems and Fixes
2.10 Relating Problems and Tests
* 2.9 and 2.10 will be merged into a new section A Concert of Activities, focusing on integrated environments like Jazz.net

3 Making Programs Fail
3.1 Testing for Debugging
3.2 Controlling the Program
3.3 Testing at the Presentation Layer
3.4 Testing at the Functionality Layer
3.5 Testing at the Unit Layer
3.6 Isolating Units
3.7 Designing for Debugging
* Expand on design for diagnosability, esp. for embedded systems
3.8 Preventing Unknown Problems
* This section will be deleted and replaced with a whole new chapter 18

4 Reproducing Problems
4.1 The First Task in Debugging
4.2 Reproducing the Problem Environment
4.3 Reproducing Program Execution
4.4 Reproducing System Interaction
4.5 Focusing on Units
* Expand reflecting latest research results

5 Simplifying Problems
5.1 Simplifying the Problem
5.2 The Gecko BugAThon
5.3 Manual Simplification
5.4 Automatic Simplification
5.5 A Simplification Algorithm
5.6 Simplifying User Interaction
5.7 Random Input Simplified
5.8 Simplifying Faster

6 Scientific Debugging
6.1 How to Become a Debugging Guru
6.2 The Scientific Method
6.3 Applying the Scientific Method
6.4 Explicit Debugging
6.5 Keeping a Logbook
6.6 Debugging Quick-and-Dirty
6.7 Algorithmic Debugging
6.8 Deriving a Hypothesis
6.9 Reasoning About Programs

7 Deducing Errors
* This chapter will be renamed to Tracking Dependences
7.1 Isolating Value Origins
7.2 Understanding Control Flow
7.3 Tracking Dependences
7.4 Slicing Programs
7.5 Deducing Code Smells
* Move to new chapter 11 Verifying Code
7.6 Limits of Static Analysis
* Move to new chapter 11 Verifying Code

8 Observing Facts
8.1 Observing State
8.2 Logging Execution
8.3 Using Debuggers
8.4 Querying Events
8.5 Visualizing State

9 Tracking Origins
9.1 Reasoning Backwards
* Update with recent commercial tools
9.2 Exploring Execution History
9.3 Dynamic Slicing
9.4 Leveraging Origins
* Expand to use latest tools by Ko et al. as well as Gupta et al.
9.5 Tracking Down Infections

10 Asserting Expectations
10.1 Automating Observation
10.2 Basic Assertions
* Explain design by contract and its principles
10.3 Asserting Invariants
* Expand on integrating contracts with inheritance
10.4 Asserting Correctness
10.5 Assertions as Specifications
10.6 From Assertions to Verification
* Move to its own chapter Verifying Code
10.7 Reference Runs
* Move to Verifying Code
10.8 System Assertions
10.9 Checking Production Code
* Expand discussion; consider checking preconditions only

* New chapter 11 Verifying Code
Why does my Code smell?
* Highlight tools like FindBugs
Defects as Abnormal Behavior
* Discuss work by Engler et al.
Assertions as Specifications
From Assertions to Verification - moved from 10.6
* Show the integration of ESC/Java and Spec# (with demos)
Reference Runs - moved from 10.7
* Limits of Static Analysis

12 Detecting Anomalies
12.1 Capturing Normal Behavior
12.2 Comparing Coverage
12.3 Statistical Debugging
* Include and reflect recent work
* Integrate machine learning approaches
* Refer to the iBugs library
12.4 Collecting Data in the Field
12.5 Dynamic Invariants
* Discuss the Agitar tool
12.6 Invariants on the Fly
12.7 From Anomalies to Defects

13 Causes and Effects
13.1 Causes and Alternate Worlds
13.2 Verifying Causes
13.3 Causality in Practice
13.4 Finding Actual Causes
13.5 Narrowing Down Causes
13.6 A Narrowing Example
13.7 The Common Context
13.8 Causes in Debugging

14 Isolating Failure Causes
14.1 Isolating Causes Automatically
14.2 Isolating versus Simplifying
14.3 An Isolation Algorithm
14.4 Implementing Isolation
14.5 Isolating Failure-inducing Input
14.6 Isolating Failure-inducing Schedules
14.7 Isolating Failure-inducing Changes
* Update to recent tools and screenshots
14.8 Problems and Limitations

15 Isolating Cause-Effect Chains
15.1 Useless Causes
15.2 Capturing Program States
15.3 Comparing Program States
15.4 Isolating Relevant Program States
15.5 Isolating Cause-Effect Chains
15.6 Isolating Failure-inducing Code
15.7 Issues and Risks
* Discuss how to recreate state via method calls
* New project in Python

16 Fixing the Defect
16.1 Locating the Defect
16.2 Focusing on the Most Likely Errors
16.3 Validating the Defect
16.4 Correcting the Defect
16.5 Workarounds
16.6 Learning from Mistakes
* This becomes its own chapter 17

* New chapter 17 Learning from Mistakes
* 17.1 Measuring effort and damage - We want to know how much effort and cost went into each problem
* 17.2 Leveraging software archives - Collect data from problem and change databases; access more of them
* 17.3 Mapping errors - Which components have had the most errors in the past? Demonstrate using Eclipse and Mozilla data
* 17.4 Predicting errors - Which components will have the most errors in the future?
* 17.5 What is it that makes software complex? - Complexity of code; lack of quality assurance; changing requirements... and how to measure this
* 17.6 Digging for more data - Goal-Question-Metric approach; experience factory
* 17.7 Continuous Improvement - Space Shuttle Software

* New chapter 18 Preventing Errors
18.1 Keep Things Simple - General principles of good design and coding
18.2 Know what to do - Pragmatic specification (design by contract, assertions)
18.3 Know how to check - General principles of quality assurance
18.4 Learn from mistakes - As laid out in (new) Section 16; integrated with earlier principles
18.5 Improve process and product - keep on challenging yourself

Appendix: Formal Definitions
A.1 Delta Debugging
A.2 Memory Graphs
A.3 Cause-Effect Chains
Glossary
Bibliography
Index

Andreas Zeller is a full professor of Software Engineering at Saarland University in Saarbruecken, Germany. His research concerns the analysis of large software systems and their development process; his students are funded by companies such as Google, Microsoft, and SAP. In 2010, Zeller was inducted as a Fellow of the ACM for his contributions to automated debugging and mining software archives. In 2011, he received an ERC Advanced Grant, Europe's highest and most prestigious individual research grant, for work on specification mining and test case generation. His book Why Programs Fail, the standard reference on debugging, received the 2006 Software Development Jolt Productivity Award.

Reviews for Why Programs Fail: A Guide to Systematic Debugging

Praise from the experts for the first edition:

In this book, Andreas Zeller does an excellent job introducing useful debugging techniques and tools invented in both academia and industry. The book is easy to read and actually very fun as well. It will not only help you discover a new perspective on debugging, but it will also teach you some fundamental static and dynamic program analysis techniques in plain language.
-Miryung Kim, Software Developer, Motorola Korea

Today every computer program written is also debugged, but debugging is not a widely studied or taught skill. Few books beyond this one present a systematic approach to finding and fixing programming errors.
-James Larus, Microsoft Research

From the author of DDD, the famous data display debugger, now comes the definitive book on debugging. Zeller's book is chock-full with advice, insight, and tools to track down defects in programs, for all levels of experience and any programming language. The book

