- Format: ePub
Many claims are made about how certain tools, technologies, and practices improve software development. But which claims are verifiable, and which are merely wishful thinking? In this book, leading thinkers such as Steve McConnell, Barry Boehm, and Barbara Kitchenham offer essays that uncover the truth and unmask myths commonly held among the software development community. Their insights may surprise you. Are some programmers really ten times more productive than others? Does writing tests first help you develop better code faster? Can code metrics predict the number of bugs in a piece of software?
- Devices: eReader
- With copy protection
- Size: 5.48 MB
- FamilySharing (5)
- Dan Pilone, Head First Software Development (eBook, ePUB), 24,95 €
- John Ferguson Smart, Jenkins: The Definitive Guide (eBook, ePUB), 21,95 €
- Markus Völter, Model-Driven Software Development (eBook, ePUB), 40,99 €
- Richard Monson-Haefel, 97 Things Every Software Architect Should Know (eBook, ePUB), 17,95 €
- Tom Stuart, Understanding Computation (eBook, ePUB), 24,95 €
- Elecia White, Making Embedded Systems (eBook, ePUB), 19,95 €
- Max Kanat-Alexander, Code Simplicity (eBook, ePUB), 14,95 €
For legal reasons, this download can only be delivered to a billing address in A, B, BG, CY, CZ, D, DK, EW, E, FIN, F, GR, HR, H, IRL, I, LT, L, LR, M, NL, PL, P, R, S, SLO, SK.
- Product details
- Publisher: O'Reilly Media
- Number of pages: 624
- Publication date: October 14, 2010
- Language: English
- ISBN-13: 9781449397760
- Item number: 37904099
Organization of This Book
Conventions Used in This Book
Safari® Books Online
Using Code Examples
How to Contact Us
General Principles of Searching For and Using Evidence
Chapter 1: The Quest for Convincing Evidence
1.1 In the Beginning
1.2 The State of Evidence Today
1.3 Change We Can Believe In
1.4 The Effect of Context
1.5 Looking Toward the Future
1.6 References
Chapter 2: Credibility, or Why Should I Insist on Being Convinced?
2.1 How Evidence Turns Up in Software Engineering
2.2 Credibility and Relevance
2.3 Aggregating Evidence
2.4 Types of Evidence and Their Strengths and Weaknesses
2.5 Society, Culture, Software Engineering, and You
2.6 Acknowledgments
2.7 References
Chapter 3: What We Can Learn from Systematic Reviews
3.1 An Overview of Systematic Reviews
3.2 The Strengths and Weaknesses of Systematic Reviews
3.3 Systematic Reviews in Software Engineering
3.4 Conclusion
3.5 References
Chapter 4: Understanding Software Engineering Through Qualitative Methods
4.1 What Are Qualitative Methods?
4.2 Reading Qualitative Research
4.3 Using Qualitative Methods in Practice
4.4 Generalizing from Qualitative Results
4.5 Qualitative Methods Are Systematic
4.6 References
Chapter 5: Learning Through Application: The Maturing of the QIP in the SEL
5.1 What Makes Software Engineering Uniquely Hard to Research
5.2 A Realistic Approach to Empirical Research
5.3 The NASA Software Engineering Laboratory: A Vibrant Testbed for Empirical Research
5.4 The Quality Improvement Paradigm
5.5 Conclusion
5.6 References
Chapter 6: Personality, Intelligence, and Expertise: Impacts on Software Development
6.1 How to Recognize Good Programmers
6.2 Individual or Environment
6.3 Concluding Remarks
6.4 References
Chapter 7: Why Is It So Hard to Learn to Program?
7.1 Do Students Have Difficulty Learning to Program?
7.2 What Do People Understand Naturally About Programming?
7.3 Making the Tools Better by Shifting to Visual Programming
7.4 Contextualizing for Motivation
7.5 Conclusion: A Fledgling Field
7.6 References
Chapter 8: Beyond Lines of Code: Do We Need More Complexity Metrics?
8.1 Surveying Software
8.2 Measuring the Source Code
8.3 A Sample Measurement
8.4 Statistical Analysis
8.5 Some Comments on the Statistical Methodology
8.6 So Do We Need More Complexity Metrics?
8.7 References
Specific Topics in Software Engineering
Chapter 9: An Automated Fault Prediction System
9.1 Fault Distribution
9.2 Characteristics of Faulty Files
9.3 Overview of the Prediction Model
9.4 Replication and Variations of the Prediction Model
9.5 Building a Tool
9.6 The Warning Label
9.7 References
Chapter 10: Architecting: How Much and When?
10.1 Does the Cost of Fixing Software Increase over the Project Life Cycle?
10.2 How Much Architecting Is Enough?
10.3 Using What We Can Learn from Cost-to-Fix Data About the Value of Architecting
10.4 So How Much Architecting Is Enough?
10.5 Does the Architecting Need to Be Done Up Front?
10.6 Conclusions
10.7 References
Chapter 11: Conway's Corollary
11.1 Conway's Law
11.2 Coordination, Congruence, and Productivity
11.3 Organizational Complexity Within Microsoft
11.4 Chapels in the Bazaar of Open Source Software
11.5 Conclusions
11.6 References
Chapter 12: How Effective Is Test-Driven Development?
12.1 The TDD Pill-What Is It?
12.2 Summary of Clinical TDD Trials
12.3 The Effectiveness of TDD
12.4 Enforcing Correct TDD Dosage in Trials
12.5 Cautions and Side Effects
12.6 Conclusions
12.7 Acknowledgments
12.8 General References
12.9 Clinical TDD Trial References
Chapter 13: Why Aren't More Women in Computer Science?
13.1 Why So Few Women?
13.2 Should We Care?
13.3 Conclusion
13.4 References
Chapter 14: Two Comparisons of Programming Languages
14.1 A Language Shoot-Out over a Peculiar Search Algorithm
14.2 Plat_Forms: Web Development Technologies and Cultures
14.3 So What?
14.4 References
Chapter 15: Quality Wars: Open Source Versus Proprietary Software
15.1 Past Skirmishes
15.2 The Battlefield
15.3 Into the Battle
15.4 Outcome and Aftermath
15.5 Acknowledgments and Disclosure of Interest
15.6 References
Chapter 16: Code Talkers
16.1 A Day in the Life of a Programmer
16.2 What Is All This Talk About?
16.3 A Model for Thinking About Communication
16.4 References
Chapter 17: Pair Programming
17.1 A History of Pair Programming
17.2 Pair Programming in an Industrial Setting
17.3 Pair Programming in an Educational Setting
17.4 Distributed Pair Programming
17.5 Challenges
17.6 Lessons Learned
17.7 Acknowledgments
17.8 References
Chapter 18: Modern Code Review
18.1 Common Sense
18.2 A Developer Does a Little Code Review
18.3 Group Dynamics
18.4 Conclusion
18.5 References
Chapter 19: A Communal Workshop or Doors That Close?
19.1 Doors That Close
19.2 A Communal Workshop
19.3 Work Patterns
19.4 One More Thing...
19.5 References
Chapter 20: Identifying and Managing Dependencies in Global Software Development
20.1 Why Is Coordination a Challenge in GSD?
20.2 Dependencies and Their Socio-Technical Duality
20.3 From Research to Practice
20.4 Future Directions
20.5 References
Chapter 21: How Effective Is Modularization?
21.1 The Systems
21.2 What Is a Change?
21.3 What Is a Module?
21.4 The Results
21.5 Threats to Validity
21.6 Summary
21.7 References
Chapter 22: The Evidence for Design Patterns
22.1 Design Pattern Examples
22.2 Why Might Design Patterns Work?
22.3 The First Experiment: Testing Pattern Documentation
22.4 The Second Experiment: Comparing Pattern Solutions to Simpler Ones
22.5 The Third Experiment: Patterns in Team Communication
22.6 Lessons Learned
22.7 Conclusions
22.8 Acknowledgments
22.9 References
Chapter 23: Evidence-Based Failure Prediction
23.1 Introduction
23.2 Code Coverage
23.3 Code Churn
23.4 Code Complexity
23.5 Code Dependencies
23.6 People and Organizational Measures
23.7 Integrated Approach for Prediction of Failures
23.8 Summary
23.9 Acknowledgments
23.10 References
Chapter 24: The Art of Collecting Bug Reports
24.1 Good and Bad Bug Reports
24.2 What Makes a Good Bug Report?
24.3 Survey Results
24.4 Evidence for an Information Mismatch
24.5 Problems with Bug Reports
24.6 The Value of Duplicate Bug Reports
24.7 Not All Bug Reports Get Fixed
24.8 Conclusions
24.9 Acknowledgments
24.10 References
Chapter 25: Where Do Most Software Flaws Come From?
25.1 Studying Software Flaws
25.2 Context of the Study
25.3 Phase 1: Overall Survey
25.4 Phase 2: Design/Code Fault Survey
25.5 What Should You Believe About These Results?
25.6 What Have We Learned?
25.7 Acknowledgments
25.8 References
Chapter 26: Novice Professionals: Recent Graduates in a First Software Engineering Job
26.1 Study Methodology
26.2 Software Development Task
26.3 Strengths and Weaknesses of Novice Software Developers
26.4 Reflections
26.5 Misconceptions That Hinder Learning
26.6 Reflecting on Pedagogy
26.7 Implications for Change
26.8 References
Chapter 27: Mining Your Own Evidence
27.1 What Is There to Mine?
27.2 Designing a Study
27.3 A Mining Primer
27.4 Where to Go from Here
27.5 Acknowledgments
27.6 References
Chapter 28: Copy-Paste as a Principled Engineering Tool
28.1 An Example of Code Cloning
28.2 Detecting Clones in Software
28.3 Investigating the Practice of Code Cloning
28.4 Our Study
28.5 Conclusions
28.6 References
Chapter 29: How Usable Are Your APIs?
29.1 Why Is It Important to Study API Usability?
29.2 First Attempts at Studying API Usability
29.3 If At First You Don't Succeed...
29.4 Adapting to Different Work Styles
29.5 Conclusion
29.6 References
Chapter 30: What Does 10x Mean? Measuring Variations in Programmer Productivity
30.1 Individual Productivity Variation in Software Development
30.2 Issues in Measuring Productivity of Individual Programmers
30.3 Team Productivity Variation in Software Development
30.4 References
Contributors
Colophon