- Hardcover
Other customers were also interested in
- Susan M. Land: Practical Support for CMMI-SW Software Project Documentation Using IEEE Software Engineering Standards, 169,99 €
- Rex Black: Pragmatic Software Testing, 56,99 €
- Karl M. Fant: Computer Science Reconsidered, 154,99 €
- Richard W. Selby: Software Engineering, 143,99 €
- Albert Endres / Herbert Weber (eds.): Software Development Environments and Case Technology, 42,99 €
- Linda M. Laird: Software Measurement and Estimation, 154,99 €
- Witold Suryn: Software Quality Engineering, 128,99 €
Trustworthy Systems Through Quantitative Software Engineering provides quantitative analysis for software engineering practices in order to build reliable software products. Readers learn from discussions of real on-the-job experiences how important it is to plan, measure, and assess each stage of development. Illuminated with case studies, the book concentrates on problem analysis. By emphasizing the importance of fitting the software engineering structure to the problem, readers learn to produce products that are on schedule, within budget, and satisfactory to the customer. The book also stresses the concepts of simplification, trustworthiness, risk assessment, and architecture.
A benchmark text on software development and quantitative software engineering
"We all trust software. All too frequently, this trust is misplaced. Larry Bernstein has created and applied quantitative techniques to develop trustworthy software systems. He and C. M. Yuhas have organized this quantitative experience into a book of great value to make software trustworthy for all of us."
- Barry Boehm
Trustworthy Systems Through Quantitative Software Engineering proposes a novel, reliability-driven software engineering approach, and discusses human factors in software engineering and how these affect team dynamics. This practical approach gives software engineering students and professionals a solid foundation in problem analysis, allowing them to meet customers' changing needs by tailoring their projects to meet specific challenges, and complete projects on schedule and within budget.
Specifically, it helps developers identify customer requirements, develop software designs, manage a software development team, and evaluate software products to customer specifications. Students learn "magic numbers of software engineering," rules of thumb that show how to simplify architecture, design, and implementation.
Case histories and exercises clearly present successful software engineers' experiences and illustrate potential problems, results, and trade-offs. Also featuring an accompanying Web site with additional and related material, Trustworthy Systems Through Quantitative Software Engineering is a hands-on, project-oriented resource for upper-level software and computer science students, engineers, professional developers, managers, and professionals involved in software engineering projects.
Note: This item can only be shipped to a German delivery address.
Product details
- Publisher: Wiley & Sons
- 1st edition
- Number of pages: 464
- Publication date: September 1, 2005
- Language: English
- Dimensions: 242mm x 164mm x 28mm
- Weight: 750g
- ISBN-13: 9780471696919
- ISBN-10: 0471696919
- Item no.: 14834793
LAWRENCE BERNSTEIN is the Series Editor for the Quantitative Software Engineering Series, published by Wiley. Professor Bernstein is currently Industry Research Professor at the Stevens Institute of Technology. He previously pursued a distinguished executive career at Bell Laboratories. He is a Fellow of IEEE and ACM. C. M. YUHAS is a freelance writer who has published articles on network management in the IEEE Journal on Selected Areas in Communications and IEEE Network. She has a BA in English from Douglass College and an MA in communications from New York University.
Preface xvii
Acknowledgment xxv
Part 1 Getting Started 1
1. Think Like an Engineer-Especially for Software 3
1.1 Making a Judgment 4
1.2 The Software Engineer's Responsibilities 6
1.3 Ethics 6
1.4 Software Development Processes 11
1.5 Choosing a Process 12
1.5.1 No-Method "Code and Fix" Approach 15
1.5.2 Waterfall Model 16
1.5.3 Planned Incremental Development Process 18
1.5.4 Spiral Model: Planned Risk Assessment-Driven Process 18
1.5.5 Development Plan Approach 23
1.5.6 Agile Process: An Apparent Oxymoron 25
1.6 Reemergence of Model-Based Software Development 26
1.7 Process Evolution 27
1.8 Organization Structure 29
1.9 Principles of Sound Organizations 31
1.10 Short Projects-4 to 6 Weeks 33
1.10.1 Project 1: Automating Library Overdue Book Notices 33
1.10.2 Project 2: Ajax Transporters, Inc. Maintenance Project 34
1.11 Problems 35
2. People, Product, Process, Project-The Big Four 39
2.1 People: Cultivate the Guru and Support the Majority 40
2.1.1 How to Recognize a Guru 41
2.1.2 How to Attract a Guru to Your Project 42
2.1.3 How to Keep Your Gurus Working 43
2.1.4 How to Support the Majority 43
2.2 Product: "Buy Me!" 45
2.2.1 Reliable Software Products 46
2.2.2 Useful Software Products 47
2.2.3 Good User Experience 48
2.3 Process: "OK, How Will We Build This?" 49
2.3.1 Agile Processes 49
2.3.2 Object-Oriented Opportunities 53
2.3.3 Meaningful Metrics 60
2.4 Project: Making It Work 61
2.5 Problems 65
2.6 Additional Problems Based on Case Studies 67
Part 2 Ethics and Professionalism 73
3. Software Requirements 75
3.1 What Can Go Wrong With Requirements 75
3.2 The Formal Processes 76
3.3 Robust Requirements 81
3.4 Requirements Synthesis 84
3.5 Requirements Specification 86
3.6 Quantitative Software Engineering Gates 87
3.7 sQFD 88
3.8 ICED-T Metrics 91
3.8.1 ICED-T Insights 92
3.8.2 Using the ICED-T Model 94
3.9 Development Sizing and Scheduling With Function Points 95
3.9.1 Function Point Analysis Experience 95
3.9.2 NCSLOC vs Function Points 96
3.9.3 Computing Simplified Function Points (sFP) 97
3.10 Case Study: The Case of the Emergency No-Show Service 98
3.11 Problems 103
4. Prototyping 107
4.1 Make It Work; Then Make It Work Right 107
4.1.1 How to Get at the Governing Requirements 108
4.1.2 Rapid Application Prototype 108
4.1.3 What's Soft Is Hard 110
4.2 So What Happens Monday Morning? 111
4.2.1 What Needs to Be Prototyped? 111
4.2.2 How Do You Build a Prototype? 112
4.2.3 How Is the Prototype Used? 112
4.2.4 What Happens to the Prototype? 114
4.3 It Works, But Will It Continue to Work? 116
4.4 Case Study: The Case of the Driven Development 116
4.4.1 Significant Results 119
4.4.2 Lessons Learned 122
4.4.3 Additional Business Histories 123
4.5 Why Is Prototyping So Important? 128
4.6 Prototyping Deficiencies 130
4.7 Iterative Prototyping 130
4.8 Case Study: The Case of the Famished Fish 131
4.9 Problems 133
5. Architecture 137
5.1 Architecture Is a System's DNA 137
5.2 Pity the Poor System Administrator 139
5.3 Software Architecture Experience 141
5.4 Process and Model 142
5.5 Components 144
5.5.1 Components as COTS 144
5.5.2 Encapsulation and Abstraction 145
5.5.3 Ready or Not, Objects Are Here 146
5.6 UNIX 148
5.7 TL1 149
5.7.1 Mission 150
5.7.2 Comparative Analysis 151
5.7.3 Message Formatting 152
5.7.4 TL1 Message Formulation 152
5.7.5 Industry Support of TL1 152
5.8 Documenting the Architecture 153
5.8.1 Debriefing Report 154
5.8.2 Lessons Learned 154
5.8.3 Users of Architecture Documentation 154
5.9 Architecture Reviews 155
5.10 Middleware 156
5.11 How Many Times Before We Learn? 158
5.11.1 Comair Cancels 1100 Flights on Christmas 2004 158
5.11.2 Air Traffic Shutdown in September 2004 159
5.11.3 NASA Crashes into Mars, 2004 159
5.11.4 Case Study: The Case of the Preempted Priorities 160
5.12 Financial Systems Architecture 163
5.12.1 Typical Business Processes 163
5.12.2 Product-Related Layer in the Architecture 164
5.12.3 Finding Simple Components 165
5.13 Design and Architectural Process 166
5.14 Problems 170
6. Estimation, Planning, and Investment 173
6.1 Software Size Estimation 174
6.1.1 Pitfalls and Pratfalls 174
6.1.2 Software Size Metrics 175
6.2 Function Points 176
6.2.1 Fundamentals of FPA 176
6.2.2 Brief History 176
6.2.3 Objectives of FPA 177
6.2.4 Characteristics of Quality FPA 177
6.3 Five Major Elements of Function Point Counting 177
6.3.1 EI 177
6.3.2 EO 178
6.3.3 EQ 178
6.3.4 ILF 178
6.3.5 EIF 179
6.4 Each Element Can Be Simple, Average, or Complex 179
6.5 Sizing an Automation Project With FPA 182
6.5.1 Advantages of Function Point Measurement 183
6.5.2 Disadvantages of Function Point Measurement 184
6.5.3 Results Common to FPA 184
6.5.4 FPA Accuracy 185
6.6 NCSLOC Metric 186
6.6.1 Company Statistics 187
6.6.2 Reuse 187
6.6.3 Wideband Delphi 189
6.6.4 Disadvantages of SLOC 190
6.7 Production Planning 192
6.7.1 Productivity 192
6.7.2 Mediating Culture 192
6.7.3 Customer Relations 193
6.7.4 Centralized Support Functions 193
6.8 Investment 195
6.8.1 Cost Estimation Models 195
6.8.2 COCOMO 197
6.8.3 Scheduling Tools-PERT, Gantt 205
6.8.4 Project Manager's Job 207
6.9 Example: Apply the Process to a Problem 208
6.9.1 Prospectus 208
6.9.2 Measurable Operational Value (MOV) 209
6.9.3 Requirements Specification 209
6.9.4 Schedule, Resources, Features-What to Change? 214
6.10 Additional Problems 216
7. Design for Trustworthiness 223
7.1 Why Trustworthiness Matters 224
7.2 Software Reliability Overview 225
7.3 Design Reviews 228
7.3.1 Topics for Design Reviews 229
7.3.2 Modules, Interfaces, and Components 230
7.3.3 Interfaces 234
7.3.4 Software Structure Influences Reliability 236
7.3.5 Components 238
7.3.6 Open-Closed Principle 238
7.3.7 The Liskov Substitution Principle 239
7.3.8 Comparing Object-Oriented Programming With Componentry 240
7.3.9 Politics of Reuse 240
7.4 Design Principles 243
7.4.1 Strong Cohesion 243
7.4.2 Weak Coupling 243
7.4.3 Information Hiding 244
7.4.4 Inheritance 244
7.4.5 Generalization/Abstraction 244
7.4.6 Separation of Concerns 245
7.4.7 Removal of Context 245
7.5 Documentation 246
7.6 Design Constraints That Make Software Trustworthy 248
7.6.1 Simplify the Design 248
7.6.2 Software Fault Tolerance 249
7.6.3 Software Rejuvenation 251
7.6.4 Hire Good People and Keep Them 254
7.6.5 Limit the Language Features Used 254
7.6.6 Limit Module Size and Initialize Memory 255
7.6.7 Check the Design Stability 255
7.6.8 Bound the Execution Domain 259
7.6.9 Engineer to Performance Budgets 260
7.6.10 Reduce Algorithm Complexity 263
7.6.11 Factor and Refactor 266
7.7 Problems 268
Part 3 Taking the Measure of the System 275
8. Identifying and Managing Risk 277
8.1 Risk Potential 278
8.2 Risk Management Paradigm 279
8.3 Functions of Risk Management 279
8.4 Risk Analysis 280
8.5 Calculating Risk 282
8.6 Using Risk Assessment in Project Development: The Spiral Model 286
8.7 Containing Risks 289
8.7.1 Incomplete and Fuzzy Requirements 289
8.7.2 Schedule Too Short 290
8.7.3 Not Enough Staff 291
8.7.4 Morale of Key Staff Is Poor 292
8.7.5 Stakeholders Are Losing Interest 295
8.7.6 Untrustworthy Design 295
8.7.7 Feature Set Is Not Economically Viable 296
8.7.8 Feature Set Is Too Large 296
8.7.9 Technology Is Immature 296
8.7.10 Late Planned Deliveries of Hardware and Operating System 298
8.8 Manage the Cost Risk to Avoid Outsourcing 299
8.8.1 Technology Selection 300
8.8.2 Tools 300
8.8.3 Software Manufacturing 300
8.8.4 Integration, Reliability, and Stress Testing 301
8.8.5 Computer Facilities 301
8.8.6 Human Interaction Design and Documentation 301
8.9 Software Project Management Audits 303
8.10 Running an Audit 304
8.11 Risks with Risk Management 304
8.12 Problems 305
9. Human Factors in Software Engineering 309
9.1 A Click in the Right Direction 309
9.2 Managing Things, Managing People 312
9.2.1 Knowledge Workers 313
9.2.2 Collaborative Management 313
9.3 FAA Rationale for Human Factors Design 316
9.4 Reach Out and Touch Something 319
9.4.1 Maddening Counterintuitive Cues 319
9.4.2 GUI 319
9.4.3 Customer Care and Web Agents 319
9.5 System Effectiveness in Human Factors Terms 320
9.5.1 What to Look for in COTS 320
9.5.2 Simple Guidelines for Managing Development 322
9.6 How Much Should the System Do? 323
9.6.1 Screen Icon Design 324
9.6.2 Short- and Long-Term Memory 326
9.7 Emerging Technology 327
9.8 Applying the Principles to Developers 334
9.9 The Bell Laboratories Philosophy 336
9.10 So You Want to Be a Manager 338
9.11 Problems 338
10. Implementation Details 344
10.1 Structured Programming 345
10.2 Rational Unified Process and Unified Modeling Language 346
10.3 Measuring Complexity 353
10.4 Coding Styles 360
10.4.1 Data Structures 360
10.4.2 Team Coding 363
10.4.3 Code Reading 364
10.4.4 Code Review 364
10.4.5 Code Inspections 364
10.5 A Must Read for Trustworthy Software Engineers 365
10.6 Coding for Parallelism 366
10.7 Threats 366
10.8 Open-Source Software 368
10.9 Problems 369
11. Testing and Configuration Management 372
11.1 The Price of Quality 373
11.1.1 Unit Testing 373
11.1.2 Integration Testing 373
11.1.3 System Testing 373
11.1.4 Reliability Testing 374
11.1.5 Stress Testing 374
11.2 Robust Testing 374
11.2.1 Robust Design 374
11.2.2 Prototypes 375
11.2.3 Identify Expected Results 375
11.2.4 Orthogonal Array Test Sets (OATS) 376
11.3 Testing Techniques 376
11.3.1 One-Factor-at-a-Time 377
11.3.2 Exhaustive 377
11.3.3 Deductive Analytical Method 377
11.3.4 Random/Intuitive Method 377
11.3.5 Orthogonal Array-Based Method 377
11.3.6 Defect Analysis 378
11.4 Case Study: The Case of the Impossible Overtime 379
11.5 Cooperative Testing 380
11.6 Graphic Footprint 382
11.7 Testing Strategy 384
11.7.1 Test Incrementally 384
11.7.2 Test Under No-Load 384
11.7.3 Test Under Expected-Load 384
11.7.4 Test Under Heavy-Load 384
11.7.5 Test Under Overload 385
11.7.6 Reject Insufficiently Tested Code 385
11.7.7 Diabolic Testing 385
11.7.8 Reliability Tests 385
11.7.9 Footprint 385
11.7.10 Regression Tests 385
11.8 Software Hot Spots 386
11.9 Software Manufacturing Defined 392
11.10 Configuration Management 393
11.11 Outsourcing 398
11.11.1 Test Models 398
11.11.2 Faster Iteration 400
11.11.3 Meaningful Test Process Metrics 400
11.12 Problems 400
12. The Final Project: By Students, For Students 404
12.1 How to Make the Course Work for You 404
12.2 Sample Call for Projects 405
12.3 A Real Student Project 407
12.4 The Rest of the Story 428
12.5 Our Hope 428
Index 429
"In a study, the book was found to be successful at significantly increasing the students willingness and competency in using good software engineering processes." ( Computing Reviews.com , May 10, 2006)
"...the book is an excellent and very readable guide to the development of reliable software, augmented with humor, case studies, useful tidbits...highly recommended for all software engineers." ( CHOICE , March 2006)
"...the book is an excellent and very readable guide to the development of reliable software, augmented with humor, case studies, useful tidbits...highly recommended for all software engineers." ( CHOICE , March 2006)