000 12616nam a2201225 i 4500
001 5988896
003 IEEE
005 20220712205806.0
006 m o d
007 cr |n|||||||||
008 151221s2006 nju ob 001 eng d
020 _a9780471792536
_qebook
020 _z9780471676225
_qprint
020 _z0471676225
_qcloth
020 _z9780471792529
_qelectronic
020 _z0471792535
_qelectronic
020 _z0471792527
_qelectronic
024 7 _a10.1002/0471792535
_2doi
035 _a(CaBNVSL)mat05988896
035 _a(IDAMS)0b00006481624699
040 _aCaBNVSL
_beng
_erda
_cCaBNVSL
_dCaBNVSL
050 4 _aQA76.76.S65
_bL35 2006eb
082 0 0 _a005.1
082 0 _a005.1/4
_222
100 1 _aLaird, Linda M.,
_d1952-
_927747
245 1 0 _aSoftware measurement and estimation :
_ba practical approach /
_cLinda M. Laird, M. Carol Brennan.
264 1 _aHoboken, New Jersey :
_bJohn Wiley & Sons,
_c2006
264 2 _a[Piscataway, New Jersey] :
_bIEEE Xplore,
_c[2006]
300 _a1 PDF (276 pages).
336 _atext
_2rdacontent
337 _aelectronic
_2isbdmedia
338 _aonline resource
_2rdacarrier
490 1 _aQuantitative software engineering series ;
_v2
504 _aIncludes bibliographical references and index.
505 0 _aAcknowledgments -- 1. Introduction -- 1.1 Objective -- 1.2 Approach -- 1.3 Motivation -- 1.4 Summary -- References -- Chapter 1 Side Bar -- 2. What to Measure -- 2.1 Method 1: The Goal Question Metrics Approach -- 2.2 Extension to GQM: Metrics Mechanism is Important -- 2.3 Method 2: Decision Maker Model -- 2.4 Method 3: Standards Driven Metrics -- 2.5 What to Measure is a Function of Time -- 2.6 Summary -- References -- Exercises -- Project -- 3. Fundamentals of Measurement -- 3.1 Initial Measurement Exercise -- 3.2 The Challenge of Measurement -- 3.3 Measurement Models -- 3.3.1 Text Models -- 3.3.2 Diagrammatic Models -- 3.3.3 Algorithmic Models -- 3.3.4 Model Examples: Response Time -- 3.3.5 The Pantometric Paradigm - How to Measure Anything -- 3.4 Meta-Model for Metrics -- 3.5 The Power of Measurement -- 3.6 Measurement Theory -- 3.6.1 Introduction to Measurement Theory -- 3.6.2 Measurement Scales -- 3.6.3 Measures of Central Tendency and Variability -- 3.6.3.1 Measures of Central Tendency -- 3.6.3.2 Measures of Variability -- 3.6.4 Validity and Reliability of Measurement -- 3.6.5 Measurement Error -- 3.7 Accuracy versus Precision and the Limits of Software Measurement -- 3.7.1 Summary -- 3.7.2 Problems -- 3.7.3 Project -- References -- 4. Measuring the Size of Software -- 4.1 Physical Measurements of Software -- 4.1.1 Measuring Lines of Code -- 4.1.1.1 Code Counting Checklists -- 4.1.2 Language Productivity Factor -- 4.1.3 Counting Reused and Refactored Code -- 4.1.4 Counting Non-Procedural Code Length -- 4.1.5 Measuring the Length of Specifications and Design -- 4.2 Measuring Functionality -- 4.2.1 Function Points -- 4.2.1.1 Counting Function Points -- 4.2.2 Function Point Counting Exercise -- 4.2.3 Converting Function Points to Physical Size -- 4.2.4 Converting Function Points to Effort -- 4.2.5 Other Function Point Engineering Rules -- 4.2.6 Function Point Pros and Cons -- 4.3 Feature Points -- 4.4 Size Summary -- 4.5 Size Exercises -- 4.6 Theater Tickets Project.
505 8 _aReferences -- 5. Measuring Complexity -- 5.1 Structural Complexity -- 5.1.1 Size as a Complexity Measure -- 5.1.1.1 System Size and Complexity -- 5.1.1.2 Module Size and Complexity -- 5.1.2 Cyclomatic Complexity -- 5.1.3 Halstead's Metrics -- 5.1.4 Information Flow Metrics -- 5.1.5 System Complexity -- 5.1.5.1 Maintainability Index -- 5.1.5.2 The Agresti-Card System Complexity Metric -- 5.1.6 Object-Oriented Design Metrics -- 5.1.7 Structural Complexity Summary -- 5.2 Conceptual Complexity -- 5.3 Computational Complexity -- 5.4 Complexity Metrics Summary -- 5.5 Complexity Exercises -- 5.6 Projects -- References -- 6. Estimating Effort -- 6.1 Effort Estimation - Where are we? -- 6.2 Software Estimation Methodologies and Models -- 6.2.1 Expert Estimation -- 6.2.1.1 Work and Activity Decomposition -- 6.2.1.2 System Decomposition -- 6.2.1.3 The Delphi Methods -- 6.2.2 Using Benchmark Size Data -- 6.2.2.1 Lines of Code Benchmark Data -- 6.2.2.2 Function Point Benchmark Data -- 6.2.3 Estimation by Analogy -- 6.2.3.1 Traditional Analogy Approach -- 6.2.3.2 Analogy Summary -- 6.2.4 Proxy Point Estimation Methods -- 6.2.4.1 Meta-Model for Effort Estimation -- 6.2.4.2 Function Points -- 6.2.4.2.1 COSMIC Function Points -- 6.2.4.3 Object Points -- 6.2.4.4 Use Case Sizing Methodologies -- 6.2.4.4.1 Use Case Points Methodology -- 6.2.4.4.2 Use Case Point Methodology Example: Home Security System -- 6.2.4.4.3 Use Case Point Methodology Effectiveness -- 6.2.5 Custom Models -- 6.2.6 Algorithmic Models -- 6.2.6.1 Manual Models -- 6.2.6.2 Estimating Project Duration -- 6.2.6.3 Tool Based Models -- 6.3 Combining Estimates -- 6.4 Estimating Issues -- 6.4.1 Targets vs. Estimates -- 6.4.2 The Limitations of Estimation - Why? -- 6.4.3 Estimate Uncertainties -- 6.5 Estimating Early and Often -- 6.6 Estimation Summary -- 6.7 Estimation Problems -- 6.8 Estimation Project - Theater Tickets -- References -- 7. In Praise of Defects: Defects and Defect Metrics -- 7.1 Why study and measure defects?.
505 8 _a7.2 Faults vs. failures -- 7.3 Defect Dynamics and Behaviors -- 7.3.1 Defect Arrival Rates -- 7.3.2 Defects vs. Effort -- 7.3.3 Defects vs. Staffing -- 7.3.4 Defect Arrival Rates vs. Code Production Rate -- 7.3.5 Defect Density vs. Module Complexity -- 7.3.6 Defect Density vs. System Size -- 7.4 Defect Projection Techniques and Models -- 7.4.1 Dynamic Defect Models -- 7.4.1.1 Rayleigh Models -- 7.4.1.2 Exponential and S-Curves Arrival Distribution Models -- 7.4.1.3 Empirical Data and Recommendations for Dynamic Models -- 7.4.2 Static Defect Models -- 7.4.2.1 Defect Insertion and Removal Model -- 7.4.2.2 Defect Removal Efficiency - A Key Metric -- 7.4.2.3 Static Defect Model Tools -- 7.5 Additional Defect Benchmark Data -- 7.5.1 Defect Data By Application Domain -- 7.5.2 Cumulative Defect Removal Efficiency (DRE) Benchmark -- 7.5.3 SEI Levels and Defect Relationships -- 7.5.4 Latent Defects -- 7.5.5 Other Defects Benchmarks and a Few Recommendations -- 7.6 Cost Effectiveness of Defect Removal by Phase -- 7.7 Defining and Using Simple Defect Metrics: An example -- 7.8 Some Paradoxical Patterns for Customer Reported Defects -- 7.9 Defect Summary -- 7.10 Problems -- 7.11 Projects -- 7.12 Answers to the initial questions -- References -- 8. Software Reliability Measurement and Prediction -- 8.1 Why study and measure software reliability? -- 8.2 What is reliability? -- 8.3 Faults and failures -- 8.4 Failure Severity Classes -- 8.5 Failure Intensity -- 8.6 The Cost of Reliability -- 8.7 Software Reliability Theory -- 8.7.1 Uniform and Random Distributions -- 8.7.2 The probability of failure during a time interval -- 8.7.3 F(t) - The Probability of Failure by time t -- 8.7.4 R(t) - The Reliability Function -- 8.7.5 Reliability Theory Summarized -- 8.8 Reliability Models -- 8.8.1 Types of Models -- 8.8.2 Predicting Number of Defects Remaining -- 8.8.3 Reliability Growth Models -- 8.8.4 Model Summary -- 8.9 Failure Arrival Rates -- 8.9.1 Predicting Failure Arrival Rates Using Historical Data.
505 8 _a8.9.2 Engineering Rules for MTTF -- 8.9.3 Musa's Algorithm -- 8.9.4 Operational Profile Testing -- 8.9.5 Predicting Reliability Summary -- 8.10 But when do I ship? -- 8.11 System Configurations: Probability and Reliability -- 8.12 Answers to Initial Question -- 8.13 Reliability Summary -- 8.14 Reliability Exercises -- 8.15 Reliability Project -- References -- 9. Response Time and Availability -- 9.1 Response Time Measurements -- 9.2 Availability -- 9.2.1 Availability Factors -- 9.2.2 Outage Scope -- 9.2.3 Complexities in Measuring Availability -- 9.2.4 Software Rejuvenation -- 9.2.4.1 Software Aging -- 9.2.4.2 Classification of Faults -- 9.2.4.3 Software Rejuvenation Techniques -- 9.2.4.4 Impact of Rejuvenation on Availability -- 9.3 Summary -- 9.4 Problems -- 9.5 Project -- References -- 10. Measuring Progress -- 10.1 Project Milestones -- 10.2 Code Integration -- 10.3 Testing Progress -- 10.4 Defect Discovery and Closure -- 10.4.1 Defect Discovery -- 10.4.2 Defect Closure -- 10.5 Process Effectiveness -- 10.6 Summary -- References -- Problems -- 11. Outsourcing -- 11.1 The "O" Word -- 11.2 Defining Outsourcing -- 11.3 Risk Management and Outsourcing -- 11.4 Metrics and the Contract -- 11.5 Summary -- References -- Exercises -- Problems -- Chapter 11 Sidebar -- 12. Financial Measures for the Software Engineer -- 12.1 It's All About the Green -- 12.2 Financial Concepts -- 12.3 Building the Business Case -- 12.3.1 Understanding Costs -- 12.3.1.1 Salaries -- 12.3.1.2 Overhead Costs -- 12.3.1.3 Risk Costs -- 12.3.1.3.1 Identifying Risk -- 12.3.1.3.2 Assessing Risks -- 12.3.1.3.3 Planning for Risk -- 12.3.1.3.4 Monitoring Risk -- 12.3.1.4 Capital versus Expense -- 12.3.2 Understanding Benefits -- 12.3.3 Business Case Metrics -- 12.3.3.1 Return on Investment -- 12.3.3.2 Pay-Back Period -- 12.3.3.3 Cost/Benefit Ratio -- 12.3.3.4 Profit & Loss Statement -- 12.3.3.5 Cash Flow -- 12.3.3.6 Expected Value -- 12.4 Living the Business Case -- 12.5 Summary -- References -- Problems.
505 8 _aProjects -- 13. Benchmarking -- 13.1 What is Benchmarking -- 13.2 Why Benchmark -- 13.3 What to Benchmark -- 13.4 Identifying and Obtaining a Benchmark -- 13.5 Collecting Actual Data -- 13.6 Taking Action -- 13.7 Current Benchmarks -- 13.8 Summary -- References -- Problems -- Projects -- 14. Presenting Metrics Effectively to Management -- 14.1 Decide on the Metrics -- 14.2 Draw the Picture -- 14.3 Create a Dashboard -- 14.4 Drilling for Information -- 14.5 Example for the Big Cheese -- 14.6 Evolving Metrics -- 14.7 Summary -- References -- Problems -- Project -- Index.
506 1 _aRestricted to subscribers or individual electronic text purchasers.
520 _aThis book serves as a practical guide to metrics and quantitative software estimation. It begins with the foundations of measurement and metrics, then focuses on techniques and tools for estimating the required effort and the resulting quality of a software project.
530 _aAlso available in print.
538 _aMode of access: World Wide Web.
550 _aDigitized and made available by: Books24x7.com.
588 _aTitle from title screen.
588 _aDescription based on PDF viewed 12/21/2015.
650 0 _aSoftware measurement.
_96856
650 0 _aSoftware engineering.
_94138
655 0 _aElectronic books.
_93294
695 _aIndexes
695 _aIndustries
695 _aJava
695 _aLogic gates
695 _aMathematical model
695 _aMeasurement
695 _aMonitoring
695 _aOutsourcing
695 _aPerformance analysis
695 _aProcess control
695 _aProduction
695 _aProductivity
695 _aProgramming
695 _aProgramming profession
695 _aProject management
695 _aReliability
695 _aReliability theory
695 _aSchedules
695 _aServers
695 _aSize measurement
695 _aSoftware
695 _aSoftware algorithms
695 _aSoftware engineering
695 _aSoftware measurement
695 _aSoftware reliability
695 _aStandards
695 _aStandards organizations
695 _aTesting
695 _aTime factors
695 _aTime measurement
695 _aTraining
695 _aUncertainty
695 _aWeaving
695 _aAvailability
695 _aBenchmark testing
695 _aBuildings
695 _aCompanies
695 _aComplexity theory
695 _aComputer bugs
695 _aContracts
695 _aConvergence
695 _aCouplings
695 _aData models
695 _aEncoding
695 _aEstimation
695 _aExtraterrestrial measurements
700 1 _aBrennan, M. Carol,
_d1954-
_927748
710 2 _aIEEE Xplore (Online Service),
_edistributor.
_927749
710 2 _aJohn Wiley & Sons,
_epublisher.
_96902
710 2 _aIEEE Computer Society.
_95286
710 2 _aBooks24x7, Inc.
_926262
776 0 8 _iPrint version:
_z9780471676225
830 0 _aQuantitative software engineering series ;
_v2
_927750
856 4 2 _3Abstract with links to resource
_uhttps://ieeexplore.ieee.org/xpl/bkabstractplus.jsp?bkn=5988896
942 _cEBK
999 _c74163
_d74163