Wednesday, January 23, 2013

Methods for measuring Software Size



Key Performance Indicators (KPIs) are required to check whether certain aspects of maintenance, development or operation are acceptable, can be improved or even require action. The corresponding thresholds or baselines can be found by tracking KPIs over a longer period of time - or by benchmarking, i.e. by comparing KPIs across different systems or projects.

A good example of the dilemma of comparability is the defect rate. Of course, the defect rate can be defined as the number of defects (of the relevant defect types) in a certain period of time. With such a metric it is not possible to decide whether a system with 20 defects per month is better or worse than another system with 30 defects per month. A second value is required to normalize the defect rate. Similar to the lot size in industrial production, we must quantify the amount or size of the software: if the system with 30 defects per month is twice the size of the system with 20 defects per month, its defect count is actually less critical. The resulting metric is called defect density (see: KPIs for Controlling in Software Maintenance and Development).
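The normalization described above can be sketched in a few lines. This is a minimal illustration, not a prescribed formula; the sizes (in function points) and defect counts are the hypothetical values from the example:

```python
# Normalizing defect counts by software size yields a comparable
# defect density. Any consistent size unit (function points, DIPs,
# LOC) would work, as long as both systems use the same unit.

def defect_density(defects_per_month: int, size: float) -> float:
    """Defects per month per unit of software size."""
    return defects_per_month / size

# System A: 20 defects/month at size 1000.
# System B: 30 defects/month, but double the size (2000).
density_a = defect_density(20, 1000)  # 0.020
density_b = defect_density(30, 2000)  # 0.015
```

Despite reporting more absolute defects, system B has the lower defect density and is therefore rated less critical.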

There are many different approaches to measuring the size of software. In the 1970s, lines of code were counted. Because this metric depends on the programming language and on individual programming styles, it does not meet the requirements of most KPIs. A better approach is to consider the use cases of a system, as described by the standard ISO/IEC 14143 for functional size measurement.
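The style dependence of lines of code is easy to demonstrate. The sketch below uses a naive physical-LOC counter (non-blank, non-comment lines) on two hypothetical, functionally identical snippets:

```python
# Two functionally identical snippets yield different LOC counts,
# illustrating why raw lines of code depend on programming style.

def count_loc(source: str) -> int:
    """Count non-blank, non-comment lines (naive physical LOC)."""
    return sum(
        1
        for line in source.splitlines()
        if line.strip() and not line.strip().startswith("#")
    )

compact = "def add(a, b): return a + b"
verbose = """
# add two numbers
def add(a, b):
    result = a + b
    return result
"""

count_loc(compact)  # 1
count_loc(verbose)  # 3
```

The same functionality is measured as one line or three, depending only on how the author chose to write it.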

A very popular metric is the function point method, in which user functions/transactions and logical files related to the use cases are counted. This metric is therefore independent of the code and of other technological aspects. However, a function point analysis can be time-consuming because it requires a good understanding of the system's functionality. The weights of the counted transactions and files are based on their complexity and are defined on interval scales with only three steps. See also my post Is the Function Point Method still up-to-date?
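The three-step weighting can be sketched as a simple lookup. The weight table below uses the commonly cited IFPUG values for unadjusted function points; the counted items in the example are hypothetical:

```python
# Unadjusted function point count: each counted transaction or file
# gets a weight from a three-step complexity scale (low/average/high).

WEIGHTS = {
    # component: (low, average, high) - commonly cited IFPUG weights
    "EI":  (3, 4, 6),    # external inputs
    "EO":  (4, 5, 7),    # external outputs
    "EQ":  (3, 4, 6),    # external inquiries
    "ILF": (7, 10, 15),  # internal logical files
    "EIF": (5, 7, 10),   # external interface files
}
COMPLEXITY = {"low": 0, "average": 1, "high": 2}

def unadjusted_fp(counts):
    """counts: list of (component, complexity) tuples."""
    return sum(WEIGHTS[comp][COMPLEXITY[cplx]] for comp, cplx in counts)

# Hypothetical system: two simple inputs, one average output,
# one complex internal logical file.
size = unadjusted_fp([
    ("EI", "low"), ("EI", "low"), ("EO", "average"), ("ILF", "high"),
])  # 3 + 3 + 5 + 15 = 26
```

Note that the counting itself (identifying the transactions and files and rating their complexity) is the time-consuming, judgment-driven part; the arithmetic above is trivial once the counts exist.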

The DIP method (Data Interaction Point Method) of the PASS Consulting Group counts the data elements related to the use cases of a system and their interactions with users and with external systems. In many cases, counting data elements and their equivalents in dialogs and interfaces can be automated. Weights are also based on complexity, but depend only on the usage of a data element. See also The PASS Data Interaction Point Method (DIP Method).

Of course, many other metrics can be feasible for normalizing KPIs in IT management, as long as they are
  • objective (measured values are independent of the measurer), 
  • reliable (repeated measurements show the same results) and 
  • valid (measured values represent the quantity to be measured).
