
Monday, October 6, 2014

The Impact of Complexity

In software development, complexity is defined as the effort required to understand a program or algorithm. Consequently, the effort for the new or further development of software depends not only on the number of user functions, data structures, data elements, etc. (depending on the measuring method) but also on the complexity. However, it is necessary to distinguish between different types of complexity.

The Complexity of an Implementation represents the effort required to understand the code and design of an application. It is important for estimating the costs of maintaining and extending existing systems. There are multiple metrics that measure different aspects of the complexity inherent in the code: the McCabe metric measures the cyclomatic complexity, the Halstead metric the textual-lexical complexity, the RfC metric the class complexity, etc. For the cost estimation of a planned new development project this type of complexity has no significance, because it only arises during the development process. Furthermore, different implementations of the same use case can have different levels of complexity.
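To make the first of these metrics concrete, here is a minimal sketch (not from the post) of McCabe's cyclomatic complexity, computed from a control-flow graph with the formula M = E - N + 2P:

```python
def cyclomatic_complexity(edges: int, nodes: int, components: int = 1) -> int:
    """McCabe metric: M = E - N + 2P for a control-flow graph
    with E edges, N nodes and P connected components."""
    return edges - nodes + 2 * components

# Straight-line code: entry -> exit, 1 edge, 2 nodes -> M = 1
print(cyclomatic_complexity(1, 2))  # 1

# One if/else: entry, then-block, else-block, join = 4 nodes, 4 edges -> M = 2
print(cyclomatic_complexity(4, 4))  # 2
```

Each additional decision point adds one to M, which is why the metric is often approximated as "number of branch points plus one".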

The complexity of an implementation has no significance for functional size measurement, but the interactional Complexity has. It represents the effort required to understand and implement the steps of the use case scenarios. The Function Point Analysis derives it from structural parameters such as the number of data element types and record element types related to user functions. The Data Interaction Point Method considers the complexity of use cases according to how the counted data elements are used, e.g. for input/output, persistence or read-only access. The COSMIC Method does not consider complexity at all, which is a disadvantage.
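As an illustration of how Function Point Analysis derives complexity from structural parameters, the sketch below rates an external input from its file types referenced (FTRs) and data element types (DETs). The rating table is the commonly published IFPUG one for external inputs; treat it as illustrative, not normative:

```python
def ei_complexity(ftrs: int, dets: int) -> str:
    """Rate an external input as low/average/high from its
    file types referenced (FTRs) and data element types (DETs),
    following the commonly published IFPUG rating table."""
    if ftrs <= 1:
        return "average" if dets >= 16 else "low"
    if ftrs == 2:
        if dets <= 4:
            return "low"
        return "average" if dets <= 15 else "high"
    return "average" if dets <= 4 else "high"

# Unadjusted function point weights for external inputs:
EI_WEIGHTS = {"low": 3, "average": 4, "high": 6}

# A dialog input touching 2 logical files with 7 fields:
rating = ei_complexity(2, 7)
print(rating, EI_WEIGHTS[rating])  # average 4
```

The same structural counting scheme, with different tables, applies to outputs, inquiries and logical files.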

All functional size metrics that conform to the standard ISO/IEC 14143 ignore the algorithmic Complexity of the business logic residing in the processes and functions of a system. They are all based on the assumption that algorithmic complexity is of secondary importance compared to functional size. In fact, in most systems the algorithmic complexity is limited to input validations, data queries, output processing, simple logical operations and calculations, etc. Usually the number of such functions correlates with the number of interactions with the system's actors. Hence the error due to ignoring the algorithmic complexity is generally negligible. However, this does not apply to systems with primarily algorithmic processing, e.g. a route planning program: its input is a start location and a destination, its output a list of route segments. Every functional size metric would consider only the user interactions and measure a small functional size. Since the development of the undoubtedly highly complex planning algorithms requires great effort, calculating productivity on the basis of the small functional size would indicate a very low productivity compared with other systems - which is misleading. Conclusion: functional size measurement is not suitable for systems with above-average algorithmic complexity.
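The route-planner argument can be made concrete with invented numbers (all figures below are hypothetical, chosen only to illustrate the distortion):

```python
def productivity(function_points: float, person_months: float) -> float:
    """Size-based productivity: functional size delivered per effort spent."""
    return function_points / person_months

# A typical business application: many user interactions, modest algorithms.
business_app = productivity(400, 40)    # 10.0 FP per person-month

# A route planner: the same effort, but almost all of it went into
# algorithms that the functional size metric does not see.
route_planner = productivity(20, 40)    # 0.5 FP per person-month

print(business_app, route_planner)
```

The twenty-fold gap reflects the ignored algorithmic complexity, not a difference in the teams' actual performance.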

Monday, July 28, 2014

Economical Measurements (The Limits of Automation)

The functional size of an application should be measured regularly in order to determine the growth and hence the development productivity of each release or increment. Only by comparing cyclic measurements can improving or worsening trends be identified. Furthermore, the organisation gains empirical productivity values, which are the key to cost estimates for new development projects (see: Measuring Productivity in Software Development).
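A minimal sketch of deriving per-release productivity from such cyclic size measurements; the release sizes and effort figures are invented for illustration:

```python
def release_productivity(sizes: list, efforts: list) -> list:
    """Productivity per release: functional size growth divided by
    the effort spent on that release."""
    result = []
    previous_size = 0
    for size, effort in zip(sizes, efforts):
        result.append((size - previous_size) / effort)
        previous_size = size
    return result

# Measured application size after each release, and effort per release:
print(release_productivity([400, 520, 600], [40, 15, 10]))
# growth of 400, 120 and 80 FP -> [10.0, 8.0, 8.0] FP per person-month
```

Comparing these values across releases is what reveals improving or worsening trends.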

Experience shows that cyclic measurements will be neglected if the associated effort is too high. Measuring techniques based on counting rules allow a high degree of automation by mapping the objects to be counted to design features of an existing application and implementing the corresponding scripts or programs for counting. The investment in implementing these scripts or programs quickly pays for itself once repeated measurements no longer require manual effort. Possible targets for automated counting are:

  • Dialog models represented by XML or XHTML files as being used by many GUI frameworks
  • XML schemas defining interface or message structures
  • Meta data of a DBMS including information about tables and attributes
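The first of these targets can be sketched as follows. The element names (`input`, `select`, `textarea`) and the inline dialog model are hypothetical; a real GUI framework would use its own schema:

```python
import xml.etree.ElementTree as ET

def count_data_elements(dialog_xml: str) -> int:
    """Count candidate data element types in an XML dialog model
    by counting the form-field elements it contains."""
    root = ET.fromstring(dialog_xml)
    field_tags = ("input", "select", "textarea")
    return sum(1 for element in root.iter() if element.tag in field_tags)

dialog = """<form>
  <input name="start"/>
  <input name="destination"/>
  <select name="mode"/>
</form>"""

print(count_data_elements(dialog))  # 3
```

Run over every dialog file of an application, such a script turns a whole counting category into a single automated step.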
When applying automated measurement processes that focus more on design features than on the use cases of a system, there is a risk of inaccuracy if counting objects are reused in multiple places of the application design. An example is the comparison of two applications A and B, which are different implementations of the same use cases:
  • In application A, similar dialogs have been implemented separately and partially redundantly.
  • In application B, a generic dialog has been implemented which adapts automatically to the respective use case or invocation.
An automated size measurement based on design features will yield a lower value for application B, which is wrong because both applications implement the same use cases. In the worst case this promotes bad development practice (copy & paste) and poor design, if developers try to increase the measured productivity by creating redundant counting objects. This is fatal in several respects, because the reuse of components and services is one of the most important keys to sustainably increasing productivity and improving maintainability.

Conclusion: Automated measuring techniques based on design features are necessary to make size measurements practicable at all. A side effect, however, is the limited comparability of applications with different degrees of reuse.

Monday, October 21, 2013

When is Software Development agile?


The following aspects characterise agile software development:
  • Parts of a system are developed at different times, and the system is continuously enhanced by already completed parts. This is called incremental development.
  • Defect analyses and measurements are used as learning opportunities by team members and for improvements of the organisation.
  • All involved parties work closely together and cooperate directly, e.g. through a high degree of teamwork and continuous involvement of the customer or product owner.
Models for agile development have actually existed since the 1990s. They follow defined processes and, due to their mostly short iteration cycles (sprints), rely heavily on measurements of progress, code quality, test coverage, etc., and on fast feedback of these results to the developers.

Monday, October 7, 2013

Software Development with Methods of Industrial Production

Methods of industrial production may be a response to 
  • increasing complexity of requirements,
  • shorter development cycles and
  • growing cost pressure.
This has nothing to do with the series production of identical products. Rather, it concerns the following aspects:
  • Standardization of system and application architecture 
  • Reusability of components
  • Automation of development and quality assurance processes
  • Measurability of productivity and quality
While many standards, concepts for reusability and tools for automation are available and established in software development today, many companies neglect measurements or use estimates instead. However, regular and accurate measurements of productivity and quality are an important basis for reliably determining the effort of upcoming development projects (see also Measuring Productivity in Software Development).