# Measurement of quality

Today, enterprises depend on IT far more than they did twenty years ago. Business, too, is changing very fast, and to remain competitive the agility of the IT infrastructure and software is essential. Virtualization and the cloud provide this much-needed agility on the infrastructure side, but when it comes to software, and custom software in particular, the solution is not as simple. It all boils down to how fast the software, specifically the custom software, can be restructured to meet the ever-changing demands of the business. Among the many factors that influence this restructuring, the biggest obstacle to high agility is the quality of the code, and hence the measurement of quality.

There are many quality metrics in the software industry used today to measure some aspect of code. For example, cyclomatic complexity, which measures the number of linearly independent paths through a section of a program, gives a measure of that section's complexity in some way. Is this a complete measure of complexity? Obviously the answer is no, since complexity depends on many other factors apart from the linearly independent paths. Some of the key measures are cyclomatic complexity, cohesion, coupling (for example Data Abstraction Coupling and Fan-out Coupling), NPath complexity, code coverage, program size, documentation, MOOD metrics, and adherence to standards.
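As an illustration, cyclomatic complexity can be estimated by counting branching constructs. The sketch below is a simplification in Python, not the full set of rules applied by real analyzers such as radon or lizard:

```python
import ast

# Branching constructs counted by this simplified estimate. Real analyzers
# apply more detailed rules; this list is illustrative only.
BRANCH_NODES = (ast.If, ast.IfExp, ast.For, ast.While, ast.ExceptHandler)

def cyclomatic_complexity(source: str) -> int:
    """Rough cyclomatic complexity: 1 plus the number of decision points."""
    tree = ast.parse(source)
    complexity = 1
    for node in ast.walk(tree):
        if isinstance(node, BRANCH_NODES):
            complexity += 1
        elif isinstance(node, ast.BoolOp):
            # 'a and b or c' adds one path per extra boolean operand
            complexity += len(node.values) - 1
    return complexity

SAMPLE = """
def classify(x):
    if x < 0:
        return "negative"
    elif x == 0:
        return "zero"
    return "positive"
"""
print(cyclomatic_complexity(SAMPLE))  # 3: one base path plus two decisions
```

The `if`/`elif` pair contributes two decision points, so the function scores 3, squarely in the "simple" range discussed later.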

## Software quality measurement

The quantities obtained from these quality metrics differ because they measure different aspects of the code. Simply applying a mathematical operation to some of these quantities and then adding them does give us a measure, e.g. the Maintainability Index, but will it balance the concerns of all the different stakeholders? A single approach to fit all needs would be too simplistic. Over the years the Maintainability Index has been redefined many times. Following are some of its definitions:

• The original formula:
MI = 171 – 5.2 * ln(V) – 0.23 * G – 16.2 * ln(LOC)
• The derivative used by SEI:
MI = 171 – 5.2 * log2(V) – 0.23 * G – 16.2 * log2(LOC) + 50 * sin(sqrt(2.4 * CM))
• The derivative used by Microsoft Visual Studio (since v2008):
sMI = MAX(0, (171 – 5.2 * ln(V) – 0.23 * G – 16.2 * ln(LOC)) * 100 / 171)

The above formulations use V for Halstead Program Volume; G for cyclomatic complexity; LOC for lines of source code; and CM for comment ratio (lines of comment to the total number of lines). These formulations of the Maintainability Index drew on the experience and skill of the individuals and organizations where they were first conceived. Defining such an index has long been an art, highly dependent on the skills of the individuals and the data they are working with. Note that only with experience have these individuals and organizations been able to find constants and mathematical functions that give results matching expectations for the code at hand.
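The three variants can be written out directly in code. A small sketch in Python follows; the sample inputs are hypothetical values, not measurements from any real module:

```python
import math

def mi_original(V, G, LOC):
    # The original Maintainability Index formula
    return 171 - 5.2 * math.log(V) - 0.23 * G - 16.2 * math.log(LOC)

def mi_sei(V, G, LOC, CM):
    # The SEI derivative: base-2 logarithms plus a comment-ratio term
    return (171 - 5.2 * math.log2(V) - 0.23 * G
            - 16.2 * math.log2(LOC) + 50 * math.sin(math.sqrt(2.4 * CM)))

def mi_visual_studio(V, G, LOC):
    # The Visual Studio (v2008+) derivative, clamped and rescaled to 0-100
    return max(0.0, mi_original(V, G, LOC) * 100 / 171)

# Hypothetical module: volume 1000, complexity 12, 300 lines, 15% comments
print(round(mi_original(1000, 12, 300), 1))       # 39.9
print(round(mi_visual_studio(1000, 12, 300), 1))  # 23.3
```

Note how the same module lands on very different numbers under the different definitions, which is precisely the calibration problem described above.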

In my experience developing and maintaining software for many of my organization's customers, I have seen the concerns change over time within the same engagement. An index such as the above still yields the same measure and therefore becomes less applicable. From engagement to engagement, since priorities vary, reusing the same index is likewise less applicable. Maintenance engagements, I have noticed, are more focused on maintainability, as is the case with mature products. Development engagements are more holistic, but they too tend to focus on the maintainability aspect as the product being developed matures.

The quality metrics cited above are not the universe. They are bound to change, and newer, smarter metrics will be added. Architects and managers will certainly want to use those too.

A more mature method therefore needs to be developed, one that is independent of the quality metric in question and treats all quality metrics in a similar manner. With quantities produced by such a method, it would be easier to alter the index based on the concerns relevant at the time and still be able to correlate it with index values obtained historically.

## Why should such a quantity exist?

To answer this question, I would like to consider the example of two applications, A1 and A2, along with the quality metric cyclomatic complexity. Let these two applications have a similar number of code artifacts. Application A1 has most of its code artifacts with cyclomatic complexity in the 1-3 range, while application A2 has most of its code artifacts with cyclomatic complexity in the 8-10 range. Note that the industry best practice for this quality measure is 10. So the question is: do the two applications considered in this scenario have the same code quality?

Obviously the answer is no. The artifacts in application A1 have lower cyclomatic complexity than those in application A2, which in turn means that the artifacts of application A1 are simpler than those of application A2. The graph of the two applications, plotted in this manner, makes this fact very obvious.

Notice that in the graph above I compared two applications. Let us assume for the moment that we have a mathematical formulation which can compare two applications in the manner shown in the graph and give us a quantity. What if we were to compare each application with a hypothetically perfect application of similar size? With the assumed mathematical formulation we can then obtain a quantity for each application and use it to compare the two.
Now, what is such a mathematical formulation? One would be tempted to use the average, but the average does not capture the essence present in the graph. If one delves further into statistical quantities, the quantity that captures the essence of the graph above is the general correlation coefficient. Here, the correlation is computed on the counts of code artifacts having particular ranges of values of the quality metric, against a hypothetical perfect application. Note that it is very simple to specify a similar-sized perfect application: all its counts would lie in the range considered best from the quality perspective for that quality metric. The formula that I will use for correlation, derived from the general correlation coefficient, is as follows:
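With ai the score of artifact i in the application under assessment and bi the score of the corresponding artifact in the hypothetical perfect application, the product-moment form of the general correlation coefficient gives (a reconstruction, but one that reproduces the τ values tabulated later in this section):

τ = Σ (ai * bi) / sqrt( Σ ai² * Σ bi² ), with the sums running over the n code artifacts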

The scores ai are derived by subtracting the quality metric value of code artifact i from the value that is considered best for that quality metric. This is done for all artifacts that are not at the desirable levels (it should be ensured that these values are negative). For artifacts that are at the desirable levels, the value obtained for the quality metric itself is used. If, however, the quality metric values are grouped into k groups, with the ith group assigned the score ai and ni being the count of the application's artifacts lying in the ith group (so that n1 + n2 + … + nk = n), the above formula changes to
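Writing b for the score of the best group, which is the score every artifact of the hypothetical perfect application carries, the grouped formula presumably becomes (again a reconstruction, verified against the τ values in the table below):

τ = Σ (ni * ai * b) / sqrt( Σ (ni * ai²) * n * b² ), with the sums running over the k groups

which simplifies to

τ = Σ (ni * ai) / sqrt( n * Σ (ni * ai²) )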

Now let us look at some good formulations of this quantity for a given quality metric. The table below shows scenarios for different kinds of applications, where the counts for the quality metric are split into three groups: good (score 2), average (score −1), and bad (score −2).

| Quality metric group (score) | Perfect | Bad | Bad | Bad | Below average | Below average | Average | On the edge | Good |
|---|---|---|---|---|---|---|---|---|---|
| Good (2) | 50 | 0 | 0 | 0 | 25 | 25 | 25 | 35 | 40 |
| Average (−1) | 0 | 50 | 0 | 25 | 0 | 17 | 25 | 15 | 7 |
| Bad (−2) | 0 | 0 | 50 | 25 | 25 | 8 | 0 | 0 | 3 |
| Expected quantity | — | < 0.2 | < 0.2 | < 0.2 | < 0.4 | < 0.4 | < 0.65 | < 0.7 | > 0.7 |
| τ | 1 | −1 | −1 | −0.948 | 0 | 0.197 | 0.316 | 0.625 | 0.709 |
| (1 + τ)/2 | 1 | 0 | 0 | 0.025 | 0.5 | 0.598 | 0.658 | 0.812 | 0.854 |
| [(1 + τ)/2]² | 1 | 0 | 0 | 0 | 0.25 | 0.358 | 0.433 | 0.659 | 0.729 |

Notice that the correlation values for bad applications span the range from −1 up to 0.5, while applications that are average or better occupy only the range from 0.5 to 1. This leaves little scope for distinguishing good, on-the-edge, and average applications. Thankfully, since the number lies between −1 and 1, squaring or cubing it stretches the range exactly where we want it stretched. Squaring (while keeping the sign) compresses the range for bad applications to between −1 and 0.25 while expanding the range for the other types of applications to between 0.25 and 1. Also notice that the calculation (1 + τ)/2 simply maps the whole range from [−1, 1] to [0, 1]. The quantity [(1 + τ)/2]² thus gives values very close to what I was expecting for each type of application.
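Under the grouped scoring used in the table (good = 2, average = −1, bad = −2), the τ and [(1 + τ)/2]² rows can be reproduced with a short sketch; the function names here are mine, chosen for illustration:

```python
import math

# Scores assigned to the three groups, as in the table above
SCORES = {"good": 2, "average": -1, "bad": -2}

def tau(counts):
    """Correlation of an application's grouped artifact counts against a
    hypothetical perfect application of the same size (all artifacts 'good')."""
    n = sum(counts.values())
    num = sum(c * SCORES[g] for g, c in counts.items())
    den = math.sqrt(n * sum(c * SCORES[g] ** 2 for g, c in counts.items()))
    return num / den

def quality_index(counts):
    # Shift [-1, 1] to [0, 1], then square to widen the interesting range
    return ((1 + tau(counts)) / 2) ** 2

bad_app = {"good": 0, "average": 25, "bad": 25}   # third 'Bad' column
good_app = {"good": 40, "average": 7, "bad": 3}   # 'Good' column
print(round(tau(bad_app), 3))  # -0.949
print(round(quality_index(good_app), 3))
```

The printed values match the table's −0.948 and 0.729 columns up to rounding.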

This method of quantifying software quality provides a way to calculate a correlation value for each of the different code quality metrics. Since the values obtained are all correlation values, a weighted addition can easily be done to arrive at an overall score, with the weights chosen to match the key concerns of the various stakeholders in the software. Such a score does not depend on the skills of an individual; it therefore has greater significance and can be used in many ways.
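A minimal sketch of such a weighted addition follows; the metric names, per-metric index values, and weights are purely illustrative, not prescribed by the method:

```python
# Per-metric indices [(1 + tau)/2]^2 and stakeholder-chosen weights.
# All names and numbers below are hypothetical examples.
indices = {"cyclomatic_complexity": 0.73, "coupling": 0.66, "code_coverage": 0.43}
weights = {"cyclomatic_complexity": 0.5, "coupling": 0.3, "code_coverage": 0.2}

# Weighted average of the per-metric indices
overall = sum(weights[m] * indices[m] for m in indices) / sum(weights.values())
print(round(overall, 3))  # 0.649
```

Changing the weights as stakeholder concerns shift leaves the underlying per-metric indices untouched, which is what makes historical comparison possible.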