

LongEx Mainframe Quarterly - May 2015
 

opinion: Why Don't We Use Source Metrics?

Source code metrics have fascinated me for years. The idea of a number indicating the size, quality or complexity of a program sounds brilliant – mathematics applied to source code. The problem is that in the mainframe area, no-one seems to be using them.

Source code metrics are hardly new. The idea of counting the number of lines of code (excluding blank lines and comments) has been around for decades. More complicated metrics go a bit deeper. In 1976, Thomas McCabe developed the Cyclomatic Complexity metric: a count of the linearly independent paths through a module. This can be viewed as an indicator of how complex a module's control flow is.
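To make this concrete, here's a rough sketch of the idea in Python. It estimates complexity for fixed-format COBOL by counting decision keywords; a real tool would parse the source properly, and the column handling and keyword list here are my own simplifications:

    import re

    # Crude McCabe estimate for fixed-format COBOL source: complexity is
    # taken as 1 plus the number of decision keywords found. A keyword
    # scan is only an approximation of a proper parse.
    DECISION_WORDS = ("IF", "WHEN", "UNTIL", "AND", "OR")

    def approximate_complexity(source):
        """Return an estimate of McCabe complexity for COBOL source."""
        complexity = 1                         # a branchless module has one path
        for line in source.splitlines():
            if line[6:7] == "*":               # asterisk in column 7: comment line
                continue
            code = line[7:72].upper()          # areas A and B only
            code = code.replace("END-IF", "")  # don't count scope terminators
            for word in DECISION_WORDS:
                complexity += len(re.findall(r"\b" + word + r"\b", code))
        return complexity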

In 1977, Maurice Halstead introduced a set of source code metrics including program difficulty, effort, time required to program and an estimated number of bugs. More recent metrics build on these veterans. For example, the Maintainability Index, introduced in 1992, combines lines of code, cyclomatic complexity and Halstead volume into a single score.
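The arithmetic behind these metrics is simple once you have the counts. Here are the commonly published formulas sketched in Python; the constants (18, 3000, and the Maintainability Index coefficients) come from the original papers, while extracting the operator and operand counts from real source is the hard part I'm glossing over:

    import math

    def halstead(n1, n2, N1, N2):
        """Halstead metrics from distinct (n1, n2) and total (N1, N2)
        operator and operand counts."""
        vocabulary = n1 + n2
        length = N1 + N2
        volume = length * math.log2(vocabulary)
        difficulty = (n1 / 2.0) * (N2 / float(n2))
        effort = difficulty * volume
        time_seconds = effort / 18       # Halstead's Stroud number
        bugs = volume / 3000.0           # estimated delivered bugs
        return volume, difficulty, effort, time_seconds, bugs

    def maintainability_index(volume, cyclomatic, loc):
        """The original 1992 formula; higher scores mean easier to maintain."""
        return (171 - 5.2 * math.log(volume)
                - 0.23 * cyclomatic - 16.2 * math.log(loc))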

This is all great, but how would you use them? There are actually a lot of possibilities. One of the first I would consider is simple: calculate a couple of metrics whenever a program is created or changed, and record them. If a change increases a program's complexity, flag it. If a new program exceeds a limit (for example, it has almost no comments), flag that too.
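As a sketch, such a check might look like the following. The thresholds, the metrics dictionary and the history store are all my assumptions; in practice this would hang off your SCM's exit points or a build job:

    # Illustrative thresholds; every site would pick its own.
    COMPLEXITY_LIMIT = 50
    MIN_COMMENT_RATIO = 0.10

    def check_change(name, metrics, history):
        """Record a program's new metrics; return warnings for reviewers."""
        warnings = []
        previous = history.get(name)
        if previous and metrics["complexity"] > previous["complexity"]:
            warnings.append("{0}: complexity rose from {1} to {2}".format(
                name, previous["complexity"], metrics["complexity"]))
        if metrics["complexity"] > COMPLEXITY_LIMIT:
            warnings.append("{0}: complexity exceeds {1}".format(
                name, COMPLEXITY_LIMIT))
        if metrics["comment_ratio"] < MIN_COMMENT_RATIO:
            warnings.append("{0}: almost no comments ({1:.0%})".format(
                name, metrics["comment_ratio"]))
        history[name] = metrics            # record for the next change
        return warnings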

I'd also go a step further. I'd calculate metrics for all existing programs in an attempt to identify potential hotspots. Combined with other information such as problem history, frequency of use, risk (a program updating a database carries more risk than one producing a report) and number of changes, these metrics could identify programs likely to cause trouble. These could then be targeted for further analysis.
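One simple way to combine these factors is a weighted score. The weights and factor names below are invented for illustration, and each factor is assumed to already be normalised to a 0-1 scale:

    # Invented weights for illustration; each factor is normalised to 0-1.
    WEIGHTS = {
        "complexity": 0.30,   # e.g. scaled cyclomatic complexity
        "problems":   0.25,   # problem history
        "usage":      0.15,   # frequency of use
        "risk":       0.20,   # e.g. updates a database vs produces a report
        "churn":      0.10,   # number of recent changes
    }

    def hotspot_score(program):
        """Weighted sum of normalised factors; higher = look at it first."""
        return sum(WEIGHTS[factor] * program[factor] for factor in WEIGHTS)

    # Hypothetical programs, ranked for further analysis.
    programs = [
        {"name": "PAYUPDT", "complexity": 0.9, "problems": 0.7,
         "usage": 0.8, "risk": 1.0, "churn": 0.6},
        {"name": "RPTMNTH", "complexity": 0.4, "problems": 0.1,
         "usage": 0.3, "risk": 0.2, "churn": 0.1},
    ]
    for program in sorted(programs, key=hotspot_score, reverse=True):
        print(program["name"], round(hotspot_score(program), 2))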

And this is the tip of the iceberg. You could use these metrics when scoping any new development or modification of existing code or applications. Or you could compare the metrics of individual programmers or organisations as a possible guide to their productivity.

Now, I've been working with mainframes since 1989, and have seen and worked with many different sites and organisations. However, I have yet to work at a site that uses source code metrics in any serious, ongoing way. I'm a systems-related consultant, so it's possible that I've missed some application teams leveraging source code metrics. So when researching this article, I asked a few others with more mainframe application experience. The general consensus is that almost no sites use source code metrics. Those that do use them for project estimation, not for code quality.

I find this amazing. Why isn't anyone doing this? My guess is the usual: no time. Application development groups are pulled in two directions: pressure to quickly deliver new functionality, and pressure to reduce costs. Add the work of maintaining existing code, fixing bugs and keeping performance acceptable, and they're busy enough already. To application teams, implementing my metrics ideas would seem like madness: more work generating and storing the metrics, and more steps in the review process. And if existing code scores poorly, it doesn't look good. Bottom line: more work, longer time to deliver, and a chance of looking bad. There needs to be a significant payoff before an application manager will add this project to their inbox.

Another reason is that not everyone is convinced that source code metrics are worth it. There is debate over the validity of some source code metrics. A more valid concern is over how these metrics are used. If a programmer knows his program has to achieve a certain metric or score, he'll manipulate the source to achieve it, sometimes at the cost of the code quality we're trying to improve. Similarly, in some cases a higher (or lower) value may be justified. So sending perfectly acceptable source back to the programmer because it hasn't achieved a set score is pointless and counter-productive. These fears mirror concerns when using any statistics, and can be met in the same way: sensible use. Tying remuneration or bonuses to a metric is a bad idea, as is using a metric score as a hard limit. A better way is to use metrics as a guide. If a program exceeds a metric, it's worth discussion: a programmer could justify this with a written line or two for review teams.
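In code, that turns the metric from a gate into a guide. A sketch, where the justification marker is purely my own invented convention:

    def review_flag(name, complexity, limit, source):
        """Flag a breach for discussion unless the programmer justified it."""
        if complexity <= limit:
            return None
        if "*METRIC-JUSTIFY:" in source.upper():
            return None    # a written justification waives the flag
        return ("{0}: complexity {1} exceeds guide value {2}; add a "
                "*METRIC-JUSTIFY: comment or raise with the review team"
                .format(name, complexity, limit))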

A third reason is also simple: there aren't many mainframe-related tools that can do it. There are lots of non-mainframe tools, but few that work well with z/OS. IBM's relatively new Rational Asset Analyzer (RAA) is just about your only z/OS-based option. One or two non-mainframe options such as EZSource can also work easily with z/OS. But that's about it. Again, I find this amazing. I would have thought that SCM software such as CA Endevor or IBM SCLM would include this functionality, but they don't. Even more interesting, source code metrics have been around for decades, yet there is almost no z/OS-based legacy software that calculates them. IBM's RAA hasn't been around that long.

In a world where the original programmers are retiring, and organisations have less contact with and control over their source code thanks to outsourcing and other business arrangements, I would have thought that anything that could quickly show the quality of source code would be very attractive.


David Stephens



