LongEx Mainframe Quarterly - May 2015
Management: How Good is Your Existing Code?
Code quality is a subject that's been beaten to death. There are plenty of resources for those interested in developing better code: from books, seminars and courses to tools and methodologies. We all know that better code is cheaper to maintain and enhance, runs better and faster, and suffers fewer bugs and errors. Creating better quality code now will pay off in the future – again and again. However, in the mainframe world we're chained to the past: we have a large amount of existing code – developed over decades. So how can we find out how good this existing code really is?
|
Do We Really Need to Know?
In the 1990s, many organisations were grappling with the Year 2000 (Y2K) problem. This forced them to look at their existing code – something many preferred to avoid. These companies were faced with a huge project, and it was difficult to know where to begin. How could you estimate the scale of the problem, the resources and time needed, and the potential risks involved? As we now know, the smart first step was to assess the problem. An obvious part of this was to scan code for areas performing date processing. Another part was to assess the code quality, or complexity: it takes more time to analyse and modify a complex routine than a simple one.
This is a high-profile example from the past, but the quality of existing source code interests many different parties:
- Outsourcing companies: source code quality directly affects costs in maintaining or enhancing legacy code.
- Companies preparing to outsource support of their legacy applications: code quality is important when assessing incoming bids - are they realistic?
- Companies that have already outsourced their applications development: is the code produced of high quality, or not?
- Developers of software products: code quality assessments are needed to prioritise areas for maintenance, or assess enhancement options. This is especially true for legacy mainframe software, and for companies acquiring these legacy software products.
The Right Source?
It's tempting to jump straight in and start analysing source code. However with legacy applications, there's a bigger problem to consider first: is it the right source code?
In some companies, there is uncertainty about which source code relates to an executable module. For example, there may be three source code versions of PGMA. But which of these was used to create the load module PGMA? In some cases, there may be several libraries with different PGMA modules, or different modules on different systems. In the worst case, the source has been completely lost (the article What To Do When You Lose the Source explores this further).
There's no way to confirm which source code was used to build a module other than by recompiling it with the same compiler and comparing the result with the existing load module byte by byte. This is covered in more detail in our article How to Check If You Have The Right Source Code.
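As a very rough illustration (and not a substitute for a proper compare tool), the Python sketch below compares two copies of a module byte by byte. It assumes both copies have been transferred from z/OS as binary files; the file names are hypothetical, and in practice embedded link-edit timestamps and IDR data may cause benign differences that this sketch does not account for.

# A rough sketch only: compare two copies of a load module byte by byte.
# Assumes both copies have been transferred from z/OS as binary files;
# the file names used in the example are hypothetical.
def first_difference(path_a, path_b):
    """Return the offset of the first differing byte, or None if identical."""
    with open(path_a, "rb") as a, open(path_b, "rb") as b:
        offset = 0
        while True:
            chunk_a = a.read(4096)
            chunk_b = b.read(4096)
            if chunk_a != chunk_b:
                # Walk the chunks to find the exact byte that differs
                for i in range(max(len(chunk_a), len(chunk_b))):
                    byte_a = chunk_a[i] if i < len(chunk_a) else None
                    byte_b = chunk_b[i] if i < len(chunk_b) else None
                    if byte_a != byte_b:
                        return offset + i
            if not chunk_a and not chunk_b:
                return None        # end of both files, no difference found
            offset += len(chunk_a)

# Example (hypothetical file names):
# print(first_difference("pgma.recompiled.bin", "pgma.production.bin"))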
Source Code Metrics
So we need a way of quickly gauging source code quality. There has been much discussion about what quality actually means, but it is safe to say that we want:
- Functionality – does it do what it's supposed to, and will it continue to do so?
- Readability and complexity – can another programmer understand it?
- Efficiency – is it fast?
- Robustness – is the error handling good enough?
So how do we measure these? Source code metrics are a good place to start.
The idea of generating a numeric score, or metric, from source code isn't new. Code complexity metrics such as the cyclomatic complexity measurement have been around since the mid-1970s. All these metrics are designed to help in evaluating the quality of source code (a rough sketch of one such measure follows the list below). They evaluate things like:
- Size of a program (too large may be too complex)
- Complexity of a program (well written, or spaghetti code?)
- Ratio of comments to lines of code (are there enough comments?)
- Likelihood of errors
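To make the idea concrete, here is a minimal Python sketch of a cyclomatic-style count for fixed-format COBOL, together with a comment-to-code ratio. It simply counts decision keywords and comment lines; the metric tools listed below parse the language properly, so treat this only as an approximation of the approach.

# A crude, illustrative approximation only - real metric tools parse the
# language properly. Counts decision points in fixed-format COBOL source,
# in the spirit of cyclomatic complexity, plus a comment-to-code ratio.
import re

DECISIONS = re.compile(r"\b(IF|WHEN|UNTIL|AND|OR)\b", re.IGNORECASE)

def rough_metrics(source_lines):
    """Return (complexity, comment_ratio) for a list of COBOL source lines."""
    decisions = comments = code_lines = 0
    for line in source_lines:
        # In fixed-format COBOL, '*' or '/' in column 7 marks a comment line
        if len(line) > 6 and line[6] in "*/":
            comments += 1
            continue
        code_lines += 1
        decisions += len(DECISIONS.findall(line))
    complexity = decisions + 1                 # cyclomatic-style count
    comment_ratio = comments / max(code_lines, 1)
    return complexity, comment_ratio

# Rank a library of programs, most complex first
def rank_by_complexity(members):               # members: {name: list of lines}
    return sorted(members, key=lambda name: rough_metrics(members[name])[0],
                  reverse=True)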
Source code metrics are great because they provide an easy-to-understand number. So you can get a metric for each of your 3500 COBOL programs, and quickly sort them in quality order. The flipside is that there is some disagreement as to the accuracy and usefulness of these metrics. Either way, they can provide a reasonable first pass. There are several z/OS tools that can be used to obtain these metrics. Options include:
- IBM Rational Asset Analyzer
- EZSource
- Semantic Designs Software Metrics Tools
- IRIS Analyzer
Interestingly, the latest Enterprise COBOL compiler displays a program complexity metric in the compiler output. However, IBM has not published how this number is generated, and it is primarily used for the compiler's own processing.
Other Source Code Analysis
Software metrics are great for a quick look over many modules, but they're not perfect. There are a few tools that attempt to help developers improve their source code, and allow sites to specify their own rules. The problem is that these usually deal with one program at a time, so they are less useful when dealing with a large number of modules. Some z/OS options here include:
- IBM Rational Developer for System z (RDz) – its Code Review feature compares code against pre-defined best practices and rules. RDz includes a starter set, and sites can add to this as required. RDz also includes a feature to identify dead code, and a graphical view of COBOL or PL/I code structure.
- Both IBM Rational Asset Analyzer and EZSource include features to locate dead or duplicated code.
- Semantic Designs markets Style Checker Tools for COBOL and C++ code.
- The non-mainframe based SonarQube can work with COBOL and PL/I.
Interestingly, there are few source code analysis products for languages such as COBOL or PL/I, but there are several for JCL. Usually used to pre-validate JCL before job submission, these products also include features for formatting code and enforcing site-specific standards. Examples include ASG-JOB/SCAN, CA JCLCheck and SEA JCLPlus+.
Other Analysis
So far we've concentrated on analysing the source code. However, it can be argued that a better way of determining code quality is to look not only at the source code, but also at how it has executed, and continues to execute. There are a couple of ways to approach this:
- Performance – if a program is performing badly, there's a good chance that its quality isn't high. There are many tools that can measure program performance and efficiency.
- Testing – everyone tests their programs. However, for existing code, the testing history can be an indicator. A program that had three bugs during unit testing may be of lower quality than one with none. This applies at all testing levels – from unit testing to final quality assurance.
- Problem History – every site has a problem management system, and it can provide valuable intelligence about program quality. A program with a higher number of recorded problems will likely be of lower quality than one with none. This should be weighed against how often the program is executed: a program with one previous bug that executes hundreds of times every day may be of higher quality than one with one previous bug that executes once per year (a simple sketch of this weighting follows this list).
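A minimal sketch of that weighting, with invented figures; real numbers would come from the problem management system and from SMF or job scheduler statistics.

# A minimal sketch: weight recorded problems by how often a program runs.
# The figures are invented; real numbers would come from the problem
# management system and from SMF or job scheduler statistics.
def problems_per_million_runs(problems, runs_per_year):
    if runs_per_year == 0:
        return float("inf")              # never runs - any problem stands out
    return problems * 1_000_000 / runs_per_year

programs = {
    "PGMA": (1, 300 * 250),              # one bug, roughly 300 runs a day
    "PGMB": (1, 1),                      # one bug, one run a year
}
worst_first = sorted(programs,
                     key=lambda name: problems_per_million_runs(*programs[name]),
                     reverse=True)
print(worst_first)                       # ['PGMB', 'PGMA'] - PGMB looks riskier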
A program's history or features can also be indicators of its quality (a sketch combining some of these indicators follows the list below). For example:
- The individual programmer. Some programmers are better than others, so analysing source code by programmer may provide insight, identifying previous programmers who produced poorer quality code.
- How often a program has been modified. A program with many modifications may have had many bugs to fix, particularly if many occurred in a short period after it was created. Software patches can also quickly reduce a module's quality, particularly if the developer was under time pressure. So a module modified more often may be of lower quality than one not modified since its original development. But the opposite can also be true: a module unmodified for the past 20 years may actually be of lower quality, simply because it uses outdated or less-efficient APIs and features.
- A program's programming language. For example, many sites have assembler code written by programmers unfamiliar with assembler. Assembler can be unforgiving, so may be a first place to look when searching for problem areas. REXX is another interesting case. REXX is an easy language to use, and many routines have been created by staff inexperienced with programming. However REXX is also powerful, and can be used to create complex programs.
- A program's compile options. Poor or sub-optimal compile options may indicate that the programmer was inexperienced or less skilled.
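As a sketch only, several of the indicators above could be combined into a single ranking score. The weights and record layout below are invented for illustration; in practice the inputs would come from change management records and compile listings.

# A sketch only: combine some of the indicators above into a ranking score.
# The weights and record layout are invented for illustration.
from dataclasses import dataclass

@dataclass
class ProgramHistory:
    name: str
    changes_last_year: int       # from the change management system
    years_since_last_change: int
    language: str                # e.g. "COBOL", "ASM", "REXX"
    good_compile_options: bool   # were sensible compile options used?

def risk_score(p: ProgramHistory) -> float:
    score = 2.0 * p.changes_last_year            # heavily patched code
    if p.years_since_last_change > 20:
        score += 3                               # may rely on outdated APIs
    if p.language == "ASM":
        score += 2                               # unforgiving language
    if not p.good_compile_options:
        score += 1
    return score

candidates = [
    ProgramHistory("PGMA", 6, 0, "COBOL", True),
    ProgramHistory("PGMB", 0, 25, "ASM", False),
]
for p in sorted(candidates, key=risk_score, reverse=True):
    print(p.name, risk_score(p))                 # PGMA 12.0, then PGMB 6.0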
Conclusion
Maintaining legacy code is a difficult task that is often underestimated and under-appreciated. However, even a brief analysis of this legacy code can identify problem modules and provide an indicator of code quality. Such an indicator is essential for anyone budgeting or scoping projects to maintain or enhance legacy code.
David Stephens