Opinion: Are IBM Mainframes Really Backward Compatible?
Since the birth of the System/360 in the 1960s, backward compatibility has been a promise made by IBM for its flagship System/360, System/370, System/390, zSeries and System z mainframes. In other words, what ran on last year's hardware and systems will run on this year's. Today this promise is taken as fact by mainframe managers and technical staff alike. But it's not quite fact, is it?
In the 1980s, IBM announced it was ending support for the CICS macro-level programming interface. All CICS applications calling CICS services via macros had to be converted to EXEC CICS commands (command-level programs). This meant analysing every program, replacing every CICS macro call with the equivalent EXEC CICS statement, re-compiling and link-editing, and finally testing these programs. Today CICS macro-level programs will not run on CICS - at least not without some third-party emulation software. Not backward compatible.
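To give a flavour of the work involved, here is a minimal, hypothetical sketch of a converted call. The file and data names (FILEA, CUSTOMER-RECORD, CUST-KEY) are invented for illustration, and the surrounding program and WORKING-STORAGE definitions are omitted; a macro-level program would have issued the equivalent file-control request through the DFHFC assembler macro.

```cobol
      * Hypothetical command-level file read. Names are illustrative
      * only; CUSTOMER-RECORD and CUST-KEY are assumed to be defined
      * in WORKING-STORAGE. A macro-level program would have made the
      * same request with a DFHFC assembler macro; here the CICS
      * translator converts the EXEC CICS statement into a call
      * before the COBOL compile step.
           EXEC CICS READ
                FILE('FILEA')
                INTO(CUSTOMER-RECORD)
                RIDFLD(CUST-KEY)
           END-EXEC.
```

Multiply a change like that by every macro call in every program, add the re-compile, re-link and test effort, and the size of the migration becomes clear.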
Take a look around IBM's websites, documentation and announcements, and you won't find any guarantee of backward compatibility. And there are many cases other than CICS macro-level applications where IBM has withdrawn features or software, forcing mainframe users to rework or modify existing applications, or move them off the mainframe. OS/VS COBOL, VisualAge Generator, and ISAM datasets are excellent examples.
So let's think about this from IBM's point of view. They want as much work as possible to run on the mainframe. To do this, they walk a tightrope. On one side, they don't want to lose any existing processing. The best way to do this is to ensure existing applications continue to run with little or no change: backward compatibility. But this backward compatibility costs IBM money, and makes introducing new features more difficult.
On the other side, IBM wants to sell new stuff: new functionality, new hardware, new features, new products. Without new stuff, the mainframe becomes exactly what many called it decades ago: a rusting relic sitting in the corner - soon to be switched off. And here IBM has done spectacularly well. z/OS UNIX was an amazing change to z/OS, with TCP/IP not far behind. Java on z/OS is now mainstream, and many software developers are happily porting software from other UNIX platforms to open up the z/OS market. We're seeing many large non-mainframe applications use DB2 on z/OS for performance and reliability, and z/OS clients for non-mainframe products such as CorreLog for security event monitoring appear regularly.
So to walk this tightrope, IBM has to lose some backward compatibility. Sure, an assembler program assembled on OS/360 in the 1960s will probably still work on today's systems. But a COBOL one won't. IBM has a few ways of letting its old baggage go:
- Software support. IBM and every other vendor only support a recent subset of all software versions. And you can't expect them to do anything else. If you want IBM support, you must regularly upgrade your systems software, performing any migration tasks needed as part of the process.
- Software retirement. If you look back through IBM's past software catalogs, you'll see some interesting software. From archaic programming languages like BASIC, to Lotus 1-2-3/M (yes, a spreadsheet on the mainframe), to obsolete networking like X.25. All no longer needed; all retired.
- Forcing users to change. IBM will force users to modify their systems or applications. For example, the latest Enterprise COBOL compiler will no longer produce load modules: program objects are the only option. Usually these changes are presented in easily digestible chunks as part of systems software upgrades, and often include a sweetener. For example, DB2 users regularly rebind their packages to enjoy performance benefits from more recent versions.
The reality is that backward compatibility is only possible if there are enough users paying IBM for it. So core systems like IMS and CICS are likely to get the maximum backward compatibility, whereas less popular products like Tivoli Storage Manager for z/OS, or obsolete ones like ADF II, are squarely in the firing line.
By now you're probably thinking that I'm criticising IBM for not keeping their backward compatibility promise. But the exact opposite is true. IBM has done, and continues to do, a magnificent job of keeping this promise - as much as possible. And when IBM does break backward compatibility, they give their users a lot of time to get used to the idea. Take our CICS example above. IBM stated in 1978 that from CICS/VS 1.4 onwards, all new CICS functions would be command level only. Smart CICS users would have seen the writing on the wall and started planning their migrations. The last CICS to support macros was CICS/MVS 2.1.2, released in 1987. So that's 9 years. Even nicer, IBM continued to support CICS/MVS 2.1.2 until around 1996: another 9. So users had 18 years to convert.
Similarly, IBM is brilliant at notifying users when software or features will no longer be supported, and often provides migration options. For example, IBM announced in 2002 that it would no longer manufacture 3745 communication controllers (FEPs). However, support for these devices continued until at least 2010, depending on where you are. Today users can still run ACF/NCP with CCL, the FEP emulator that runs on Linux on System z.
As an independent observer, I find it difficult not to be impressed by how IBM walks this tightrope. They continue to provide backward compatibility when it's needed, gently easing us users down the required paths when something has to change - paths that are usually not too long or difficult. And when major changes are needed, they give us lots of notice (sometimes decades). In doing so, they continue to provide a platform that satisfies the needs of their existing customers, while regularly delighting them with new features and functionality.
IBM, you've done well.
David Stephens