
LongEx Mainframe Quarterly - November 2017

management: Can DevOps Work With The Mainframe?

DevOps has become one of the big-interest terms in the IT arena, promising a path to better software development and deployment. But when you search the web for DevOps, you'll find that most of the discussion centres on UNIX and Windows. However, mainframe software development suffers from many of the same issues, so DevOps looks attractive for the big iron too. But there are problems implementing DevOps on z/OS and similar environments. So can it really work for mainframes?

Problem #1: Existing Applications

A lot of DevOps talk is about developing new applications. But brand-new applications are rare on the mainframe. We're far more likely to work on or with existing applications, where implementing DevOps is much harder. And the longer an application has been around, the harder it gets.

Most mainframe applications have been around for years, if not decades. In many sites, the people who initially developed the application have retired or moved elsewhere. Those taking over often don't have a full understanding of the application and its history. Because of their age, mainframe applications are often more complicated. An application developed in the 1980s has probably been through many changes and alterations over the years. It has also probably developed a lot of interconnections with other applications (mainframe and non-mainframe), some documented, some not.

But what does this mean for DevOps? Let's start with Agile. DevOps is built on top of an Agile development methodology. However, I have yet to see a mainframe site that has embraced Agile for its legacy systems.

Agile requires continuous testing of all modules. This is very difficult to achieve for a complicated, partly understood application. Little wonder, then, that I've seen few sites regularly perform comprehensive testing of their systems, complete with analysis of key output.
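To make the challenge concrete, here's a minimal sketch of what automated regression testing boils down to: run a module against fixed test input, extract the key output fields, and compare them to a baseline saved from a known-good run. All names and data here are invented for illustration; real mainframe testing would run batch jobs and compare datasets, but the principle is the same.

```python
# Sketch of baseline regression testing (all names and data hypothetical).
# The idea: compare only the output fields that matter against a baseline
# captured from a known-good run.

def key_fields(report_lines):
    """Extract the fields worth comparing: here, the first two columns."""
    return [tuple(line.split()[:2]) for line in report_lines]

def regression_check(module_output, baseline):
    """Pass only if the key fields match the stored baseline exactly."""
    return key_fields(module_output) == key_fields(baseline)

# Stand-ins for a saved baseline and today's run of one module
baseline   = ["ACCT001 1200.00 posted", "ACCT002 75.50 posted"]
todays_run = ["ACCT001 1200.00 posted", "ACCT002 75.50 posted"]

print(regression_check(todays_run, baseline))  # True: key output unchanged
```

The hard part on a legacy application isn't this comparison logic; it's knowing which output fields are "key", and building test input that properly exercises a partly understood system.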

Agile also requires regular full builds, even for modules that have not changed. Mainframe development generally only rebuilds changed modules. So there are many applications with modules that have not been recompiled for many years (often since the work performed for Y2K). In these cases, management may be hesitant to recompile them for fear of introducing errors. I've also seen legacy modules where the source code has been lost.
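The difference between the two build styles can be sketched in a few lines. This is an illustrative timestamp check of the kind a make-style tool performs, not any real mainframe build tool; the file names are invented.

```python
import os
import tempfile

def needs_rebuild(src, obj, full_build=False):
    """Incremental rule: recompile when the object is missing or older
    than the source. A full build (full_build=True) recompiles everything,
    changed or not."""
    if full_build or not os.path.exists(obj):
        return True
    return os.path.getmtime(src) > os.path.getmtime(obj)

# Demo with throwaway files standing in for a COBOL source and its object
d = tempfile.mkdtemp()
src = os.path.join(d, "payroll.cbl")
obj = os.path.join(d, "payroll.obj")
open(src, "w").close()
open(obj, "w").close()  # object created after the source, so "up to date"

print(needs_rebuild(src, obj))                   # False: incremental skips it
print(needs_rebuild(src, obj, full_build=True))  # True: full build recompiles
```

A module untouched since Y2K never trips the incremental rule, which is exactly why nobody has recompiled it; Agile-style full builds remove that choice, and with it the comfort of leaving old modules alone.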

Problem #2: Change Control

DevOps champions the Agile idea of regular builds; some propose many builds in a day. In a development environment, this is OK. However, mainframe applications are often mission critical, and so have a more rigid and thorough change control system. I know of mainframe applications that only have two change windows every year. Not DevOps-friendly.

Problem #3: Large Groups

DevOps champions the idea of breaking down barriers between the different groups involved in applications: developers, testers, operations, security, database and more. This is easier when these groups are small, and in the same building. However, in the mainframe sites I see, there are many different groups: different applications, different systems, different environments. For example, one site has a solution that comprises 20 different applications: 20 application groups. At this site some of the application development is in the US, but most is performed in India. Similarly, operations is separated into different groups in both the US and India. DB2 support is divided into systems, databases and applications: another three groups. There are compliance groups, data quality groups, security and audit, and business groups that will all need to be involved in any development. You can see the problem.

Many mainframe sites have outsourced some of their operations or applications development and support. In some sites I've seen applications development outsourced to one company, and mainframe operations to another. With different groups in different organisations, that's another DevOps obstacle.

Problem #4: Composite Applications

Today a lot of application development is on Windows/UNIX systems that may work with traditional mainframe applications. Such composite applications present new challenges in developing, deploying and monitoring. For example, this almost always means that there are separate mainframe and non-mainframe groups: mainframe and non-mainframe developers, mainframe and non-mainframe DBAs, mainframe and non-mainframe operations. This further increases the number of people on a project, and makes DevOps more difficult.

Applications that span mainframe and non-mainframe have other issues that affect DevOps. Let's take software configuration management (SCM, or as I think of it, source code management). Most sites I see have different SCM tools for mainframe and non-mainframe. Those that use the same SCM usually separate mainframe and non-mainframe artefacts. This is a common issue when developing any composite application.

Monitoring is another key pillar of DevOps, and composite applications make end-to-end performance monitoring a challenge. I've seen many sites where no end-to-end performance monitoring is done: instead, performance is monitored separately in each application or environment.

Is There Any Hope?

The reality is that implementing DevOps for legacy mainframe applications and systems is hard, and comes with a large price tag; few organisations will be willing to commit the resources needed. Organisations are more likely to consider DevOps for new development and systems. So a new CICS Java application may be developed with DevOps, while legacy core CICS/COBOL changes are not.

However, it's not an all-or-nothing choice. Many mainframe sites may choose to implement the DevOps practices that make sense (for example, improved communication and collaboration), while leaving out others that are more difficult (like continuous builds and testing). Further, this may be an evolution, where more and more DevOps practices are gradually implemented over time.


David Stephens