
LongEx Mainframe Quarterly - November 2023

technical: Is Blocksize Still a Thing for Sequential Datasets?

For decades, anyone working with z/OS has been interested in I/O avoidance: reducing I/Os to improve performance. In particular, blocksize has been the favourite tool for I/O avoidance. But the z/OS of today is very different to the MVS/XA I started with in the 1980s. And disk hardware is a lot better too. So, is blocksize still a thing?

Well, it is. Mostly.

Bad Blocksizes

Don't panic. I'm not going to drone on about what a block and a blocksize are. If you don't know, Jeff Berger from IntelliMagic can help. In this article, I'm more interested in finding out the benefits of blocksize today, and what affects it. Let's start with output: writing a dataset.

I created a simple batch COBOL program that takes an LRECL=512, RECFM=FB sequential disk dataset, and copies it to a second dataset with the same allocation options. To make things interesting, the input dataset is big: around 450,000 records, 220 Mbytes, 275 cylinders. I won't tell you what's in the dataset: it's not that interesting. We're more interested in the performance of copying this sucker.

So, let's take a baseline. Our DD statements look like:

//         SPACE=(CYL,(140,40),RLSE),RECFM=FB,LRECL=512,
//         BLKSIZE=0

Default input and output blocksize. This will be half-track blocking: 27648 bytes for LRECL=512. When I ran this, the job step took four seconds, 16774 I/Os (EXCPs), around 950 service units, and 0.05 CPU seconds.
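These numbers line up with a bit of arithmetic. Here's a sketch (Python, just for the sums) assuming the usual 3390 half-track limit of 27998 bytes for a single block; the system-determined blocksize for RECFM=FB is the largest multiple of LRECL that fits, and a copy costs roughly two I/Os (one read, one write) per block:

```python
import math

HALF_TRACK = 27998   # assumed maximum blocksize for half-track blocking on a 3390
LRECL = 512

# System-determined blocksize for RECFM=FB: the largest multiple of
# LRECL that still fits in half a track.
blksize = (HALF_TRACK // LRECL) * LRECL
print(blksize)                    # 27648, as reported above

# The copy reads and writes each block once. Use the 453,000 records
# implied by the one-I/O-per-record runs later in this article:
records = 453_000
blocks = math.ceil(records / (blksize // LRECL))
print(2 * blocks)                 # 16778, close to the 16774 EXCPs observed
```

The small gap between the 16778 estimate and the 16774 EXCPs observed is easily explained by a record count that is only approximate.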

So, let's have some fun. I changed the blocksize of our OUTFILE to 512 bytes, and things really slowed down: 41 seconds elapsed, 453000 I/Os, 13,000 service units, 0.37 CPU seconds. Ouch! Elapsed time increased by a factor of 10, CPU by a factor of 7.5.

There's another downside: our original 275-cylinder dataset bloated out to 605 cylinders with a 512-byte blocksize!
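The bloat follows from device geometry: inter-block gaps on the disk eat space, so small blocks waste track capacity. A rough estimate, assuming figures from the 3390 track-capacity table (2 blocks per track at half-track blocking, about 49 blocks per track for 512-byte blocks, 15 tracks per cylinder; these per-track counts are my assumptions, not from the measurements above):

```python
import math

TRACKS_PER_CYL = 15        # 3390 geometry
RECORDS = 453_000          # approximate record count for our test dataset

def cylinders(records_per_block, blocks_per_track):
    """Estimate cylinders needed for the dataset at a given blocking."""
    blocks = math.ceil(RECORDS / records_per_block)
    tracks = math.ceil(blocks / blocks_per_track)
    return math.ceil(tracks / TRACKS_PER_CYL)

print(cylinders(54, 2))    # ~280 cylinders at half-track blocking (27648)
print(cylinders(1, 49))    # ~617 cylinders at BLKSIZE=512
```

Both estimates land close to the 275 and 605 cylinders actually measured.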

OK, blocksize is still a thing when writing. But how about reading? We took that dataset with the 512-byte blocksize, and used it as input to our program, outputting to a dataset with default blocking.

No surprise: similar results. 33 seconds elapsed, 453000 I/Os, 14,500 service units, 0.4 CPU seconds. I'd expect reads to be a bit faster: reads can often be satisfied from the DASD cache, while writes go to DASD Fast Write.

This all matches what I've seen in the field. For example, a few years ago I reduced the elapsed time of a batch job from 13 hours to 6 hours by simply changing the output blocksize from around 6k to the optimal half-track blocking.

When Blocksize Doesn't Matter

OK. It sounds like blocksize is really important. And it is, well, most of the time. I repeated the last test (input blocksize of 512, output blocksize of 27648), but replaced my program with IBM's IEBGENER: similar results. I also tried IDCAMS REPRO: same again.

I tried a third program replacement: ICEGENER, the IEBGENER replacement from DFSORT. Here's the JCL:

//         SPACE=(CYL,(140,40),RLSE),RECFM=FB,LRECL=512,
//         BLKSIZE=0

The results: 13 seconds elapsed, 4132 I/Os, 2125 service units, 0.13 CPU seconds. ICEGENER killed it: still slower than if we used great blocksizes, but a whole lot better.

I love ICEGENER. It's easy to use, and can even be used with VSAM datasets. Smart sites will configure z/OS to automatically replace IEBGENER with ICEGENER, or with BETRGENER from Precisely Syncsort MFX. These utilities ignore the blocksizes specified and use their own magic. In fact, it is likely that the only reason our ICEGENER step is still slower is that the input dataset uses 605 cylinders, rather than the original 275.

Other utilities and programs ignore BLKSIZE and do their own thing, including DFSMSdss. Smart storage administrators may even choose to set up their SMS environment to prevent bad blocksizes, or use the DFSMShsm feature that automatically reblocks a dataset when it is recalled or recovered.

For another test, I made a small change to my COBOL program, and reran my baseline test using half-track blocking for both the input and output datasets. You'd expect this to perform the same as our baseline, but in this case the job took 42 seconds, did 453,000 I/Os, 13,500 service units, and 0.37 CPU seconds. What on earth did I do?

The most important parameter when programming COBOL to access sequential files is the BLOCK CONTAINS clause in the file description (FD) entry. Ideally, the definition should look like:

       FD INFILE1
            Recording Mode Is F
            Block 0 Records
            Record 512 Characters.

In this last test, I removed the "Block 0 Records" clause, effectively eliminating blocking. So, the program did one I/O for every record, rather than one for every block. Classic rookie mistake. The COBOL compiler option BLOCK0 can be used instead of "Block 0 Records."

Similar bad performance can be produced in REXX by reading only one line at a time: for example, calling EXECIO 1 DISKR in a loop, rather than reading all records at once with EXECIO * DISKR.

For one final test, consider the following DD statements:

//         SPACE=(CYL,(140,40),RLSE),RECFM=F,LRECL=512,
//         BLKSIZE=0

The big difference is RECFM=F, not FB. In other words, we are forcing every block to contain only one record. When we ran our original test with this change, it's no surprise that we got familiar bad results: 27 seconds elapsed, 453000 I/Os, 11500 service units, 0.35 CPU seconds.


It's old-school, but blocksize is still a big thing with sequential datasets. Smart sites will use everything in their power to prevent users from accidentally specifying a bad blocksize: from SMS configuration to replacing IEBGENER with ICEGENER or BETRGENER.

David Stephens

LongEx Quarterly is a quarterly eZine produced by Longpela Expertise. It provides Mainframe articles for management and technical experts. It is published every November, February, May and August.

The opinions in this article are solely those of the author, and do not necessarily represent the opinions of any other person or organisation. All trademarks, trade names, service marks and logos referenced in these articles belong to their respective companies.

Although Longpela Expertise may be paid by organisations reprinting our articles, all articles are independent. Longpela Expertise has not been paid money by any vendor or company to write any articles appearing in our e-zine.
