An incentive to collaborate

Recently, I tweeted that I had presented a virtual training class in which we used remote mob programming for the lab exercises. At the end, the participants said their main take-away was not the technical content, but rather the value of direct collaboration. Colleagues who already understand and appreciate the power of collaboration were excited to hear the story.

But what about people who do not already understand and appreciate the power of collaboration? What was unique about the situation that brought collaboration to the fore, above and beyond the technical skills that were ostensibly the subject of the class? How closely does that situation align with more-typical software development realities?

Contemporary software development methods emphasize collaborative work – pair programming; cross-disciplinary pairing; “ensemble” or samman or mob programming, in which all members of a team work together in real time on a single task at a time. Despite the demonstrated value of collaboration, the vast majority of software development teams still work in the 20th-century way – individuals are assigned specific tasks, usually isolated within “functional silos,” after which the entire team experiences stress and frustration in the attempt to assemble the various separately-built parts into a coherent whole.

The training course in question was given to an internal group at a particular company; it wasn’t a generic public course. The subject was a framework for implementing batch processes in Java – JSR 352, or the Java Batch framework. The material was highly customized for that company’s needs.

The company is in the process of shifting batch processing workloads from z/OS on an IBM System z (a.k.a. “mainframe”) to AIX. Half the participants were highly skilled in IBM z/OS technologies, including JCL, Utilities, COBOL, VSAM, DB2, and so forth; and they knew next to nothing about Java or Unix. The other half were highly skilled in Java, Unix, and using ORMs and JDBC with relational database systems; and they knew next to nothing about JCL, Utilities, COBOL, and all that.

Here’s the rub: To carry out the task of converting a z/OS batch job into a Java Batch solution requires both skillsets at the same time. It was quite literally impossible for any single individual at that company, working alone, to complete such a task. Nor would it be practical to “collaborate” in the 20th-century way – asynchronously via documentation or a project management tool.

To do this work, you would have to understand what you were looking at when you examined a jobstream, and be able to convert that into Java code and Java Batch components in a way that accurately preserved the functionality of the original job. The participants in the class would never have been able to complete the lab exercises working separately and alone.

Okay, so what’s the big deal? It’s just reading COBOL code – not mastering COBOL, just understanding what it says – and re-coding it in Java. How hard could that be?

Or so the team’s management assumed. The reality is, mainframe batch jobstreams contain all sorts of things besides COBOL programs. Besides that, there are real differences between procedural and object-oriented software design that can’t be safely ignored.

Utility steps

For example, one of the lab exercises involved a hypothetical batch job comprising six steps. The scenario was that “our company” receives data feeds from two external partners, in two different formats. For the first data feed, the original (hypothetical) developer used an IBM utility to convert the partner’s file format into “our” company’s internal format. The second one was implemented in COBOL. Here is the first step of the sample jobstream for the lab exercise. It converts the first external partner’s data file into “our” internal format (bear in mind this JCL has never run on a real system, so there may be typos in it):

//PAYMENT   JOB [...]
//* Process payment files from external partners
//* Convert file from payment source 1 to our internal format
//STEP010   EXEC PGM=ICEMAN
//STEPLIB   DD [...]
//SORTIN    DD DSN=[...],
//             DISP=SHR
//SORTOUT   DD DSN=[...],
//             DISP=(NEW,PASS)
//SYSIN     DD *
  OPTION COPY
  INCLUDE COND=(91,8,CH,GE,DATE1-90)
  INREC IFTHEN=(WHEN=(20,1,CH,EQ,C'N'),OVERLAY=(21:21,2,C'42'))
  OUTREC FIELDS=(21,17,49,13,91,8,99,8,65,18,119,7,UFF,M11,LENGTH=9)
. . .

Here’s what’s happening: DFSORT is being executed (you know that thanks to PGM=ICEMAN, although on a z/OS system configured differently, the sort product might not actually be DFSORT). It is (a) filtering the input records based on the payment date, (b) shuffling the fixed-length fields into a different order, and (c) expanding the length of a numeric field while ensuring it’s right-justified and zero-filled.

Already, with step 1 of the job, we’re not dealing with a straightforward COBOL-to-Java conversion. Nor are we dealing with a “simple” utility step that translates directly into a Java Batch step. DFSORT can do more than its name implies; in this case, it’s doing three different things, and none of them is a sort. There’s no corresponding Unix utility, so this step would have to be implemented as a Java Batch “batchlet,” written in Java. This is what the real work would look like – a lot of different kinds of “things” in job steps, some of which do not transfer neatly over to Java and Unix.
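
To make that concrete, here is a rough plain-Java sketch of the record transformation such a batchlet would have to perform. The field positions come from the SYSIN statements above; the class name, method name, and field names are my own invention, not anything from the actual lab solution:

```java
import java.util.Optional;

public class PartnerOneReformatter {

    // Sketch of the DFSORT step in plain Java.
    // INCLUDE COND=(91,8,CH,GE,DATE1-90): keep records whose date field
    // at positions 91-98 (yyyyMMdd text) is >= a cutoff; DATE1-90
    // (today minus 90 days) becomes the cutoffDate parameter here.
    // OUTREC FIELDS=(...): reorder the fields and widen the 7-digit tax
    // field to 9 digits, right-justified and zero-filled.
    public static Optional<String> reformat(String record, String cutoffDate) {
        String dateDue = record.substring(90, 98);            // 91,8
        if (dateDue.compareTo(cutoffDate) < 0) {
            return Optional.empty();                          // filtered out
        }
        String customerId    = record.substring(20, 37);      // 21,17
        String invoiceNumber = record.substring(48, 61);      // 49,13
        String datePaid      = record.substring(98, 106);     // 99,8
        String amountPaid    = record.substring(64, 82);      // 65,18
        String taxPaid       = record.substring(118, 125);    // 119,7
        // UFF,M11,LENGTH=9: keep the digits, pad to 9 with leading zeros
        String widenedTax = String.format("%9s",
                taxPaid.replaceAll("\\D", "")).replace(' ', '0');
        return Optional.of(customerId + invoiceNumber + dateDue
                + datePaid + amountPaid + widenedTax);
    }
}
```

A real batchlet would wrap logic like this in the JSR 352 Batchlet interface and deal with file I/O and error handling; the point is that someone has to re-express the DFSORT control statements as code, and that takes both skillsets.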

The Java developers were not able to interpret this step. The mainframers related to the sample JCL immediately and flawlessly. But they had no idea how to begin to re-implement this step with Java Batch.

To do anything at all – anything at all – toward shifting this batch job to AIX using Java Batch required the direct participation of people with each of the two skillsets, guiding each other every step of the way. There was no way for a person with a mainframe background and no Java, or a Java background and no mainframe, to complete such a task.

Fixed-format records

Another challenge in this kind of work is the fact that files, or datasets, are not handled in the same way on the two operating systems, z/OS and AIX. You might be able to work out how to convert logic written in COBOL into equivalent logic written in Java, but you still have to deal with the differences in how files are implemented.

It’s very common to find fixed-format files on z/OS. The most common type is QSAM, or Queued Sequential Access Method. Originally, IBM used the term “dataset” where everyone else says “file,” and “access method” where everyone else says “file system.” IBM has since embraced the more widely-used terminology, but you still see many references to the older terms. In a broad sense, “QSAM dataset” means “flat file.”

But “flat file” doesn’t necessarily mean the same thing on z/OS as it does on AIX. z/OS fixed-format files don’t have newline characters to delimit logical records. In the simplest case, they just have data. If you dropped a fixed-length file from z/OS on a Unix system, it would look like a single record. On z/OS, the access methods know how to break up the data into logical records based on record lengths and block lengths. Depending on the file format, record descriptor words (RDW) and block descriptor words (BDW) might be embedded with the file’s data. This is unlike flat files on Unix.
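
To make that concrete: a Java program reading a fixed-length z/OS file (transferred in binary) can’t rely on readLine(), because there are no newlines to find; it has to read a known number of bytes per logical record. Here is a minimal sketch, assuming the record length is known in advance and the data has already been converted to an ASCII-compatible encoding (the class and method names are mine):

```java
import java.io.DataInputStream;
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

public class FixedLengthReader {

    // Read logical records of lrecl bytes each. There are no newline
    // delimiters in the data; the record length must be known up front,
    // just as the z/OS access methods rely on the dataset's attributes.
    public static List<String> readRecords(InputStream in, int lrecl)
            throws IOException {
        List<String> records = new ArrayList<>();
        DataInputStream data = new DataInputStream(in);
        byte[] buffer = new byte[lrecl];
        while (true) {
            try {
                data.readFully(buffer);
            } catch (EOFException e) {
                break; // no more complete records
            }
            // An EBCDIC file transferred as-is would need
            // Charset.forName("IBM1047") or similar instead.
            records.add(new String(buffer, StandardCharsets.US_ASCII));
        }
        return records;
    }
}
```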

Coming at it from the other direction, the closest analogue to a Unix flat file, from a mainframe perspective, is an unblocked variable-length sequential dataset with the RDWs removed and a newline character added to the end of each logical record. That isn’t intuitively obvious; it has to be learned. A simple rule of thumb: treat every Unix flat file as an unblocked variable-length sequential dataset (even if that isn’t exactly what it is). Close enough for jazz…usually.

Based on this kind of stuff…

  OUTREC FIELDS=(21,17,49,13,91,8,99,8,65,18,119,7,UFF,M11,LENGTH=9)

…you can tease out the layout of the fixed-length fields in the input record from “partner #1”. The “21,17” part means to pull the first logical field from position 21 in the input record, for a length of 17 bytes.
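
One detail that trips people up when translating these specifications: DFSORT positions are 1-based, while Java’s String.substring is 0-based, so “21,17” becomes substring(20, 37). A tiny helper (the class and method names are my own) keeps the DFSORT-style notation visible in the Java code:

```java
public class SortFields {

    // Extract a field using DFSORT-style (position, length) notation,
    // where position is 1-based as in the OUTREC statement above.
    public static String field(String record, int position, int length) {
        return record.substring(position - 1, position - 1 + length);
    }
}
```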

This was obvious to the mainframers in the group, but meaningless to the Java developers. They ignored the DFSORT specification and looked at the sample input data instead: long runs of characters jammed together without any delimiters. They assumed each run must represent a single field. It’s an understandable guess on their part, but it’s wrong. They needed help from their mainframe colleagues to understand what the data meant, and to understand a record layout coded in COBOL, like this (this is the reformatted record, not the input record):

    05  PMT-CUSTOMER-ID          PIC X(17).
    05  PMT-INVOICE-NUMBER       PIC X(13).
    05  PMT-DATE-DUE             PIC X(08). 
    05  PMT-DATE-PAID            PIC X(08). 
    05  PMT-AMOUNT-PAID          PIC 9(16)V9(02).
    05  PMT-TAX-PAID             PIC 9(07)V9(02).	

Recognizing domain concepts

Converting COBOL applications to Java requires us to tease out domain concepts from the procedural COBOL code and define appropriate Java classes.

The typical mistake here is for the Java developers to transfer data elements one for one: a PIC X item becomes a String, a PIC 9 item becomes an Integer or maybe a Double. But if the PIC X item represents a Social Security Number, it needs to become a Social Security Number class in Java, and if the PIC 9 item represents a monetary amount, it needs to become a Money object in Java.
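
As a sketch of what that looks like in practice: a COBOL PIC 9(16)V9(02) field arrives as a string of 18 digits with an implied decimal point, so “000000000000123456” means 1234.56. Before it can become a Money object, that implied scale has to be applied. The helper below (my own naming, not from the actual lab solution) shows the idea:

```java
import java.math.BigDecimal;

public class CobolNumerics {

    // A COBOL PIC 9(n)V9(m) field carries an *implied* decimal point;
    // the digits arrive with no decimal character in the data. Applying
    // the scale recovers the real value, which can then be wrapped in a
    // Money type rather than left as a raw String or long.
    public static BigDecimal impliedDecimal(String digits, int scale) {
        return new BigDecimal(digits).movePointLeft(scale);
    }
}
```

Money.of(CobolNumerics.impliedDecimal(amountField, 2), currencyUnit) would then yield a proper monetary amount, instead of a number that silently drops the cents.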

Date and time values are handled very differently in the two languages, too.
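
For example, where COBOL computes a due date with integer-date arithmetic (ADD an interval to an integer date, then FUNCTION DATE-OF-INTEGER), idiomatic Java uses java.time. A small sketch, assuming dates travel as yyyyMMdd text as in the record layouts above (the class and method names are mine):

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

public class DueDates {

    private static final DateTimeFormatter YYYYMMDD =
            DateTimeFormatter.ofPattern("yyyyMMdd");

    // Roughly equivalent to:
    //   ADD WS-Interval TO WS-Date-Integer
    //   ... FUNCTION DATE-OF-INTEGER(WS-Date-Integer)
    // starting from the yyyyMMdd text form the records use.
    public static String nextDueDate(String lastPaidDate, int intervalDays) {
        return LocalDate.parse(lastPaidDate, YYYYMMDD)
                        .plusDays(intervalDays)
                        .format(YYYYMMDD);
    }
}
```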

The point is that this is another aspect of the conversion work that requires people with both skillsets working together in real time, to avoid mistakes that would otherwise require time-consuming correction later.

Relational database access

The one area of commonality between the two skillsets is relational database technology and SQL. But even that is implemented differently in the two environments and in the two languages.

In the IBM COBOL world, you might see code like this:


    ADD WS-Next-Due-Date-Interval TO WS-Date-Integer 
    COMPUTE WS-Next-Due-Date = 
        FUNCTION DATE-OF-INTEGER(WS-Date-Integer) 
    EVALUATE TRUE 
        WHEN WS-Amount-Paid >= WS-Amount-Due 
            SET INV-PAID TO TRUE 
        WHEN OTHER 
            SET INV-GOOD-STANDING TO TRUE 
    END-EVALUATE 
    EXEC SQL 
        UPDATE INVOICE 
        SET LASTPAY = TO_DATE(:INV-LAST-PAY-DATE, "%Y%m%d"), 
            DUEDATE = TO_DATE(:INV-DUE-DATE, "%Y%m%d"), 
            AMTPAID = :INV-AMOUNT-PAID, 
            TAXPAID = :INV-TAX-PAID, 
            STATUS  = :INV-STATUS 
        WHERE INVNUM = :INV-INVOICE-NUMBER 
        AND   CUSTID = :INV-CUSTOMER-ID 
    END-EXEC 
Embedded EXEC SQL commands are an IBM thing; they aren’t standard COBOL, and they aren’t standard SQL, either. On the Java side, relational databases may be accessed using an object-relational mapper (ORM) or via the JDBC APIs. Although the mainframers understood SQL quite well, that didn’t mean they could easily come up with Java code like this without direct help from their Java colleagues (this is essentially the same logic as the COBOL snippet above):

private void applyPayment(FormattedPaymentData formattedPaymentData, ResultSet rs) throws Exception {
    Money cumulativeAmountPaid = Money.of(rs.getBigDecimal("AMTPAID"), currencyUnit);
    cumulativeAmountPaid = cumulativeAmountPaid.add(formattedPaymentData.getAmountPaid());

    Money cumulativeTaxPaid = Money.of(rs.getBigDecimal("TAXPAID"), currencyUnit);
    cumulativeTaxPaid = cumulativeTaxPaid.add(formattedPaymentData.getTaxPaid());

    Date lastPayDate = rs.getDate("LASTPAY");

    // the next due date is computed from the last payment date
    Calendar nextDueDate = Calendar.getInstance();
    nextDueDate.setTime(lastPayDate);
    nextDueDate.add(Calendar.DAY_OF_MONTH, nextDueDateInterval);

    Money totalAmountDue = Money.of(rs.getBigDecimal("AMTDUE"), currencyUnit);
    String newStatus = STATUS_GOOD_STANDING;
    // normalize the comparison result to -1, 0, or 1
    int amountComparison = Integer.signum(totalAmountDue.compareTo(cumulativeAmountPaid));
    switch (amountComparison) {
        case -1: newStatus = STATUS_OVERPAID; break;
        case  0: newStatus = STATUS_PAID; break;
        default: newStatus = STATUS_GOOD_STANDING; break;
    }

    PreparedStatement ps = conn.prepareStatement(UPDATE_INVOICE_DATA);
    ps.setDate(1, new java.sql.Date(lastPayDate.getTime()));
    ps.setDate(2, new java.sql.Date(nextDueDate.getTime().getTime()));
    ps.setBigDecimal(3, cumulativeAmountPaid.getNumberStripped());
    ps.setBigDecimal(4, cumulativeTaxPaid.getNumberStripped());
    ps.setString(5, newStatus);
    ps.setString(6, rs.getString("INVNUM"));
    ps.setString(7, rs.getString("CUSTID"));
    ps.executeUpdate();
}
Sometimes, it isn’t obvious to the Java developers which COBOL program is being executed in a job step. Here’s the sample JCL for the lab exercise for applying payments:

//APPLYPAY  EXEC PGM=IKJEFT01
//SYSTSPRT  DD SYSOUT=*
//PAYMTIN   DD DSN=[...],
//             DISP=(OLD,DELETE,KEEP) 
//INVUPDT   DD DSN=[...],
//             DISP=(NEW,CATLG,DELETE)
//ERRRPT    DD DSN=[...],
//             DISP=(NEW,CATLG,DELETE)
//SYSTSIN   DD *
  DSN SYSTEM([...])
  RUN PROGRAM([...]) PLAN([...])
  END

The Java programmers had learned that they could identify the program being executed by the “EXEC PGM=XXXX” part of the JCL. But what program is this? They aren’t going to convert program IKJEFT01 to Java. This is one way a COBOL program containing EXEC SQL commands might be executed: indirectly, under the TSO terminal monitor program IKJEFT01, with the actual application program named in the SYSTSIN input rather than on the EXEC statement. This kind of thing is far from obvious to Java developers, and it’s another reason for people with both skillsets to collaborate directly. Could you tell, at a glance, which COBOL program this step really runs?


In this case, the participants in the class had no choice but to collaborate directly in order to complete the lab exercises. The nature of the work made it necessary. I hope they found enough value in direct collaboration to try it for other kinds of work, too.

Most of the participants enjoyed the mob programming experience as such. Several said they had learned a lot just by observing those who chose to participate actively. On the other hand, it wasn’t for everyone. A couple of participants found direct collaboration a little stressful. But on the whole, it was useful in this case.

In my opinion, direct collaboration is useful for nearly any kind of software-related work. The illusion of speed we experience when working alone rarely results in better outcomes or quicker delivery than collaborative work. Personally, I find it reduces stress and helps the time pass faster, too.

Many teams today separate the work into specialties at a needlessly-fine level of granularity. For instance, teams that support webapps often separate the front-end and back-end developers. While it’s true that front-end and back-end development involve different sets of challenges, the two are not so different that the same individuals can’t be effective through the full stack. Direct collaboration is a great way to spread skills, while also avoiding miscommunication about APIs and exception handling and so forth, which often come up during integration testing when the development was done separately.