Some of us at LeadingAgile have been talking about how best to introduce technical practices and tooling in client environments that aren’t entirely up to date in that area. We’ve been working through how to align technical improvement with the improvements in structure, governance, and metrics that our model has focused on. One point of discussion has been the appropriate time to introduce continuous integration (CI).
Some feel it’s an advanced practice that would overwhelm novice teams that are trying to get a handle on lightweight methods based on the agile and lean schools of thought. They feel it would be appropriate to bring CI into the picture somewhere in the neighborhood of Basecamp 2 or 3, according to our expedition model of organizational transformation. That’s the time when, from a process and governance point of view, organizations are beginning to get significant traction, based on the foundation of stabilizing, visualizing, and measuring their current state on the journey from zero to Basecamp 1. Coming from a technical background myself, I favor introducing CI very early; I’m not convinced it has to wait for the organization to reach a high level of maturity with respect to process.
Later that day I was working on a personal side project at home. The discussion about CI was still fresh in my mind, and I realized that I was in the habit of setting up CI whenever I created a new project. I happened to be editing a .travis.yml file the moment the thought crossed my mind.
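As a sketch, a minimal `.travis.yml` for a small Ruby side project might look something like this (the Ruby version and the build command are illustrative assumptions, not a prescription):

```yaml
# Minimal Travis CI configuration -- a sketch for a Ruby side project.
# The language version and script command are illustrative assumptions.
language: ruby
rvm:
  - 2.4
script:
  - bundle exec rake test
```

That's the whole setup: one short file checked into the repository, and every push gets built and tested automatically.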
I treat CI as a routine, baseline part of my development environment. It's so easy to set up and the value is so great (especially for avoiding the "works on my machine" problem) that it's a no-brainer. That started me thinking about which tools and facilities are fundamental to a developer's work environment and which are optional or advanced, the reasons why, and how these things have changed over the years.
The tools developers feel they need seem to be determined mainly by four factors:
- characteristics of the solutions they’re working on
- development methods and techniques they use
- what tooling is actually available
- what year we’re in (methods and tools evolve)
The first point could quickly explode into a mass of alternatives. Different types of solutions may call for very different development setups. For instance, to work on an embedded application you need certain things available that aren’t pertinent to general business application development. To keep it simple, let’s just focus on general business application development.
The way we do our work also contributes to decisions about the tooling we need.
Finally, of course, we can only use whatever tools we can lay our hands on or build for ourselves, no matter what methods we’d like to use.
Let’s explore the question by looking at a few simplified examples.
Example 1: Monolithic app, traditional methods, mainstream stack
Assume you’re a developer on a team supporting a Microsoft .NET solution in C#. Broadly speaking, the app is “monolithic” in the sense that all the components are part of the same source code base, and the entire solution is deployed as a single unit to a server.
By “traditional methods” I mean the following: You don’t generally work in pairs or in groups, but rather alone. Also, you don’t handle comprehensive testing or deployment. There are separate teams dedicated to those functions. They may support other development teams besides yours, as well.
Your view may differ, but to me those characteristics suggest the following types of tools would be useful for a developer:
- Smart code editor that “knows” the syntax of C# and can provide hints for completing method calls and boilerplate code structures
- A debugger, to help track down minor issues in code that’s under development
- An integrated development environment (IDE) that can help you identify source-level dependencies in the code base, locate source files, and otherwise deal with the large code base. This tool typically includes the editor and debugger, too.
Because your team uses traditional development methods, you also need tooling to help with:
- Adherence to coding standards
- Adherence to architectural guidelines
- Adherence to usability guidelines
- Due diligence in unit testing
Considering the third point, tool availability, we find that there are robust and mature tools available to support all these needs. Microsoft Visual Studio provides the smart editor, debugger, compiler, and ways to cope with the numerous source-level dependencies in a large code base. Team Foundation Server or VSTS can be configured to support a check-in strategy known as “gated check-in.” This is a low-trust strategy that uses static code analysis (mainly a style checker) to block commits unless certain rules are followed.
Procedurally, gated check-in is often accompanied by a visual inspection by a designated "technical lead" on the team, usually called a "code review." Code reviews also serve a purpose because individual developers work alone: no one will notice small mistakes until the code is formally reviewed. Even then there's a risk that minor issues will be overlooked, since the reviewer was not actively engaged in the design and coding as it happened. Yet, absent contemporary good practices, a code review may be the only safety net available.
Generalizing from that example, a basic developer setup to support a monolithic application using traditional methods would include:
- A feature-rich IDE that includes a good debugger and facilities to help work with a large, complicated code base rife with source-level dependencies
- A version control system and continuous integration server designed to subdivide functions according to different access rights, to help protect the system from developers
- A check-in procedure that mitigates the damage untrusted and low-skilled developers are expected to cause every time they touch anything
Example 2: Microservices app, contemporary methods, mainstream stack
Does the basic developer setup change when we are working on a solution with a different architecture using different development methods? Let’s consider a Ruby-based microservices solution supported by a team that uses contemporary development methods. Here, the solution is divided into multiple small code bases. Even if all of them are contained within the same project for version control purposes, there are no source-level dependencies among different components of the solution. Common code is factored out into reusable libraries (Ruby gems, in this example).
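To make the "common code factored out into gems" idea concrete, a microservice's Gemfile might look something like this (the gem names here are hypothetical, purely for illustration):

```ruby
# Gemfile for one microservice -- gem names are hypothetical examples.
source 'https://rubygems.org'

gem 'sinatra'                      # lightweight web layer for this service
gem 'acme-common-models', '~> 1.2' # shared code, extracted into a versioned gem
```

Each service depends on the shared gem by version, rather than on another service's source files, so there are no source-level dependencies between components.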
By “contemporary methods” I mean developers work in a collaborative fashion most of the time (pair programming and/or mob programming), and individual work is the exception. It also suggests a rigorous test-first approach to modifying code and a strong emphasis on test automation at all levels of abstraction. The team handles all testing, server provisioning, and deployment independently of other teams in the organization. A lightweight, high-trust check-in strategy is used. Things are set up such that it is easy to recover from errors, rather than attempting to prevent all mistakes.
Basic tooling to support this scenario could include:
- A lightweight, low-footprint text editor with syntax highlighting and code completion features
- A version control system designed to support frequent check-in of changes by multiple people working on the same source modules
- Support for executable microtests and for stubbing/mocking
- Ideally, automatic execution of affected microtest cases as changes are made to the code (e.g., autotest for Ruby or Infinitest for Java)
- Style checking and general static code analysis tools
- A continuous integration (CI) server
- A runtime debugger designed to support distributed microservices
- A facility to support automated deployment
- Tooling to monitor the production environment
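To illustrate the microtest and stubbing items on that list, here is a minimal example using Minitest, which ships with Ruby. The `PriceService` and `TaxClient` names are hypothetical, invented for illustration:

```ruby
# A microtest with a hand-rolled stub, using Minitest (bundled with Ruby).
# PriceService and TaxClient are hypothetical names for illustration.
require 'minitest/autorun'

# Real collaborator: in the running system this would call another
# microservice over the network.
class TaxClient
  def rate_for(region)
    raise 'network call -- not reachable from a microtest'
  end
end

# The code under test depends on the collaborator through its constructor,
# which is what makes stubbing straightforward.
class PriceService
  def initialize(tax_client)
    @tax_client = tax_client
  end

  def total(net_amount, region)
    net_amount * (1 + @tax_client.rate_for(region))
  end
end

# Stub that stands in for the network-bound collaborator.
class StubTaxClient
  def rate_for(_region)
    0.10
  end
end

class PriceServiceTest < Minitest::Test
  def test_total_applies_rate_from_collaborator
    service = PriceService.new(StubTaxClient.new)
    assert_in_delta 110.0, service.total(100.0, 'EU'), 0.001
  end
end
```

Tests like this run in milliseconds, which is what makes automatic execution on every change (autotest-style) practical.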
You can see the two scenarios we've explored so far call for very different baseline developer setups. In the second scenario, we don't need a heavyweight IDE that can embed a server and that is liable to hang and crash several times a day. We don't need IDE features to deal with complicated source-level dependencies, because our solution architecture eliminates that problem. We don't need our tools to try to protect the code from us; instead, we need tools that let us get things done.
Another aspect of traditional methods becomes evident in teams that attempt to use some contemporary practices, notably executable unit checks (or "unit tests"). The common practice is to run these from within the IDE. For the C# example, that would be running NUnit or MSTest cases (or xUnit.net, if you don't mind riding the bleeding edge) inside Visual Studio. Java developers will recognize the same practice in running JUnit or TestNG cases inside IntelliJ, Eclipse, NetBeans, or another IDE.
Contemporary good practice is to minimize the differences between the development environment and the target environment. Developers typically run a “batch” build in their local environment in exactly the same manner as their CI server will run the build after check-in. This helps reduce the number of surprises during integration and in later stages of testing and deployment. For this reason, contemporary development methods don’t need “heavyweight” support for unit testing inside the IDE. A reasonably fully-featured text editor is sufficient.
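One way to realize this is a single build entry point that a developer runs by hand and the CI server runs verbatim after check-in. A sketch, with placeholder steps standing in for a real project's dependency-resolution and test commands:

```shell
#!/bin/sh
# build.sh -- sketch of a single build entry point, run by hand locally
# and run verbatim by the CI server after check-in. The two steps below
# are placeholders (true); a real project would substitute its own
# commands.
set -e                              # stop at the first failing step

step() { echo "== $1 =="; shift; "$@"; }

step "resolve dependencies" true    # e.g. bundle install
step "run full test suite"  true    # e.g. bundle exec rake test

echo "build OK"
```

Because the same script runs in both places, "it passed on my machine but failed in CI" becomes a configuration bug you can see and fix, not a mystery.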
On the procedural side, we don’t need support for a gated check-in procedure, because we use a high-trust strategy instead. We set things up to be resilient and recoverable rather than hardened. We don’t need formal code reviews because our code is reviewed continuously as we work (through pairing and/or mobbing). For the same reason, we don’t need designated technical leads on the team; we can be professional peers. We don’t need an integrated debugger because we use a test-first approach to changing code; we rarely, if ever, have to resort to using a debugger when modifying the source of a single microservice. Challenges for debugging occur in the interaction between microservices, and not so much within the code of any single microservice.
On the other hand, there are tools we need in our development environment that are not necessary in the first scenario. We handle our own testing and deployment, so we need tooling to spin up test environments on demand, create and manage test data, and deploy changes to production (or at least to a staging environment). We also need tooling to monitor the production environment so that we can support it properly.
Thus, the nature of the solution we’re supporting as well as the methods we use to support it are determining factors for the tools that comprise a “basic developer setup.” As we move from monolithic to services-based solution architectures, we find we require less-heavyweight IDEs and editors, but at the same time a wider range of tooling to support the additional tasks that fall to the development team.
Example 3: Legacy app, contemporary methods, legacy stack
What about the case when we want to apply contemporary methods to existing code bases running on older platforms?
For discussion purposes, let’s pick just one legacy platform. The most widely-used one is the IBM mainframe platform. The most widely-used application programming language for existing code is COBOL.
What sort of developer setup would we need to use contemporary methods to support an existing COBOL code base on an IBM mainframe? The main challenge with this platform is tool availability. There is nothing inherent in the mainframe platform or in the COBOL language that would preclude the use of contemporary development methods.
Traditionally, developers use a relatively basic text editor called OEDIT that runs on the mainframe platform. There are alternatives to this based on terminal emulation software that enable you to edit files on a desktop or laptop computer, but on the whole, developers do not use feature-rich IDEs to work on this platform.
Mainframe software vendors have been working to solve the tooling problem. I won’t try to provide a comprehensive list of products, but a couple of examples are Compuware’s Topaz Workbench and IBM’s Rational Developer.
While vendors are sorting out the details to make their products lighter weight and more usable, I’ve been setting up my development environment for mainframe COBOL work as follows:
- A local VM running Linux to isolate the development environment
- Bitbucket for version control, because you get private repos for free, and this isn’t public code
- GnuCOBOL (open source COBOL compiler with IBM compatibility support)
- Shell scripts to run FTP commands to transfer files and to kick off builds and test jobs
- On-platform hand-rolled code to resolve COPY dependencies and drive FTP transfers
- An open source COBOL microtesting framework (Cobol Unit Test)
- On-platform hand-rolled JCL and light coding to support batch test runs
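To make the FTP scripting item concrete: the z/OS FTP server has a JES interface (`SITE FILETYPE=JES`) that treats an uploaded file as a job to submit. A sketch of a script that builds such a command stream follows; the host, credentials, dataset names, and member name are all placeholders:

```shell
#!/bin/sh
# Sketch of the "shell scripts to run FTP commands" idea: build an FTP
# command stream that uploads a COBOL source member, then submits a test
# job through the z/OS FTP server's JES interface (SITE FILETYPE=JES).
# Host, credentials, dataset names, and member name are placeholders.
set -e

MEMBER=DEMOPGM    # hypothetical source member name

cat > ftp-commands.txt <<EOF
open mainframe.example.com
user MYUSER MYPASS
ascii
put src/${MEMBER}.cbl 'MY.COBOL.SOURCE(${MEMBER})'
quote SITE FILETYPE=JES
put jcl/unittest.jcl
quit
EOF

# To actually transfer and submit, feed the stream to an FTP client:
#   ftp -n < ftp-commands.txt
echo "generated ftp-commands.txt for ${MEMBER}"
```

It's crude, but it gives you a repeatable, scriptable edit-build-test loop from an off-platform development environment.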
Still missing: Continuous integration, automated deployment, and production system monitoring. Due to typical organizational concerns in companies large enough to have mainframe systems, these functions are usually not integrated with the work of application development teams.
Some contemporary development practices are quite easy to achieve in the mainframe environment. Remember the point about running your local build in the same manner as the “real” build? This is pretty natural in the mainframe environment because we don’t have to break the habit of running microtests inside an IDE. There are also no technical barriers to adopting contemporary collaboration methods such as pairing and mobbing. Those are technology-independent.
The point is there are practical ways to get there, or partway there, even when we’re supporting applications that were originally designed monolithically and that run on systems that don’t yet have full tool support for contemporary development methods.
Twenty years ago CI may have been nothing more than a glimmer in forward-thinking people’s eyes. Ten years ago people said CI was a practice and not a tool. Five years ago CI was an “advanced” practice that many development teams didn’t use.
I think it’s fair to say that as of 2017 it’s reasonable to consider CI to be a part of the basic development setup in nearly all environments and for nearly all categories of application development. There’s no longer anything new, experimental, or “advanced” about it. It is also integral to the emerging practice of continuous delivery (CD), and not optional when CD is to be used.