Before I start digging into the software between mainframe operating systems and their applications, I'd like to begin today's blog by thanking "zarchasmpgmr" aka Ray Mullins for his comment on last week's blog post about operating systems. I acknowledge that there have indeed been non-IBM mainframes with varying degrees of compatibility with the IBM ones over time, that they have run various versions of IBM's and their own operating systems, and that it's important to remember some are still out there. In fact, these are topics worthy of their own blog posts in the future, but for now, suffice to say, "good point - thanks!"
I'd like to say "thanks!" to my friend and colleague Tim Gregerson as well for his thoughtful comments on last week's post.
I'm also reminded that there's something called LPARs (pronounced "el pars"), or Logical PARtitions, which run under Processor Resource/Systems Manager (PR/SM - pronounced "prism") on IBM mainframes. LPARs form a layer between the hardware and the operating system, allowing the mainframe to be divided into a small number of separate, concurrently running pieces. Today's mainframe does not allow OS images to run on its "bare metal" but requires that they either run directly in an LPAR or indirectly under z/VM, which itself runs in an LPAR (directly or as a guest of another z/VM). z/VM can then allow a very large number of concurrent OS instances to run under it as "z/VM guests."
OK, enough delays, it's time to get into one of the most interesting aspects - or rather, sets of aspects - of the mainframe context: the software that resides between the operating systems and the applications. Of course, such software exists on non-mainframe platforms as well, and increasingly is part of enterprise-wide solution sets. However, I'll keep it simple for the time being and focus on the mainframe. In future posts I can discuss how this all fits in the entire IT (Information Technology) enterprise.
I have worked with this "middleware" most of my career, and I've seen many ways of classifying and grouping it (aka taxonomies).
In my experience, the most common taxonomy is by primary function. So, a given piece of software might be classified as storage management, workload automation, performance management, or system automation (or a number of other primary roles), but it generally can't be more than one of these. Keep it simple and focus on core competence, you know.
A major problem with that approach is that software is flexible, adaptable, and multidimensional, and often starts out doing one thing and morphs with customer demand into something else entirely. Two examples of this are a mainframe database that began its life as a data communications product, and a performance monitor that began its life as an alternative to IBM's SDSF - a tool for watching tasks run and looking at their output. Both of these changed over time and became what they are today, while in at least one case still performing its original role as well.
It's also possible to have multiple products that have different primary roles but so much overlap in their other dimensions that at least one of them can be seen as redundant for cost optimization purposes.
In fact, between the complex historical journey a given piece of software takes and the many different uses to which it is put, any tree-like taxonomy that insists it is "this and not that" misses entirely the dynamic and adaptable nature of such software.
But we can't even begin to optimize and properly manage an environment if we don't have a straightforward understanding of its structure and elements.
For that reason, rather than beginning with the traditional tree-structured explanation of the software in the middle, I'm going to use a dimensional approach - that is, I'm going to try to identify most of the key dimensions of what we use this software for, recognizing that most pieces of software have more than one of these dimensions.
This is more than just a tool for understanding what's out there. As I develop this in future blog posts it should become clear that this is part of a much more effective approach to optimizing a mainframe environment by identifying which dimensions are needed, which exist, and how much unnecessary overlap can be trimmed.
Now, the first thing you'll discover when you try to divide up and classify mainframe software is that there is no clear dividing line between operating system, middleware, and applications. Generically, the operating system talks to the hardware and manages the mainframe's behavior; applications talk to users and provide results with tangible business value; and the middleware enhances the operating system and enables the applications.
But there are pieces of middleware that are also implicitly embedded in the operating system and the applications. In fact, historically, many pieces of middleware emerged from one of these sources and took on lives of their own.
A great example of this is software that sorts data. Originally, IBM included a utility to do this with the mainframe operating system. However, in 1969, IBM began to sell mainframe software separately from the hardware (a strategy known as "unbundling"), opening the door to competition. As a result, new third-party utilities were written to provide alternative sorting functionality to the IBM-provided sorting utility, leading to the rise of important software companies still in business today. That was only possible because sorting software broke away from being part of the operating system (which had itself been bundled with the hardware) and became a middleware solution in its own right.
OK, then, what are the dimensions of middleware on the mainframe? First, let me offer a very basic behavior-oriented set:
1) Data Handling: Modifying, moving and managing data.
2) Device Interfacing: Interacting with devices, including storage, printing, networking and user interaction.
3) Applications and Automation: Programming, automation and reporting (including development, maintenance, interconnecting/repurposing and updating).
4) Context Management: Configuring, securing, logging, monitoring, modeling and reporting on everything from devices to applications.
5) Optimization: Optimizing the execution time and/or resource usage of mainframe systems and applications.
6) Quality and Lifecycle: Change, configuration, quality enablement and lifecycle management.
7) Production: Production/workload planning and control.
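The dimensional approach above can be sketched in a few lines of code. Instead of filing each product under a single branch of a tree, tag it with a set of behavioral dimensions and compute pairwise overlap - the kind of overlap that, as noted earlier, can reveal redundancy worth trimming. The product names and dimension tags below are hypothetical, purely for illustration:

```python
# A minimal sketch of the dimensional taxonomy: each product carries a SET
# of behavioral dimensions rather than one tree position. The product names
# and their dimension tags here are invented for illustration.

def overlap(products):
    """Return the shared behavioral dimensions for every pair of products."""
    names = sorted(products)
    return {
        (a, b): products[a] & products[b]
        for i, a in enumerate(names)
        for b in names[i + 1:]
        if products[a] & products[b]  # keep only pairs that actually overlap
    }

# Hypothetical portfolio: each product spans more than one dimension.
portfolio = {
    "MonitorX": {"context_management", "optimization"},
    "SchedulerY": {"production", "applications_and_automation", "context_management"},
}

print(overlap(portfolio))
# → {('MonitorX', 'SchedulerY'): {'context_management'}}
```

Representing products as sets rather than leaves of a tree is the whole point: a product that "morphs" into a new role simply gains a dimension, with no re-filing required.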
At this point, I hope you're saying something like, "but what are the actual solutions and what do they do???" which, in my opinion, is almost the right question. Ideally, you're saying something even more like, "but what business needs are responded to by solutions in this space?" which is almost exactly the right question - and close enough for now.
That's because the essential deficit in all the various classification schemes - a deficit that doesn't invalidate them - is that none of them maps directly to business value in a way that allows for an optimal solution set and configuration, both in terms of costs and related contractual desirability, and more importantly in terms of enabling your business to prosper.
Now, future blog posts can talk about things like inertia, overlap, changing requirements and obsolete configurations. However, I'll conclude today's blog post with another, business-oriented list that focuses on the business needs these solutions respond to. Each of the needs below is a dimension that can be met by one or more solutions, and each of those solutions provides functionality along one or more of the behavioral dimensions above.
A) Business Enablement: Full, appropriately-controlled availability of reliable data and results required for business decisions and processes (such as financial activities).
B) Continuity: The ability to detect, prevent, and recover from problems, errors and disasters that could otherwise interrupt the flow of business.
C) Security, Integrity and Compliance: Provable, continuous security and integrity minimizing potential liability and ensuring compliance with laws and regulations governing the proper running of an organization.
D) Cost-Effective Operations: Cost-effective, comprehensive, responsive and timely operation of the computing environments, applications and resources needed to effectively do business, creating a layer of conceptual simplicity.
E) Analysis and Planning: Enablement of IT architecture and resource planning for current and future organizational success.
F) New Business Value: Facilitating new business initiatives and value by enabling new applications and data, and/or building on, connecting to and/or optimizing existing application functionality and data.
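The business-oriented list lends itself to the same set-based treatment: tag each solution with the needs it addresses, then check which needs a portfolio leaves uncovered. Again, the solution names and their tags below are invented for illustration, not real product classifications:

```python
# Hypothetical coverage check: which business-need dimensions does a
# portfolio of solutions actually address, and where are the gaps?
# Solution names and their tags are invented for illustration.

BUSINESS_NEEDS = {
    "business_enablement", "continuity", "security_integrity_compliance",
    "cost_effective_operations", "analysis_and_planning", "new_business_value",
}

portfolio = {
    "BackupZ": {"continuity", "security_integrity_compliance"},
    "ReportQ": {"business_enablement", "analysis_and_planning"},
}

# Union of everything the portfolio covers, then subtract from the full list.
covered = set().union(*portfolio.values())
gaps = BUSINESS_NEEDS - covered

print(sorted(gaps))
# → ['cost_effective_operations', 'new_business_value']
```

Run against a real inventory, the same two set operations would surface both gaps (needs nothing addresses) and, combined with the behavioral tags, the unnecessary overlap mentioned earlier.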
Taken together, I consider the two above lists of dimensions the foundation of version 1.0 of the Mainframe Analytics Middleware Taxonomy. I look forward to comments and suggestions to refine it further before using it to map individual solution areas.
I'll be coming back to the above and blogging on its dimensions, solutions, implications and uses in the future. However, next week, I plan to talk about the applications that run on the mainframe.