Tuesday, February 28, 2012

Generations

The mainframe ecosystem has had several generations of people in charge of it, each learning from the previous one while bringing its own abilities, insights and, eventually, experiences. While it's somewhat arbitrary to draw lines between these generations, doing so can help in understanding where we are today, so let me give it a try.

But first, I'd like to thank my two commenters from last week's post: Jim Michael, my friend and mentor and someone who is approximately in or next to my "generational band" though much wiser and slightly more chronologically gifted, and Kristine Harper, my friend and a leading member of the current new generation of mainframers (Gen-E referred to below).

Now, I'd say the first generation of mainframers are those who started their careers before the advent of electronic computing. Let's call them "Generation Able." Many of them had been in the military during World War II, and they brought that culture and scrupulousness to establishing the culture of computing, and eventually of mainframe computing.

I'll designate the next one "Generation Baker," and group in it those who started their careers on early computers and ended up spending most of those careers on the mainframe.

The third one, "Generation Charlie," comprises those who started out on the mainframe when it was already in place and running - some time in the mid-to-late 1960's, the 1970's, or from 1980 to 1982. For them, during the formative years of their careers, computing was the mainframe and the mainframe was computing.

In January 1983, Time Magazine declared the PC its "Machine of the Year" and the world of computing changed forever. Suddenly, everyone spoke of the mainframe in the past tense as they looked to the future of computing on other platforms. Those hardy (or foolhardy, depending on whom you ask) few who went into mainframe careers were seen as non-mainstream, to put it politely. I was among them. We are "Generation Dog," and I include everyone who came on board before Y2K preparations took off, around 1997. We are few in number: many from the previous generations were still around, organizations were not investing in building a new generation of mainframers because they thought the platform was going away, and mainframes required fewer and fewer people to keep them running even as their capacity, reliability and maintainability continued to grow.

Y2K changed everything, as organizations realized they had invested too deeply in highly-functional mainframe environments to simply move off, so they had to update their code to survive the turn of the millennium. The world was slowly waking up to the fact that the mainframe had become a fixed foundation of large-scale IT. Those who have begun their careers since then, while still few in number, know they have brilliant careers ahead of them, being responsible for the most important computing platform on earth. I call them "Generation Easy."

Suddenly, everything is changing, and the ultimate generation is about to arrive: "Generation Fox." They will inherit a mainframe unlike that of their predecessors, and will take part in making it so. The mainframe will be simpler than ever to maintain, manage and deploy new applications on, and will likely show itself to be the optimal platform for top-quality cloud computing. Unlike their technically-oriented predecessors, many in this generation will be as focused on business results as on the bits and bytes of how-to. And, if (as I expect) a tipping point of rediscovering the mainframe is reached, this new generation will also balloon as organizations invest in using the mainframe for the newest and most leading-edge applications.

However, they're not here yet, and the first five generations are made up of highly competent, trustworthy, hard-working technologists who have passed down the practices, cultures and user groups that have become the infrastructure of this essential platform. We will continue to need their ilk at the foundation of mainframe computing, regardless of how many of the new business-oriented generation flood in. So, my advice to organizations looking to the future of their mainframes is: hire quality people now, mentor them, get them tried and proven, and then you'll be able to ensure that the mainframe continues to run well as the Gen-F's start to flood in. Because your mainframe's not going away, but Gens A through D are, and soon.

Next week, I'll talk about some of the ways to get a new generation in place on time to respond to the imminent challenges and opportunities on the mainframe.

Monday, February 20, 2012

...Then a Miracle Happens

A favourite cartoon of mine shows two academics at a chalk board, with a complex set of equations on the left hand side and a simple, elegant solution on the right hand side. One of them is saying to the other, "I think you need to be more explicit here" while indicating the bridge between the two sides, which is a cloud containing the words, "Then a Miracle Happens." In many ways the mainframe is like this: with all the wondrously complex things from hardware to applications running together in unison to deliver business value, it's easy to forget that none of it would be possible without that central part that makes everything happen - the people and culture of the mainframe.

Of course, long before the first computer, let alone the first mainframe, there were people. People invented the mainframe, and gave it its culture. People made and improved the hardware, operating systems, middleware and applications. People learned how to use the mainframe, building on the best abilities they had learned in other contexts, including in the military during the Second World War. Those same people worked together to establish the culture of the mainframe, including everything from scrupulous planning and change control to a special way of saying and seeing things unique to the mainframe culture.

If you've read any of the blog posts I wrote before starting Mainframe Analytics (for example, "How to Talk Like a Mainframer"), you'll know that one of my favourite examples of the culture passed down from WW II military veterans is the words mainframers use for the first six letters of the alphabet: Able, Baker, Charlie, Dog, Easy, Fox (rather than the current Alpha, Bravo, Charlie, Delta, Echo, Foxtrot). These were the standard in WW II, and were in habitual use by the earliest mainframers. Consequently, they got passed down through the generations, and they continue to be widely used today.

Another thing that came down the generations is SHARE, one of the remaining great mainframe user groups, and in many ways the nexus of the lot. Founded in 1955, nine years before IBM announced the System/360 which is the ancestor of modern mainframes, it was intended to enable users of IBM's business computers, including early mainframes, to share information in order to ease the task of getting value from them. Today, at 57 years old, SHARE is still going strong - in fact, their next meeting will be in Atlanta in March.

Now, there's a lot to be said about the culture of the mainframe and the various generations of mainframers - in fact, I've written quite a few articles on the topic (check out http://mainframezone.com for a good number of them). So, rather than making this post a big long one that talks about all of them, I'll stop here for this week, and pick up next week with a discussion of the state and future of the mainframe workforce.

Tuesday, February 14, 2012

App Location

Why do we use computers? What led to them being developed in the first place? What is it they do that we can't just have lots of people do instead? The simple answer is, we use computers for the applications that run on them, which do valuable things that would be impossible, unpleasant or prohibitively expensive to have people do for you instead.

By now, most of us are used to the concept of "apps" - those single-user-focused applications that run on personal computing devices such as smart phones. Of course, "app" is just an abbreviation for "application" which is what mainframes were built to run.

The journey of recognizing the "application layer" of computing as distinct from the rest of the technology has been a long one, and it could be argued that it will never be entirely complete, because some people will always buy technology for the sizzle (i.e. bells and whistles) rather than the steak (the value it actually brings). However, on the mainframe, this journey substantially concluded a long time ago.

Today, the applications that run on the mainframe handle business at a global scale. They do billing and accounts receivable, HR, decision support, customer account handling, large-scale postal sorting, addressing and stamping, and many, many other business functions that require a massive capacity for data and throughput with total reliability.

As with smartphones and PCs, some mainframe applications can be bought from vendors, and may even run with very little customization. However, there are also many applications that are highly customizable - ERP systems (i.e. Enterprise Resource Planning, such as SAP, PeopleSoft, Oracle Financials, etc.) are a good example of this kind.

The nice thing about those vendor-supplied applications is that they're kept current and maintained, so the customer's job is just to keep installing and configuring the latest upgraded version - which is a lot more work than it sounds like, but a lot less work than writing and updating their own.

However, one of the most important kinds of application on the mainframe is the in-house kind. These are the trade-secret, competitive-advantage, bread-and-butter applications that do unique things no other organization does in exactly the same way. In fact, they generally embody an organization's essential identity. They have been written and maintained in-house, often for decades, and they provide core functionality, which is often built on and extended by distributed applications that sink deep roots into it.

Interestingly, while these can be some of the most valuable applications, they're also some of the most problematic, because, as they become more and more established, it gets harder and harder to change them to respond to new needs and opportunities without adversely affecting other mainframe and non-mainframe applications that rely on the way they behave.

This often results in very complex circumstances when two large organizations merge and find they have applications with overlapping functionality. Trying to modify them to work together can be something of a nightmare, complicated by the fact that they also use data sources (usually databases) of completely different natures. This is the point at which frustration may set in, and tried-and-proven applications may be set aside for vendor-provided solutions, often on non-mainframe platforms. Which, in my opinion, is a shame, given the functionality, reliability and competitive advantage that are often sacrificed for the sake of short-term convenience.

There is a whole range of solutions that exist to enable "modernization" of mainframe applications that have been around long enough to get into an inertial funk. These include: lift-and-shift solutions to run mainframe applications mostly unmodified on other platforms; solutions that reverse engineer applications into a business rules representation for re-generation to the platform (and programming language) of choice; and solutions that build connections into and around the established ones to enable building on their functionality (on and off the mainframe) without substantially modifying them.
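
To make the third of those approaches a little more concrete, here is a minimal, purely hypothetical sketch (in Python) of the "build connections around it" idea: the established application stays unmodified, and a thin wrapper exposes one of its functions to newer applications. The function names and data are invented for illustration; real integration products do this with far more robustness, and rarely in Python.

# Hypothetical sketch of the "connect to it, don't rewrite it" approach.
# get_account_balance() stands in for whatever mechanism actually reaches
# the established application (a queue, a database, a gateway, etc.).

def get_account_balance(account_id: str) -> int:
    """Placeholder for a call into the unmodified, established application."""
    legacy_records = {"1001": 250_00, "1002": 1_399_99}  # balances in cents
    return legacy_records.get(account_id, 0)

def balance_service(account_id: str) -> dict:
    """Thin wrapper: newer applications call this instead of touching the
    established application directly, so the original code - and everything
    that relies on the way it behaves - stays untouched."""
    cents = get_account_balance(account_id)
    return {"account": account_id, "balance": cents / 100, "currency": "USD"}

print(balance_service("1001"))  # {'account': '1001', 'balance': 250.0, 'currency': 'USD'}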

In any case, there are many billions of lines of programming in the mainframe applications that run the world economy, and they have proven themselves over the decades to work very well, so they're generally not going away any time in the foreseeable future. Which means that it's time for responsible people to start making long-term plans to maximize the benefit of their mainframe applications to their organizations, rather than just taking them for granted and trying to squeeze value out of them without sufficient care and feeding.

Care and feeding... yes, that's a very important topic, and not just for the mainframe hardware and software, because an essential part of what makes the mainframe great is the human side: people and culture. I'll write about that next time.

Sunday, February 5, 2012

Middle Where?

Before I start digging into the software between mainframe operating systems and their applications, I'd like to begin today's blog by thanking "zarchasmpgmr" aka Ray Mullins for his comment on last week's blog post about operating systems. I acknowledge that there have indeed been non-IBM mainframes with varying degrees of compatibility with the IBM ones over time, that they have run various versions of IBM's and their own operating systems, and that it's important to remember some are still out there. In fact, these are topics worthy of their own blog posts in the future, but for now, suffice it to say, "good point - thanks!"

I'd like to say "thanks!" to my friend and colleague Tim Gregerson as well for his thoughtful comments on last week's post.

I'm also reminded that there's something called LPARs (pronounced "el pars") or Logical PARtitions which run under Processor Resource/Systems Manager (PR/SM - pronounced "prism") on IBM mainframes. LPARs form a layer between the hardware and the operating system, allowing the mainframe to be divided into a small number of separate, concurrently running pieces. Today's mainframe does not allow OS images to run on its "bare metal" but requires that they either run directly in an LPAR or indirectly under z/VM, which would then run on an LPAR (directly or indirectly). z/VM can then allow a very large number of concurrent OS instances to run under it as "z/VM guests."
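
If it helps to see that layering in code, here is a toy sketch in Python (with entirely made-up names, and no relation to any actual IBM interface) of the constraint just described: PR/SM divides the machine into LPARs, and each LPAR hosts either an operating system image directly or a z/VM instance that in turn hosts many guests.

# Toy model of the layering described above - illustrative only.
from dataclasses import dataclass, field
from typing import List, Optional, Union

@dataclass
class OSImage:
    name: str                      # e.g. a z/OS or Linux instance

@dataclass
class ZVM:
    name: str
    guests: List[OSImage] = field(default_factory=list)   # many guests per z/VM

@dataclass
class LPAR:
    name: str
    occupant: Optional[Union[OSImage, ZVM]] = None        # never "bare metal"

@dataclass
class Mainframe:
    lpars: List[LPAR] = field(default_factory=list)       # PR/SM divides the box into LPARs

# One box, two LPARs: one runs an OS image directly,
# the other runs z/VM hosting a large number of guests.
box = Mainframe(lpars=[
    LPAR("LPAR1", OSImage("PROD-OS")),
    LPAR("LPAR2", ZVM("VM1", guests=[OSImage(f"GUEST{i:03d}") for i in range(200)])),
])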

OK, enough delays, it's time to get into one of the most interesting aspects - or rather, sets of aspects - of the mainframe context: the software that resides between the operating systems and the applications. Of course, such software exists on non-mainframe platforms as well, and increasingly is part of enterprise-wide solution sets. However, I'll keep it simple for the time being and focus on the mainframe. In future posts I can discuss how this all fits in the entire IT (Information Technology) enterprise.

I have worked with this "middleware" most of my career, and I've seen many ways of classifying and grouping it (aka taxonomies).

In my experience, the most common taxonomy is by primary function. So, a given piece of software might be classified as storage management, workload automation, performance management, or system automation (or any of a number of other primary roles), but it generally can't be more than one of these. Keep it simple and focus on core competence, you know.

A major problem with that approach is that software is flexible, adaptable, and multidimensional, and often starts out doing one thing and morphs with customer demand into something else entirely. Two examples of this are a mainframe database that began its life as a data communications product, and a performance monitor that began its life as an alternative to IBM's SDSF - a tool for watching tasks run and looking at their output. Both changed over time into what they are today, while at least one of them still performs its original role as well.

It's also possible to have multiple products that have different primary roles but so much overlap in their other dimensions that at least one of them can be seen as redundant for cost optimization purposes.

In fact, between the complex historical journey a given piece of software takes and the many different uses to which it is put, any tree-like taxonomy that insists it is "this and not that" misses entirely the dynamic and adaptable nature of such software.

But we can't even begin to optimize and properly manage an environment if we don't have a straightforward understanding of its structure and elements.

For that reason, rather than beginning with the traditional tree-structured explanation of the software in the middle, I'm going to use a dimensional approach - that is, I'm going to try to identify most of the key dimensions of what we use this software for, recognizing that most pieces of software have more than one of these dimensions.

This is more than just a tool for understanding what's out there. As I develop this in future blog posts it should become clear that this is part of a much more effective approach to optimizing a mainframe environment by identifying which dimensions are needed, which exist, and how much unnecessary overlap can be trimmed.

Now, the first thing you'll discover when you try to divide up and classify mainframe software is that there is no clear dividing line between operating system, middleware, and applications. Generically, the operating system talks to the hardware and manages the mainframe's behavior; applications talk to users and provide results with tangible business value; and the middleware enhances the operating system and enables the applications.

But there are pieces of middleware that are also implicitly embedded in the operating system and the applications. In fact, historically, many pieces of middleware emerged from one of these sources and took on lives of their own.

A great example of this is software that sorts data. Originally, IBM included a utility to do this with the mainframe operating system. However, in 1969, IBM began to sell the software on the mainframe separately from the hardware (a strategy known as "unbundling"), opening the door to competition. As a result, new third-party utilities were written to provide alternative sorting functionality to the IBM-provided sorting utility, leading to the rise of important software companies still in business today. That was only possible because sorting software emerged from being included with the operating system (which emerged from being bundled with the hardware) and became a middleware solution in its own right.

OK, then, what are the dimensions of middleware on the mainframe? First, let me offer a very basic behavior-oriented set:

1) Data Handling: Modifying, moving and managing data.
2) Device Interfacing: Interacting with devices, including storage, printing, networking and user interaction.
3) Applications and Automation: Programming, automation and reporting (including development, maintenance, interconnecting/repurposing and updating).
4) Context Management: Configuring, securing, logging, monitoring, modeling and reporting on everything from devices to applications.
5) Optimization: Optimizing the execution time and/or resource usage of mainframe systems and applications.
6) Quality and Lifecycle: Change, configuration, quality enablement and lifecycle management.
7) Production: Production/workload planning and control.

At this point, I hope you're saying something like, "but what are the actual solutions and what do they do???" which, in my opinion, is almost the right question. Ideally, you're saying something even more like, "but what business needs are responded to by solutions in this space?" which is almost exactly the right question - and close enough for now. Because the essential deficit in all the various classification schemes - not that it invalidates them - is that they don't map directly to business value in a way that allows for an optimal solution set and configuration, both in terms of costs and related contractual desirability, and more importantly in terms of enabling your business to prosper.

Now, future blog posts can talk about things like inertia, overlap, changing requirements and obsolete configurations. However, I'll conclude today's post with another, business-oriented list, one that focuses on the business needs these solutions respond to. Each of the needs below is a dimension that can be met by one or more solutions, and each of those solutions has functionality along one or more of the behavioral dimensions above.

A) Business Enablement: Full, appropriately-controlled availability of reliable data and results required for business decisions and processes (such as financial activities).
B) Continuity: The ability to detect, prevent, and recover from problems, errors and disasters that could otherwise interrupt the flow of business.
C) Security, Integrity and Compliance: Provable, continuous security and integrity minimizing potential liability and ensuring compliance with laws and regulations governing the proper running of an organization.
D) Cost-Effective Operations: Cost-effective, comprehensive, responsive and timely operation of the computing environments, applications and resources needed to effectively do business, creating a layer of conceptual simplicity.
E) Analysis and Planning: Enablement of IT architecture and resource planning for current and future organizational success.
F) New Business Value: Facilitating new business initiatives and value by enabling new applications and data, and/or building on, connecting to and/or optimizing existing application functionality and data.

Taken together, I consider the two above lists of dimensions the foundation of version 1.0 of the Mainframe Analytics Middleware Taxonomy. I look forward to comments and suggestions to refine it further before using it to map individual solution areas.
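
To illustrate how the two lists might be used together, here is a minimal sketch in Python, with invented product names, of the dimensional approach: each product is tagged with the behavioral dimensions it covers and the business needs it serves, and overlap between products falls out of a simple set intersection. That overlap, weighed against the business needs each product actually serves, is where the cost-optimization conversation can start.

# Illustrative only: hypothetical products tagged with the behavioral
# dimensions (1-7) and business-need dimensions (A-F) listed above.

BEHAVIORS = {1: "Data Handling", 2: "Device Interfacing", 3: "Applications and Automation",
             4: "Context Management", 5: "Optimization", 6: "Quality and Lifecycle",
             7: "Production"}
NEEDS = {"A": "Business Enablement", "B": "Continuity", "C": "Security, Integrity and Compliance",
         "D": "Cost-Effective Operations", "E": "Analysis and Planning", "F": "New Business Value"}

# Invented examples - a given product rarely has just one dimension.
portfolio = {
    "MonitorPlus":   {"behaviors": {4, 5}, "needs": {"B", "D"}},
    "BatchMaster":   {"behaviors": {3, 7}, "needs": {"A", "D"}},
    "PerfOptimizer": {"behaviors": {4, 5}, "needs": {"D", "E"}},
}

def overlap(p1: str, p2: str) -> set:
    """Behavioral dimensions two products have in common - candidates for trimming."""
    return portfolio[p1]["behaviors"] & portfolio[p2]["behaviors"]

shared = overlap("MonitorPlus", "PerfOptimizer")
print(sorted(BEHAVIORS[d] for d in shared))   # ['Context Management', 'Optimization']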

I'll be coming back to the above and blogging on its dimensions, solutions, implications and uses in the future. However, next week, I plan to talk about the applications that run on the mainframe.