Saturday, December 22, 2012

Plumbing the Taxonomy: Why? Part E - Analysis and Planning

One of the things that really annoys me about modern business IT is that the decision makers and their closest advisers, including IT Architects, often have no clue about the role of the mainframe in their environments, even though they bet their businesses on this most reliable of platforms decades ago and would not survive without it.

As a result, you get some very odd and skewed perceptions and depictions. One of these is the IT Architecture diagram, a poster-sized picture of an organization's IT context that the Architect has spent a very long time fine-tuning to make it look just right. Take into account all the time and effort that goes into one of these posters, and each one can easily cost an organization over $100,000 to produce - which is nothing compared to what it can cost the organization if done wrong.

Which they invariably are if the role of the mainframe is missing or understated - which it almost always is. Because the architect will spend months talking to all the people who have servers and routers, closets full of IT devices, and special projects, and create a politically-correct depiction of what they heard from all the people they talked to. And maybe they briefly talked to one or two mainframe-relevant people, and maybe they even stuck a picture of a little mainframe in the bottom right corner of the diagram just to keep from annoying those legacy dinosaurs. But the glorious new technology that is lighting the way and has all the political weight: that is far and away the bulk of the diagram.

Except that, if the IT Architect were to draw a picture that took into account the business value and essential role of each device, the mainframe would take up an unwieldy majority of the poster, as it is the keeper of the corporate jewels - but that's "not fair" to all the other IT people affected.

And if the IT Architect were to draw a picture that shows the role of every machine in every application, the mainframe would show up as the foundational platform for just about every major application, with the other platforms providing barely more than cosmetic additions and user interfacing, but the essential data and processing held firmly on the mainframe.

The problem: politics and ignorance (and you thought IT would save us from the human condition). The other platforms take so many people to maintain them that they have far more political weight than the mainframe. And the mainframe works so invisibly well that everyone only notices the squeaky wheel and sizzle platforms that are barely more than nail polish on the underlying functionality provided by the mainframe.

But what has this rant to do with the "Analysis and Planning" value in the "why" dimension? This: not only does the mainframe offer so much value that any legitimate IT Architecture diagram would be mostly mainframe with tiny bits of the other platforms scattered like ornaments on a tree (like the seasonal reference?), but properly characterizing, understanding, and building on that value has the potential to bring more business value to the organization paying for IT than almost any other cost-effectiveness initiative could.

And the software with this role provides the needed insights into resource configuration, usage, and future scenarios - a role with significant business value - ensuring that the return on investment for the mainframe just keeps going up, regardless of what the IT Architects and other decision makers may decide to do with the minor platforms in the environment.

This is especially important combined with proactively responding to new initiatives - but that's the next (and last) value on this dimension, so I'll save that for the next blog post.

Wednesday, December 5, 2012

Plumbing the Taxonomy: Why? Part D - Cost-Effective Operations

The "KISS" principle ("Keep It Simple, Stupid" or some gentler variation on that) has long been a key principle for effective living and business. It is closely related to keeping things affordable, as simplicity tends to be a good countermeasure against unnecessary expenditures.

In large-scale IT, keeping to these two principles is essential to maintaining the proper cost-benefit balance that keeps IT viable rather than letting it become an upward spiral of price and complexity.

So, this value of the "why" dimension is all about the business benefit of well-run IT. Over the years, many efforts have been made to document and codify how this is done - one of the more successful ones is ITIL - the Information Technology Infrastructure Library. Interestingly, this definitive set of guides characterizing a well-run environment is essentially a depiction of a functioning mainframe environment - in this case, the British government's.

Where the software relevance comes in here is the selection of tools which create a layer of business-enabling simplicity, keeping IT easily manageable and consequently paying for itself. And that's an important business value!

Thursday, November 29, 2012

Plumbing the Taxonomy: Why? Part C - Security, Integrity and Compliance

During my years working as an employee of provincial and civic government, I discovered the importance of a special kind of incentive: removal and avoidance of pain.

We all know (I hope) that it's inappropriate to bribe government employees to get better or faster results from them - and that getting caught doing so could land everyone involved in jail.

What recent history has shown, however, is that there are many other types of business misbehavior that can land people - CEO's in particular - in jail, or at least get them and their organizations sued and/or sanctioned. And some of those types are also business MIS-behavior.

So, if you want to incentivize a government employee - or anyone who works for a large, rule-bound organization (i.e. the kind that's prone to have a mainframe) - rather than giving them something, you want to take something away: negative experiences, consequences and potential for consequences, aka pain.

In the world of MIS (does anyone still use that abbreviation? it makes for some great puns), where some of an organization's most sensitive data is held and most sensitive activities take place, having data or processing compromised can trigger significant pain, from regulatory audit findings and related sanctions to legal and criminal trouble for executives. Compromise can include exposure of confidential personal information of customers and citizens, leading to great expense to "make it right" as well.

All of which illustrates the value of ensuring your organization's most sensitive data, processing and business behaviors are provably compliant with relevant regulations, and sufficiently secure that only legitimately authorized parties have appropriate access to them.

Consequently, this is an essential value on the "why?" dimension of solutions used to manage large IT enterprise environments, particularly those that include mainframes.

Thursday, November 22, 2012

Plumbing the Taxonomy: Why? Part B - Continuity

How quickly can you reboot a bank if it crashes? And how do you access your accounts in the meantime?

What do you do if the air traffic controllers' computers suddenly become non-responsive - can you put the airplanes in a suspended state?

Why is it that we take for granted that the largest, most critical organizations on earth will keep functioning, 7x24, 365 days per year?

Because Continuity is a business mandate that is built into the computers that can be trusted to keep the world economy, and other critical areas, functioning.

And many of the software solutions that keep mainframes running so well have this reason for their existence: to keep the business running even if something bad happens to the mainframe. That can include backup and recovery, real-time fail-over to another mainframe (likely elsewhere in the world in case of natural disaster), and just the ability to see a problem coming and get everything in place to prevent or quickly deal with it.

This value of the "why" dimension is worth the entire existence of some organizations. If a bank suddenly stopped operating for hours, it would take a potentially lethal financial hit. If it stopped for days or weeks, it would likely go out of business. So its mainframe computers must have the necessary solutions for continuity to ensure that the bank doesn't crash. And yet, so few non-mainframe environments have ever done a successful recovery test of their entire production computing configuration - or even the most critical part - often, even in organizations where they have a mainframe that has done such disaster recovery testing and planning.

That's one of the reasons why the mainframe is such an important part of keeping the world economy running, and why Continuity is such an important value of the mainframe.

Wednesday, November 7, 2012

Plumbing the Taxonomy: Why? Part A - Business Enablement

Welcome to the second dimension of the taxonomy! And this value is the most basic of reasons for running something on a computer, mainframe or not: to do the work of the business that's paying for it!

To be more specific, this is about the applications that provide specific business deliverables that drive revenue, handle accounting, manage products, and basically act as the core competence of the business that's paying for IT.

This is why computers were invented in the first place: to enable organizations to perform their core business activities in a faster, more reliable, automated manner.

Everything else in the taxonomy (with the possible exception of the last value in this dimension) serves the business indirectly, but this value is what it's all about. Therefore, anything that calls itself an "application" probably includes this value of this dimension.

But - and here's the interesting thing - this is only one of several values in this dimension, so there are other business reasons for computing than just performing core competence processing!

However, if you don't have this one, you definitely have a problem with your IT.

Tuesday, October 30, 2012

Plumbing the Taxonomy Part 7: Production

As we conclude the "what" dimension of the taxonomy, the topic of this post is "Production" - the day-to-day running of scheduled, coordinated activities that keep the mainframe and the related business running smoothly.

Many of the solutions that are responsible for this value also have an automation aspect - most notably, those which are considered Workload Automation. In those cases, this is the "workload" value as complementary to the "automation" value of such solutions.

The workload includes running all of the "batch" tasks that production applications require. Other words used for "batch" include "offline" and "background". 

Your average user never knows or thinks about these things, because they never interact with them directly. We're all used to dealing with banking machines, which are online, or foreground, processing. But there's also a need for processing that doesn't require people to intervene - yet does have to run regularly.

As an example, when you receive your utility bill in the mail, it was likely created and printed by programs running in batch mode, which pulled up the account information for you and every other customer, turned it into a bill, and sent it to the printer, all without human intervention.

And it is precisely the ability to schedule such things to run regularly without manual involvement that allows these bills to be created and sent out with such regular reliability.

However, it's not just a single-step process. Often, there are "jobs" (i.e. a set of programs that are designated to run together in the same sequential order every time) that run first to perform one task - such as gathering all the billing information for today for further processing - which are then followed by additional ones that only run if the first ones complete successfully (which they don't always do, for many different reasons).

So workload automation allows for the grouping, scheduling and coordination of many such jobs and applications in a regular, automated manner that only requires human intervention when there's a change or a problem that isn't readily solved by further automation.
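For the technically curious, here's a minimal sketch in Python - not any vendor's actual product, and the job names and commands are purely hypothetical - of the core logic a workload automation tool applies: each job declares its predecessors, and it runs only if all of them completed successfully.

```python
# A minimal sketch (not any vendor's actual scheduler) of the job-chaining idea:
# each job declares its predecessors, and it runs only if all of them succeeded.
# The job names and commands are purely hypothetical.

import subprocess

JOBS = {
    "EXTRACT_BILLING": {"cmd": ["echo", "pull today's billing records"], "needs": []},
    "FORMAT_BILLS":    {"cmd": ["echo", "turn records into bills"],      "needs": ["EXTRACT_BILLING"]},
    "PRINT_BILLS":     {"cmd": ["echo", "send bills to the printer"],    "needs": ["FORMAT_BILLS"]},
}

def run_schedule(jobs):
    status = {}                           # job name -> True (succeeded) / False (failed or skipped)
    for name, job in jobs.items():        # assumes jobs are listed in dependency order
        if all(status.get(dep) for dep in job["needs"]):
            result = subprocess.run(job["cmd"])
            status[name] = (result.returncode == 0)
        else:
            status[name] = False
            print(f"{name} skipped - a predecessor did not complete successfully")
    return status

if __name__ == "__main__":
    print(run_schedule(JOBS))
```

Real workload automation adds calendars, cross-system dependencies, restart points and operator alerts on top of this, but the run-only-if-the-predecessor-succeeded rule is the heart of it.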

Next time, we dig into the "why" dimension.

Monday, October 22, 2012

Plumbing the Taxonomy Part 6: Quality and Lifecycle

So, you wrote a program, tested it, and put it into production, and now everyone's using it to pay their bills.

However, it's part of a larger application with many other programs, each of which has a specific role, such as transferring money between accounts or withdrawing cash from your bank account or any number of other banking functions.

The problem is, a few months later, another program in that application had to be changed to allow for a new feature. And, the structure of your data had to be changed as well, so there was somewhere to keep track of that new feature.

Your program didn't need to use that new feature, but it did need the data, which now had more information, and therefore had a slightly modified structure. So your program had to be changed to use the new data structure.

But, you couldn't just change your program and put it into production and be done, because you had to wait until the data and all the other affected programs had also been changed, and then put it into production all at once.

Then, if even one of those programs turned out to have a significant error that hadn't been found during testing, it could become necessary to back out all the changed programs and data and revert to the previous version so production could keep running smoothly.

Meanwhile, you were also working on further changes to the program that would respond to future functions the application would offer - but not for several more months.

Keeping three concurrent versions of the same program, application, and data structure is quite normal on the mainframe. Often there may even be more. And it's necessary to keep track of each to avoid any possibility of confusion between versions.

The thorough testing of everything to minimize the possibility of problems before you "go production" is a core aspect of the Quality value. The connected value of Lifecycle Management allows for managing and tracking of multiple versions, to be able to develop one or more versions concurrently while having another in production, and even a previous one available in case a backout is needed.

Any true production computing platform needs these features, so that business activities aren't negatively impacted by buggy programs being introduced on the fly, or by an inability to back them out in a timely fashion.
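To make that concrete, here's a minimal sketch, with hypothetical program and stage names, of the bookkeeping lifecycle management performs: several versions of one program exist at once, each assigned to a stage, and a backout simply re-points production at the previous version.

```python
# A minimal sketch, with hypothetical names, of the bookkeeping behind lifecycle
# management: several versions of one program exist at once, each assigned to a
# stage, and a backout simply re-points production at the previous version.

class ProgramLifecycle:
    def __init__(self, name):
        self.name = name
        self.stages = {"development": None, "test": None, "production": None}
        self.previous_production = None          # kept in case a backout is needed

    def promote(self, version, to_stage):
        if to_stage == "production":
            # Remember what is being replaced so it can be restored later.
            self.previous_production = self.stages["production"]
        self.stages[to_stage] = version

    def back_out(self):
        # Revert production to the prior version so the business keeps running.
        if self.previous_production is None:
            raise RuntimeError("no earlier production version to fall back to")
        self.stages["production"] = self.previous_production
        self.previous_production = None

# Three concurrent versions of one (hypothetical) billing program:
billing = ProgramLifecycle("BILL0010")
billing.promote("v1", "production")
billing.promote("v2", "test")
billing.promote("v3", "development")
billing.promote("v2", "production")   # v2 goes live; v1 is remembered
billing.back_out()                    # a bug surfaces, so production reverts to v1
print(billing.stages)
```

Real lifecycle management tools track source, load modules and data structure changes together, but the promote-and-remember pattern above is the essence of being able to back out cleanly.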

Of course, such scrupulous practices have the paradoxical effect of making mainframes so reliable we take them for granted, forget they're there, and then focus on the squeaky-wheel platforms that are constantly crashing and having bugs, and give them all of our attention.

At IBM's System z Technical University at the beginning of October, I gave a presentation entitled "Getting a New Generation to a New Normal on the Mainframe." I had some great discussions in connection with that, and one of the concepts that emerged was the Warren Buffett approach as mapped to computing.

As you may be aware, Warren Buffett is one of the richest people in history, and he got that way by identifying and acquiring companies with excellent fundamentals but significantly reduced valuation.

Well, as evidenced by the benefits of Quality and Lifecycle Management, the mainframe is the only computing platform with such excellent fundamentals that we just take it for granted that it works. The problem is, we take it so for granted that we treat it like it doesn't exist. Talk about a reduced valuation!

So, to apply our analogy, if an organization wants to invest in a platform that will bring them a spectacular capacity to succeed - or if an IT professional wants to make such an investment in their career - there's nothing else out there like the mainframe, which has such amazing quality and so nearly invisible a reputation.

Talk about a ground floor opportunity for prosperity! After 48 years, the mainframe is poised for a tipping point of spectacular "overnight success" - will you and your organization be part of it?

Wednesday, October 17, 2012

Plumbing the Taxonomy Part 5: Optimization

Imagine a computer that normally runs nearly 100% busy all the time without slowing down, uses resources such as storage with maximal efficiency, runs programs that have been tightened up to minimize CPU and memory usage, does backups and restores efficiently and effectively, and keeps network bandwidth down while delivering massive data throughput.

Of course I'm talking about the mainframe, easily the most frugal computing platform in use today. Starting from the early days when available resources such as memory, disk and tape storage, and processor cycles were minimal, it has always been the norm to optimize the usage of the mainframe. Right from the beginning, there have been many ways - and software solutions - for optimizing the mainframe to maximize the value received.

Today, squeezing every last drop of value from the mainframe continues to be a core part of the culture and technology.

That's more important than you might initially think. As I discussed in my CMG article at http://www.cmg.org/measureit/issues/mit54/m_54_11.html, Moore's Law, an observation that has been used to point out that computers keep getting smaller, cheaper, and faster, is winding down. Already, CPU speeds have stopped increasing. The laws of physics tell us that eventually storage and memory capacity growth will also start to plateau.

When that happens, those who are already in the habit of making the most of every resource will be light years ahead of those who have gotten in the habit of letting bigger, faster computers make up for the inefficiency and sloppiness of how their solutions are built.

And, more to the point, the mainframe, which has remained lean, responsible and scrupulous, will be the only platform that is so optimized - right down to the hardware architecture - that the ever-bloating cycles of bigger, slower software on other platforms will result in the mainframe being further and further in the lead.

Let's hear it for frugal computing, and the business-enabling characteristic of optimization so ubiquitous on the mainframe!

Tuesday, October 9, 2012

Plumbing the Taxonomy Part 4: Managing the Context

Context is everything. Literally!

In the case of the mainframe, that means everything from the hardware to the operating system to the subsystems to the applications that interface with the carbon-based peripherals who pay for everything.

Managing it means configuring, securing, logging, monitoring, modeling and reporting on everything from devices to applications.

So, if something isn't functioning properly, then software with the context management value of the "what" dimension will allow you to adjust and fix this behavior.

And, if you're planning to add hardware for uses such as storage or networking, this would be the function that models possible configurations to enable good planning.

In fact, if you want to know whether anything's going wrong right now, or has gone wrong in the past, this is the feature that tells you - or even alerts you so you can fix it before anyone experiences problems.

That last functionality is particularly relevant for keeping the mainframe running smoothly. In environments without quality real-time monitoring, IT management often finds out from the users of their services that things aren't working, and then has to inform the systems and operations personnel so they can fix it. However, where such monitoring is effective, it can be coupled with automation to identify and fix a problem before anyone is affected, and then notify relevant personnel that this has occurred.
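As an illustration only - the metric names, thresholds and "fix" below are invented, not any particular monitoring product's - this short Python sketch shows the detect-fix-notify pattern just described.

```python
# An illustration only - metric names, thresholds and the "fix" are invented -
# of the detect-fix-notify pattern: spot the problem, attempt an automated
# remediation, and only then tell the humans what happened.

THRESHOLDS = {"response_time_ms": 500, "queue_depth": 1000}

def read_metrics():
    # Placeholder: a real monitor would pull these from the system being watched.
    return {"response_time_ms": 620, "queue_depth": 150}

def automated_fix(metric):
    # Placeholder remediation, e.g. restarting a hung task or freeing a resource.
    print(f"running automated remediation for {metric}")
    return True

def notify(message):
    print(f"NOTIFY operations: {message}")

def monitor_once():
    for metric, value in read_metrics().items():
        if value > THRESHOLDS[metric]:
            if automated_fix(metric):
                notify(f"{metric} exceeded threshold ({value}) and was fixed automatically")
            else:
                notify(f"{metric} exceeded threshold ({value}); manual intervention required")

if __name__ == "__main__":
    monitor_once()   # a real monitor would repeat this on a timer, around the clock
```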

Now, before I finish this week's blog, I want to take a moment to give a shout out to Bob Rogers, one of my favorite mainframers, for an excellent brief video in which he explains how western civilization runs on the mainframe. Everyone (not just IT people) should watch this.

Monday, October 1, 2012

Plumbing the Taxonomy Part 3: Applications and Automation

As someone who is familiar with computers, you may be tempted to ask about this value of the "what" dimension, "isn't that everything that computers do?"

After all, applications are written in programming languages, and automation generally includes the option of programming, and computers are all about being programmed for automation of otherwise more manual tasks.

However, this deserves its own broad category, in my opinion, because programming languages and other means of automating activities are a distinct category from the other values in this dimension, with a focus specifically on enabling people to create something versus managing, monitoring and connecting.

Of course, there are many solutions that have multiple values along this axis, so important areas such as Workload Automation and its superset IT Automation will also qualify as "Production" (that's Part 7). In fact, Enterprise IT Automation is an area that I consider significant enough that I'm currently doing some additional writing on it - stay tuned.

Now, the languages used in creating applications and automation range from Assembler - i.e. a text-based representation of the "machine language" that runs the computer - through well-known 3GL's (third generation languages) such as COBOL, to 4GL's (fourth generation languages) such as Natural, Easytrieve and REXX. You'll even find C and Java on the mainframe.

Some of the programs written in these languages originate in the 1960's, and have barely been modified since. Others have been written, rewritten, updated, and continually used throughout the nearly-five-decade history of the mainframe. Certainly, there's a lot of Y2K-proofed code - particularly in COBOL - that has been around a long time, and is of such proven quality that it will likely be around for a long time to come.

Other programs are quite new, as the mainframe continues to take on new workloads as well as supporting the tried-and-proven ones. Java shows up a lot in these new ones.

Automation programs are also an ongoing source of new development and modifications, as the context being automated changes and grows. That's particularly the case given the enterprise-wide nature of leading edge automation, which includes the mainframe along with other platforms for a single point of manageability across IT.

One further note on this topic: while there is significant overlap between products and languages on this value, reducing the number in use is, to put it mildly, non-trivial. For example, while converting all the programs in a given 4GL to run in COBOL or Assembler (in order to eliminate the 4GL and save its licensing costs) may theoretically be possible, the effort to convert and maintain the resulting much-larger programs is often prohibitive.

However, if you have two solutions that overlap in every way, including having programming languages, it can be worthwhile to examine the opportunity for consolidation, particularly if there is not too much in-house programming, or if that programming can be replaced by something simpler and out-of-the-box in an alternative solution.

Monday, September 24, 2012

Plumbing the Taxonomy Part 2: Interfacing with Devices

Last week I elaborated on the first value in the "what" dimension of the Mainframe Analytics Taxonomy.

Before I dig into the second value, however, I have some good news: our official website is finally up and running at http://MainframeAnalytics.com. Feel free to check it out and offer any feedback.

Now, concerning interfacing with devices, of course it is primarily the operating system's role to enable applications to talk with the terminals, tape and disk (and solid state) drives, printers, network, etc. After all, that's one of the main jobs of operating systems: handling the stuff that every application needs but isn't the core functionality that the application is about providing.

However, it's one thing to get data to and from these devices. It's entirely another thing to manage and optimize the usage of these devices.

For example, network problems can occur anywhere between the mainframe and the user, and tracking them down and fixing them can be nearly impossible without network management software. In fact, such software can often detect - or even prevent - such problems before users are affected.

Sharing drives and consoles between multiple different operating system images is also a challenge - for example, making sure that a change made to a drive on one system doesn't accidentally overwrite a change made to the same place on another one.
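Conceptually, the answer is serialization: a system must obtain exclusive use of a shared resource before updating it. The sketch below is purely illustrative Python - on the mainframe this is handled at the operating-system and hardware level, not in application code - but it shows the idea of two systems being prevented from stepping on each other's changes.

```python
# Purely conceptual - on the mainframe this serialization is handled by the
# operating system and hardware, not application code - but it shows the idea:
# a system must obtain exclusive use of a shared resource before updating it,
# so two images can't accidentally overwrite each other's changes.

import threading

class SharedVolume:
    def __init__(self, name):
        self.name = name
        self._lock = threading.Lock()     # stands in for system-wide serialization
        self.contents = {}

    def update(self, system_id, key, value):
        with self._lock:                  # only one "system" may update at a time
            print(f"{system_id} has exclusive use of {self.name}")
            self.contents[key] = value

volume = SharedVolume("PROD01")

def writer(system_id, value):
    volume.update(system_id, "customer-record-42", value)

# Two "systems" trying to change the same place on the same shared volume:
t1 = threading.Thread(target=writer, args=("SYSA", "balance=100"))
t2 = threading.Thread(target=writer, args=("SYSB", "balance=250"))
t1.start(); t2.start()
t1.join(); t2.join()
print(volume.contents)
```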

The software that manages and optimizes these devices is a core part of what makes the mainframe great. While storage, network and other device management software may be available for other platforms, it is still rare for organizations to commit the time and expense to do this as well as is standard on the mainframe.

This is a common enough thread in understanding the value and role of the mainframe that it bears emphasizing: while many of the things that make the mainframe great may also be available for other platforms, it is generally the exception that they are purchased, installed, configured and run properly on those other platforms, but normal to have them on the mainframe. In fact, even if a non-mainframe platform were run with all of the things that make a mainframe great - and even if all those additional systems didn't degrade that platform beyond usability - the cost-benefit equation would still favor the mainframe by a very large margin.

It may be tempting to think of the mainframe as expensive, but for the quality of service, availability, security and reliability that we expect - and which keep the world economy functional - the price we pay is very small indeed, especially compared to the costs of the consequences of doing without.

Sunday, September 16, 2012

Plumbing the Taxonomy Part 1: Handling the Data

Back on February 5, 2012, I published a blog post about the Mainframe Analytics Taxonomy, which divides mainframe software products along two dimensions: behaviors (the "what") and business value (the "why").

In that taxonomy, I briefly listed seven values along the "what" axis (numbered 1 to 7) and six values along the "why" axis (lettered A through F).

As I mentioned in the blog post, a given software product may have more than one value along each axis, though often its current primary focus will be in just one category.

So, I thought it might be of interest to do a quick series of brief blogs just explaining each value.

I'll begin with Data Handling, number 1 on the "what" axis:

Now, this is a very big category, and covers everything from storing and managing data in databases, to processing, combining, sorting, modifying and moving data. It's also one of the things mainframes do best.

Right from the beginning, the mainframe architecture was designed to handle massive amounts of data such as a world-class business might regularly process. One example: all the records a national government has about its citizens' taxes. Another: all the information about a large financial institution's customers and their accounts.

One aspect of this category, databases, has deep roots that go way back. The theory of how to efficiently store and access large amounts of structured data led to the development of some of today's most important databases, including IBM's DB2 and IMS, CA Technologies' CA Datacom and CA IDMS, Software AG's ADABAS, and some distributed databases such as Oracle and Ingres which are available on Linux.

Another is data sorting, which was such an important utility that it was the first product of two important software companies that are still going strong today: Syncsort with their eponymous product, and CA Sort from Computer Associates, now CA Technologies. IBM already offered its own sorting utility for the mainframe, but the business need was such that these optimized alternatives were enough to launch both companies.

There are also many utilities designed to examine, modify and move data in many different ways. Interestingly, this is a good example of being in more than one category, since they often also have value 3: Applications and Automation. A good example of this is applications that take address data and turn it into validated mailing addresses printed on envelopes (or statements visible through windows in envelopes). But I'll get to that in another two blog posts.

Saturday, September 8, 2012

Pricing Intangibles

How do you price intellectual goods that can be manufactured and distributed for no significant cost compared to the cost of their creation? Whether you're talking software, configurations, architectures, or written works, if it can be distributed virtually without ever being placed/printed on physical media (such as a book), what is the basis of value and pricing?

Back when I started working for Computer Associates International in the late 1990's, I tried to explain to my brother why mainframe software was so expensive. After all, it generally didn't have any more lines of code than PC software that sold for a few hundred dollars at the most. At the time, I suggested that the price was generally a fraction of the cost savings that it brought.

I still stand by that assertion, but over the years, I discovered just how hard that is to prove. Once a piece of software has become embedded in an environment over many years, there's no simple way of knowing how much cost it's saving, because removing it could bring everything to a halt, which would be a completely different order of magnitude of cost.

Interestingly, pricing written works can involve similar issues. If someone asks me to write up a white paper, article, or recommendation, should I be paid by the word?

I'm reminded of a joke, a quotation and an anecdote.

The joke is about a highly-experienced mechanic who is faced with a car that has stopped working, and no one can figure out why. They ask if he'll fix it, and he says he will - for $1,000. Eventually, his price is accepted, and he goes to work.

To the external observer, the mechanic appears to be dancing around the car in a manner reminiscent of Mr. Bojangles - crouching down low, leaping up high, and almost seeming to be performing a rain dance of sorts as he looks over every nook and cranny of the car. Then, he suddenly takes out a ball peen hammer, strikes the car engine with an exacting blow, and pronounces it fixed.

The owner tries it out and, sure enough: the car now works. Then the mechanic presents his bill for $1,000.

Skeptical, the owner asks for a price breakdown. The mechanic replies, "That's $1 for hitting the car, and $999 for knowing where to hit it."

The quotation, attributed to many people but probably most famously to Mark Twain, is: "I would have written a shorter letter, but I did not have the time."

The anecdote is about the production of enameled steel pots and pans during the Soviet era. Apparently, it became harder and harder to find small ones while there was an overabundance of large ones. This was not because having larger ones somehow improved the cooking experience, but because the factories that produced them were incented by the amount of material used rather than by any measure of usability or rates of sales.

What all three of these have in common is that the value of things is not always connected to the simplistic, some might say "common sense," measures we apply to commodities. In fact, other than for the most tightly-controlled and homogeneous commodities, I suggest size is rarely a good measure of value.

One of my favorite examples of this is the Windows operating system, which I've heard may be the largest software system ever created in terms of sheer number of lines of code. And yet, you can buy a copy for a few hundred dollars, and even get a PC thrown in (or vice versa). Compare that to z/OS, IBM's premier mainframe operating system, which costs a few orders of magnitude more than that. Yet, I doubt IBM would claim there's a commensurately larger number of lines of code; likely, there are fewer.

So, how do you price mainframe software in a way that is fair? I suppose it makes sense to take a lot of factors into account, from the capacity of the mainframe (they vary greatly in capacity), to the cost savings and benefit that a typical customer is likely to derive, to the cost of creating, maintaining, and deriving a reasonable profit from a piece of code that is only installed on a portion of the 10,000 or so mainframes in use in the world today.
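As a purely illustrative sketch - every number below is invented, not drawn from any actual vendor's pricing - here's the kind of arithmetic that approach implies: bounding a price between cost recovery across a small customer base and a share of the value delivered, rather than counting lines of code.

```python
# A purely illustrative sketch - every number here is invented - of value-based
# pricing bounded by cost recovery below and a share of customer savings above,
# as opposed to pricing by lines of code.

expected_customers = 500             # hypothetical portion of the ~10,000 installed mainframes
development_cost   = 20_000_000      # hypothetical cost to create and maintain the software
target_margin      = 0.30            # hypothetical reasonable profit margin

# Floor: what each licence must contribute to recover costs plus the margin.
floor_price = development_cost * (1 + target_margin) / expected_customers

# Ceiling: charge no more than a fraction of what a typical customer is likely to save.
estimated_annual_savings = 2_000_000   # hypothetical savings for a typical customer
value_share              = 0.25        # capture a quarter of the value delivered
ceiling_price = estimated_annual_savings * value_share

print(f"cost-recovery floor per customer: ${floor_price:,.0f}")
print(f"value-based ceiling per customer: ${ceiling_price:,.0f}")
# A capacity-based price (bigger machines pay more) would then be scaled between
# these bounds - which is why the result bears no relation to lines of code.
```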

That's a very different approach from pricing generic consumer software, of which millions (maybe even billions?) of copies are sold.

Likewise, when pricing the generation and delivery of other intellectual property that has great value to a relatively limited but very economically significant audience, it seems to me that paying per word is like asking for excessively large cookware, rather than looking for that perfect touché in as few effective words as possible.

Of course, like any good question, this is more of a journey than a destination. But it's one of the important questions in the mainframe world, and one that is of particular interest to me as I look to generate written works of significant value, and don't think it appropriate to charge by the word, since I can often achieve more with a small number of well-chosen-and-aimed words than with a 10-page white paper.

What do you think about the pricing of such intangibles?

Saturday, September 1, 2012

"F*** it, I'm buying a server" - the story of distributed IT

"The mainframe whisperer" - that's what my nephew Alex told me I should be after I described the culture of mainframers and explained why it clashes with the ambient business IT culture of immediate results.

Then his friend Stefan, who is tasked with getting results and has to deal with mainframers, told me of his experience of frustration with rigid behavior, leading to the memorable assertion: "F*** it, I'm buying a server." I told him that phrase was the history of IT since the 1980's.

The problem, of course, is that the rigid behavior that frustrates people so much is the reason why mainframes work so well while every other platform keeps crashing, being hacked, and generally failing the test of trustworthiness.

For me, this is the fractal meeting point of ocean and shore, of new and old, of aspiration and reality. And, somehow, mainframes have become a part of experiential reality, shoring up the world economy, while most of IT has continued to float on the waves of aspiration.

As I begin to spin up Mainframe Analytics, our blog, our website, and our business activities again after the passing of my wife earlier this year, this seems like a good leaping off point for this blog.

What can be done to bring together that which works with those who plan? Well, at SHARE in Anaheim, I had the opportunity to give the MVS Program keynote, in which I talked about this topic. And my answer, as you can see in the presentation posted at https://share.confex.com/share/119/webprogram/Handout/Session11474/2012-08-06%20Harbeck%20MVS%20Keynote.pdf, is to get to know your mainframe environment ("know thyself"), optimize it for current needs ("get a haircut") and enthusiastically tell the world how great it is ("fall in love").

And that's what I'll be doing on this blog and in my role as Chief Strategist for Mainframe Analytics. Stay tuned!

Saturday, April 7, 2012

x30 Years of Great Ideas and Counting

Today is the 48th anniversary of IBM's announcement of the System/360 mainframe on April 7, 1964, if you count in decimal like many mainframe applications and users do. However, if you count in hexadecimal, or base 16, like many computer scientists, including mainframe systems folks, do, then today is the 30th (or "x30"th) anniversary of that announcement. This seems like a good opportunity to reflect on some of the great innovations and ideas that have continued to come from the mainframe, up to and including the most recent SHARE conference in Atlanta.

First, though, let me offer a tip of the hat to Pandoria13 for comments received on the last blog. Also, I'll point out that I've enabled monetization of this blog, and would be interested in feedback on this step.

Of course, the System/360 mainframe did not emerge in isolation - rather, it arose as the culmination of many years of advancement and culture, drawing from earlier mainframes and ideas, as the "love child" of IBM, SHARE, and the organizations that were taking the journey of defining what an ideal business computer should do and be.

In fact, unlike UNIX, Linux, Windows and Macintosh, technologies around which a culture formed, the mainframe was the manifestation of an already-existing culture which has continued to be a core part of that platform.

So, some of the original innovations associated with the mainframe in the 1960's and 1970's were also present, at least as concepts, in other earlier and competing platforms at the time. But they generally received their most enduring manifestation in what became today's leading-edge mainframe, including:

  • Virtual memory
  • Virtual machines
  • Full system integrity and security

And, of course, many, many more.

Which leads us to the latest and greatest insights at SHARE. Though I must confess that some of the best ones I got were from an excellent interview with John Ehrman, a father of modern mainframe assembly language, for the mainframe history book that Dr. Steve Guendert and I are working on. Learning about the sources and outcomes of decisions about how the underlying language of the machine grew and adapted was fascinating.

Jon Petz's keynote on how "Boring Meetings Suck," followed by a session later in the afternoon where he elaborated, was also of interest, and a motivator to be more effective in business, including mainframe IT.

Of course, there were many interesting technology keynotes, sessions and discussions.

Two of the sessions that most grabbed my attention were a Wednesday morning one for new mainframers, where a room full of high school and university students got to learn about the mainframe thanks to the IBM Academic Initiative and some related presentations (including a quick one I was able to give about zNextGen), after which they got to check out the Technology Exchange; and a session about local mainframe user groups - which I hope leads to further discussion.

The evening receptions were great opportunities to network and share and learn information about the latest and greatest on the mainframe.

In fact, the more I think about it, the more I realize that many of the sessions make good starting points for future blogs, so I think that's where I'll leave it for today... stay tuned!

Wednesday, March 21, 2012

Leading Big Iron Edge

SHARE in Atlanta last week was an excellent event. Not only did I catch some great keynotes and many other valuable sessions, not only did I get to be with hundreds of my friends and favourite mainframers including many of the key players in the mainframe ecosystem, but I had the opportunity to see some of the many ways that the mainframe continues to be the most technologically-advanced business computing platform around.

But before I dig into the details, a couple of other notes, beginning with thanks for the comments on last week's blog post from my friend Marcel den Hartog (zMarcel) in the Netherlands and from my colleague Jerry Seefeldt with whom I attended a great session/discussion about local mainframe user groups at SHARE. It's inspiring to see the growing interest in local mainframe user groups, and I'm looking forward to seeing how they connect up and what role SHARE is able to play in it.

The other thing I wanted to mention is that I met with my friends and colleagues from key mainframe publications at SHARE, including MainframeZone and the associated magazines z/Journal and Mainframe Executive, and IBM Systems Magazine (mainframe edition). I'm looking forward to continuing to publish articles and such with all three. And, I'd like to take the opportunity to refer you to the video series "Big Iron: The Mainframe Story (so far...)" which I worked on with the folks at CA Technologies, IBM Systems Magazine and various mainframe luminaries to produce. It's a great way to get more familiar with the mainframe.

Once you've seen those videos, you're ready to find out about just how leading edge today's mainframe is. I certainly was as I roamed the Technology Exchange Expo floor, and I was impressed with what I saw.

As always, IBM had the biggest booth, and in addition to having an actual mainframe running on the floor, they had other technology and plenty of people and stations to tell the world about all the great things happening on the mainframe that are keeping it at the forefront of high-end business computing, including Linux-on-z/VM cloud computing. One of IBM's biggest emphases was on their Smarter Planet initiatives.

Fortunately, the mainframe is much more than just IBM: it's an ecosystem with many technology vendors and the largest organizations on earth as the customers. So, each vendor had one or more (sometimes many more) things to show the world about how they're making the mainframe better all the time.

Many of the products are about enabling the mainframers of today and tomorrow to be more effective, including CA Technologies' role-based CA Mainframe Chorus workspace, and Chicago-Soft's Application Knowledge Capture™ service to retain valuable understanding from imminent retirees.

Of course, forming and educating a new generation of mainframers is a key focus, so Interskill was there, and CA Technologies announced scholarships for their Mainframe Academy.

There were also solutions for bringing greater end-to-end integration and functionality between the mainframe and the rest of the computing world, such as the Mainframe Event Acquisition System™ (MEAS™), which provides integration of real-time mainframe event information with McAfee's Security Information and Event Management (SIEM) platforms.

One company, UNICOM® Global, even had their Founder, President & CEO Corry Hong at their booth, and giving presentations, in order to introduce everyone to all the solutions - both established and new - that they're involved with.

In addition to the content on the exhibit floor, there were also various other news items and articles in the SHAREnews Dailies. And there were many discussions, including a focus group about "Big Data."

Of the many interesting sessions, one of my favourites was Cheryl Watson's Hot Flashes, which offered many of the latest and greatest tidbits about how to make mainframes run even better.

After a week at SHARE, it's clear that the mainframe is constantly staying ahead with world-class functionality and all the most modern technology to respond to the business needs of the largest organizations on earth, today and for the foreseeable future.

And SHARE is the place where it all connects.

Next blog post, I plan to share some interesting thoughts and insights from the various sessions and discussions I enjoyed at SHARE.

Monday, March 12, 2012

SHARE

Back in the earliest days of business computing, when all printing and displaying of text was uppercase, it became apparent that the users of this technology would benefit from getting together to share their innovations and lobby for the improvements that would best respond to their business needs. Thus it was that, in 1955, nine years before the announcement of the System/360 mainframe, SHARE was born.

57 years later, the semi-annual SHARE conference has now opened in Atlanta with a great keynote by Jon Petz about how to survive and improve business meetings, to be followed by a week full of technical and how-to sessions.

When SHARE was founded, the name was what they did - not an acronym, but uppercase because computers didn't offer lowercase letters back then. One of the most important things that emerged from their first nine years of lobbying for their business computing requirements was the announcement of IBM's System/360, the epitome of business computing and progenitor of today's mainframe.

In keeping with SHARE's business orientation, the Monday and Tuesday of the conference now also include a parallel ExecuForum for key IT decision makers to meet and discuss issues from a business perspective. Many of these folks started their careers as mainframe technical experts before moving eventually to their current responsibilities.

One of the most important things SHARE has always offered above and beyond its sessions is an opportunity for business computing professionals - particularly those responsible for large-scale business IT that includes mainframes - to network and do dynamic problem solving among peers. This hearkens back to the origins of SHARE, and it's something you'll see and hear in the hallways between sessions, at the various receptions, on the exhibit floor, in the session rooms before and after the presentations, and at meals and coffee meetings between colleagues who only ever see each other at SHARE - sometimes even if they work for the same organization!

While SHARE was the first computer user group ever founded, it didn't take long for others around the world to follow suit, so sister organizations in Europe and Pacific Rim countries have also existed for most of SHARE's history.

However, SHARE is the pinnacle - or, as I prefer to think of it, the nexus - of mainframe and large enterprise computing user organizations. And it's where you'll meet the key players - both people and organizations - in the ecosystem.

Interestingly, while SHARE is now 57 years old, it's actually getting younger. The number of first-time attendees seems to increase each time. Many of these newbies will associate with the zNextGen project at SHARE, but regardless of whether or not they do that, you'll see them taking every opportunity to chat with and learn from the many highly-experienced attendees who are delighted to be able to mentor them.

Of course, the session content is of significant value all by itself, so many people who can't be at SHARE in person are virtually attending SHARE Live! from Atlanta to benefit from the keynotes and other valuable sessions.

However, the opportunities that come from attending in person are even greater, and they include not only networking and mentoring, but also building lasting friendships that can be there for you at a time of need. I have personally experienced this.

Being at SHARE in person has yet another benefit: the Technology Exchange Expo, where you can meet people from just about any organization that wants to be seen as a credible part of the mainframe ecosystem, and learn about the latest in business information technology.

For those who wish to take their benefits even further, there are many volunteering and speaking opportunities, as SHARE is a volunteer-run organization (with the paid assistance of an organization that handles many of the logistical details, of course). That means that, whether you'd like to develop your speaking, people, or organizational skills, there are ways to do so with SHARE.

SHARE also has its share of traditions, from pins and ribbons on badges, to receptions, to group dinners and networking events, to special sessions that everyone tries to attend. One of my favourites has always been "Cheryl Watson's Hot Flashes," which is held at 9:30 on Friday morning and contains a summary of everything significant happening in the mainframe ecosystem, much of which she has gleaned from the content of the week leading up to her session.

Why all this detail about a user group and educational conference? Because, until you've understood the mainframe community and culture, you can't possibly understand the platform. The mainframe is much more than merely technology: it's at the beating heart of the key organizations in the world economy, and the beating heart of the mainframe is the people that make it run. And those people can be found at SHARE.

In my next blog post, inspired by all the leading-edge technology being announced and displayed at SHARE, I intend to write about some of the important innovations currently happening on the mainframe.

Wednesday, March 7, 2012

Easy Does IT

In my last blog post, I discussed the generations of mainframers, and called the current new generation of the technical experts that will act as the beating heart of this platform "Generation Easy." The problem is, getting this generation fully in place before the previous ones depart is not turning out to be particularly quick or easy. This post is about how to make it so.

First, though, I should mention that I also had some nice and appreciated comments from friends of mine in generations Charlie and Easy - thanks!

In 2005, I wrote a whitepaper and an article, and gave a presentation at SHARE, about the need to get a new generation in place on the mainframe, and what steps were necessary to do so. Since then, I've continued to develop my thinking and experience on this, and am continuing to write more articles on the topic as well.

So, this blog post is a good opportunity to sketch out the basics. To find out more, you can read my articles at MainframeZone.com and the related magazines (z/Journal and Mainframe Executive), or contact me for a consultation or presentation.

The first step is to hire new people while you still have experienced ones around to teach and mentor them, and do a proper transfer of responsibilities.

In-house projects and mentoring, including tracking down and updating obsolete configurations and programs, are important activities to get them going.

However, before that happens, you'll likely need to get your new people introduced to the mainframe, unless you're lucky enough to hire people who have done some initial learning at universities and colleges working with the IBM Academic Initiative. Even in those cases, though, some additional training can be helpful. There are a number of good options for this. Three that I'm familiar with (though this is not an official endorsement) are:

1) Have them join the z/NextGen project of SHARE (free). This will give them the opportunity to start connecting and learning, and also give them access to a select number of mainframe introductory eLearning courses made available for free to z/NextGen members by the folks at Interskill.

2) Go for the whole meal deal and sign them up for the complete selection of eLearning courses from Interskill.

3) Sign them up for CA Technologies' Mainframe Academy.

Of course, there are other options for introductory courses as well - and, ideally, if your organization is big enough to have its own mainframe, you should also have some of your own in-house introductory courses to help people get familiar with your particular context.

I can also strongly recommend self-study to complement this, and IBM Redbooks are excellent resources for this purpose.

Now, once your new people have the basics in place and have begun being mentored, getting to know your environment and doing introductory projects, the next important thing is to get them connected and acculturated into the mainframe culture. If you've already signed them up for z/NextGen, you've made a good start. Getting them involved with such communities is important. The follow-on step is to send them to a mainframe educational conference such as SHARE.

In fact, if you happen to be in the Atlanta area (or have the financial and schedule flexibility to make a last-minute travel booking), I can strongly recommend sending your newbies to attend SHARE in Atlanta next week. Or, you can sign them up to attend virtually with SHARE Live! from Atlanta.

In addition to the above, you'll want to update your local mainframe technology and culture to be more compatible with this new generation, and the one that follows it. I intend to dig into that in future blog posts. And, of course, there's room for plenty of elaboration on the above basics.

Next week, however, I plan to blog more about SHARE.

Tuesday, February 28, 2012

Generations

The mainframe ecosystem has had several generations of people in charge of it, each learning from the previous while bringing their own abilities, insights, and eventually experiences. While it's somewhat arbitrary to draw a line between each of these, it can help in understanding where we are today, so let me give it a try.

But first, I'd like to thank my two commenters from last week's post: Jim Michael, my friend and mentor and someone who is approximately in or next to my "generational band" though much wiser and slightly more chronologically gifted, and Kristine Harper, my friend and a leading member of the current new generation of mainframers (Gen-E referred to below).

Now, I'd say the first generation of mainframers are those who started their careers before the advent of electronic computing. Let's call them "Generation Able." Many of them had been in the military during World War II, and brought that culture and scrupulousness to their establishment of the culture of computing, and eventually to mainframe computing.

I'll designate the next one "Generation Baker," grouping those who started their careers on early computers and ended up spending most of those careers on the mainframe.

The third one, "Generation Charlie," are those who started out on the mainframe when it was already in place and running - some time in the mid-to-late 1960's, the 1970's, and 1980 to 1982. For them, computing was mainframe was computing for the formative years of their careers.

In 1983, Time Magazine named the personal computer its "Machine of the Year" and the world of computing changed forever. Suddenly, everyone spoke of the mainframe in the past tense as they looked to the future of computing on other platforms. Those hardy (or foolhardy, depending on whom you ask) few who went into mainframe careers were seen as non-mainstream, to put it politely. I was among them. We are "Generation Dog," and I include everyone who came on board before Y2K preparations took off, around 1997. We are few in number, because many from the previous generations were still around, organizations - thinking the mainframe was going away - were not investing in building a new generation on it, and mainframes were requiring fewer and fewer people to keep them running, even as they continued to grow in capacity, reliability and maintainability.

Y2K changed everything, as organizations realized they had invested too deeply in highly-functional mainframe environments to simply move off, so they had to update their code to survive the turn of the millennium. The world was slowly waking up to the fact that the mainframe had become a fixed foundation of large-scale IT. Those who have begun their careers since this time, while still slim in numbers, know they have brilliant careers ahead of them, being responsible for the most important computing platform on earth. I call them "Generation Easy."

Suddenly, everything is changing, and the ultimate generation is about to arrive: "Generation Fox." They will inherit a mainframe unlike that of their predecessors, and take part in its becoming so. The mainframe will be simpler to maintain, manage and deploy new applications for than ever, and will likely show itself to be the optimal platform for top-quality cloud computing. Unlike their technically-oriented predecessors, many in this generation will be as focused on business results as on the bits and bytes of how-to. And, if (as I expect) a tipping point of rediscovering the mainframe is reached, this new generation will also balloon as organizations invest in using the mainframe for the newest and most leading edge applications.

However, they're not here yet, and the first five generations are made up of highly-competent, trustworthy, hard-working technologists who have passed down the practices, cultures and user groups that have become the infrastructure of this essential platform. We will continue to need their ilk at the foundation of mainframe computing, regardless of how many of the new business-oriented generation flood in. So, my advice to organizations looking to the future of their mainframes is: hire quality now, mentor them, get them tried and proven, and then you'll be able to ensure that the mainframe continues to run well as all the Gen-F's start to flood in. Because your mainframe's not going away, but Generations A through D are, and soon.

Next week, I'll talk about some of the ways to get a new generation in place on time to respond to the imminent challenges and opportunities on the mainframe.

Monday, February 20, 2012

...Then a Miracle Happens

A favourite cartoon of mine shows two academics at a chalkboard, with a complex set of equations on the left-hand side and a simple, elegant solution on the right-hand side. One of them is saying to the other, "I think you need to be more explicit here" while indicating the bridge between the two sides: a cloud containing the words, "Then a Miracle Happens." In many ways the mainframe is like this: with all the wondrously complex things from hardware to applications running together in unison to deliver business value, it's easy to forget that none of it would be possible without the central part that makes everything happen - the people and culture of the mainframe.

Of course, long before the first computer, let alone the first mainframe, there were people. People invented the mainframe, and gave it its culture. People made and improved the hardware, operating systems, middleware and applications. People learned how to use the mainframe, building on abilities honed in other contexts, including the military during the Second World War. Those same people worked together to establish the culture of the mainframe, including everything from scrupulous planning and change control to a special way of saying and seeing things unique to the mainframe culture.

If you've read any of the blogs I wrote before starting Mainframe Analytics (for example, "How to Talk Like a Mainframer"), you'll know that one of my favourite examples of the culture passed down from WW II military veterans is the words mainframers use for the first six letters of the alphabet: Able, Baker, Charlie, Dog, Easy, Fox (rather than the current Alpha, Bravo, Charlie, Delta, Echo, Foxtrot). These were the standard in WW II, and were in habitual use by the earliest mainframers. Consequently, they got passed down through the generations, and continue to be widely used today.

Another thing that came down the generations is SHARE, one of the remaining great mainframe user groups, and in many ways the nexus of the lot. Founded in 1955, nine years before IBM announced the System/360 - the ancestor of modern mainframes - it was intended to enable users of IBM's business computers, including early mainframes, to share information in order to ease the task of getting value from them. Today, at 57 years old, SHARE is still going strong - in fact, its next meeting will be in Atlanta in March.

Now, there's a lot to be said about the culture of the mainframe and the various generations of mainframers - in fact, I've written quite a few articles on the topic (check out http://mainframezone.com for a good number of them). So, rather than making this post a big long one that talks about all of them, I'll stop here for this week, and pick up next week with a discussion of the state and future of the mainframe workforce.

Tuesday, February 14, 2012

App Location

Why do we use computers? What led to them being developed in the first place? What is it they do that we can't just have lots of people do instead? The simple answer is: we use computers for the applications that run on them, which do valuable things that would be impossible, unpleasant or prohibitively expensive to have people do instead.

By now, most of us are used to the concept of "apps" - those single-user-focused applications that run on personal computing devices such as smart phones. Of course, "app" is just an abbreviation for "application" which is what mainframes were built to run.

The journey of recognizing the "application layer" of computing as distinct from the rest of the technology has been a long one, and it could be argued that it will never be entirely complete, because some people will always buy technology for the sizzle (i.e. bells and whistles) rather than the steak (the value it actually brings). However, on the mainframe, this journey substantially concluded a long time ago.

Today, the applications that run on the mainframe handle business at a global scale. They do billing and accounts receivable, HR, decision support, customer account handling, large-scale postal sorting, addressing and stamping, and many, many other business functions that require a massive capacity for data and throughput with total reliability.

As with smartphones and PCs, some mainframe applications can be bought from vendors, and may even run with very little customization. However, there are also many applications that are highly customizable - ERP systems (i.e. Enterprise Resource Planning, such as SAP, PeopleSoft, Oracle Financials, etc.) are a good example of this kind.

The nice thing about those vendor-supplied applications is that they're kept current and maintained, so the customer's job is just to keep installing and configuring the latest upgraded version - which is a lot more work than it sounds like, but a lot less work than writing and updating their own.

However, one of the most important kinds of application on the mainframe is the in-house application. These are the trade-secret, competitive-advantage, bread-and-butter applications that do unique things that no other organization does in exactly the same way. In fact, they generally embody an organization's essential identity. They have been written and maintained in-house, often for decades, and they provide core functionality, which is often built on and extended by distributed applications that sink deep roots into them.

Interestingly, while these can be some of the most valuable applications, they're also some of the most problematic: as they get more and more established, it becomes harder and harder to change them to respond to new needs and opportunities without adversely affecting other mainframe and non-mainframe applications that rely on the way they behave.

This often results in very complex circumstances when two large organizations merge, and they have applications with overlapping functionality. Trying to modify them to work together can be something of a nightmare, complicated by the fact that they also use data sources (usually databases) that have completely different natures as well. This is the point at which frustration may set in, and tried-and-proven applications may be set aside for vendor-provided solutions, often on non-mainframe platforms. Which, in my opinion, is a shame, given the functionality, reliability and competitive advantages that can often be sacrificed for the sake of short-term convenience.

There is a whole range of solutions that exist to enable "modernization" of mainframe applications that have been around long enough to get into an inertial funk. These include: lift-and-shift solutions to run mainframe applications mostly unmodified on other platforms; solutions that reverse engineer applications into a business rules representation for re-generation to the platform (and programming language) of choice; and solutions that build connections into and around the established ones to enable building on their functionality (on and off the mainframe) without substantially modifying them.

In any case, there are many billions of lines of programming in the mainframe applications that run the world economy, and they have proven themselves over the decades to work very well, so they're generally not going away any time in the foreseeable future. Which means that it's time for responsible people to start making long-term plans to maximize the benefit of their mainframe applications to their organizations, rather than just taking them for granted and trying to squeeze value out of them without sufficient care and feeding.

Care and feeding... yes, that's a very important topic, and not just for the mainframe hardware and software, because an essential part of what makes the mainframe great is the human side: people and culture. I'll write about that next time.

Sunday, February 5, 2012

Middle Where?

Before I start digging into the software between mainframe operating systems and their applications, I'd like to begin today's blog by thanking "zarchasmpgmr" aka Ray Mullins for his comment on last week's blog post about operating systems. I acknowledge that there have indeed been non-IBM mainframes with varying degrees of compatibility with the IBM ones over time, that they have run various versions of IBM's operating systems as well as their own, and that it's important to remember some are still out there. In fact, these are topics worthy of their own blog posts in the future, but for now, suffice it to say, "good point - thanks!"

I'd like to say "thanks!" to my friend and colleague Tim Gregerson as well for his thoughtful comments on last week's post.

I'm also reminded that there's something called LPARs (pronounced "el pars") or Logical PARtitions which run under Processor Resource/Systems Manager (PR/SM - pronounced "prism") on IBM mainframes. LPARs form a layer between the hardware and the operating system, allowing the mainframe to be divided into a small number of separate, concurrently running pieces. Today's mainframe does not allow OS images to run on its "bare metal" but requires that they either run directly in an LPAR or indirectly under z/VM, which would then run on an LPAR (directly or indirectly). z/VM can then allow a very large number of concurrent OS instances to run under it as "z/VM guests."
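
If it helps to picture those containment rules, here's a minimal sketch in Python - purely illustrative, with class names of my own invention rather than anything from an actual IBM interface - of the layering just described: OS images run only in LPARs, and z/VM (itself in an LPAR) can host many guests.

    # Illustrative model of the layering described above. All names are
    # invented for this sketch; they are not part of any IBM API.

    class OSImage:
        def __init__(self, name, kind):
            # kind might be "z/OS", "z/VSE", "z/TPF", "z/VM" or "Linux"
            self.name, self.kind = name, kind
            self.guests = []  # only meaningful when kind == "z/VM"

        def add_guest(self, guest):
            if self.kind != "z/VM":
                raise ValueError("only z/VM can host guest OS images")
            self.guests.append(guest)

    class LPAR:
        """A logical partition created by PR/SM; it hosts exactly one OS image."""
        def __init__(self, name, os_image):
            self.name, self.os_image = name, os_image

    class Mainframe:
        """The physical machine: OS images never run on bare metal, only in LPARs."""
        def __init__(self, lpars):
            self.lpars = lpars

    # One LPAR running z/OS directly, another running z/VM with Linux guests.
    zos = OSImage("PROD1", "z/OS")
    zvm = OSImage("VMHOST", "z/VM")
    for i in range(3):  # in practice this could be hundreds of guests
        zvm.add_guest(OSImage(f"LINUX{i}", "Linux"))

    box = Mainframe([LPAR("LPAR1", zos), LPAR("LPAR2", zvm)])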

OK, enough delays, it's time to get into one of the most interesting aspects - or rather, sets of aspects - of the mainframe context: the software that resides between the operating systems and the applications. Of course, such software exists on non-mainframe platforms as well, and increasingly is part of enterprise-wide solution sets. However, I'll keep it simple for the time being and focus on the mainframe. In future posts I can discuss how this all fits in the entire IT (Information Technology) enterprise.

I have worked with this "middleware" most of my career, and I've seen many ways of classifying and grouping it (aka taxonomies).

In my experience, the most common taxonomy is by primary function. So, a given piece of software might be classified as storage management, workload automation, performance management, or system automation (or one of a number of other primary roles), but it generally can't be more than one of these. Keep it simple and focus on core competence, you know.

A major problem with that approach is that software is flexible, adaptable, and multidimensional, and often starts out doing one thing and morphs with customer demand into something else entirely. Two examples of this are a mainframe database that began its life as a data communications product, and a performance monitor that began its life as an alternative to IBM's SDSF - a tool for watching tasks run and looking at their output. Both of these changed over time and became what they are today, while in at least one case still performing the original role as well.

It's also possible to have multiple products that have different primary roles but so much overlap in their other dimensions that at least one of them can be seen as redundant for cost optimization purposes.

In fact, between the complex historical journey a given piece of software takes and the many different uses to which it is put, any tree-like taxonomy that insists it is "this and not that" misses entirely the dynamic and adaptable nature of such software.

But we can't even begin to optimize and properly manage an environment if we don't have a straightforward understanding of its structure and elements.

For that reason, rather than beginning with the traditional tree-structured explanation of the software in the middle, I'm going to use a dimensional approach - that is, I'm going to try to identify most of the key dimensions of what we use this software for, recognizing that most pieces of software have more than one of these dimensions.

This is more than just a tool for understanding what's out there. As I develop this in future blog posts it should become clear that this is part of a much more effective approach to optimizing a mainframe environment by identifying which dimensions are needed, which exist, and how much unnecessary overlap can be trimmed.
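
To make that concrete, here's a toy sketch in Python - the product names and dimension labels are invented for illustration, not drawn from any real catalogue - contrasting a tree-style classification, which allows one pigeonhole per product, with dimensional tagging, which makes overlap visible and therefore trimmable.

    # Tree-style: each product gets exactly one category.
    tree_taxonomy = {
        "ProductA": "performance management",
        "ProductB": "system automation",
    }

    # Dimensional: each product is tagged with every dimension it actually covers.
    dimensional = {
        "ProductA": {"performance management", "system automation", "reporting"},
        "ProductB": {"system automation", "reporting"},
    }

    def overlap(catalog, x, y):
        """Dimensions two products have in common - candidates for trimming."""
        return catalog[x] & catalog[y]

    print(overlap(dimensional, "ProductA", "ProductB"))
    # -> {'system automation', 'reporting'}: overlap the tree view hides entirely,
    #    since there the two products appear to have nothing in common.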

Now, the first thing you'll discover when you try to divide up and classify mainframe software is that there is no clear dividing line between operating system, middleware, and applications. Generically, the operating system talks to the hardware and manages the mainframe's behavior; applications talk to users and provide results with tangible business value; and the middleware enhances the operating system and enables the applications.

But there are pieces of middleware that are also implicitly embedded in the operating system and the applications. In fact, historically, many pieces of middleware emerged from one of these sources and gained lives of their own.

A great example of this is software that sorts data. Originally, IBM included a utility to do this with the mainframe operating system. However, in 1969, IBM began to sell mainframe software separately from the hardware (a strategy known as "unbundling"), opening the door to competition. As a result, new third-party utilities were written as alternatives to the IBM-provided sorting utility, leading to the rise of important software companies still in business today. That was only possible because sorting software emerged from being included with the operating system (which itself emerged from being bundled with the hardware) and became a middleware solution in its own right.

OK, then, what are the dimensions of middleware on the mainframe? First, let me offer a very basic behavior-oriented set:

1) Data Handling: Modifying, moving and managing data.
2) Device Interfacing: Interacting with devices, including storage, printing, networking and user interaction.
3) Applications and Automation: Programming, automation and reporting (including development, maintenance, interconnecting/repurposing and updating).
4) Context Management: Configuring, securing, logging, monitoring, modeling and reporting on everything from devices to applications.
5) Optimization: Optimizing the execution time and/or resource usage of mainframe systems and applications.
6) Quality and Lifecycle: Change, configuration, quality enablement and lifecycle management.
7) Production: Production/workload planning and control.

At this point, I hope you're saying something like, "but what are the actual solutions and what do they do???" which, in my opinion, is almost the right question. Ideally, you're saying something even more like, "but what business needs are responded to by solutions in this space?" which is almost exactly the right question - and close enough for now.
Because the essential deficit in all the various classification schemes - not that it invalidates them - is that they don't map directly to business value in a way that allows for an optimal solution set and configuration, both in terms of costs and related contractual desirability, and more importantly in terms of enabling your business to prosper.

Now, future blog posts can talk about things like inertia, overlap, changing requirements and obsolete configurations. However, I'll conclude today's blog post with another, business-oriented list that focuses on the business needs that these solutions respond to. Each of the needs below is a dimension that can be met by one or more solutions, and each of those solutions has functionality along one or more of the behavioral dimensions above.

A) Business Enablement: Full, appropriately-controlled availability of reliable data and results required for business decisions and processes (such as financial activities).
B) Continuity: The ability to detect, prevent, and recover from problems, errors and disasters that could otherwise interrupt the flow of business.
C) Security, Integrity and Compliance: Provable, continuous security and integrity minimizing potential liability and ensuring compliance with laws and regulations governing the proper running of an organization.
D) Cost-Effective Operations: Cost-effective, comprehensive, responsive and timely operation of the computing environments, applications and resources needed to effectively do business, creating a layer of conceptual simplicity.
E) Analysis and Planning: Enablement of IT architecture and resource planning for current and future organizational success.
F) New Business Value: Facilitating new business initiatives and value by enabling new applications and data, and/or building on, connecting to and/or optimizing existing application functionality and data.

Taken together, I consider the two above lists of dimensions the foundation of version 1.0 of the Mainframe Analytics Middleware Taxonomy. I look forward to comments and suggestions to refine it further before using it to map individual solution areas.
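
To give a feel for how the two lists might be used together, here's a rough, hedged sketch in Python - the solution names and their mappings are entirely made up for illustration - that records each solution's behavioral dimensions and the business needs it serves, then reports uncovered business needs and behavioral overlap.

    from collections import Counter

    # The two lists above, as data.
    BEHAVIORAL = {1: "Data Handling", 2: "Device Interfacing",
                  3: "Applications and Automation", 4: "Context Management",
                  5: "Optimization", 6: "Quality and Lifecycle", 7: "Production"}
    BUSINESS = {"A": "Business Enablement", "B": "Continuity",
                "C": "Security, Integrity and Compliance",
                "D": "Cost-Effective Operations", "E": "Analysis and Planning",
                "F": "New Business Value"}

    # Invented portfolio: (behavioral dimensions, business needs served).
    portfolio = {
        "SchedulerX":  ({3, 7}, {"A", "D"}),
        "SecMonitorY": ({4},    {"C"}),
        "CapacityZ":   ({4, 5}, {"D", "E"}),
    }

    covered = set().union(*(needs for _, needs in portfolio.values()))
    gaps = sorted(BUSINESS[k] for k in set(BUSINESS) - covered)
    print("Business needs not yet covered:", gaps)
    # -> ['Continuity', 'New Business Value']

    dim_counts = Counter(d for dims, _ in portfolio.values() for d in dims)
    overlaps = [BEHAVIORAL[d] for d, n in dim_counts.items() if n > 1]
    print("Behavioral dimensions covered by more than one solution:", overlaps)
    # -> ['Context Management'] - a starting point for overlap trimming

Obviously a real mapping would be far richer than this, but even a shape this simple shows how the taxonomy can drive gap and overlap analysis rather than just labeling.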

I'll be coming back to the above and blogging on its dimensions, solutions, implications and uses in the future. However, next week, I plan to talk about the applications that run on the mainframe.

Monday, January 30, 2012

Who Cares About Operating Systems?

A mainframe is more than just a computing device: it is a business computing platform; the difference is its operating systems. They are highly optimized to the mainframe hardware and context and embody over half a century of the requirements from the biggest users of business computing.

Obviously, there are many other computing platforms than the mainframe out there. Terms like "wintel" describe a generic PC with an Intel (or similar) processor running the Windows operating system. An Apple with some version of Mac OS is another example. A given hardware configuration may have multiple different operating system options (e.g. Windows and Linux, among others, for PCs), and a given operating system may have different versions that run on multiple different hardware platforms (e.g. Linux, which has versions for Intel-type PCs, but also for other hardware platforms including the mainframe).

The history of IT is rife with "holy wars" about operating systems, including what level of functionality is sufficient, whether they should be proprietary or open, and if they should be optimized to a specific hardware platform or generically available to many. Before Linux made it big and after the mainframe had become so taken-for-granted that it was already generally ignored and written off as extinct, there were big "holy wars" between supporters of Windows and of UNIX. A Dilbert comic from that era embodies this well: http://dilbert.com/strips/comic/1995-06-24/ depicts a "condescending UNIX computer user" telling Wally, likely a Windows user, "Here's a nickel, kid. Get yourself a better computer."

Of course, throughout that era, debates about such operating systems were able to proceed with more energy than urgency since the critically important work was already being handled by mainframes.

The four mainframe operating systems which have continued to be available for the past few decades are today known as: z/TPF, z/VSE, z/VM and z/OS.

Which leads to the question: what is it that these operating systems do?

Put simply, they provide a functioning context that lets all the applications running on the mainframe focus on what they do best, while the operating systems handle everything from talking to the hardware to enabling many, many different tasks to run concurrently and safely with the best possible performance and availability.

Because these operating systems were written specifically for the IBM mainframe hardware, and have evolved along with it over time to respond to the demands of the biggest users on earth, the result is a platform of unparalleled performance, capacity and reliability.

In addition to the four "z/" operating systems, there is also Linux available for the mainframe (though it generally runs as a "guest" under z/VM) and an interface to z/OS known as UNIX System Services (USS) or z/OS UNIX. Because each of these relies on one of the previously-mentioned operating systems to interact with the hardware of the mainframe, I'll save specific discussion of them for future blog entries, and focus this one on the aforementioned four.

Over the years, the names of these operating systems have changed. The original operating system announced for IBM's System/360 line of computers was to be known as OS/360, but the learning curve that came with developing such a complex operating system led to significant delays in delivery (as discussed in Fred Brooks' great book, "The Mythical Man-Month: Essays on Software Engineering"). So, as a stop-gap, IBM announced the scaled-down DOS/360 (for "Disk Operating System/360" - not to be confused with any of the other operating systems also known as DOS). OS/360 was eventually delivered, and it grew and changed and went through multiple names, becoming what we know today as z/OS. DOS/360 went through many twists and turns to become z/VSE.

I like to coin epigrams (just ask my kids, who have a compilation of what they call "dad-isms"), and one of them is, "The temporary outlasts the permanent." This refers to the fact that we often adopt short-term measures without the detailed planning and perfectionism we would apply to something intended to last. Then these short-term measures, free from the obligation of perfection, grow, adapt and gain lives of their own. Meanwhile, the more carefully-planned results may see the world pass them by if we don't keep applying the same level of scrupulousness to their ongoing viability as we applied to their original development.

Interestingly enough, z/VSE and z/OS represent the two sides of viability inherent in this: the stop-gap that adapted and survived and the scrupulously-created high-quality result that continued to be maintained with great effort and attention.

Now, don't get me wrong: today's z/VSE is indeed a high-quality operating system, and has accrued many of the advantages originally developed for OS/360 and its successors over time. And, for that matter, it's always been "good enough" - so much so that IBM's efforts to get its users to convert to OS/360's successors have never seen a complete conclusion.

In fact, that's one of the great stories of the mainframe: how IBM has tried to get the users of the "good enough" operating system to convert to the "top quality" operating system, and how those users have responded.

While my focus for this blog entry isn't to give an in-depth history of the mainframe context (I'm working on a book on that topic with my friend and colleague Dr. Stephen Guendert - stay tuned), it's worth following this thread a little way just to see a couple of noteworthy outcomes.

The first of these is z/VM. 1972 marked the arrival of a precursor to z/VM: VM/370. While this was intended to host multiple users in a time-sharing context, there are two very relevant aspects about it for this discussion: 1) it was one of the first virtual machine operating systems (descended from IBM's earlier CP/CMS work), allowing multiple concurrent environments, including full mainframe operating system instances, to think they had the entire mainframe to themselves; and 2) it was employed as part of IBM's ongoing strategy to get the users of DOS/360's descendants to convert to OS/360's descendants by allowing them to run both operating systems concurrently on the same machine, thus allowing for a smooth and gradual cutover.

The other interesting thread is the emergence of a range of non-IBM operating systems that were generally enhanced alternatives to the successors of DOS/360. One of the most well-known of these was MVT/VSE from Software Pursuits, which my friend and colleague Tim Gregerson was closely involved with. He has shared many insights with me about this turbo-charged alternative to IBM's light mainframe OS, and I look forward to including some of them in the mainframe history book I mentioned above.

Lastly, let me give a tip-of-the-hat to z/TPF (or z/Transaction Processing Facility). Descended from the Airlines Control Program (ACP - developed in the mid-1960s), it is a highly-optimized environment for serving up intensive, real-time services such as airline reservations at the greatest of volumes. While it is the least commonly-used of the big four mainframe operating systems, for those who use it, nothing else comes close to the nature and scale of performance it offers.

Because all four of these operating systems run on the same hardware platform, they are able to benefit from significant cross-pollination. That means that RAS (Reliability, Availability, Serviceability/Scalability/Security) features of one can be repurposed or used to model similar aspects in the others.

When I say "the same hardware platform" it should not be construed to indicate that there's only one kind of IBM mainframe, of course. Rather, since the beginning, the System/360 and its descendants have provided an extremely wide range of capacities and performance characteristics. But they're all designed to be able to run the same software and operating systems, providing the functional equivalent of an extremely open platform.

However, operating systems are just one more layer of what makes the mainframe great, and the next layer is the one I know best. Next week: all the software between the operating systems and the applications, part one!