As I sit here typing this blog entry on my PC, I think of the countless ways I interact with computers every day, and I realize that not one of them makes me think, "I'm dealing with a mainframe right now."
That's actually intentional. Back in the 1980s, IBM came out with a strategy known as "Systems Application Architecture" (SAA), which was intended to bring together the various platforms of computing in a way that would allow workloads to run on the most suitable computer, while serving up interfaces on other suitable computers, without users knowing or caring that they were dealing with more than one computer.
That soon came to be seen as part of the larger "Client/Server" model, in which users have "client computers" that handle only the tasks best done locally, such as presenting the interface, while a larger, remote server handles the more intense processing.
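To make that division of labor concrete, here is a minimal sketch in Python - my own toy illustration, not any particular product - where the client's only job is the interface (gathering input and showing a result) and the server does the heavy computation:

```python
import socket
import threading
import time

def serve_once(port: int = 9090) -> None:
    """The 'server' role: accept one request and do the intense processing."""
    with socket.create_server(("localhost", port)) as srv:
        conn, _ = srv.accept()
        with conn:
            n = int(conn.recv(64).decode())
            conn.sendall(str(sum(range(n))).encode())  # the heavy lifting

def run_client(n: int, port: int = 9090) -> None:
    """The 'client' role: gather input locally, ship it off, show the answer."""
    with socket.create_connection(("localhost", port)) as sock:
        sock.sendall(str(n).encode())
        print("Result from the server:", sock.recv(64).decode())

# Run both roles in one process, purely for demonstration.
server = threading.Thread(target=serve_once)
server.start()
time.sleep(0.2)  # crude wait for the server to start listening
run_client(1_000_000)
server.join()
```

The client neither knows nor cares what kind of machine answers on that port - which is exactly the point.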
This reminds me of when, in 1998, I asked clients of mine in Québec what the French term for mainframe was, and they said "serveur central," which translates literally as "central server."
In any case, we no longer need to know or care what the computer "back there" doing the main processing is. Unless, that is, we're responsible for it.
Today, with the Internet and Cloud Computing, we are more abstracted than ever from the platforms that do the core processing we need. We just sit at our browser or smartphone and talk to a graphical interface. And we trust that it will work.
And when it works, we barely notice. It's like that old saying, "Housework is something that nobody notices unless you don't do it."
Given that perspective, you would think that the computer that works best would be the most invisible - and it is. Whether you're submitting your taxes online, booking flights, or even withdrawing cash from a bank machine, you're dealing with a mainframe, and it's behaving with such reliability that you don't have to know or care that it's there.
But here's the catch: for the most serious applications - the secure, identity-handling, data-intensive ones - it makes a very big difference to have a computer that has proven over nearly five decades that it can be trusted to handle those requirements so well that you don't have to care.
However, in our rush to welcome the client side of computing over the last three decades, we have participated in this abstraction of the main servers by distancing ourselves and our strategies from the computers we rely on to make everything else work. We've used words like "legacy" as if they were put-downs rather than recognizing the value they imply. And we've focused much of our new development - of both people and applications - on the distributed platforms.
The good news is that our faith was not misplaced: we could take the mainframe for granted, even treat it with the contempt that comes from the kind of familiarity you only have with someone or something you trust never to let you down, no matter how you treat it.
There is a problem, however, that needs to be addressed - or perhaps it is better seen as an opportunity: decision makers whose choices affect the maintenance and usage of the mainframe need a more balanced understanding of the fact that they've bet their businesses, and the economy, on it, and of how much better off they can be if they take that into account.
An illustration of this reliance - though it occurs very rarely - is the occasional anecdotal story of an organization that has had its mainframe accidentally powered off. This takes an amazing confluence of events, usually including at least one action by someone authorized to take it who really should know better, plus the failure of entire aspects of corporate culture built to prevent such things - change control, for example.
When it happens, the result is generally the complete failure of every important corporate application - even those apparently built entirely on non-mainframe platforms. It's like a corporate heart attack. Why? Because all those new applications generally rely on important data and processing from "somewhere trusted" on the corporate intranet. And at the bottom of that entire upside-down pyramid of trusted data and processing is the nearly-invisible mainframe, running up-to-date versions of the most high-performing, secure, reliable, scalable legacy applications that keep the organization functioning.
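To see how easily that dependency hides, consider this minimal sketch - the endpoint name is hypothetical, purely for illustration - of a "modern" web application fetching data from an internal service that, several layers down, is answered by the mainframe:

```python
import json
import urllib.request

# The web tier only knows about this internal REST facade; nothing here
# says "mainframe," even if CICS and Db2 are doing the real work behind it.
ACCOUNTS_SERVICE = "http://intranet.example.com/api/accounts"  # hypothetical

def get_balance(account_id: str) -> float:
    """Fetch a balance from the 'somewhere trusted' on the corporate intranet."""
    with urllib.request.urlopen(f"{ACCOUNTS_SERVICE}/{account_id}") as resp:
        return json.load(resp)["balance"]

# Power off the machine at the bottom of the pyramid, and this call - and
# every application stacked on top of it - fails, even though no line of
# this code ever mentions a mainframe.
```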
Did I say scalable? Yes, and adaptable too. Sadly, many architects and senior decision makers in modern IT seem to think of the mainframe as inflexible and limited. In fact, the opposite is true. No other platform has such a high ceiling for one application - or, far more often, thousands of them - to take up as much capacity as needed, adjusted on a moment-by-moment basis.
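Here's a toy model of that moment-by-moment adjustment - my own simplification for illustration, not the actual z/OS Workload Manager algorithm - in which capacity continuously follows demand across workloads sharing one machine:

```python
def rebalance(demands, total_capacity):
    """Give each workload a capacity share proportional to its current demand."""
    total_demand = sum(demands.values())
    if total_demand == 0:
        return {name: 0.0 for name in demands}
    return {name: total_capacity * d / total_demand
            for name, d in demands.items()}

# Demand shifts from one interval to the next; capacity follows immediately.
intervals = [
    {"payroll": 10.0, "web_banking": 80.0, "batch_reports": 10.0},
    {"payroll": 60.0, "web_banking": 30.0, "batch_reports": 10.0},
]
for tick, demands in enumerate(intervals):
    print(f"interval {tick}:", rebalance(demands, total_capacity=100.0))
```

The real thing manages goals, importance levels and thousands of workloads at once, but the principle - capacity flows to where it's needed, right now - is the same.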
I'll be opening up the substance of this topic much more in coming blog entries. I intend to continue next week with a high-level overview of what makes the mainframe so functional and reliable.
Meanwhile, I'd also like to recognize and thank those who have already given feedback on this blog. Since launching it last Monday we've had hundreds of views from every continent but Africa and Antarctica, and I hope to see readers from those continents join the audience too.
Willie, thank you for your positive comments, and for the many ways you support and encourage the mainframe community, including with your blog and your support of zNextGen.
Pandoria13, I certainly agree that there were some very important innovations, such as in memory speed, on platforms that qualified as mainframes and were precursors to the System/360. However, since anything written for them had to be rewritten before it could be used on the System/360, while anything written for the System/360 could still be in use today, I've chosen to focus on that surviving line of mainframes.
And to other colleagues I've discussed this with, I just wanted to confirm: yes, it is sometimes appropriate to move off an out-of-date application that happens to run on the mainframe to a newer one that might not. But when looking for the ideal platform to move a "legacy" application to, it is also worth weighing performance, reliability, scalability, security and other architectural considerations - even if the legacy is Unix or Windows, and the ideal destination platform turns out to be a mainframe.