Tuesday, October 30, 2012

Plumbing the Taxonomy Part 7: Production

As we conclude the "what" dimension of the taxonomy, the topic of this post is "Production" - the day-to-day running of scheduled, coordinated activities that keep the mainframe, and the business that depends on it, running smoothly.

Many of the solutions responsible for this value also have an automation aspect - most notably, those considered Workload Automation. In those cases, "workload" is the value that complements the "automation" value of such solutions.

The workload includes running all of the "batch" tasks that production applications require. Other words used for "batch" include "offline" and "background". 

Your average user never knows or thinks about these things, because they never interact with them directly. We're all used to dealing with banking machines, which are online, or foreground, systems. But there's also a need for processing that doesn't require people to intervene - yet still has to run regularly.

As an example, when you receive your utility bill in the mail, it was likely created and printed by programs running in batch mode, which pulled up the account information for you and every other customer, turned it into a bill, and sent it to the printer, all without human intervention.

And it is precisely the ability to schedule such things to run regularly without manual involvement that allows these bills to be created and sent out with such regular reliability.
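To make that concrete, here's a minimal sketch in Python of what such an unattended billing run might look like. The account fields, bill layout, and printer hand-off are all invented for illustration - a real mainframe billing job would more likely be COBOL or a 4GL driven by JCL.

```python
# A hypothetical unattended billing run: every account is processed in one pass,
# with no human involvement from data to printed bill.

from dataclasses import dataclass
from typing import List

@dataclass
class Account:
    account_id: str
    customer_name: str
    usage_kwh: float
    rate_per_kwh: float

def format_bill(account: Account) -> str:
    """Turn one account record into a printable bill (layout invented for this sketch)."""
    amount = account.usage_kwh * account.rate_per_kwh
    return (f"Account {account.account_id}\n"
            f"Customer: {account.customer_name}\n"
            f"Usage: {account.usage_kwh:.1f} kWh\n"
            f"Amount due: ${amount:.2f}\n")

def send_to_printer(bill: str) -> None:
    """Stand-in for handing the formatted bill off to a print/output queue."""
    print(bill)

def run_billing(accounts: List[Account]) -> None:
    # The essence of batch: work through every customer, end to end, unattended.
    for account in accounts:
        send_to_printer(format_bill(account))

if __name__ == "__main__":
    run_billing([
        Account("0001", "A. Customer", 512.3, 0.11),
        Account("0002", "B. Customer", 287.9, 0.11),
    ])
```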

However, it's not just a single-step process. Often there are "jobs" (i.e. sets of programs designated to run together in the same sequence every time) that run first to perform one task - such as gathering all of today's billing information for further processing - followed by additional jobs that run only if the first ones complete successfully (which they don't always do, for many different reasons).

So workload automation allows for the grouping, scheduling and coordination of many such jobs and applications in a regular, automated manner that only requires human intervention when there's a change or a problem that isn't readily solved by further automation.
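As a rough illustration of that dependency idea, here's a small Python sketch of conditional job chaining. The job names and the toy scheduler are made up for this post; a real workload automation product adds calendars, triggers, restart points, and much more.

```python
# A toy dependency-aware "scheduler": each job runs only when all of its
# predecessors have completed successfully, mirroring conditional job chains.

from typing import Callable, List, Optional

class Job:
    def __init__(self, name: str, action: Callable[[], None],
                 depends_on: Optional[List["Job"]] = None):
        self.name = name
        self.action = action
        self.depends_on = depends_on or []
        self.succeeded = None  # None = not yet run, True/False after running

    def run(self) -> None:
        if any(dep.succeeded is not True for dep in self.depends_on):
            print(f"{self.name}: skipped - a predecessor failed or has not run")
            self.succeeded = False
            return
        try:
            self.action()
            self.succeeded = True
            print(f"{self.name}: completed successfully")
        except Exception as exc:  # a real scheduler would act on return codes, alerts, etc.
            self.succeeded = False
            print(f"{self.name}: failed ({exc}) - may need operator intervention")

# Invented job chain: extract today's billing data, then format bills, then print them.
extract = Job("EXTRACT_BILLING", lambda: None)
format_bills = Job("FORMAT_BILLS", lambda: None, depends_on=[extract])
print_bills = Job("PRINT_BILLS", lambda: None, depends_on=[format_bills])

for job in (extract, format_bills, print_bills):
    job.run()
```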

Next time, we dig into the "why" dimension.

Monday, October 22, 2012

Plumbing the Taxonomy Part 6: Quality and Lifecycle

So, you wrote a program, tested it, and put it into production, and now everyone's using it to pay their bills.

However, it's part of a larger application with many other programs, each of which has a specific role, such as transferring money between accounts, withdrawing cash, or any number of other banking functions.

The problem is, a few months later, another program in that application had to be changed to allow for a new feature. And, the structure of your data had to be changed as well, so there was somewhere to keep track of that new feature.

Your program didn't need to use that new feature, but it did need the data, which now had more information, and therefore had a slightly modified structure. So your program had to be changed to use the new data structure.

But you couldn't just change your program, put it into production, and be done, because you had to wait until the data and all the other affected programs had also been changed, and then put everything into production all at once.

Then, if even one of those programs turned out to have a significant error that hadn't been found during testing, it could become necessary to back out all the changed programs and data and revert to the previous version so production could keep running smoothly.

Meanwhile, you were also working on further changes to the program that would respond to future functions the application would offer - but not for several more months.

Keeping three concurrent versions of the same program, application, and data structure is quite normal on the mainframe. Often there may even be more. And it's necessary to keep track of each to avoid any possibility of confusion between versions.

The thorough testing of everything to minimize the possibility of problems before you "go production" is a core aspect of the Quality value. The connected value of Lifecycle Management allows for the management and tracking of multiple versions, so you can develop one or more versions concurrently while another is in production, and keep a previous one available in case a backout is needed.
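To give a feel for what that tracking involves, here's a tiny Python sketch of keeping concurrent versions of one program with promote and backout steps. The program name and version labels are invented; real mainframe lifecycle management tools handle this with far more rigor (approvals, audit trails, coordinated data changes, and so on).

```python
# A toy record of one program's concurrent versions: the one in production,
# the previous one kept for backout, and any versions still in development.

from typing import List, Optional

class ProgramVersions:
    def __init__(self, name: str):
        self.name = name
        self.production: Optional[str] = None   # version currently live
        self.previous: Optional[str] = None     # last production version, kept for backout
        self.in_development: List[str] = []     # future versions being worked on

    def start_development(self, version: str) -> None:
        self.in_development.append(version)

    def promote(self, version: str) -> None:
        """Move a tested version into production, keeping the old one for backout."""
        self.in_development.remove(version)
        self.previous, self.production = self.production, version

    def backout(self) -> None:
        """Revert to the previous production version if a significant error surfaces."""
        if self.previous is None:
            raise RuntimeError("no previous version to back out to")
        self.production, self.previous = self.previous, None

billing = ProgramVersions("BILLPGM")  # program name is invented
billing.start_development("v1")
billing.promote("v1")                 # v1 goes live
billing.start_development("v2")       # change for the new data structure
billing.start_development("v3")       # future functions, months away
billing.promote("v2")                 # v2 live, v1 retained for backout
billing.backout()                     # a bug slipped through: v1 is back in production
print(billing.production, billing.previous, billing.in_development)
```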

Any true production computing platform needs these features, so that business activities aren't negatively impacted by the introduction of buggy programs or by the inability to back them out in a timely fashion.

Of course, such scrupulous practices have a paradoxical effect: they make mainframes so reliable that we take them for granted, forget they're there, and give all our attention to the squeaky-wheel platforms that are constantly crashing and sprouting bugs.

At IBM's System z Technical University at the beginning of October, I gave a presentation entitled "Getting a New Generation to a New Normal on the Mainframe." I had some great discussions in connection with that, and one of the concepts that emerged was the Warren Buffett approach as mapped to computing.

As you may be aware, Warren Buffett is one of the richest people in history, and he got that way by identifying and acquiring companies with excellent fundamentals but significantly reduced valuation.

Well, as evidenced by the benefits of Quality and Lifecycle Management, the mainframe is the only computing platform with such excellent fundamentals that we just take it for granted that it works. The problem is, we take it so for granted that we treat it like it doesn't exist. Talk about a reduced valuation!

So, to apply our analogy, if an organization wants to invest in a platform that will bring them a spectacular capacity to succeed - or if an IT professional wants to make such an investment in their career - there's nothing else out there like the mainframe, which has such amazing quality and so nearly invisible a reputation.

Talk about a ground floor opportunity for prosperity! After 48 years, the mainframe is poised for a tipping point of spectacular "overnight success" - will you and your organization be part of it?

Wednesday, October 17, 2012

Plumbing the Taxonomy Part 5: Optimization

Imagine a computer that routinely runs at nearly 100% busy without slowing down, uses resources such as storage with maximal efficiency, runs programs that have been tightened to minimize CPU and memory usage, does backups and restores efficiently and effectively, and keeps network bandwidth down while delivering massive data throughput.

Of course I'm talking about the mainframe, easily the most frugal computing platform in use today. Starting from the early days when available resources such as memory, disk and tape storage, and processor cycles were minimal, it has always been the norm to optimize the usage of the mainframe. Right from the beginning, there have been many ways - and software solutions - for optimizing the mainframe to maximize the value received.

Today, squeezing every last drop of value from the mainframe continues to be a core part of the culture and technology.

That's more important than you might initially think. As I discussed in my CMG article at http://www.cmg.org/measureit/issues/mit54/m_54_11.html, Moore's Law, an observation that has been used to point out that computers keep getting smaller, cheaper, and faster, is winding down. Already, CPU speeds have stopped increasing. The laws of physics tell us that eventually storage and memory capacity growth will also start to plateau.

When that happens, those who are already in the habit of making the most of every resource will be light years ahead of those who have gotten in the habit of letting bigger, faster computers make up for the inefficiency and sloppiness of how their solutions are built.

And, more to the point, the mainframe - which has remained lean, responsible and scrupulous - is the only platform optimized right down to the hardware architecture. As the ever-bloating cycles of bigger, slower software continue on other platforms, the mainframe will pull further and further into the lead.

Let's hear it for frugal computing, and the business-enabling characteristic of optimization so ubiquitous on the mainframe!

Tuesday, October 9, 2012

Plumbing the Taxonomy Part 4: Managing the Context

Context is everything. Literally!

In the case of the mainframe, that means everything from the hardware to the operating system to the subsystems to the applications that interface with the carbon-based peripherals who pay for everything.

Managing it means configuring, securing, logging, monitoring, modeling and reporting on everything from devices to applications.

So, if something isn't functioning properly, then software with the context management value of the "what" dimension will allow you to adjust and fix this behavior.

And, if you're planning to add hardware for uses such as storage or networking, this would be the function that models possible configurations to enable good planning.

In fact, if you want to know whether anything's going wrong right now, or has gone wrong in the past, this is the feature that tells you - or even alerts you so you can fix it before anyone experiences problems.

That last functionality is particularly relevant for keeping the mainframe running smoothly. In environments without quality real-time monitoring, IT management often finds out from the users of their services that things aren't working, and then has to inform the systems and operations personnel so they can fix it. However, where such monitoring is effective, it can be coupled with automation to identify and fix a problem before anyone is affected, and then notify relevant personnel that this has occurred.
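As a loose illustration of that monitor-then-automate pattern, here's a small Python sketch. The metric, threshold, and remediation are all invented; real mainframe monitors and automation work from live system metrics and message traffic, not anything this simple.

```python
# A toy monitor-and-remediate loop: detect a condition, attempt an automated fix,
# and notify people only afterwards (or when the fix fails).

import random

SPOOL_THRESHOLD = 80  # percent full at which automation steps in (invented value)

def check_spool_usage() -> int:
    """Stand-in for reading a real metric; returns percent of spool space used."""
    return random.randint(50, 100)

def free_spool_space() -> bool:
    """Stand-in for an automated remediation, e.g. purging old output."""
    return True  # pretend it always works in this sketch

def notify(message: str) -> None:
    print(f"NOTIFY: {message}")

def monitor_once() -> None:
    usage = check_spool_usage()
    if usage < SPOOL_THRESHOLD:
        return  # all is well; nobody needs to be interrupted
    if free_spool_space():
        notify(f"Spool reached {usage}%; automation freed space before users were affected.")
    else:
        notify(f"Spool at {usage}% and automation could not fix it - manual action needed.")

if __name__ == "__main__":
    for _ in range(5):
        monitor_once()
```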

Now, before I finish this week's blog, I want to take a moment to give a shout out to Bob Rogers, one of my favorite mainframers, for an excellent brief video in which he explains how western civilization runs on the mainframe. Everyone (not just IT people) should watch this.

Monday, October 1, 2012

Plumbing the Taxonomy Part 3: Applications and Automation

As someone who is familiar with computers, you may be tempted to ask of this value of the "what" dimension: "isn't that everything that computers do?"

After all, applications are written in programming languages, and automation generally includes the option of programming, and computers are all about being programmed for automation of otherwise more manual tasks.

However, this deserves its own broad category, in my opinion, because programming languages and other means of automating activities form a category distinct from the other values in this dimension, with a focus specifically on enabling people to create something, versus managing, monitoring and connecting.

Of course, there are many solutions that have multiple values along this axis, so important areas such as Workload Automation and its superset IT Automation will also qualify as "Production" (that's Part 7). In fact, Enterprise IT Automation is an area that I consider significant enough that I'm currently doing some additional writing on it - stay tuned.

Now, the languages used in creating applications and automation range from Assembler - i.e. a text-based representation of the "machine language" that runs the computer - through well-known 3GLs (third-generation languages) such as COBOL, to 4GLs (fourth-generation languages) such as Natural, Easytrieve and REXX. You'll even find C and Java on the mainframe.

Some of the programs written in these languages originate in the 1960s, and have barely been modified since. Others have been written, rewritten, updated, and continually used throughout the nearly five-decade history of the mainframe. Certainly, there's a lot of Y2K-proofed code - particularly in COBOL - that has been around a long time, and is of such proven quality that it will likely be around for a long time to come.

Other programs are quite new, as the mainframe continues to take on new workloads as well as supporting the tried-and-proven ones. Java shows up a lot in these new ones.

Automation programs are also an ongoing source of new development and modifications, as the context being automated changes and grows. That's particularly the case given the enterprise-wide nature of leading edge automation, which includes the mainframe along with other platforms for a single point of manageability across IT.

One further note on this topic: while there is significant overlap between products and languages on this value, reducing the number in use is, to put it mildly, non-trivial. For example, while converting all the programs in a given 4GL to run in COBOL or Assembler (in order to eliminate the 4GL and save its licensing costs) may theoretically be possible, the effort to convert and maintain the resulting much-larger programs is often prohibitive.

However, if you have two solutions that overlap in every way, including their programming languages, it can be worthwhile to examine the opportunity for consolidation - particularly if there isn't much in-house programming, or if that programming can be replaced by something simpler and out-of-the-box in the alternative solution.