Monday, September 24, 2012

Plumbing the Taxonomy Part 2: Interfacing with Devices

Last week I elaborated on the first value in the "what" dimension of the Mainframe Analytics Taxonomy.

Before I dig into the second value, however, I have some good news: our official website is finally up and running at http://MainframeAnalytics.com. Feel free to check it out and offer any feedback.

Now, concerning interfacing with devices: it is, of course, primarily the operating system's role to enable applications to talk to terminals, tape and disk (and solid-state) drives, printers, the network, and so on. After all, that's one of the main jobs of an operating system: handling what every application needs but that isn't the core functionality the application exists to provide.

However, it's one thing to get data to and from these devices. It's entirely another thing to manage and optimize the usage of these devices.

For example, network problems can occur anywhere between the mainframe and the user, and tracking them down and fixing them can be nearly impossible without network management software. In fact, such software can often detect or even prevent such problems before any user is affected.

Sharing drives and consoles among multiple operating system images is also a challenge - for example, making sure that a change made to a drive on one system doesn't accidentally overwrite a change made to the same place on another.
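The underlying problem is the classic lost-update hazard: two systems each read a record, modify it, and write it back, and one update silently disappears. Here is a minimal sketch of the serialization idea (using Python threads and a lock as a stand-in for the exclusive reserves and global enqueues a mainframe would use; the names and scenario are invented for illustration):

```python
import threading

# Two "systems" perform read-modify-write on the same shared record.
# Without serialization, updates can be lost; with an exclusive lock
# (loosely analogous to a device RESERVE or a global ENQ), each
# update is guaranteed to see the one before it.

shared_record = {"balance": 0}
lock = threading.Lock()

def update(amount, times):
    for _ in range(times):
        with lock:  # acquire exclusive access before read-modify-write
            current = shared_record["balance"]
            shared_record["balance"] = current + amount

t1 = threading.Thread(target=update, args=(1, 10000))
t2 = threading.Thread(target=update, args=(1, 10000))
t1.start(); t2.start()
t1.join(); t2.join()

print(shared_record["balance"])  # 20000 when properly serialized
```

The point of the sketch is only the pattern: whoever wants to change shared state must first obtain exclusive access, which is exactly what cross-system serialization software enforces across whole operating system images rather than mere threads.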

The software that manages and optimizes these devices is a core part of what makes the mainframe great. While storage, network and other device management software may be available for other platforms, it is still rare for organizations to commit the time and expense needed to do this as well as is standard on the mainframe.

This is a common enough thread in understanding the value and role of the mainframe that it bears emphasizing: while many of the things that make the mainframe great are also available for other platforms, it is the exception for them to be purchased, installed, configured and run properly on those platforms, but the norm to have them on the mainframe. In fact, if a non-mainframe platform were run with everything that makes a mainframe great - and if all those additional systems didn't degrade the platform beyond usability - the cost-benefit equation would still favor the mainframe by a very large margin.

It may be tempting to think of the mainframe as expensive, but for the quality of service, availability, security and reliability that we expect - and which keep the world economy functional - the price we pay is very small indeed, especially compared to the costs of the consequences of doing without.

Sunday, September 16, 2012

Plumbing the Taxonomy Part 1: Handling the Data

Back on February 5, 2012, I published a blog post about the Mainframe Analytics Taxonomy, which divides mainframe software products along two dimensions: behaviors (the "what") and business value (the "why").

In that taxonomy, I briefly listed seven values along the "what" axis (numbered 1 to 7) and six values along the "why" axis (lettered A through F).

As I mentioned in the blog post, a given software product may have more than one value along each axis, though often its current primary focus will be in just one category.

So, I thought it might be of interest to do a quick series of brief blogs just explaining each value.

I'll begin with Data Handling, number 1 on the "what" axis:

Now, this is a very big category, and covers everything from storing and managing data in databases, to processing, combining, sorting, modifying and moving data. It's also one of the things mainframes do best.

Right from the beginning, the mainframe architecture was designed to handle the massive amounts of data a world-class business might regularly process. One example: all the records a national government has about its citizens' taxes. Another: all the information about a large financial institution's customers and their accounts.

One aspect of this category, databases, has deep roots. The theory of how to efficiently store and access large amounts of structured data led to the development of some of today's most important databases, including IBM's DB2 and IMS, CA Technologies' CA Datacom and CA IDMS, Software AG's ADABAS, and distributed databases such as Oracle and Ingres, which are available on Linux.

Another is data sorting, which was such an important utility that it was the first product of two software companies that are still going strong today: Syncsort, with its eponymous product, and Computer Associates (now CA Technologies), with CA Sort. IBM already offered its own sorting utility for the mainframe, but the business need was great enough that these optimized alternatives launched both companies.
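The core technique behind such utilities is external merge sorting: data too large to sort in memory is sorted in memory-sized runs, and the sorted runs are then merged. Here is a minimal sketch of that idea (not any vendor's actual algorithm, and with toy data sizes):

```python
import heapq

# External merge sort, in miniature: sort the input in runs small
# enough to fit in memory, then do a single k-way merge of the runs.
# Real sort utilities spill runs to tape or disk; here the "runs"
# are just in-memory lists to show the two phases.

def external_sort(records, run_size):
    # Phase 1: split into runs and sort each run independently.
    runs = [sorted(records[i:i + run_size])
            for i in range(0, len(records), run_size)]
    # Phase 2: k-way merge of the sorted runs into one sorted stream.
    return list(heapq.merge(*runs))

data = [42, 7, 19, 3, 88, 55, 1, 64, 23, 12]
print(external_sort(data, run_size=3))
# [1, 3, 7, 12, 19, 23, 42, 55, 64, 88]
```

What made the commercial sort products valuable was precisely how much engineering went into optimizing these two phases - run formation, merge order, and I/O scheduling - against real device characteristics.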

There are also many utilities designed to examine, modify and move data in many different ways. Interestingly, this is a good example of a product fitting more than one category, since such utilities often also have value 3: Applications and Automation. A good example is applications that take address data and turn it into validated mailing addresses printed on envelopes (or on statements visible through envelope windows). But I'll get to that two posts from now.
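To make the address example concrete, here is a hypothetical sketch of the kind of transformation such a utility performs - raw, inconsistently entered fields in, a standardized mailing label out. The field names and formatting rules are invented for illustration; real address validation also checks the address against postal reference data:

```python
# Hypothetical illustration: normalize raw address fields into a
# consistent mailing label. Real utilities would also validate the
# address against postal directories, which this sketch does not do.

def format_mailing_label(record):
    name = record["name"].strip().title()
    street = record["street"].strip().title()
    city = record["city"].strip().title()
    state = record["state"].strip().upper()
    zip_code = record["zip"].strip()
    return f"{name}\n{street}\n{city}, {state} {zip_code}"

raw = {"name": "  jane q. public ", "street": "123 main st",
       "city": "springfield", "state": "il", "zip": "62701"}
print(format_mailing_label(raw))
# Jane Q. Public
# 123 Main St
# Springfield, IL 62701
```

Run at the scale of a bank's monthly statements, even this mundane-looking transformation becomes a serious data-handling workload.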

Saturday, September 8, 2012

Pricing Intangibles

How do you price intellectual goods that can be manufactured and distributed for no significant cost compared to the cost of their creation? Whether you're talking software, configurations, architectures, or written works, if it can be distributed virtually without ever being placed/printed on physical media (such as a book), what is the basis of value and pricing?

Back when I started working for Computer Associates International in the late 1990s, I tried to explain to my brother why mainframe software was so expensive. After all, it generally didn't have any more lines of code than PC software that sold for a few hundred dollars at most. At the time, I suggested that the price was generally a fraction of the cost savings it brought.

I still stand by that assertion, but over the years, I discovered just how hard that is to prove. Once a piece of software has become embedded in an environment over many years, there's no simple way of knowing how much cost it's saving, because removing it could bring everything to a halt, which would be a completely different order of magnitude of cost.

Interestingly, pricing written works can involve similar issues. If someone asks me to write up a white paper, article, or recommendation, should I be paid by the word?

I'm reminded of a joke, a quotation and an anecdote.

The joke is about a highly experienced mechanic who is faced with a car that has stopped working, when no one can figure out why. They ask if he'll fix it, and he quotes $1,000. Eventually his price is accepted, and he goes to work.

To the external observer, the mechanic appears to be dancing around the car in a manner reminiscent of Mr. Bojangles - crouching down low, leaping up high, and almost seeming to be performing a rain dance of sorts as he looks over every nook and cranny of the car. Then, he suddenly takes out a ball peen hammer, strikes the car engine with an exacting blow, and pronounces it fixed.

The owner tries it out and, sure enough: the car now works. Then the mechanic presents his bill for $1,000.

Skeptical, the owner asks for a price breakdown. The mechanic replies, "That's $1 for hitting the car, and $999 for knowing where to hit it."

The quotation, attributed to many people - Blaise Pascal appears to have originated it, though it's most famously credited to Mark Twain - is: "I would have written a shorter letter, but I did not have the time."

The anecdote is about the production of enameled steel pots and pans during the Soviet era. Apparently, it became harder and harder to find small ones, while there was an overabundance of large ones. This was not because larger ones somehow improved the cooking experience, but because the factories that produced them were incentivized by the amount of material used, rather than by any measure of usability or sales.

What all three of these have in common is that the value of things is not always connected to the simplistic, some might say "common sense," measures we apply to commodities. In fact, for all but the most tightly controlled and homogeneous commodities, I suggest size is rarely a good measure of value.

One of my favorite examples of this is the Windows operating system, which I've heard may be the largest software system ever created in terms of sheer number of lines of code. And yet, you can buy a copy for a few hundred dollars, and even get a PC thrown in (or vice versa). Compare that to z/OS, IBM's premier mainframe operating system, which costs a few orders of magnitude more than that. Yet, I doubt IBM would claim there's a commensurately larger number of lines of code; likely, there are fewer.

So, how do you price mainframe software in a way that is fair? I suppose it makes sense to take a lot of factors into account, from the capacity of the mainframe (which varies greatly from machine to machine), to the cost savings and benefit a typical customer is likely to derive, to the cost of creating, maintaining, and deriving a reasonable profit from a piece of code that is installed on only a portion of the 10,000 or so mainframes in use in the world today.
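Those factors can be combined into a back-of-the-envelope model. The sketch below is purely illustrative - every number, factor name, and weighting is invented, and real mainframe pricing (capacity-based tiers, usage-based licensing) is far more involved - but it shows how development cost recovery across a small installed base and a share of delivered value might both feed into a price:

```python
# Purely illustrative: combine the pricing factors discussed above.
# All parameters and numbers are invented for this sketch.

def suggested_price(capacity_units, est_annual_savings,
                    dev_cost, expected_installs, savings_share=0.10):
    # Recover development cost across the expected installed base...
    cost_floor = dev_cost / expected_installs
    # ...and charge a fraction of the value delivered, scaled by
    # the capacity of the machine it runs on.
    value_component = savings_share * est_annual_savings * capacity_units
    return cost_floor + value_component

price = suggested_price(capacity_units=1.5, est_annual_savings=500_000,
                        dev_cost=20_000_000, expected_installs=2_000)
print(round(price))  # 85000
```

Even this toy model makes the contrast plain: with only a few thousand potential installs, the cost-recovery floor alone dwarfs the retail price of mass-market software.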

That's a very different approach from pricing generic consumer software, of which millions (maybe even billions?) of copies are sold.

Likewise, when pricing the generation and delivery of other intellectual property that has great value to a relatively limited but economically significant audience, it seems to me that paying per word is like asking for excessively large cookware, rather than seeking that perfect touché in as few effective words as possible.

Of course, like any good question, this is more of a journey than a destination. But it's one of the important questions in the mainframe world, and one of particular interest to me as I look to produce written works of significant value - I don't think it appropriate to charge by the word, since I can often achieve more with a small number of well-chosen and well-aimed words than with a 10-page white paper.

What do you think about the pricing of such intangibles?

Saturday, September 1, 2012

"F*** it, I'm buying a server" - the story of distributed IT

"The mainframe whisperer" - that's what my nephew Alex told me I should be after I described the culture of mainframers and explained why it clashes with the ambient business IT culture of immediate results.

Then his friend Stefan, who is tasked with getting results and has to deal with mainframers, told me of his experience of frustration with rigid behavior, leading to the memorable assertion: "F*** it, I'm buying a server." I told him that phrase was the history of IT since the 1980's.

The problem, of course, is that the rigid behavior that frustrates people so much is the reason why mainframes work so well while every other platform keeps crashing, being hacked, and generally failing the test of trustworthiness.

For me, this is the fractal meeting point of ocean and shore, of new and old, of aspiration and reality. And, somehow, mainframes have become a part of experiential reality, shoring up the world economy, while most of IT has continued to float on the waves of aspiration.

As I begin to spin up Mainframe Analytics - our blog, our website, and our business activities - again after the passing of my wife earlier this year, this seems like a good jumping-off point for this blog.

What can be done to bring together that which works with those who plan? Well, at SHARE in Anaheim, I had the opportunity to give the MVS Program keynote, in which I talked about this topic. My answer, as you can see in the presentation posted at https://share.confex.com/share/119/webprogram/Handout/Session11474/2012-08-06%20Harbeck%20MVS%20Keynote.pdf, is to get to know your mainframe environment ("know thyself"), optimize it for current needs ("get a haircut"), and enthusiastically tell the world how great it is ("fall in love").

And that's what I'll be doing on this blog and in my role as Chief Strategist for Mainframe Analytics. Stay tuned!