Information Technology and the World

The name of this blog, Information Technology and the World, may seem grandiose. It probably is, but then my plans for it are also grandiose. I want us – you and me – to explore three issues: (1) the impact of information technology on business and society; (2) the impact of society on information technology; and (3) the lessons that each domain can teach the other, including both the possibilities and the limits of what technology and society can accomplish.

Location: Chicago, Illinois, United States

I have worked in construction, petroleum, software and consumer electronics. Professionally, I am a physicist, an engineer, and an IT professional.

Wednesday, August 29, 2007

Becoming the Master of My Web Site

I run a very small consultancy specializing in helping organizations make money from their information technology expenditures.

My web site - www.ghco.biz - has been primarily brochureware, which I use to introduce my company and my services to prospective clients. Because of an excellent design and implementation – by Konlon & Associates (www.knolon.com) – it has served this purpose quite well.

I now want to do more with it. I want to use it to store and distribute white papers that are too long for this blog, and to keep my clients and colleagues up to date on what I am doing. I cannot justify having a permanent web designer on my staff and I cannot keep the site up to date using a contractor because turnaround time for modifications is just too long.

So, can I maintain and update it myself? Up to now I have not been able to. The good news noted above – it really is a cool site – is accompanied by bad news: it is coded in native HTML and I am not an HTML programmer. I found this out after a week of fruitless experiments, which the site barely survived.

During this trauma, I kept asking myself why no one had built a simple converter that would accept an MSWord document (my preferred word processor) and turn it into an HTML page. It seemed to me that it would be a fairly easy task for a competent programmer, but as a colleague of mine observed long ago, everything is easy until you actually have to do it. Luckily for me, I ran into Marcus Ketel of WebeditorPlus (http://www.webeditorplus.com/) at a meeting of the Illinois Technology Association. He told me about WebeditorPlus. I tried it and it works. It has an interface that looks very much like MSWord, and the web pages look just fine.
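A footnote for the technically curious: the easy version of such a converter really can be written in a few lines, which is exactly why the task looks deceptively simple. Here is a minimal sketch in Python (the python-docx library and the file name are my own assumptions, nothing to do with how WebeditorPlus works); everything hard – styles, tables, images, links – is ignored, and that is where my colleague's warning bites.

    # A deliberately naive Word-to-HTML converter: plain paragraphs only.
    # The genuinely hard parts (styles, tables, images, hyperlinks, layout)
    # are exactly the parts this sketch ignores.
    from html import escape

    from docx import Document  # pip install python-docx

    def docx_to_html(path):
        doc = Document(path)
        texts = (p.text for p in doc.paragraphs if p.text.strip())
        body = "\n".join("<p>{}</p>".format(escape(t)) for t in texts)
        return "<html><body>\n{}\n</body></html>".format(body)

    print(docx_to_html("whitepaper.docx"))  # hypothetical input file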

A passing irony: In a prior post to this blog, I argued that Software as a Service is not ready for prime time. WebeditorPlus has proven me wrong.


<><><><>

Monday, June 18, 2007


Some Thoughts About IT Governance

An ongoing theme in this blog is that many of the issues that executives face in managing IT have counterparts in other areas of management, and that IT leaders and business leaders have much to learn from one another about how to address these common issues.

Another kind of insight can be gained within IT by comparing different activities within IT with one another. Outsourcing is a case in point.

I recently attended two seminars on outsourcing. At each of them, there was considerable discussion of the issues that arise in negotiating and managing the outsourcing relationship, both in the sessions and in the breaks. These experiences got me thinking about how managing an outsourcing relationship is different from managing the relationship between an internal IT shop and its user communities, if indeed it is different.

As we shall see, although the goals are always the same – put IT into the service of the enterprise effectively and efficiently – the governance of outsourced IT is quite different from the governance of internal IT. Each mode has something to teach the other.

I use the term “IT governance” to describe the relationships between IT and the users of its services. (This is in contrast to the term “IT management”, which deals with the internal activities of the IT function.)

“Outsourcing” is a state of mind. Business processes are characterized as outsourced when they are moved from the legal and managerial jurisdiction of the users of the processes in question, and placed under the control of an independent entity. Most businesses routinely have activities controlled and managed by other companies. Examples that come to mind include the company cafeteria, corporate cash management by banks, and the operation of office buildings. But we don’t typically call these outsourcing because they seem to be our normal mode of doing business. The outsourcing epithet is applied only when something that has been done internally is moved to external ownership and control.

Note that from a logical point of view, a staff department (say HR or IT) is really an outsourcer to the line department that uses its services.

A few years ago, I had occasion to examine a Request for Proposal (RFP) for outsourcing from a fairly large multinational corporation. The RFP was limited to managing and operating its mid-range servers, with the idea that success at this level would be a prelude to a much larger relationship. The RFP occupied four 8 ½” x 11” three ring binders, each containing about 1,000 pages.

Needless to say, there was no analogous document governing the internal activities that the outsourcing would displace. Apparently the company doing the outsourcing was willing to govern its mid-range servers with much less formality than it would accept from an outside contractor. Why?

Before we address that question, let’s look at two parallel examples.

When United States-based companies began to do serious business in Japan about 20 years ago, they encountered an unexpected roadblock. (Actually there were many roadblocks, but I want to deal with only one.) This roadblock was the clash of two completely different philosophies of business contracts. The Japanese were accustomed to broad agreements about goals and roles, with details worked out as the work was executed.

United States companies held an almost completely opposite view of what a contract should be. They believed in detailed contracts specifying all activities and all conceivable contingencies. Japanese-style agreements were viewed by US companies almost as Letters of Intent: statements of broad goals, hopes, and wishes. It has taken years for these differences to be accommodated in the practical world of business.

The analogy with outsourcing is this. Most US companies are willing to use Japanese-style IT governance with their internal IT shops, but insist on detailed US-style contracts to govern outsourced IT activities. Why?

An example from outside the sphere of IT is informative. Globalization of financial markets has pointed up differences in accounting standards between the US and Europe. These differences are usually characterized as European standards stating broad principles and leaving it to companies and their accountants to apply these principles, whereas the US approach is to prescribe detailed rules for all conceivable contingencies. There are currently significant international efforts under way to resolve these differences. Again, why?

The glib answer to Why? is that our culture is different from the culture of Japan and from the culture of Europe. This, of course, is a tautology, because culture is defined, roughly, as “the way we do things around here.”

Another answer to Why? is that, being a technology-based society, we in the US have come to expect more precision and predictability than the typical letter-of-intent contract provides.

A third possibility is that the US is a litigious society, and lawsuits with vendors and contractors are much more common than lawsuits or their bureaucratic equivalent among divisions of the same legal entity. Detailed contracts are an attempt to prevent litigation, or to provide protection if litigation ensues.

We could go on, but the Why? question is too big for this forum.

Instead, let’s think for a moment about what to do to bring these competing philosophies together in order to do a better job of IT governance in either situation.

Next time you are planning a project for internal development of a new information system, ask yourself what contractual commitments you would require if you were outsourcing the project. You would probably require meeting functional specifications, a fixed or computable price, a pre-specified time schedule, and rigorous change control. You might even require that certain people be assigned to the project (and that certain others not be assigned).

Now look at the reverse situation: you are planning to outsource the same kind of development. Would you be willing to give the outsourcing contractor the same flexibility in design and specification that you would give your in-house shop? If not, why not? How would you specify conformance to corporate IT architecture and other standards, things you would expect your internal staff to honor without any discussion?

A good place to start in each of the scenarios above is to focus on the functionalities and benefits to be achieved, rather than on technical and organizational requirements.

Which brings us to a much broader question: how much can you, and how much should you, incorporate the outsourcing company into the operations and decision-making processes of your company?

For more than a few years we have been inundated with advice from the organizational and leadership gurus saying that the way to success is to emphasize teamwork in the interests of corporate success, and correspondingly deemphasize the organization’s hierarchy.

The ultimate outsourcing question is this: how far are you willing to go in making your outsourcer a member of your team, a true partner? Or, in other terms, do you want the outsourcer to contribute its creativity to your project, or do you just want a bunch of designers and coders? There will probably be different answers for different projects.

As we observed earlier, internal staff supporting business operations are outsourcers logically if not psychologically. So, when doing a project internally, do you want the IT department to contribute its creativity to your project, or do you just want a bunch of designers and coders? Are you willing to make the IT department a full member of your team?

I think you should.


<><><><><>

Wednesday, May 02, 2007

Information Technology in 2022

In the course of a discussion with a client the other day, he asked me what I thought IT would look like fifteen years from now. Because my time frame is usually closer to five years than fifteen, I had no ready answer, an unusual posture for me. On reflection, I think it is possible to make some useful statements about how things may evolve. (Note that I carefully avoid calling them predictions.)

Over the history of IT in business we have seen three kinds of changes, in addition to some things that seem never to change.


• Cyclical changes, where some characteristic oscillates between two extremes, but never seems to go off in an entirely new direction. These cycles will continue unless some major disruption occurs.
• Secular changes, trends superimposed on the cyclical changes. These trends tend to continue until some external factor intrudes.
• Disruptive changes, things that are not the product of orderly evolution and are therefore almost completely unexpected.

To give my client’s question some bounds, I am assuming that he meant, “What will the IT departments of large organizations look like in 2022?” Here is what I think, and why.

1. 25% to 35% of IT departments will report to finance. The others will report to a COO or within a business unit.


The first significant use of IT in business was financial record keeping. Hence, in the beginning almost all IT departments reported in the finance function. This has been changing very gradually. In 2006, about 50% of IT departments still reported in finance. I think that as IT becomes more deeply enmeshed in business processes, particularly in cross-silo processes, some organizations will have IT report to a COO or to business unit managers.


2. The IT department will be either (1) centralized and about to decentralize, or (2) decentralized and about to centralize.


This is one of the cyclical processes. It is driven by organizational change (which is itself often an oscillation between centralized and decentralized), by changes in the economics of IT as technology changes, and/or by changes in business norms such as the 'work at home' movement, and the demand for inter- and intra-organizational collaboration to achieve business goals.


3. The CIO will be a new kind of person: someone with both deep business skills and deep IT skills.


We have gone through the alternation of business executive replacing technologist, technologist replacing business executive, then repeating the cycle. I think that by 2022 we will have become sophisticated enough to build career paths that will produce the business/technologists that we so sorely need.


4. The IT department will be about the same size as now in relation to the organization it serves, in terms of budgets and numbers of people, but the focus and skill sets will be radically different, shifting from a technology focus to a business and managerial focus.


We have experienced a long term trend to drive IT capabilities from individual applications to infrastructure. Data management used to be built into each application; now it resides in a DBMS available to all applications. The same is true of telecommunications. IT technologists will move from applications development and operations to infrastructure development and operations. There will be more project managers, particularly those skilled in managing outsourcing.

In addition, some technology jobs have moved to producers of packaged software, others to outsourcing companies. More recently, the advent of software-as-a-service has continued this trend.

However, two classes of technical skills will continue to be housed in company IT departments. One is systems integration; the other is some form of new technology monitoring and development.


Innovation in IT software and hardware will continue well beyond the 15 years we are considering here. IT departments will need to keep track of these innovations and integrate them into existing systems without depending on outsiders.

You might think that all these changes would decrease the headcount in IT departments, but this probably won’t happen. There is, first of all, bureaucratic inertia. But beyond that, IT success will increasingly depend on the business skills of the IT analysts. The easy-to-understand applications that save clerical work have been completed in most companies. New gains will be achieved by automating the harder-to-understand parts of the business. And current IT professionals are – as a whole – not very good at this.

Evidence? The rise of a new pseudo-discipline called Business Intelligence, accompanied by a new job title, Business Intelligence Analyst. Business Intelligence is merely a new name for the long-standing goal of using IT to support the goals of the enterprise. The fact that this new name has gained traction in the marketplace is proof that IT people have not been doing this job as well as they should, and that software companies and consultancies are doing their jobs very well.

Bottom line: additional business analysts will more than fill the slots vacated by displaced and outsourced technologists. By 2022 we may be realistic enough to call them by their proper name.

5. The level of trust and acceptance of the IT department by the rest of the organization will be better than it is now for existing and stable information systems. There will be important novel and disruptive systems under development, which will be viewed even more skeptically than they are today.

This is a secular trend that will continue in most companies until IT makes some particularly egregious mistake, which will not happen very often. We really are getting better at using the technology. More and more members of the user community work with information technology in their jobs, and for the most part they find IT systems useful and reliable. In addition, many of the victims of early IT fiascos are retiring and thus are no longer part of the environment in which IT must function.

6. The key issues on the agenda of the CIO will be about the same as they are now:

  • Aligning the IT organization with the business organization.
  • Developing IT strategy to support business strategy.
  • Developing and maintaining good relations between IT and the user community.
  • Developing and implementing measures of the value of IT that are acceptable to both IT and the rest of the organization.
  • Developing and maintaining effective communications between the IT community and the user community, and between the IT community and senior management.
  • Developing an acceptable form of IT governance.

    Various academics and consultancies have been surveying CIOs for decades about the issues they face. Typically, the surveyors ask the CIOs to “List the top ten issues you expect to be facing during the next year or two.” The items listed above have appeared on almost every such survey from 1980 into the 21st century. There is nothing on the horizon that seems likely to solve any of these problems: they will continue to vex everyone in the IT profession.

    In summary, the IT department of 2022 will look from the outside about the same as it looks now, but inside it will be very different indeed.

<><><><>


Sunday, March 11, 2007

Another Kind of IT Project Failure



Scott Rosenberg describes the trials and tribulations of building innovative computer software in his new book “Dreaming in Code” (ISBN 978-1-4000-8246-9). He reports the history of the Chandler project, under development by Mitch Kapor’s Open Source Applications Foundation. The subtitle of the book gives us a sense of the issues he addresses: “Two dozen programmers, three years, 4,732 bugs and one quest for transcendent software.”


Rosenberg observed much of the Chandler project from the inside, sitting in on meetings, discussing issues with the participants, and recording what he heard and learned. It’s a great read, and I commend it to anyone interested in the development of software. (An extensive review appeared in the Chicago Sunday Tribune Book Section on March 4, 2007.)


As the book ends, Chandler is in a pre-release stage. Some modules have been released, but the system is incomplete. As of this writing (March 8, 2007), Chandler’s website – http://chandler.osafoundation.org/ - says “Our goal for our most recent release, 0.7alpha4, is to give a rough sketch of what Chandler will be by our first Preview release (Spring 2007).” So it is still not finished.


As you may know, I have been interested for some time in what causes IT project failures and how to prevent them. [http://www.ghco.biz/publications.htm] My research and consulting have been oriented toward the development of information systems by user companies for their own use. “Dreaming in Code” offers us the opportunity to see how my results might apply to the somewhat different world of the shrink-wrap developer.


In the course of my research on project failure, I identified seven root causes of project failure, any one of which would cause failure. They are:



  1. Incomplete project planning and evaluation.

  2. Project plan misses non-technical issues.

  3. Roles and responsibilities of users, senior management, and IT are not well understood and agreed to by all parties.

  4. Bad or non-existent communications among interested parties.

  5. Inadequate project governance.

  6. Lack of post audit procedures and project archive.

  7. Inconsistent application of good practices.

As I tried to go through the above list item by item to see how each root cause applied to the Chandler project, I had an epiphany. I suddenly realized that the Chandler effort was not a project at all! A project is, after all, an activity with a pre-defined beginning, a pre-defined end and a well-defined work product. It is not clear from the book whether Chandler had a pre-defined beginning; apparently Mitch Kapor simply started it. Certainly there was no pre-defined end. And apparently from the beginning to where they are now, there has been no well-defined product.


The author argues that no IT project has an end because changes are always being made. This, I think, is pernicious nonsense. The concept of a project is a management concept, a methodology to get something done. The fact that the thing will change later – which almost everything known to man will do – should not be used as an excuse for never finishing.


Chandler is a research effort, and a badly managed one at that. I spent some years in the chemical process industry. Our company had a 500 person research facility that did primarily applied research, much of it on new chemical engineering processes. We had a well-defined sequence of stages in the development of any new process:



  1. Bench top work to see whether the chemistry at the heart of the idea actually worked.

  2. A bench top model, in which all the process steps (not just the chemistry) were implemented and joined to one another.

  3. Construction of a pilot plant, the smallest possible configuration that could make enough of the product to test.

  4. A semi-commercial scale plant, the smallest possible plant that could actually earn a profit.

  5. A full scale production plant.

Each of these stages included testing, modifications, retesting, and finally a formal approval to proceed to the next stage.


Chandler started by trying to execute Stage 5, after doing a little Stage 1 work. Is it any wonder that the developers had problems?


All of the old project management clichés apply: If you don’t know where you are going, it doesn’t matter what road you take; If you don’t know where you are going, you will never know when you get there; If you don’t know where you are going …..et cetera, et cetera, et cetera ….


What should Chandler have done in the beginning? It should have been treated as a project, complete with project plan and all the other impedimenta of project management. Such a plan should have identified the research needed to reach the goal, and provided for the execution of this research. Or, don’t even try to build a product until the relevant research is complete.


What should Chandler do now? What they are doing, as reported by Rosenberg: make partial releases – often called picking the low-hanging fruit; stabilize the technology; cut out, for now, anything that cannot be bounded and estimated; and thereafter, stabilize the scope of the project.


<><><><><>

Wednesday, February 28, 2007

Save Money the Easy Way – Stop Doing Useless Things

Most organizations spend a lot of time and money doing things that are totally useless, and they don’t even know it. If you eliminate these things, you can save a lot of money, with little up-front expense. It is not hard to identify these opportunities, but it takes some effort and a lot of courage to eliminate them.

In the days of mainframe computing, one of the biggest tasks at the computer center was printing and distributing reports. Large companies had multiple high speed line printers (24,000 lines per minute was a standard speed) and platoons of messengers with carts to distribute them.

Everybody knew that most of these reports went unread (there weren't a lot of 24,000-lines-per-minute readers), but most users were unwilling to give them up just in case they were needed. So some CIOs (then known as data processing managers) developed the tactic of simply not printing some arbitrarily chosen set of reports for a month or two. If no one complained, they permanently deleted those reports and proceeded to the next set. This continued until the complaints started. The computer operations departments saved a lot of money and the businesses got along just fine.

This particular problem ultimately went away as online systems and desktop displays made most printed reports obsolete. But there are a lot of other activities that are equally useless and are carried on because no one has had any incentive to curtail them. Here is one example.

I did some business process consulting a few years ago for a company that had headquarters in the United States and did business in about twenty-five countries around the world. In the course of our analysis we found that all sales contracts above $10,000 from one of the overseas offices were routinely sent to headquarters for approval. Our analysis showed that if the local approval limit were raised to $50,000, about 90% of the headquarters work would disappear, yielding substantial savings in time and money with very little added risk.

We recommended this change to management and were told that the local approval limit already was $50,000 and had been for a number of years. Further investigation showed that on one occasion, five years before our study, a contract approved locally had gone bad and local management had been blamed by headquarters. Local management took the understandable position of “never again”, and routinely sent everything above $10,000 to headquarters just to play it safe. Locals were protected, but the company spent a lot of money on useless work, and customer service suffered in the bargain.

This is a common situation. A one-time problem arises and a business process “fix” is installed. The fix becomes permanent and continues long after the original problem has disappeared. The first task is to identify these things and kill them.

The standard way to go about this is business process redesign. (We don’t call it reengineering any more because that has become a four letter word.) But that is an extensive and expensive process. The trick is to pick a place to begin where you can see some immediate benefits, benefits that you can get without spending much time or money.

Here is one way to start. Ask each of your workers to identify the three most useless things they do. Use the results to decide where to start. Then pick the low-hanging fruit – the easy wins. When you have completed that process, you will know enough and have saved enough money to start a real business process redesign project.

Or not. In many companies you can find enough low-hanging fruit to keep increasing your profits for a long time without ever going through the pain and disruption of a major business process redesign.
<><><><><>

Ethics and the Information Technology Professional

The current epidemic of corporate scandal and wrongdoing has involved mostly senior executives and their cohorts, as well as auditors and investment bankers. There has been a lot of unarguably criminal behavior. When bad things happen that seem to be in a gray area of the law, we hear about ethics. “Maybe it wasn’t against the law, but it was unethical.”

All of this hue and cry caused me to wonder where IT fits into this picture, if anywhere. I am happy to say that so far I have seen no mention of IT in the press in connection with this wave of corporate corruption. However, for the future, it is worth asking: Is there something special about IT that creates special ethical problems for the IT professional? The answer, as we shall see, is, "Yes, IT is different, but not very."

There are, it seems to me, three broad areas of concern, beyond actions that are clearly illegal.

    IT professionals are the designers and the custodians of the information systems that run the enterprise.
    IT professionals are the custodians of the information that these systems generate as well as the data that surrounds the systems. (Auditors and controllers might think that this is a usurpation of one of their roles, but clearly, IT has a big part to play here.)
    IT professionals have obligations as professionals, obligations that they share with members of other professions.

Information systems have a lot of influence over the business processes that run the company. Theory says that IT designs systems that do what the users specify. The dirty little secret is that no matter how carefully and how well the users do their part in system design, there remains a myriad of details that the designers and programmers implement without user input as they go about their business.

Ethical issues for the designer/analyst abound.

    How far should the IT people go in imposing their ideas about how to conduct business processes when they design a new system?

    What should the designer/analyst do if the user does not properly specify parts of the system that do not directly affect the user’s own operation? This happens most often because of misguided efforts to save money and decrease implementation time. Things like security, audit trails and disaster recovery are also often neglected or ignored.

    In selecting the technology platform for a new system, does the analyst strike a reasonable balance among the needs of the user, the overall needs of the enterprise, and his personal preference to use a new and exciting technology?

The custodian of data and information faces a different set of problems.

    Personal privacy is at the forefront of our consciousness at the moment. How should the database manager respond if she is asked to deliver personal data about customers to an outside company? To a corporate affiliate?

    Personal privacy is an issue within a company as well. The supervisor of HR systems has access to a lot of information about employees, including in many instances health information. What should be disclosed? What should be held back? What should not be collected at all?

    Information about new systems is often exchanged by IT professionals from different organizations in order to keep up to date. When may this be done?

IT professionals, because they are professionals, have ethical obligations similar to those of other professionals, e.g., physicians, lawyers, accountants. These include:

    The obligation to be competent at their professional tasks, and not to claim expertise that they do not have.

    The obligation not to use their status as professionals to deceive or bully others into doing things against their interests.

    The obligation not to use their status as professionals as a cloak for other agendas.

    Ironically, the professional obligations (the last set listed above) are the most straightforward (although not necessarily the easiest). Professional societies take these responsibilities seriously and offer extensive standards and guidelines.

    The data custodians must be guided by policies set by senior management and enforced rigorously. The stakes are too high these days to be cavalier about enforcing privacy policies. There are, of course, many technical tools and techniques to guard the privacy of data, but in the end human beings decide what should be shared and what should not.

    The ethical requirements on the analysts and designers of information systems are harder to define and enforce. They are always in the mode of balancing competing interests: getting the current job done as quickly and inexpensively as possible versus making sure that the current work fits into the future plans of the enterprise; serving the interests of the immediate user versus not degrading service to the users of other systems; balancing the capabilities of new technology against the lower costs and higher reliability of older technology.

    Each of these choices has both technical and business dimensions. The project management system must assure that all of these tradeoffs are considered by both business staff and technical staff, and require all decisions to be in the long term interests of the company.

    Those are the problems. What are the solutions? Philosophers and theologians have been trying to answer this question for millennia, with only modest success. So what can you do? The best overall guidelines I can think of come from the theologians of 2000 years ago. Jesus said, “Do unto others as you would have them do unto you.” Hillel (a rabbi who was roughly Jesus’ contemporary) phrased it slightly differently: “Do not do unto others that which you would find hateful if done to you.” These sound like pretty good guidelines to me.


    <><><><><>


      Monday, January 22, 2007

      Service Oriented Architecture
      Is it Ready for Prime Time?

      The short answer to the question in the title of this piece is, “Yes and no; it depends on what you mean by prime time.”

      When I first heard the buzzword Service Oriented Architecture (SOA), it sounded like a pretty good idea. After all, we in IT have been talking obsessively for years about better service for our users, while not doing a very good job of providing it. Developing an architecture (a blueprint) for providing good service to users seemed like a natural next step. So I decided to look into it.

      As the hype began to build I found myself asking the same questions over and over again without getting any clear answers. “What is SOA?” “Who is getting the service, the end user or another computer program?” It was déjà vu, back to the beginnings of client/server architecture, trying to find out exactly what a client is and what a server is. We now have a pretty good idea about these things and how they interact. But it was a struggle to reach this point.

      Initially I naively assumed that the services in question were oriented toward the end users. It took me several months and a couple of dozen interviews to learn that the services in question were oriented toward IT applications. In a service oriented architecture, when a program – typically an application – requests a service, say a customer name and address, it requests this service from an independent program that provides this service not only to the application in question but also to other applications requiring the same kind of information. SOA has no direct contact with a user. Rather, its goal is to enable and expedite the work of application programs.
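      To make that concrete, here is a minimal sketch of the idea: one name-and-address service shared by two applications. The names and data are invented, and in a real SOA the service would sit behind a network interface (HTTP or SOAP, say) rather than an in-process function call.

          # One shared "customer name-and-address" service, used by two
          # different applications. All names and data are invented.
          CUSTOMERS = {
              "C-1001": {"name": "Acme Corp",
                         "address": "123 W Madison St, Chicago, IL 60602"},
          }

          def customer_name_and_address(customer_id):
              # The service: one dedicated task, reusable by any application.
              record = CUSTOMERS.get(customer_id)
              if record is None:
                  raise KeyError("unknown customer: " + customer_id)
              return record

          def billing_application(customer_id):
              # Billing keeps no copy of customer data; it asks the service.
              info = customer_name_and_address(customer_id)
              return "Bill to: {name}, {address}".format(**info)

          def shipping_application(customer_id):
              # A second application reuses the very same service.
              info = customer_name_and_address(customer_id)
              return "Ship to: {name}, {address}".format(**info)

          print(billing_application("C-1001"))
          print(shipping_application("C-1001"))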

      Recently a client (a human, not a workstation) asked me to attend a conference on SOA and give him my opinion of its maturity and suitability for major rollout. I attended an IDC conference and then did some further research. Here is a summary of what I learned. (Incidentally, the presentations are on IDC’s web site. Go to http://www.idc.com/. E-mail idc_support@idc.com to get a password. The event name is IDC Service-Oriented Architecture Forum Midwest.)

      The core idea of SOA is to design applications around the use of various services, each dedicated to a specific task and each usable and reusable by multiple applications.

      The goals of SOA as recited by several authors and speakers are the same as the goals we have been hearing about for many other architectures and technologies: lower costs of new applications; quicker deployment of new systems; quicker and less expensive updates; lower total costs of ownership. All of these can be achieved by having widely useful reusable services.

      If you have been around IT for a while, this probably sounds familiar to you. We have been talking about reusable code for decades. COBOL has stored procedures; FORTRAN has callable subroutines. Object-oriented programming (OOP) has – what else – objects. In every case, the purpose was to save money and deployment time by having pre-coded subprograms that had already been used in other applications and could be used in still other applications in the future.

      We had a little success with reuse in COBOL, and a modest amount in FORTRAN. There were a few successes in OOP. The most promising user was a large financial services company which was so pleased with its use of OOP that it spun off a company to go into the business of selling the objects it had created. This company was not a resounding success.

      Note that in every case cited above, the code in question was in fact reusable. It simply wasn’t reused very much. Why? The promise was there. The engineering was done. The potential benefits were large. Before we jump into SOA with both feet, we should figure out why previous attempts to reach the goal of reuse failed.

      One difference between the world of SOA and the prior world of reusable code is the presence of the Internet as a host for services. It is in many ways the ideal host. It uses industry standard interfaces; it transfers the work and responsibility for operation and maintenance of the service from the user to an independent vendor. And its use prevents the application programmers and architects from meddling with the service to customize it for a specific use.

      The current level of discourse about SOA in the trade press and on the seminar circuit suggests that SOA is far from a mature technology. One article lists the Five Things You Must Do to succeed with SOA. Another lists the Seven Keys to SOA Success. Unfortunately, some of the Five contradict some of the Seven. One suggests that the beginnings of SOA be small and low key; another says that SOA must be a long term enterprise-wide activity.

      There is some consensus that a successful SOA program will require changes in the IT organization, changes in IT governance including SOA steering committees and SOA standards committees, enhanced cooperation across business units (to assure that the services developed are in fact reusable and reused), and increased understanding of business needs by the IT community. None of this is new, and none of it is unique to SOA. Look at any of the surveys of CIO concerns over the last couple of decades and you will find the same issues. We haven’t resolved them before. Why should we think that the new perspective of SOA will enable us to resolve them now?

      The hype surrounding SOA has called forth the usual identity theft associated with new software; other things are renamed SOA to ride the marketing wave. Be sure you are getting the real thing. Lack of common definitions at all levels, including the definition of “service” itself, causes confusion and mistakes.

      As of now, not all of the tools are in place. Any real world application of SOA will require some custom code. There will be legacy systems which cannot be effectively served by SOA. Further, you will need a new set of tools to manage both the services and the applications that rely on them.

      Finally, there remain significant design issues that are unlikely to be resolved in any definitive way. One is the question of granularity. How much detail should, say, a name-and-address service provide? Zip code (5 digit or 9)? E-mail address? Identification of spouse and children? Making these decisions is an art, not a science, and is likely to remain so.
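      For illustration only, here are two invented interfaces to the same name-and-address service, showing the granularity trade-off. The coarse-grained call returns everything the service knows, whether or not the caller needs it; the fine-grained call returns only what is asked for, at the price of a wider and chattier interface for the service to maintain.

          # The granularity trade-off, sketched with invented names and data.
          RECORD = {"name": "Jane Doe", "zip5": "60602", "zip9": "60602-4321",
                    "email": "jane@example.com", "spouse": "John Doe"}

          def get_customer_profile(customer_id):
              # Coarse-grained: one call returns everything, including fields
              # (spouse? nine-digit zip?) that most callers will never use.
              return dict(RECORD)

          def get_customer_field(customer_id, field):
              # Fine-grained: callers take only what they need, but the
              # service now exposes many small entry points.
              return RECORD[field]

          print(get_customer_profile("C-1001"))
          print(get_customer_field("C-1001", "zip5"))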

      Please do not interpret my comments above as a condemnation of SOA. Quite the contrary. I think it is a positive development, particularly the use of Web based services. The attention SOA is getting is reemphasizing the importance of the perennial IT issues of better understanding of the business by IT, better communications between IT and the business, and better governance of IT activities shared between IT and business executives.

      SOA is mostly old wine in new bottles, but this is not bad. It is offering us another chance to get at core IT issues that have bedeviled us for years. And, finally, the re-use idea may actually work. If it does, it will constitute a major advance in the way we do things, and provide us with systems that are less expensive, more agile, and more focused on business needs.

      Is SOA ready for prime time? I recently read about a new kind of cable TV channel. Each channel serves a specific individual resort community such as Aspen, Vail, or Stowe. These channels are focused on visitors rather than residents. They report snow conditions, weather, and information about local events. Prime time for this small, select audience is not 7 PM to 10 PM as it is for network television; it is 8 AM to 9 AM and 5 PM to 7 PM.

      The answer to our topic question is this: SOA is ready for prime time for a small and select group of organizations: those that can bear the risks and live within the constraints noted above, those that have advanced technical skills available for this work, and those willing to make the changes in business processes (IT and other) that a successful SOA implementation requires.

      SOA is too unstable and too iffy at this point in its history to be a wise choice for an organization-wide architecture of a large company. But it may be a good choice as a target architecture “to be” in five to ten years.

      I have left until last the most important question about SOA. Will your designers and implementers actually use the reusable services designed and built for other applications? The passive aggressive opposition by IT professionals to reusable code is what doomed most previous efforts. Can your organization overcome this mind set?

      I am reminded of the old riddle about statisticians. Why do statisticians distrust all data? They divide data into two classes: data collected by others and data they themselves have collected. They distrust data collected by others because they do not know how it was collected; they distrust data collected by themselves because they know exactly how it was collected.

      Make your own translation to the world of system designers and programmers.

      <><><><><>


      The Very Worst Identity Theft

      “Who steals my purse steals trash; ‘tis something, nothing; ‘twas mine, ’tis his, and has been slave to thousands. But he that filches from me my good name robs me of that which not enriches him and makes me poor indeed.” Shakespeare, Othello, Act III, Scene 3.

      The worst kind of identity theft is the theft of reputation. The purpose of most identity theft is to enable the thief to steal money or merchandise that is the equivalent of money. This is important to the victims but it is small in the larger scheme of things. The smooth functioning of our complex society depends on trust and trusted relationships. When this trust is compromised, bad things often happen.

      There are many examples in the marketing arena. False trademarks on fashion goods “steal” the identity of the owner to enable the counterfeiter to sell his goods for more than they are worth.

      A slight variant of this is common in the IT world. Marketers change the names of their products to associate them with some new fad. A recent example is in the field of knowledge management (KM). When this became a hot topic a couple of years ago, some vendors of document management systems suddenly discovered that these systems were knowledge management systems – which of course they were not – and proceeded to market them under that banner. Who suffered? Some naïve customers were taken in by the propaganda, but more important, the discipline of knowledge management is degraded by these distortions and false promises. In the end we all suffer from this debasement of what will become an important part of our business activities.

      The very worst identity theft of our time is the rampant theft of the identity of Science. This theft will be much more damaging in the long run than monetary fraud. The identity of Science is regularly and shamelessly stolen by social activists, politicians, marketers, and others with some axe to grind and weak justification for their positions. They pretend that their ideas are supported by the kinds of objective observations and analysis that are the core of the scientific enterprise. The example of current interest is the debate over global warming. Much of this debate depends on pseudo-science at best, and cynical distortions and misrepresentations at worst.

      It is useful to set the debate about global warming and the related apocalyptic predictions into the larger context of models of the world, of which the global warming model is merely the latest.

      Models of the World – Thomas Malthus

      In 1798, Thomas Malthus created one of the earliest models of society in his paper “An Essay on the Principle of Population, as it Affects the Future Improvement of Society”. [You can find it at http://www.ac.wwu.edu/~stephan/malthus/malthus.0.html. Read the first chapter or two to get a sense of the argument and the data with which he supports it.] He says:

      I think I may fairly make two postulata.
      First, That food is necessary to the existence of man.
      Secondly, That the passion between the sexes is necessary and will remain nearly in its present state.


      He argues that the rate of increase in food production will be arithmetic – it will grow by a constant amount each year as limited amounts of new land are turned to agriculture and as improvements are made in agricultural practices and tools.

      He goes on to assert that the passion between the sexes will cause a geometric growth rate in population unless checked by famine, disease, war, or natural catastrophe.

      Based on these arguments, he reaches the inevitable conclusion: population will outrun food supplies until some limiting event, such as one of the catastrophes listed above, occurs. He further suggests that there will be cyclical oscillations between a condition of too much food and not enough food, and that the brunt of the hardship will fall on the urban working (lower) classes.
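      Malthus' two growth laws fit in a few lines of code (my starting values and rates are arbitrary; only the shapes of the curves matter). However the constants are chosen, the geometric curve eventually crosses the arithmetic one, which is the whole of his argument.

          # Arithmetic food growth vs. geometric population growth.
          # The numbers are invented; the crossover is inevitable for
          # any positive choices of increment and growth factor.
          food, population = 100.0, 10.0
          food_increment, growth_factor = 10.0, 1.3

          for year in range(1, 101):
              food += food_increment        # grows by a constant amount
              population *= growth_factor   # grows by a constant factor
              if population > food:
                  print("population overtakes food supply in year", year)
                  break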

      Malthus’ work was (and still is) enormously influential, so much so that to this day the term ‘Malthusian’ refers to the idea that human population and welfare are limited by available land and other resources, and that we are nearing the limits.

      Malthus’ model was not a model in the sense that we use the term today. It is verbal, not mathematical. (Babbage did not begin work on his Difference Engine until 1822.) It relied on the personal observations of Malthus and a few others. But, given the postulates and the data, the analysis and conclusions are faultless.

      One of the beauties of Malthus’ paper is the clarity of thought and exposition that he provides. He starts with the purpose of the model – which is to study whether it is possible to improve Society. He describes his assumptions and methods clearly, and argues forcefully for his conclusion, which is that Society cannot be improved very much.

      Malthus’ predictions proved unfounded. What did he miss? He missed the possibilities of innovation, invention, and huge amounts of new land available for agriculture. What he saw as the arithmetic rate of increase in agricultural productivity has turned out to be geometric, with production rising at a much faster rate than population growth.

      Let’s not fault him for these omissions. Hindsight is about 20/20 after a couple of hundred years. He did a first rate job based on the science of his time.

      Models of the World – The Club of Rome

      174 years later, the Club of Rome revisited the issue. The Club is a global think tank, a non-profit, non-governmental organization (NGO). Its basic premise is precisely the opposite of Malthus’ conclusion. In the words of its Web site, “it brings together scientists, economists, businessmen, international high civil servants, heads of state and former heads of state from all five continents who are convinced that the future of humankind is not determined once and for all and that each human being can contribute to the improvement of our societies.”

      But its most prominent work, the report “The Limits to Growth” (1972), reaches precisely Malthus’ conclusion, although by a different route. The report is based on the outputs of a large scale computer-based model of the world and its economy. It includes representations of natural resources – renewable and non-renewable – human resources and behavior, and it incorporates elaborate feedback mechanisms that describe the interactions among these factors.

      It also includes Malthus’ two assumptions. The first assumption, in slightly more general terms is that all resources – not merely agricultural land – are finite in quantity and even with increased productivity and capital investment, we will still run out of one or another in a rather short time, 100 years or so. Malthus’ second assumption, “That the passion between the sexes is necessary and will remain nearly in its present state” is included intact, although expressed in much less charming terms: population will grow geometrically, unless restrained by natural catastrophe or human planning.

      The Malthusian assumptions inevitably led the Club of Rome to the Malthusian conclusion, restated in “The Limits to Growth”: population growth and economic growth are limited and the task of man is to guide the world to live within these limitations. The main point where the Club of Rome diverges from Malthus is that Malthus thinks that no long range improvement is possible, whereas the Club thinks that the elite can guide us to a constrained and limited future. Market forces are seen as inadequate to the task.

      We are still within the time frame of the Club of Rome’s predictions, so it is difficult to prove that they are wrong. But the trends are clear. In many parts of the developed world, population is decreasing rather than increasing. The green revolution, advanced plant genetics, and improved methods of cultivation have turned the world from a place of chronic food shortages to a place of chronic food oversupply. (There are plenty of food supply problems, but they are almost all in the area of distribution and political constraints, not in the area of production.)

      What about shortages of other resources: copper, iron, coal, petroleum? While these materials are essential to life as we know and want it, the proportion of our wealth and productivity used in the production of physical goods decreases year by year (12% of GDP in the USA last year), as services and intellectual property become more important in our lives. There is no shortage of mental resources, and there never will be until every brain cell of every human being is fully occupied. Even this may not turn out to be a limit. See Sharon Begley’s recent book “Train Your Mind, Change Your Brain”.

      Models of the World – Global Warming

      The latest incarnation of the Malthusian disaster scenario is the specter of global warming. The models are too complex to analyze here. You can find them described on many web sites; I just Googled “global warming” and got 70,400,000 hits.

      The important point is this: the people who predict that global warming will happen claim Science as the basis for their conclusions, and as the justification for the social and political changes they propose ‘to save the earth.’ This claim is dubious at best. Let me count the ways.

      1. It is widely asserted that the consensus of scientific opinion is that global warming is occurring and will continue. This is simply false. The assertion is often based on a report several years ago by the National Science Foundation that collected the significant papers on the topic and published them in one volume to facilitate discussion of the issue. But the introduction/summary of the report (which is all that most people read) commented favorably on the papers which supported the global warming hypothesis, and completely ignored all the papers in that volume that took contrary views.

      And anyway, since when is scientific truth decided by a vote? Check with Galileo.

      2. Many of the advocates of the warming-is-inevitable position are attempting to stifle debate on the issue, contending that it is “settled”. Senators Jay Rockefeller and Olympia Snowe recently sent an open letter to the CEO of Exxon/Mobil demanding that it stop supporting groups that don’t believe in global warming. A couple of weeks later, Exxon/Mobil bowed to the threat of federal punishment and changed its position.

      Putting aside the First Amendment to the Constitution, this is not the way science is conducted. Anyone who has a strong scientific position welcomes debate because history has shown that scientific truth can only be discovered through the filter of controversy.
      3. The whole global warming argument is based on the results of a number of different large-scale computer models of various aspects of climate – atmospheric trends, oceanographic studies, sunspot activity, and others. These in turn are supported by other related data: the fossil record, studies of volcanic activity, and so forth.

      There are two separate but related problems here. One has to do with the reliability of large-scale computer models in general. These models are hard to build and even harder to validate. In the end, the only reliable validation is comparing the results of the models with what happens in the real world. This validation has not been achieved with the global warming models.

      The second problem is related to the data required by the models. All of the data about the future is derived essentially from extrapolations of prior trends, modified according to the preferences of the modeler. Any error in extrapolation for year 1 is magnified in year 2 and more in year 3. By year 50 or 100, most of the data has lost all relation to reality. Meteorologists have been developing computer models of the weather for three or four decades, and they have reached the point where their predictions are pretty good for three or four days in advance. I am not denigrating their work; it constitutes a major scientific advance. But what does this imply about models that purport to look 50 years into the future?
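      The compounding effect is easy to put numbers on. As a toy example (the 5% figure is invented, not taken from any climate model), suppose each year's extrapolation is off by just a few percent:

          # How a small per-year extrapolation error compounds over time.
          per_year_error = 0.05  # assume each year's projection is off by 5%

          for years in (3, 10, 50, 100):
              cumulative = (1 + per_year_error) ** years - 1
              print("after", years, "years: cumulative error ~ {:.0%}".format(cumulative))

      A few steps out, the error is negligible; fifty or a hundred years out, it swamps whatever signal the model started with.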

      Why Worry?

      My purpose in writing this diatribe is not to defend one or another position on global warming. It is to defend Science from the cynics and opportunists who attempt to hide their social and political agendas behind a shield of Science and who, if left unchallenged, will undermine the scientific enterprise which is at the core of our prosperity and our culture.

      <><><><><>


      Wednesday, December 06, 2006

      How to Prevent IT Project Failures

      The Issue

      For the past year I have been doing research on the causes of IT project failures and on how to prevent them. Here is a summary of what I have learned. More information is available on my web site: www.ghco.biz.

      More than forty years after the advent of third generation computing, the IT profession overall still doesn’t know how to execute IT projects predictably and successfully. 70% of all IT projects fail, a number that has been documented in numerous surveys conducted by many organizations in many countries. This number is startling, and a bit frightening when you recall that IT expenditures constitute more than half of all capital expenditures by American businesses.

      Maybe we are too prone to use sports as metaphors for business. If a baseball player consistently bats .300, he is likely to get a multi-year, multi-million dollar contract. Why shouldn’t a CIO who bats .300 (which is the same as saying that 70% of his projects fail) also get a multi-year, multi-million dollar contract?

      Several possible explanations occur to me. One is that the baseball player’s career is very short and he should be compensated for a lifetime during the few years he plays. On the other hand, is a CIO’s career really any longer? Current statistics say that it is not. Or maybe it is because anyone interested can see for himself how good a job the batter is doing, while it is almost impossible for a spectator to see how good a job the CIO is doing. Or maybe because the ball player is more fun to watch. Resolving this compensation anomaly would be interesting, but we have a better chance of increasing the CIO’s batting average, so let’s look at how we could do that.

      The Costs

      The costs of these failures are substantial. Here is an analysis of a hypothetical company based on industry averages.

      Revenue: $100 MM
      IT budget @ 5% of revenue: $5 MM
      Application development budget @ 20% of IT budget: $1 MM
      Total budgets of failed projects @ 70%: $700,000
      Budget overruns @ 50%: $350,000
      Average project length: 2 years
      Average schedule over-run of projects @ 30%: 0.6 year
      Average planned ROI of projects: 25%/year
      Foregone profits of late projects @ 0.6 x 0.25 x $700,000: $105,000
      Total annual losses because of budget and schedule overruns: $455,000

      Looked at another way, preventing project failures can increase the productivity of a system development department by 45%.
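      For those who like to check arithmetic, here is the same calculation spelled out step by step, so you can substitute your own company's numbers for the industry averages used above.

          revenue = 100_000_000                # $100 MM
          it_budget = 0.05 * revenue           # 5% of revenue -> $5 MM
          app_dev = 0.20 * it_budget           # 20% of IT budget -> $1 MM
          failed = 0.70 * app_dev              # 70% failure rate -> $700,000

          budget_overruns = 0.50 * failed      # $350,000
          late_years = 0.30 * 2                # 30% overrun on a 2-year project
          foregone_profit = late_years * 0.25 * failed   # $105,000

          total = budget_overruns + foregone_profit      # $455,000
          print("total annual losses: ${:,.0f}".format(total))
          print("share of the app-dev budget: {:.1%}".format(total / app_dev))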

      Root Cause Analysis

      What does it mean to say an IT project is a failure? A project is a failure if the relevant business executives say it is. Here is a reasonably comprehensive list of things that might cause a business executive to make that judgment. Call them indicators of failure.

      1. End product of the project does not meet real business need.
      2. End product of the project does not support organizational strategy.
      3. Project implementation was chaotic and disruptive.
      4. Project was not implemented or was soon abandoned.
      5. Degree of project success or failure is unclear.
      6. Success or failure of future projects is unpredictable.
      7. Project exceeded budget.
      8. Project was finished late or not at all.
      9. Final product lacks originally planned functionality.

      Items 1-6 above have to do with the business effects of the project; items 7-9 are more focused on the execution of the project by IT.

      My root cause analysis of IT project failures identified these seven generic root causes:

      • Incomplete project planning and evaluation.
      • Project plan misses non-technical issues.
      • Roles and responsibilities of users, senior management, and IT are not well understood and agreed to by all parties.
      • Bad or non-existent communications among interested parties.
      • Inadequate project governance.
      • Lack of post audit procedures and project archive.
      • Inconsistent application of good practices.

      In the course of the root cause analysis, several things became clear.

      • Most of the project failures – but certainly not all of them – have non-technical causes.
      • Many of these causes exist outside of the IT function and hence beyond the direct control of IT.
      • The relationships between the indicators of failure and the root causes are many and complex. For example, “end product does not meet real business need” might be caused by bad project planning, or by inadequate project governance. Similarly, each root cause may influence more than one bad result. For example, inadequate project governance may lead to budget overruns, schedule overruns, and/or chaotic or disruptive implementation.
      • Although it may be expedient to attack the root causes one at a time, you should remember these interactions and not be surprised when fixing one root cause does not cure the specific problem you are addressing.

      Preventing Project Failures Before They Occur

      A program to prevent these failures should include these elements.

      • A careful analysis of the “as is” situation. This analysis will almost certainly show that some of the generic root causes have already been taken care of, and it may also identify company specific root causes not included in the generic list.
      • An educational program for all the relevant constituencies, including senior corporate management and senior business unit management as well as the IT community.
      • A business process redesign of the project management function to address the needs identified by comparing the “as is” situation with the results of the root cause analysis.
      • Institutionalizing the new business process by coaching the project team through at least one project.

      This may sound like a complex and onerous process, but it really is not. It will be a little awkward for the first project, but the payoff will be substantial.

      <><><><>


