Chapter 5
The Application Domain

As we begin the discussion of a new domain of architecture and governance, I want to refocus the goal of this book, which is to create a framework for managing the complexity of IT functions by creating an organization structure that reduces those thousands of individual IT functions into a simple conceptual napkin drawing, the functional framework.

At the top level, we first divided the functions into four architectural domains: business, information, application, and technology. The business drives everything: it tells IT what the company will do. If the IT department is serving the business well and has built trust, then the business will trust IT to say how. Within IT, the first priority is to determine the information needed to support the business. Once the information needs are understood, consideration turns to the business logic needed to maintain, transform, and analyze that information. Only then can you begin to consider the technology necessary to support the information and application domains.

At this point we’ve discussed the business domain and the information domain, and are now ready to discuss the application domain.

Most companies today are better served building an infrastructure composed more of purchased third-party solutions than of in-house developed solutions. In the information domain, the information lifecycle functions are largely indifferent to whether the application hosting the data is built in-house or purchased from a third party: both create, maintain, deliver, and destroy data. In the lifecycles of the application domain, however, that build-or-buy distinction makes all the difference.

In the application domain, many differences exist between the IT functions that support the development of new applications and those that support applications in production. So many, in fact, that we divide the functions into two separate lifecycles.

  • The Software Infrastructure Lifecycle functional area includes all the functions related to software ownership: acquisition, implementation and configuration, release management, license compliance, security, and end-of-life. Even in-house developed software may be managed through the software infrastructure lifecycle once it reaches production.
  • The Software Development Lifecycle functional area includes all the functions related to software development: project initiation, tracking, requirements, design, development, change management, testing, and implementation.

There’s sometimes a third lifecycle for software services, which we’ll discuss later.

Notice that the application domain functions aren’t the information systems themselves. There isn’t a billing application function and a separate sales application function. Billing and sales are business functions. In the application domain, both the billing and the sales applications are supported by the same software infrastructure lifecycle functions. The information domain supports the data needed by billing and sales, the application domain supports the software used to support the billing and sales information, and the technology domain supports the technology required to host the billing and sales software. The billing and the sales business functions may be quite different, but the governance needed to manage them is the same at the framework level. If there are any significant IT functional differences, they can be spelled out in individual information system-specific documents within the same framework function. There’s no need to create a separate IT function (column) in the framework for each information system. The software infrastructure lifecycle covers all the IT functions necessary to support production application software on behalf of the business.

Not all software infrastructure functions will apply to every information system. In-house developed software may not require the acquisition, contract management, or licensing functions. Cloud-based software may be managed in such a way that the configuration, release management, and operations functions are handled by a third party, transparently to you.

The purpose here isn’t to give detailed information about the internals of each of these functions, but rather to discuss how these functions integrate at a higher level into an enterprise architecture and governance program. None of these application domain IT functions can be effectively managed in isolation from the larger context. There are many integration points that must be built in order to reduce the overall IT complexity and simplify effective management of IT functions.

For the most part, software management is a mature concept, with decades of well-defined best practices, solid vendor toolsets, experienced professional services, and a large body of practical advice and training available from industry experts. All of that is critically important, but well beyond the scope of this book. In keeping with our goal of taming IT complexity at the enterprise level, I’m going to focus on how all these various IT functions integrate together in the larger context of an enterprise functional framework, with particular emphasis on some of the common cross-functional gaps. I’m going to assume each of these processes is pretty mature in your organization within the scope of any individual information system, and focus on the complexity of managing all of these functions at the enterprise level across all systems in some sort of unified, cohesive manner.

Two kinds of people – structured learners versus researchers

There are two types of people in this world: those who learn better in a structured community setting, and those who learn better when turned loose to explore material on their own path. My wife and I are good examples of both extremes. Both of us decided to pursue additional degrees online long after graduating from college. My wife did much better in a classroom setting where there was a lot of structure and a lot of interaction with other students, all learning from each other. She’s much more of a people-person than I am, and appears to be wired to learn best in a social setting.

I, on the other hand, did much better in classes where they gave me the book and told me when the test would be. When I was in a more structured class, I often felt that my time was being wasted while the class progressed at the pace of the slowest student. More often than I care to admit, I was the slowest student, feeling rushed and pressured to move on without really understanding. I don’t want to learn just enough to pass the test; I want to really understand. I prefer to study when it’s convenient for me, whereas my wife likes the accountability of regular class hours. I don’t want people trying to talk to me when I’m trying to study. Most of all, I like to use the textbook as a guide and do my own research, finding and reading the experts on each individual topic. My kids constantly make fun of how often I respond to their questions with, “Well, I did a little research, and…”

Some roles within your functional framework are well suited to formal training. You aren’t breaking new ground every day. You aren’t the first person to ever face this challenge. There are other roles where you really are doing something that’s never been done before, and it’s up to you alone to figure it out. You need information and there’s no handy textbook to teach you what you need to know.

I recall my very first day on the job working for a major telco where I was hired to lead an effort to convert part of their infrastructure from a mainframe platform to client-server architecture. I showed up early, eager to get to work. As it happens, everybody was busy with a production issue that morning, and no one had time for me. An important new retail store was opening that morning, and the network hardware in the store wasn’t working. While everything had worked well in the test lab, out in the field they couldn’t get the store to come online.

Everyone was rushing around trying to fix the issues at this store opening, which had been heavily advertised. I had no idea what was going on, and no one really had time to bring me up to speed. I finally gathered they were having problems with a special two-channel modem. It was unsettling to listen to the baffled IT staff as they faced this challenge. “This was supposed to work…” “It worked in the lab…” “Have you tried powering it off and on? Try it again.” “This was supposed to work.” “It worked in the lab…”

I should say that these were extremely talented, technical people, very good at what they did. However, the mainframe platform they worked on changed very little over time. Once you learn COBOL, everything you do after that is just more of the same. Ok, that’s not fair, but it’s true that the environment they worked on was much more stable than the world I came from, where everything you knew at dinnertime is obsolete by breakfast.

I don’t pretend to know anything about that particular device, but these people were getting nowhere! Thankfully, this wasn’t the first time I’d been in well over my head, in desperate need of information with no idea where to find it. You learn how to figure things out.

I found the manufacturer of the modem by looking at the box in the lab, and started through my list of contacts making calls. The first few numbers I called either didn’t work anymore, or wanted to charge for consulting, but I eventually found a hardware tech at the manufacturer who said he could help. I got the attention of the people in the room, and put the phone on speaker. Best first day ever! I was a hero, just for being able to find information without access to a structured learning environment.

Architecture involves a lot of cutting-edge concepts and technology. A great architect one year can be a dinosaur the next if they don’t keep current. No one can teach you everything you need to know. An architect is going to have to be someone who can go hunt down the stuff they don’t know, and then share that knowledge around.

Do you have someone you’re considering making an architect? Ask them a question about some new topic they can’t possibly be prepared to answer. Do they get upset and flustered? Angry or defensive? Do they try to bluff their way through, pretending they know more than they do? On the other hand, do they admit what they don’t know, and come back tomorrow, or even this afternoon, with a good handle on the basics? Guess which one is going to work out as an architect?

Software infrastructure lifecycle functions

Outside of third-world countries, it’s hard to imagine a company in this century surviving without any software. I do blacksmithing as a hobby, and have found that even the artist-craftsmen and women specializing in ancient trades as a full-time career use accounting software to run their business, pay taxes, interact with their peers, and market their product.

Most companies own a lot of software, integrated and supported in unimaginable complexity. As with everything else in the IT industry, supporting software infrastructure is a whole lot more complicated than you think it would have to be.

Software has to be acquired, installed, configured, and kept current with updates. You have to worry about license compliance, security, and what to do at the end of its life. Without IT managing all of these software infrastructure support functions in an efficient, coordinated manner, the business would quickly suffer. A well-executed business plan, supported by solidly architected and governed information can be brought to its knees within months by poorly executed software infrastructure support functions.

These software infrastructure lifecycle functions are necessary for all your software, whether developed in-house or purchased from a third party. They are needed even when your software is outsourced in a third-party managed hosting solution, though the roles change.

The key is to bring some kind of organization, reducing the complexity. The result should be easily understandable, and flexible enough to adapt as needed, yet reflect the whole subject area in a way that instills confidence in the framework.

For production software support, the best way to conceptualize the functions is to think of them as supporting the lifecycle of an application. The application is conceived and placed into the environment. You watch over it through the years, keeping it out of legal trouble, keeping it safe from those with bad intentions, patching it up when needed, until the time comes to lay it to rest with dignity, leaving a solid legacy for the next generation. It’s no accident that managing the software infrastructure life cycle sounds a lot like parenting. And, like parenting, most of us are just winging it.

Software acquisition

The software infrastructure lifecycle functional area begins with the functions involved in initially acquiring software. Once the software is paid for, you’re going to have to live with it no matter how poorly it fits into your long-term strategic vision for your software infrastructure. The software acquisition function, therefore, is critical to the task of building that vision.

There are a number of sub-functions to the software acquisition function, including:

  • Business need prioritization
  • Business requirements
  • Architectural review
  • Build versus buy
  • Total cost of ownership
  • Budgeting cycles
  • Proof of concept
  • Request for information/proposal
  • Vendor selection
  • Contract management

Depending on your industry and the size of your company, your software acquisition process may be more or less complex. Most of these software acquisition functions are related to performing the due diligence necessary to make sure that the business’ money is being spent wisely. Really, most of these are common sense. As my father says, “It’s just work.” It’s a great deal of very hard work, but it’s not black magic. Just make sure these processes are integrated, documented, communicated, and monitored for compliance and you’ll be fine.

You’re probably already doing most of these functions at the enterprise level, sending all software purchasing requests through a central process. However, it can be tricky to integrate these software acquisition functions into the framework of an enterprise architecture and governance program. We’ll discuss a few key points in the following section.

Architectural review

One of the more important software acquisition functions when it comes to managing IT complexity is architectural review. The EAG program team should review any proposed software acquisition. The software architects may lead this discussion, but the final decision must be endorsed by all EAG domains.

That architectural review needs to include discussion of each of the following elements.

Alignment with the strategic vision

Hopefully, your strategic vision for the relevant functional area is complete and the tactical roadmap up-to-date. If not, you may need to move quickly and flesh it out now. It’s hard sometimes trying to stay ahead of the business. Things move fast, and you don’t always see change coming.

Examine your strategic vision to make sure that the software aligns with your vision of the future. It doesn’t have to be a full-blown implementation of every aspect of your long-term strategy, but it should take you as far as can be justified given the budget, time, and resources available for this project. Also, the strategic vision doesn’t have to be a utopian ideal. You can’t impose an architectural burden on a project that will kill the business case that was being used to justify the project in the first place. This is still a business, not a science project.

By “strategic vision,” we mean both the strategic vision for the IT infrastructure, such as a service-oriented architecture, and the strategic vision for the business function. If you are investing in new business software, you need to make sure that the software is going to meet the business process needs, deliver the desired functionality, and integrate smoothly with the rest of the business. This may require the EAG business architect to flesh out the strategy for the business function in question.

Compliance with corporate policies

There are likely quite a few corporate policies that will apply to the software once it’s up and running. That’s not the time to find out that the solution you just purchased isn’t compliant. Do your research ahead of time. The EAG team owns those corporate policy documents. Part of your responsibility in an EAG software review is to see if the solution is compliant with the governance policies you manage.

You’re going to look like a real idiot if you approve software spend, then find the software isn’t compliant with the rules that you put in place!

Eliminating redundancy

Does the enterprise already own software that provides the desired functionality? Most companies own a number of products that provide the same basic functionality, purchased by various departments at different times. This creates needless IT complexity, the very thing we are trying to reduce! Even if there’s no annual maintenance, there’s still expense for operations, user training, administrative support, resource usage, and disaster recovery.

Eliminating all redundancy isn’t the goal. Supporting the business efficiently is the goal. When deciding whether redundancy should be eliminated, you must weigh the risk versus reward, not weigh the architectural purity of your infrastructure. There are times when it makes sense to retain two existing applications that do the same thing if they are both already in place, have minimal total cost of ownership, and generate minimal security and data quality risk. It will be much rarer to be justified in purchasing new software functionality that duplicates functionality you already own.

Your application architectural strategic vision should definitely depict a software suite that has no duplications of functionality. Write that down and communicate it as a standard. However, recognize that, like any strategic vision, it will have to be rolled out over time in a series of tactical steps. Like any standard, you are going to have to have a process to grant waivers and exceptions.

Remember that a business request for software acquisition may be that chance you were looking for to hijack a business initiative to implement an aspect of your long-term vision. At the very least, you can make sure your infrastructure doesn’t get even farther from the strategic vision than before.

There will be exceptions to your policy of no duplication of functionality. Just document those exceptions with waivers to the standard, and move on, still holding the standard high for everyone to follow.

Enterprise scalability

No software should be purchased with just one information system in mind; that system isn’t the whole business. Every software solution has to be examined in the larger context of all information systems. Does any other team need this functionality? Have the requirements for all those information systems been included in the evaluation process?

You are the team who is supposed to have the crystal ball that tells what your infrastructure will look like in the future. You’ve already compared the functionality in the software with your vision, but make sure you also consider how the software will scale as the company grows, and as your infrastructure grows with it. Will this software handle the load?

It’s easy to say that the architectural team needs to review all software acquisitions, but good intentions are not always enough. Your company likely buys a lot of software. Your architectural team may quickly become a roadblock to the business getting things done – the last thing you want to happen.

You’re going to have to decide on some objective means of determining which acquisitions are reviewed and which are not. All software that costs over $5000 must be reviewed? All software needed by strategic projects must be reviewed? Some combination? This is a policy decision, not a standard. Phrase it clearly enough so there is no room for ambiguity. Just recognize that you are probably going to have to pick your battles here. Again, remember the end goal is not architectural purity, but effective management of IT complexity in support of the business. As with any policy, this decision needs to be reviewed annually, and may change over time.

Vendor selection

Another area where it’s easy for IT to lose control of IT complexity is in the selection of which vendor’s solution to purchase. You can easily make shortsighted decisions that duplicate functionality, don’t fit the long-term strategy or current infrastructure, and don’t meet the actual business requirements.

Certainly all software purchases that go to architectural review should also go through formal vendor selection, but if you have a simple, flexible process, you can create a policy that every software acquisition goes through formal vendor selection, whether the architecture team is involved or not.

Likely, your purchasing or contract management process will have some input here. Many companies mandate that at least three different vendors be included in the analysis. Choosing one vendor over another is a lot harder than you would think. It’s very hard not to make a subjective decision, regardless of personal integrity. It’s difficult to compare products when the feature/function sets aren’t apples-to-apples. By the time you talk to the last vendor, it’s been weeks since you talked to the first, and the memory of that first product’s selling points is fading. One vendor may have had a sales rep who shared extracurricular interests with you or took you to your favorite restaurant for lunch, and that pleasant experience can be reflected onto the software product without your realizing it. Objectivity doesn’t come without careful preparation. You need a system.

When you were looking through the enterprise to find if you already owned applications with the desired functionality, you probably got a lot of, “We don’t have that, but we sure could use it!” responses. All of those departments are your requirements committee, not just the department that initially requested the software.

You need to gather all the business requirements and figure out how to measure and weigh each possible vendor response in terms of the value to the business. Then add in your IT requirements (information, application, and technology), thinking in terms of both the existing infrastructure and your long-term strategic vision. You may in fact need to bump heads and actually figure out what the long-term strategic vision is, if you haven’t done that before. You should do all of this before you talk to a vendor. As you talk to vendors, you may learn more and modify your initial requirements, but they still need to be captured, scored, and weighed.

This doesn’t have to be rocket science. I’ve used variations of the same spreadsheet for vendor evaluations for several companies in at least three different industries with great success:

  • Collect, organize, and categorize your requirements into groups.
  • For each requirement, assign a weight. I use a scale of 1 to 5. This relative importance should be assigned by the business, not by IT:
    • 0 – No importance at all (don’t even put these in your requirements document)
    • 1 – Feature not required, but would add future value
    • 2 – Feature not required, but would add significant value to initial business case
    • 3 – Feature required – significant development/integration/configuration effort on our part is acceptable
    • 4 – Feature required out of the box – minimal customization effort by the vendor or the addition of third party add-on components is acceptable
    • 5 – Feature required out of the box – must not be a customization or third party add-on
  • Decide and document what the acceptable range of responses is for each requirement. For example, when evaluating enterprise report writers, you might have a requirement that the product support the capability to prevent runaway queries from consuming all of the reporting platform’s resources. The scoring might be:
    • 0 – Functionality not supported
    • 1 – Supports resource caps and execution time caps, but not configurable user-by-user and report-by-report
    • 2 – Supports both resource caps and execution time caps, configurable by both user and report (max score)
  • Create a spreadsheet that auto-scores the responses. Consider a feature that was assigned a weight of 3 and a max possible score of 2. If a particular vendor received a score of 1 out of 2, then they got 50%, multiplied by the assigned weight of 3 yields a weighted score of 1.5 (50% of 3). The weighted score is always ((vendor’s raw score for requirement / max possible score for requirement) * assigned weight for requirement).
  • Total all the weighted scores for each vendor, and you’ll get the cumulative score. This is the best way I have found to ensure a fair, objective vendor evaluation.
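The scoring arithmetic above can be sketched in a few lines of Python. This is a minimal illustration of the decision-matrix method, not any real evaluation: the requirement names, weights, and raw scores are hypothetical.

```python
# Weighted score = (vendor's raw score / max possible score) * assigned weight,
# exactly as described in the spreadsheet method above.

def weighted_score(raw, max_score, weight):
    """Return the weighted score for one requirement."""
    return (raw / max_score) * weight

# Each requirement: (name, business-assigned weight 1-5, max possible raw score)
# These entries are purely illustrative.
requirements = [
    ("Runaway query caps",     3, 2),
    ("Per-user configuration", 4, 2),
    ("Audit logging",          5, 1),
]

# One hypothetical vendor's raw scores, keyed by requirement name
vendor_a = {
    "Runaway query caps": 1,      # half marks: caps exist but aren't configurable
    "Per-user configuration": 2,  # full marks
    "Audit logging": 1,           # full marks
}

# Cumulative score: sum of weighted scores across all requirements
total = sum(
    weighted_score(vendor_a[name], max_score, weight)
    for name, weight, max_score in requirements
)
print(f"Vendor A cumulative score: {total:.1f}")  # (1/2)*3 + (2/2)*4 + (1/1)*5 = 10.5
```

Note how the example matches the worked case in the text: a requirement with weight 3 and a max score of 2, scored 1 by the vendor, contributes 1.5 to the total.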

When you select a candidate pool of potential vendors,36 you can actually send them the list of requirements and scores up front, before they bother coming around for the inevitable dog-and-pony show. It’s only fair to be up front with the vendors regarding your requirements. Ask them to score themselves and provide any supporting documentation or explanation they feel is needed. I usually provide the list in a Word document, rather than providing the actual spreadsheet with weights and calculations. No need to give away all your secrets!

Your first pass might be based on the scoring by the vendor. If they admit up front they don’t have the features you need, then there’s no need for you to waste any further time. For those that claim near perfect scores, tell them that you want to discuss these particular features with them personally as the next step. Get all your stakeholders (representatives from all the business units that gave you requirements, and representatives from all four architectural domains) in the room with the vendor, and go through the list, asking pointed questions. Odds are pretty good you’ll score the vendor lower than they scored themselves.

I recommend using a decision matrix similar to the following. It’s a great way to make sure that your requirements are well documented, that your evaluation is as objective as possible, and that the relative importance of each requirement is factored into the final score.

You can add pretty much anything to this matrix: a score for number of installs of the product, a score based on an interview with a client reference. It’s a pretty flexible method of objectively evaluating vendor solutions.

If two or three vendors all score well, then consider going to a Proof of Concept (POC) showdown, where you ask each vendor to prove their claims for your most critical requirements. Ideally, you should make the vendor POCs be as similar as possible to each other, using conditions as near as possible to your production environment. Typically, the vendor will pay for this engagement. Make sure the POC occurs in your own environment, if possible, not in the vendor’s lab. And, if the vendor is doing the installation and configuration, ask to sit with them while it occurs, so you can measure how long it takes, what kinds of problems were encountered, and how they were overcome.

Contract management

We aren’t going to speak in depth about every single IT function, but contract management is one of the functions critical to taming IT chaos at the enterprise level. Contract management impacts your EAG functional framework in many non-intuitive ways. One key to managing IT complexity at the enterprise level is to run all contract negotiation, execution, performance monitoring, modification, and termination through a single, dedicated office. This includes contracts with customers, vendors, and employees (full time and contractors). There are a great many legal aspects that require specialized management (such as non-disclosure agreements), but a surprising majority of the work involves following established processes and managing working relationships.

Figure 5.1 Sample Vendor Evaluation Spreadsheet (weighted score, Vendor A).37

It’s surprising how many companies don’t have a centralized contract management organization with infrastructure designed to integrate with other IT support functions such as asset inventory, sales, and performance metrics. Whether you are the vendor or the customer in the contractual relationship, there are going to be requirements in the contract that relate to these other IT functions. If you want your contract management activities managed efficiently on behalf of the business, then your contract management software must not only do a good job of managing the contract text, but also integrate with the relevant business metrics you’re collecting within other IT functions.

The stages of contract management include:

  • Request. An ideal enterprise contract management system will have a repository of contracts and contract type templates. There should be an online tool to allow business users to request new contracts as needed. Many of these tools have the ability to request contract templates for any contract type.
  • Research. Existing contracts should be searchable online by any user with the appropriate security authorization and business need. When requesting a vendor contract, it isn’t at all uncommon in a large company to find that another area of the organization already has a contract in place with the same vendor. The business users need to be able to examine and compare the terms and conditions in various similar contracts without digging through filing cabinets full of legal documents. Forrester claims that a centralized contract repository is by far the most common use case for purchasing commercial contract management products.38
  • Generate. The requestor should be able to use an online wizard that makes use of predefined contract templates with up-to-date terms and conditions pre-vetted by Legal. The legal department can then spend most of their time managing the exceptions that require additional negotiation or special terms and conditions.
  • Negotiate. A contract management system is more than just a document repository and templates. It should be able to track changes to the contract documents and templates over time, noting who made the change, when, and why. Many contract management systems have the ability to collect a database of pluggable terms and conditions that can be added to contracts as needed, even if they are not part of the standard template. The standard template for each type of contract, in fact, can be thought of as a custom subset of the globally available terms and conditions.
  • Approve. Contract changes and changes to contract templates typically fall under a workflow management tool to allow customized, multi-stage approvals, with automatic routing and tracking. Contract approval may be a workflow consisting of multiple documents and multiple signatures. For example, before a major new software purchase, you may require various non-disclosures, a formal Request For Information/Proposal (RFI/RFP), and a Proof of Concept, all of which involve paperwork and approvals. Your EAG team architectural review would be part of this approval workflow.
  • Execute. Once the contract is approved, it is placed into the searchable repository, and becomes a legally binding business requirement. In a B2B environment, contractually approved sales, pricing and service levels need to be automatically available to the procurement and invoicing systems.
  • Compliance monitoring. While a contract is active, it must be periodically reviewed to make sure all parties are complying with the terms and conditions. Ideally the contractual metrics, such as user licenses and service levels, are collected and retained automatically, so that compliance monitoring is quick and simple. If non-compliance results in automatic penalties (paid or received), your contract management process should include the necessary monitoring and alerts. This integration piece is the main thing that must be addressed to lift the contract management function from an isolated silo of expertise to an integrated component of an enterprise-wide EAG program. The relevant sales, license, and performance metrics under contract need to be collected across the enterprise into a central repository and tied automatically to contractual requirements. Compliance monitoring is not an afterthought; it should be automated and continuous, with alerts when compliance is at risk. The EAG templates for policy, process, and role governance artifacts need a section requiring that these metrics be included in every project.
  • Amend/review. The original contract had a business justification such as revenue generation, expense reduction, or risk reduction. This justification should be periodically reviewed to ensure that the contract is still meeting the original business need. The analysis should include the growing or shrinking demand for the number of licenses, units, or performance levels under contract. In some cases, the contract may need to be amended over time to best meet the changing needs of the business. This periodic review should be an automated workflow, able to accommodate changes in staffing and ownership over time. The affected staff should be periodically interviewed to see if the nature of the contractual relationship has changed, or needs to change. Many contracts have built-in termination dates, which should automatically trigger a review.
  • Terminate. At some point, every contract will come to the end of its useful life, and will need to be terminated. The termination process should have been spelled out in the contract, and the workflow and approvals processes created. Terminated contracts should be retained in the contract repository for future research.
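The compliance-monitoring function described above lends itself naturally to automation. As a rough sketch (the term names, data model, and thresholds here are my own assumptions for illustration, not any particular product’s API), a periodic job might compare the automatically collected metrics against the contractual terms and report any breaches for alerting:

```python
from dataclasses import dataclass

@dataclass
class ContractTerm:
    """One measurable term from a contract (hypothetical model)."""
    name: str               # e.g. "user_licenses" or "uptime_pct"
    limit: float            # the contractual threshold
    higher_is_breach: bool  # True for caps (licenses), False for floors (SLAs)

def check_compliance(terms, metrics):
    """Compare collected metrics to contractual terms; return a list of breaches."""
    breaches = []
    for term in terms:
        actual = metrics.get(term.name)
        if actual is None:
            # A metric that isn't being collected is itself a compliance gap.
            breaches.append((term.name, "metric not collected"))
        elif term.higher_is_breach and actual > term.limit:
            breaches.append((term.name, f"{actual} exceeds cap {term.limit}"))
        elif not term.higher_is_breach and actual < term.limit:
            breaches.append((term.name, f"{actual} below floor {term.limit}"))
    return breaches

terms = [ContractTerm("user_licenses", 500, True),
         ContractTerm("uptime_pct", 99.9, False)]
metrics = {"user_licenses": 512, "uptime_pct": 99.95}
print(check_compliance(terms, metrics))  # the license cap is breached
```

In a real program, the `metrics` input would come from the central repository described above, and each breach would feed the alerting workflow rather than a print statement.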

This is not a treatise on contract management, and I am not a legal expert. There are many complexities and some major differences in contracts depending on whether you are the vendor or the customer, or whether the contract is for products or services. None of that detail is covered here. Rather, the goal here is to show how your contract management function fits into your enterprise architecture and governance program, primarily in the area of compliance monitoring. It should have an architectural vision that is an integral part of your overall high-level software infrastructure lifecycle management program (functional area), and it should have a tactical roadmap to implement that vision over time. This vision and roadmap should be reviewed at least annually prior to budgeting the next year’s infrastructure spend. There should be documented policies, standards, processes, and roles that are all retained with the rest of the company’s architecture and governance documents.

Any contractual metrics should be part of the requirements and standards for the relevant IT infrastructure components. If the contract guarantees certain service levels, the contractual requirement should generate an automatic requirement that the actual service level be monitored and collected automatically to facilitate periodic compliance review, and alerts generated proactively and automatically when the terms and conditions of the contract are not being met.

These contractual requirements are integral to your IT infrastructure, not just words on paper in a file cabinet in the legal offices. Managing IT complexity on behalf of the business requires that the contract management process be as central to IT development and operations as any other business requirement.

The cost of failure to provide an integral contract management process includes business delays due to contract creation inefficiencies, legal penalties for non-compliance, damaged customer relationships, bad publicity, and loss of revenue. With an integrated contract management process, your sales cycles will be faster, your lead conversion rates higher, your highly paid lawyers will be focusing on complicated exceptions rather than mundane paperwork, processes will be streamlined and automated, contracts will be consolidated and consistent, and contracts will be automatically monitored for compliance and continuing relevance.

Recommended reading:

  • Enterprise Contract Lifecycle Management: Mastering Integration. By Gerard Blokdyk
  • Forrester Wave: Contract Life-Cycle Management. Forrester Research, 2016 Q3

Software implementation and configuration

When you stand up new software, some work must be done to make sure that it is configured correctly and consistently. This is easily confused with Software Configuration Management (SCM). SCM is that part of the software development lifecycle concerned with managing various releases as they are assembled and propagated through the development, test, and production environments. We’ll discuss that later, under the software development lifecycle functional area.

In the context of the Software Infrastructure Lifecycle, software configuration plays an important part in your ability to support IT functionality on behalf of the business. It’s important for disaster recovery, for software upgrades, and when re-implementing software due to a hardware replacement (upgrade or recovery). Most companies I’ve worked at regularly replace hardware such as servers, workstations, and network equipment on a three-to-five-year rotation. Rather than looking forward to a newer, more powerful desktop, most people tend to dread the inevitable three weeks’ loss of productivity after their workstation is replaced, when they have to slowly identify and re-configure all the stuff that’s no longer working. This occurs both when the machine is upgraded and when the hardware fails and needs to be replaced. It’s even worse when it’s a server being upgraded.

There are many approaches to this problem, from archived images that can be laid down quickly, to cloud-based software that can be provisioned pre-configured. The point is that you can’t support the software infrastructure lifecycle IT functions for the business if you don’t provide support for installing and configuring software.

Think about it. For many of us, the main contact we have with corporate IT resources outside our immediate working area is when the powers that be come through and mess with our workstations and servers. How pleasant and efficient has that experience been at your company?

Software release management

Software release management is very similar to implementation and configuration management. The main distinction is that release management is the entry point for “new” configurations. These configurations need to be tested thoroughly to ensure there are no interoperability or system integration issues, and that the new configuration complies with all business requirements for features, functions, and security.

Software development release management is another function altogether, and will be discussed later.

Software licensing

Most server-based software has the ability to track and report licensing. Client-based software can be tracked with network scanning tools and agents installed on each workstation. From an enterprise IT management standpoint, the important thing is that the contractual licensing requirements are known and exposed, the licenses-in-use are collected in an appropriate manner with all relevant information (e.g., software release level, machine ID, user ID), and that alerts are automatically generated when a risk threshold is exceeded. This involves integration with contract management, network security, application configuration, and software discovery agents. All of these different IT functions must work together in order to support a well-documented, unified strategic vision of enterprise software license management. Policies must be set in place to ensure that each function meets the larger business needs. Standards must be defined for interoperability. Processes and roles must clearly define who is responsible for doing what, when.

Most of the licensing requirements begin with software contracts. It is imperative that you have a contract management system in place that collects licensing requirements in a central repository that can be accessed by processes that compare the contractual requirements to the actual usage.

Policies should ensure that no software is installed within the enterprise until all three of these requirements are met:

  • The contractual licensing threshold information is entered into the centralized contract management system.
  • A methodology has been implemented to collect the licenses in use into a central repository.
  • Alerts have been set up to notify the appropriate roles when licensing thresholds are exceeded.

Standards will define how these policies are to be implemented. There may be different standards for server-based and client-based software or different standards for mainframe versus Unix servers. You will have to consider how you are going to manage cloud-based software. All of that is fine so long as all of the permutations have defined standards, and all of those standards comply with the corporate policies.

The processes and roles in your framework need to specify who is responsible for license compliance, what process they use to determine compliance, and how often that process must be performed.

Software security

Security is at the top of everyone’s priorities these days. Regulatory requirements exist in almost every industry to address the many breaches that have occurred. If you have not had a security breach, your time is coming. Alternatively, and more likely, it’s already come and you just haven’t realized it yet. Just this week I was sending an internal email regarding a CDI issue, and our email software auto-complete feature sent it to my wife instead. Because the email contained a Social Security Number, I had to follow our corporate processes to report this as a breach.

Software security is a discipline unto itself, one that would take a lifetime of study to master, and one that’s changing every day as new technology is created to address growing threats. The information, application, and technology IT professionals who are responsible for managing the security function day-to-day are experts in their field and tend to work independently, without consulting other areas. To some degree, that’s inevitable due to the level of technical expertise needed, but in other respects, that can lead to an ivory-tower, IT-built security solution that looks great on paper, but doesn’t actually meet the needs of the business. For holistic security function management at the enterprise level, certain aspects of the security function have to be managed in a larger context, not in a security silo.

In the larger context of an enterprise wide IT function management framework, there are a few important considerations to add:

  • Information. You need to carefully read and understand what it is you are actually securing. This is usually an interpretation of regulatory requirements by your legal department. Security usually begins with information. Typically, you only secure applications because they access and manipulate information, and you only secure technology because it contains or grants access to sensitive information. Security is information-centric.
  • Unified provisioning and de-provisioning. You need a central group within the enterprise that handles provisioning requests for new access across the enterprise. Using one team dedicated to this function across the enterprise ensures:
    • Proper separation of duties
    • Consistent, well-documented governance artifacts
    • Auditing of process flow and approvals
    • Integration with contract management (contractual user count metrics)
    • Regular review of active users, to ensure they still require access
    • Integration with HR functions so that terminations and transfers automatically de-provision access
    • Integration with user authentication products, such as LDAP and Kerberos
  • Support for labels. See page 162. The information domain will define which data elements fall under which labels, and may actually implement the label-based masking based on user authorizations in the database. In cases where this can’t be achieved, the application domain must implement an equivalent solution.
  • Support for categories. See page 167. Classifications define what data can be exposed under what conditions. These are often defined by a regulatory body. Any application that delivers information must comply with the rules for the security classification of the data it is exposing. Requirements might include authentication and authorization, logging both successful and unsuccessful logon attempts, logging who actually accessed which information, etc.
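When the database can’t enforce label-based masking itself, the application-side fallback mentioned above can be as simple as masking any field whose label the user isn’t authorized for. This is only a sketch; the label names and field names are invented for illustration:

```python
def mask_record(record, field_labels, user_labels, mask="***"):
    """Application-side label masking (a sketch). Unlabeled fields are
    treated as public; labeled fields are masked rather than removed,
    so report layouts stay stable for unauthorized users."""
    out = {}
    for field, value in record.items():
        label = field_labels.get(field)  # None means unlabeled/public
        out[field] = value if label is None or label in user_labels else mask
    return out

patient = {"name": "Ada", "ssn": "123-45-6789", "dept": "cardiology"}
labels = {"ssn": "PII"}
print(mask_record(patient, labels, user_labels={"PHI"}))
# ssn is masked because this user lacks the PII label
```

The important architectural point is that the label definitions come from the information domain; the application merely enforces them.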

These days, security compliance is one of the most intrusive, disruptive, resented IT functions in the corporate environment. If you want to manage your overall IT infrastructure efficiently and effectively, you are going to have to find a way to address those factors. I believe that the use of a functional framework can help put security in context, ensuring that the integration points are well defined in terms of the larger business drivers. Remember, security decisions are ultimately business decisions, not IT decisions.

Software operations/support

IT operations and support functions are what keep the IT infrastructure running from moment to moment. While the rest of us sit around talking about IT complexity, the operations and support staff are frantically dealing with it all day long. Clearly, if you want to control the IT chaos, you are going to have to address IT operations, where the rubber of your architecture and governance meets the road of reality.

For the most part, your operations support policies, standards, processes, and roles are going to be specific to your organization. That said, there are many operations support best practices that you should be driving at an enterprise level.

Here again, there are several different approaches to thinking about operational support functions. During my years in the telecom industry, my own thinking was heavily influenced by the TeleManagement Forum (TM Forum) especially their eTOM Business Process Framework,39 which was an operating model for telecom providers. I understand it has since been expanded to be more industry agnostic.

Regardless of what framework you use, your application management and operations will probably include the following functionality:

  • Job scheduling. Running and monitoring both the core business applications and supporting jobs such as backups.
  • Production environment support. Such as running backups and bouncing servers.
  • Production job support. Identifying and resolving production issues. Traditional production support includes an “emergency fix” environment where production-outage fixes can be quickly tested outside the normal development pipeline, usually subject to later review by development after production functionality is restored.
  • Change management. Implementing upgrades and new software. The operations staff often also functions as the change management team responsible for moving from development to test, which can be viewed as a dry run of the production implementation. No code or database structure changes should enter the production environments without going through production support, with full audit trail and signoffs.
  • System Integration Testing. While user acceptance testing is largely under the control of the QA and test staff, system integration testing is a mini-production environment, something that used to be known as a model office. All production job schedules also run in this environment, supported by operations staff.
  • Operational requests. Handling requests for data patches and ad-hoc job execution, usually through a request management system replete with signoffs.

Provisioning and de-provisioning application access is considered by some to be an operational function. Though it’s often performed by the same resources that support operations, I would consider it a separate function under application security, not part of the application support function. This enforces a separation of duties between those who grant access and those who use that access.

The term DevOps was originally introduced by Andrew Shafer and Patrick Debois in 2008. As originally envisioned, DevOps was simply an effort to bring the development team and the operations team closer together. It’s no surprise that Shafer and Debois introduced the term at an Agile conference. The fast cycle turnaround times involved in agile development and test methodologies require correspondingly frequent iterations of operations and support production updates. Likewise, many of the functions in the software development lifecycle (e.g., release management, change management) seem very operational in nature. It was inevitable that someone concluded that these two teams, development and operations, needed to work more closely together and come up with some integrated commonality of processes and tooling.

All that’s well and good, but in recent years I think some DevOps proponents are taking things too far. Some suggest that the development and operations teams not just work together, but actually comprise the same team. “Who better,” they say, “to install and support applications in production than the developers who built them?” I don’t believe this is a good idea at all. There’s a reason why most companies mandate a separation of duties between development and operations (page 48). Developers are incented to turn projects around as quickly as possible, while the operations team is incented to maintain the stability and uptime of the production environment. These are conflicting goals; the good kind of conflict that creates much-needed checks and balances. Furthermore, your development staff is usually a highly skilled, highly compensated set of resources. Why would you want development resources supporting production rather than developing new features and functions? One of the prime objectives of the Scrum Master role in the agile Scrum methodology is to remove roadblocks so that developers can focus on developing. Turning a developer loose in production invites a great temptation to game the system. Developers should never be responsible for final QA testing of the code they build, nor should they be responsible for implementing and supporting it in production.

The EAG program sets the vision and the processes by which IT processes are managed. One important part of that responsibility is to ensure that the proper separation of duties exists. Without this separation of duties, you have the fox guarding the henhouse, and no amount of paper policies is going to protect your IT chickens.

As more and more of your operational systems move to the cloud, the nature of your application operations support functions will evolve. If the vendor is managing the software as well as the hardware, then the vendor will take over much of the traditional responsibility of release management and production support. They may even handle the ad hoc requests. You may only be responsible for monitoring that the service levels comply with contractual obligations. On the other hand, your company may be hosting the software as a service for other companies. In that case, the operations support function becomes your core business, critical to your business model, and you’ll want very robust, well-defined, scalable processes that recognize the importance of this function to your business.

Software operation support isn’t a standalone function. Like all IT functions, it’s integrated with other processes across the enterprise. Operations is the heartbeat of the business where many events are generated that impact information and application security, contractual service levels, data acquisition, information quality, and hardware performance. These functions must be coordinated across the enterprise information systems and across domains. The functional framework helps create the structure which can be used to articulate and coordinate those integration points.

Software end of life

Application decommissioning is the process of identifying outdated, unsupported applications and removing or replacing them.

From a business process perspective, the application’s functionality may still be required by the business, in which case the functionality and information may need to be migrated to a new solution.

From a business data perspective, if the information is still relevant to the business, the application’s data may need to be archived to a repository where it can be accessed without the continued license and support costs of the application whose functionality is no longer used. In the context of an enterprise framework, keep in mind that these applications may contain information that must be retained for legal, operational, or analytical purposes after the application and the hardware it runs on are removed from the infrastructure.

From either perspective, decommissioning an application involves coordination between all three IT architectural domains. It’s unwise to let any single domain make all the decommissioning decisions, any more than you would let one domain make all the decisions when the application was first spun up. Software end-of-life must be coordinated with information and technology end-of-life. It might be argued that coordinating all three simultaneously is actually easier than decommissioning and replacing one domain while retaining the others; for example, completely replacing the software infrastructure while retaining the hardware and porting the information.

Once an application is retired, you don’t want to be forced to continue to support the application infrastructure just to have access to the legacy information. There are many third party products that can help you archive information without continuing to support the application that created it. While each offering is unique, I think you’ll find two basic approaches. Both were discussed from a data standpoint in the information lifecycle end-of-life function, but will be revisited now from an application standpoint.

One approach is to archive the data to a single archive solution that has a model capable of supporting archiving from many different applications into a unified solution similar to your ELDM. While there are products that claim to do this, it’s a huge undertaking to map application data to an enterprise model. The only way I’ve seen this work effectively is when the majority of the data was already being loaded into such a repository, in the form of an Enterprise Data Warehouse (EDW). In this case, though the destination model isn’t the same as the source application, the data concepts themselves are retained. One advantage of this approach is that, if the operational functionality is migrated to a new application that will also be integrated into the data warehouse, then integrated analytical reporting can be performed across both sides of the migration. Another advantage might be that a good bit of the data you need to retain is already being propagated to the warehouse, greatly reducing the effort of implementing the archiving solution. Note that this is not the same as a backup solution, as the data in a warehouse would not be suitable for restoring to the source application.

The other approach is to archive the data into an Application-specific Data Mart (ADM), usually simply a dump of the source data, using the original data structures. This is much less expensive to implement, but, without the application interfaces, the data may be difficult to collect and interpret. In this case you are literally shutting down the application but retaining the underlying application database.

The choice regarding which route would work best in your environment really depends on how many data elements need to be retained, and of those, how many are already being retained in an enterprise model.

The ROI of decommissioning an application is calculated by comparing the cost to maintain the legacy application (in continued hardware, software, and human resources, plus the costs of any compliance or security risks) to the cost of the decommission effort itself (analytical archiving plus operational migration). The analytical archiving and operational migration costs both have a one-time cost and an ongoing support cost. Whether or not application decommissioning makes sense is a business decision that must be made in the context of these relative costs, not based on architectural IT infrastructure purity.
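That cost comparison is simple arithmetic once the figures are estimated. A sketch, with invented numbers purely for illustration:

```python
def decommission_roi(annual_legacy_cost, one_time_cost, annual_archive_cost, years):
    """Net savings (over `years` years) of decommissioning versus keeping
    the legacy application running. All figures are estimates supplied
    by the business; a positive result favors decommissioning."""
    cost_to_keep = annual_legacy_cost * years
    cost_to_retire = one_time_cost + annual_archive_cost * years
    return cost_to_keep - cost_to_retire

# e.g. $200k/yr legacy run cost; $350k one-time migration and archiving;
# $40k/yr ongoing archive support; evaluated over a 5-year horizon.
print(decommission_roi(200_000, 350_000, 40_000, 5))  # → 450000
```

The interesting decisions are in the estimates themselves, of course: the legacy run cost should include compliance and security risk, and the horizon should reflect how long the business actually needs the data.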

If you plan to be in business very long, this decision is going to come up more than once. The architecture domains should consider whether a greater business benefit would be realized by establishing a repeatable process for decommissioning legacy applications. This repeatable process can take the form of a robust commercial product, or it can simply be documented enterprise policies, standards, processes, and roles that apply to future application decommissioning decisions in your functional framework.

Two kinds of people – motivated versus unmotivated

There are two kinds of people in this world: those that will do the job enthusiastically, and those that do it reluctantly.

I once joined a product development team where all the developers wanted to work on the new features and functions, but no one wanted to address the outstanding customer-reported issues related to the older code. Several of the original development team had long since left the company, leaving large sections of complicated code undocumented and little understood. No one wanted to spend their day trying to root out the cause of these issues and fix them. An attitude had developed that working customer support issues was a task for second-string developers who could not be trusted with the new stuff. As you can imagine, the product suffered for this.

A new manager joined the team at the same time I did, and initiated a very simple, effective program that turned this attitude around in a matter of a few weeks. Without saying anything about her plan, she went to a party supply store and bought a box full of plastic alligators (I’m still wondering why a party supply store needs to stock plastic alligators in bulk). At the end of the next team meeting, she asked one of the developers to stand up, and presented him with an alligator, in recognition of his willingness to “wade into the murky waters of legacy code, find that beast, and wrestle it into submission.” She explained that she had picked certain difficult problems that had been sitting on the queue for months to be alligator issues. At this particular company, we all worked in cubicles, and no one was allowed to put anything on the tops of the built-in bookcases where it could be seen from across the floor. This manager got a corporate exception for alligators. Imagine looking out at a sea of cubicles, and seeing nothing but one lone alligator.

The next week, this developer, who had been unlucky enough to get assigned support duty that month, found and resolved another long-outstanding issue, and was presented with another identical alligator. By the following week, developers were asking to work on those most difficult support issues. Everyone knew she was buying our attention with a dime store toy, but these alligators became badges of honor, like silhouettes painted on the side of a fighter pilot’s cockpit to proclaim past victories. It was silly, but it worked. People started wearing alligator themed t-shirts, and buying alligator coffee mugs.

I think this technique wouldn’t have worked everywhere, but it worked in this case because the people she was trying to motivate thrived on recognition of their coding prowess. I’ve thought about this a lot, and think there’s something going on here that lies at the heart of team motivation.

If your job responsibility includes managing a team to accomplish some business goal, odds are pretty good that the team members don’t leap out of bed each morning eager to get to work and help you make your incentives. You can crack a whip and force them to do the work, but to be really effective, you have to find a way to motivate them. The problem is that different people are motivated by different things.

Later at this same company, this manager left, and I threw my hat into the ring to be the replacement. One day I was a peer, the next day I was the boss. That’s a very difficult transition to make. As a developer, I could point to a stack of code printouts at the end of each day to show what I had accomplished. As a manager, these tangible outcomes are harder to come by. My uncle, a top-level executive at Marriott, sat me down and explained that, as a manager, I was still building things, but instead of building programs, I was building a team, and that was going to take a great deal of hard work on my part but was going to result in even more satisfaction. He was absolutely right on both counts.

Thinking back to those alligators, I realized that wouldn’t work for my new team. Not everybody was looking for recognition. I had one older mainframe developer whose greatest fear was that he would lose his job and not be able to find another at his age. I had a business analyst whose main reason for coming to work every day was to get out of the empty house and interact with people socially. I had another developer who only worked the minimum hours to get group health insurance; her husband had a small business and brought in most of their income, but private health insurance would have been very costly. She actually put almost 90% of her salary into her 401(k). And so on.

I realized that to really motivate a team you have to be able to cast a vision of your goal in terms of what really appeals to each individual team member. To those that really just wanted job stability, I had to cast a vision that showed that the new product we were developing was intended to be a competitive differentiator, very important to the company, and a key component of our long-term corporate strategy. To those that wanted social interaction, I emphasized how closely we would all be working together, and the big release party we would have when it was done. For those developers who wanted recognition, I promised that the new product would have a scrolling list in the Help/About box with the names of the team members who met their deliverables on the project.

I had one young mainframe developer right out of school who I couldn’t figure out how to motivate. She seemed to feel that she was in competition with the older developer who had 30-plus years of experience. He had been doing this so long that he could drop hands to keyboard and perfect code flowed from his fingertips. No matter what you gave him, it was complete and tested by the end of the day. He was assigned quite a few more project tasks than the younger developer, and would wind up completing all of his and helping her figure out all the problems in her code. Really, he could have written it faster, but that was OK. We hired her out of school and expected this to be a learning period. However, she didn’t see it that way. She started coming in later and later, and leaving earlier. She called in sick a lot. She didn’t talk much, and, I kid you not, even began dressing in very drab clothing. It was clear she was miserable and something had to happen.

I happened to hear her talking one day about how she did a lot of volunteer work for her church because of her familiarity with the Microsoft Office product suite. She showed us some of the publications she had developed, and they were truly impressive. They looked very creative and professional. It was clear this young person had untapped skills. I consider myself a power user of those tools, but that doesn’t mean I can make the font, color, and other choices that set a publication apart as professional. Hers were truly polished.

As it happened, we also had a task on the plan to develop a customer-facing spreadsheet that our sales teams could use to gather all the information used to calculate the properly sized technology infrastructure for our product at a client location. Seeing her work, I asked her to take that task on, even though it meant reducing some of her mainframe development workload. The result was like night and day. She started coming in earlier, staying later, and being much more outspoken in meetings. Her “health” improved, and I swear, she started dressing in bright, flashy colors. I’ve never seen such a dramatic turnaround. Moreover, the work she produced was brilliant. The sales team loved her, and begged for more. I still had her on programming tasks, but those tasks no longer defined her. This woman was motivated by respect. She needed it like she needed air to breathe. When she received real, honest respect, she flourished and the team benefited.

My uncle was right. Managing a team is hard, hard work. But it’s one of the most personally gratifying things I’ve done. I can no longer remember all the software I’ve developed over the years, but I remember every one of the people I’ve managed. At the end of my career, I’ll be much prouder and happier to remember making a difference in someone’s life than in having written a particularly elegant algorithm to make widgets.

Architecture involves change. In particular, it involves changing the way people do their jobs. Coming up with the architecture and governance documents is long, hard work, but it will all be for naught if you can’t motivate people to make the changes. Standing over people with a stick isn’t viable in the long term. You will literally drive off all your best people, who know they can get a job somewhere else where they aren’t micromanaged every day. After a while, all you’ll have left is the people who either have been beaten down until they don’t care anymore, or people who hate their jobs but aren’t good enough to get hired somewhere else. Is that what you want?

It seems like a contradiction, but if you want a team of people who are self-motivated, you are going to have to supply them with the motivation. You’re going to have to cast this vision of the future in terms that let them see how that vision will benefit them personally. How is this going to make their lives easier than the way things are done today? How is this going to be less work instead of more? How is this going to make the company more viable and their jobs more stable? Yes, it’s very true that we’re all adults and that, if employees cash the paycheck, you should be able to expect them to come to work and do what they’re told. However, you aren’t the only employer in town. People aren’t going to stay in a position where they believe they’ll be miserable when they could easily go work for someone else. Back in August of 2015, Victor Lipman published a famous article in Forbes magazine entitled “People Leave Managers, Not Companies.”40 In that article he attributes $450 billion in lost productivity annually in the US to unmotivated employees.

Motivation isn’t just a buzzword. This is real business impact. If you want to retain the best talent, it’s up to you to cast the strategic vision in attractive terms. It’s part of your job. If you can’t do that, you are not fulfilling the responsibilities of the architecture role.

Software development lifecycle functions

Software development seems at first to be project-centric; something that could be done in isolation without a lot of thought about the larger enterprise. You have your business requirements. As long as you meet them, what difference does it make how you got there, right?

If you’ve worked in the real world very long at all, though, you know this is incredibly naïve. When you’re working on hundreds of projects, with hundreds more already completed and in production, things get very chaotic if you don’t have some shared vision and common processes to coordinate the work and assure compliance, consistency, and smooth integration.

The accepted approach to coordinating all of these development projects is to think of the various software development functions as part of a Software Development Lifecycle (SDLC). I want to step through some of the functions in this lifecycle. I don’t want to discuss how to develop software, but rather how to integrate the software development process into an overall EAG program. This will simplify the integration of all the work efforts into a unified whole that efficiently manages the IT software development functions.

Initiating and tracking projects

Software development begins with a request. Depending on the methodology you use, the request process can take many forms, but in the end, all software development begins with a request. Even at that early stage, things can go horribly wrong very quickly.

The first hurdle is ensuring that all of the projects actually benefit the business and are the best use of the business’s investment. The NIST model insisted that all project requests flow downhill. Corporate executives set the company’s goals for the fiscal year. Then executives responsible for each information system (page 57) design strategic projects describing how their area of responsibility will support those corporate goals. Any project that doesn’t directly support the corporate goals is not in alignment with the corporate objectives.

That’s certainly true for the big strategic projects, but in the real world, the owner of each information system also has the responsibility to maintain and improve the business functionality they’ve put in place over years past. Problems need fixing, workflow needs automating, business growth must be accommodated, and new regulatory requirements need addressing.

Let me describe a typical project initiation pipeline from years past. It will sound dated and bureaucratic; hold on to that criticism for a minute.

All kinds of project requests will be coming in; far more than can possibly be completed. All of these project requests should already have business requirements, stated in business terms. There needs to be some preliminary process of weeding them down and prioritizing them, only allowing a few to proceed further into the project pipeline. There are several terms for this weeding process, but I’ll call it Gate 1, where projects are prioritized based on their value to the business. Strategic projects are the first through the gate. Then follow the departmental objectives that don’t directly support the corporate goals, in the order of the value they provide. At some point, the next holding area is full, and the gate is shut. Projects that can’t show any business value don’t make it through Gate 1. Who’s the gatekeeper? Ideally it would be the EAG program, but realistically, at least until those strategic projects make it through, the Information Systems business owners will want to be very involved. But these business owners are high-placed executives, who don’t need to be bogged down in evaluating all the little projects that follow.

This doesn’t sound very agile, but you can’t really turn a development team loose on a project backlog until those projects have been approved and prioritized. You might think of this as an agile “Sprint 0.” In rare cases, you will have a development team that receives only vetted project requests, something like a development helpdesk, where the requestors are all authorized to initiate projects, and no Gate 1 is necessary. I haven’t found this to be all that common among development teams that support large, complex projects.

Typically, what happens is that the business owner will first decide what their strategic projects are for the next year, and will put those in the budget. Some of those strategic projects will be the ones that support the corporate goals. Others will be the business owner’s pet projects for their area of responsibility. There will hopefully be a pool of budget dollars remaining for keeping the lights on, which is set aside for all the project detail that the business owner doesn’t care to be involved in, so long as the strategic projects aren’t at risk.

Therefore, the business owner already knows which projects are strategic and identifies them as such. I like to think of this as a field full of project sheep, waiting to get through the gate. The business owner goes through and paints his favorite sheep to identify them. The business owner then leaves all the work of sorting out the sheep to the EAG team shepherds, who know those painted sheep have to be at the front of the line. Once they get through the gate, more sheep are let through using some rough prioritization based on business value, until there is no room for any more.
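The sheep sorting described above amounts to a simple priority rule: painted (strategic) projects first, then everything else in descending order of business value, until the holding area is full. A minimal sketch of that rule in Python (the `Project` fields, the scoring, and the capacity parameter are illustrative assumptions, not anything prescribed by the framework):

```python
from dataclasses import dataclass

@dataclass
class Project:
    name: str
    strategic: bool      # "painted" by the business owner
    business_value: int  # rough relative score from the business requirements

def gate1(backlog, capacity):
    """Let strategic projects through first, then the rest by business
    value, until the holding area behind Gate 1 is full."""
    ranked = sorted(backlog,
                    key=lambda p: (not p.strategic, -p.business_value))
    return ranked[:capacity], ranked[capacity:]  # (approved, deferred)
```

In practice the hard part is agreeing on the business-value score, not the sort; the point of the sketch is only that the gatekeeping rule itself is mechanical once the business owner has painted the sheep.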

When Gate 1 closes, the EAG team will begin drafting very high-level architectural approach requirements for each project, followed by a level-of-effort estimate. This level of effort is like a t-shirt size: under 80 FTE hours might be small, under 200 FTE hours might be medium, and under 1,000 large. This whole process shouldn’t take more than an hour per project. This isn’t a detailed estimate; it’s just a level of effort. Based on the t-shirt size, the sheep might be reprioritized, and a few sent back to the outer pasture. This is Gate 2. Some projects also have hardware and software purchase costs. Usually, the EAG knows about these strategic projects ahead of time, and has good estimates for those costs. You may or may not know which vendor you will use, but you know the cost range they are going to fit in.
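The t-shirt sizing can be captured in a few lines. This is only a sketch using the example thresholds above; your organization’s cut-offs will differ, and the “extra-large” bucket is my own assumption:

```python
def t_shirt_size(fte_hours):
    """Map a rough level-of-effort estimate (in FTE hours) to a t-shirt
    size. Thresholds are the chapter's examples, not fixed rules."""
    if fte_hours < 80:
        return "small"
    if fte_hours < 200:
        return "medium"
    if fte_hours < 1000:
        return "large"
    return "extra-large"  # assumption: anything larger likely needs splitting
```

The value of the size isn’t precision; it’s that an hour of architect time per project yields a number good enough to reprioritize the backlog at Gate 2.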

Once through Gate 2, each project gets the architectural requirements. Several project management methodologies call these the functional requirements. Architects don’t need to be a bottleneck here. The architectural requirements that apply to all projects should be documented in your standards, and don’t need to be repeated in each architectural requirements document. All that is needed is to capture the relevant information about the desired architectural approach for each project that isn’t in the standards document already. If you find yourself documenting the same thing repeatedly, it probably needs to be a standard. Several EAG domains might contribute domain-specific architectural requirements, all of which should be agreed upon within the whole EAG team before releasing. In many cases, the original business requirements did not contain sufficient detail, and the architects will need to work with the requester to flesh them out.

The architectural approach documents (the functional requirements) are often quite simple, and may be no more than a paragraph or two specifying which infrastructure is to be reused and which is to be developed. If the project is going to play a part in rolling out some new strategic functionality, hopefully that design was put together by the EAG team long beforehand.

For traditional project management, this is the point where the project plan would be created and the lead developer assigned. The architectural requirements are given to a lead developer or project architect, who will expand them into a detailed design, also known as technical requirements, that can be given to a developer to code. Since the EAG architects are matrixed to the EAG team from their day jobs where they are embedded within an information system, the person writing the technical requirements might actually be the same person who created the architectural requirements, wearing a different hat.

For projects with a large t-shirt size, the lead developer may need to create a detailed estimate from the detailed design. This detailed estimate isn’t a t-shirt size; it should be as close as possible to real hours. The detailed estimate may take several days to produce, which is why one is usually only performed for the largest projects. The detailed estimate may then go through a final Gate 3 to get approval for the spend, now that an accurate estimate of the final cost is available. Gate 3 may not be required for small projects.

At this point, the project initiation function would be complete, and the software development function could begin.

This gating process was very typical of the way projects were initiated twenty, perhaps even fifteen years ago. However, when the Agile Manifesto41 was signed back in 2001, many other approaches to project management were already being practiced. Agile development processes quickly became very popular, for many reasons.

I like Agile a lot, and use it myself for many types of projects. But for other projects, I find that other approaches work better.

Agile versus Waterfall – which is “better”?

One of the biggest discussions in the realm of project management over the last decade has been Agile development.

Waterfall development methodology refers to the practice of first gathering all of the requirements. When all of the requirements are formally documented, the project is passed to a design team. When all of the design is finished, the project is passed to a development team. When all of the development is completely finished, the project is passed to a test team. This methodology, when displayed on a Gantt chart, looks like a series of stages, each of which ends with a drop to the next stage, similar to a waterfall dropping from level to level. Waterfall development has been practiced for many decades and has many advantages. There are many project management tools in place designed for this methodology, and it is very easy for management to tell whether the project is on track or not.

Agile development methodology isn’t a single project management approach; it’s any project management approach that complies with the 12 principles of the Agile Manifesto. A team of IT and business professionals must work together with the business, communicating constantly, to deliver iterative content showing incremental business value using any one of these methodologies.

Like many techniques and technologies, agile development processes work really well for some types of problems, and not so well for others. It might be more fair to say that Agile works better for some phases of projects than others.

Agile is fantastic, for example, in taking on small projects that don’t build on one another – a help desk queue for new report creation, for example. Agile also works well for incremental enhancement to a mature product whose architecture is already well established.

A pure agile development approach begins to break down, in my opinion, when dealing with large projects that will be building out complex new architecture. In these cases, it’s often inefficient and problematic to design the architecture incrementally, without looking ahead at all known requirements.

In a pure agile process, you tackle and deliver a small chunk of functionality before looking ahead to the next chunk. I’ve found that this only works well when the architecture for the current chunk is relatively unrelated to the requirements of future chunks. By “relatively,” I do acknowledge that no architecture is ever set in concrete. It’s always subject to change. However, if you’re designing a brand new system that will need to integrate information or business logic from several similar, but different information systems, you’re often better off looking ahead and finding out all the related requirements before going ahead and implementing a design based solely on the first set of requirements.

For example, perhaps your company just acquired another. Each company has more than one commercial application managing its accounts and products, and you’ve been asked to create a new front-end processing system that integrates all of these products into a single interface. You can break the project down in many ways, but even if you start with some limited functionality such as an account inquiry, do you want to do a quick sprint to expose the functionality of one application, then, after moving that to production, start the next, only to find your architecture isn’t going to work for the second source? So you change it, get both applications working and move that to production only to find in the next sprint that the third application is going to cause you to start over yet again?

For projects that require the design of complex new information and business logic, it’s often better to spend some time focused just on design. If you can find relatively isolated areas of functionality, you can slice the business request vertically so that you design and move into production one component before beginning to design the next. This will deliver business value sooner than if you designed all components before turning anything over to development. This is part of why we recommend loosely coupled subject areas within any application (page 127). But designing even this limited functionality may take much longer than an agile sprint would allow.

You can make compromises, stretch your terminology, and pretend that you’re still doing agile development, but you’re just kidding yourself. Agile is a great tool, but it’s not the only tool in your toolbox. Sometimes it’s the right tool, and sometimes it’s not.

As time goes on and your information and business logic becomes more mature and complete, you don’t expect small projects to result in major design modifications. At this point, an agile process may be more feasible.

In the interim, you don’t necessarily have to settle for a pure waterfall approach, either. It’s mainly the architectural analysis and design that is problematic. You can and probably should consider an SDLC that takes a more traditional waterfall approach to high-level business requirements and high-level design for projects like this. Once the core architectural decisions are made, you don’t necessarily have to stick with waterfall for development, testing, and implementation. The actual development may be rolled out in a very agile manner.

The choice of whether to use waterfall or agile development process, or a hybrid of the two, really depends on how many potential subsequent sprints require architectural changes so large that the initial sprints have to be re-done. This is more likely when initially building out large architectural components than when incrementally adding small changes to a mature architecture. As your architecture matures or as the business changes and forces you to build out new architecture, the type of SDLC that is most appropriate for you may change...and later change back...and then change back again, unfortunately.

More than likely, at any one point you will have both types of projects: large architectural changes, and small incremental fixes and enhancements. You should recognize this very real distinction and provide more than one project management track for projects to follow, with some guidelines to help your teams know which is most appropriate in any given case.

This goes back to the discussion of the problems with a rigid development process (page 206). No rigid development approach, including Agile, is going to meet every need. You need to provide a way to take alternate paths through the possible development steps based on the needs of the project.

You can’t leave it to the individual developer to decide which methodology to use. The EAG architects are going to have to give some guidance regarding the appropriate development methodology to use for these large, complex projects. Of course, any project can ask for a waiver to the project development methodology, so you don’t have to consider every remote possibility beforehand, but you’ll probably define two or three different “standard” approaches. Perhaps:

  1. One path for strategic projects that are critical to the business, and need more formal, dedicated project management to provide the necessary management insight. Because these projects are rolling out critical new infrastructure components, the initial design stages of this path would look very waterfall, but the later development and testing may look more agile, provided enough management oversight is included.
  2. A second path for small projects well suited to a pure-agile project backlog, like non-critical bug fixes or new report requests. Every application in production use is going to have a backlog of small bug fixes and enhancements that would be well suited to an agile process.
  3. Possibly a third path to fast-track urgent projects that are relatively small in scope, but are needed to move very quickly into production.

If the EAG team can initially tag each project by type during the gating process, and provide formal governance for each type, this still allows a lot of flexibility. This may even help reduce the EAG team effort if you can say that the pure agile projects don’t really need high-level estimates, Gate 2 decisions, or Gate 3 decisions. Once a project tagged as pure agile passes Gate 1 approval, it can go straight to technical design. The point is that you have to strike a balance between trying to cram every project into the same development approach, and letting the developers do whatever they want. That balance may be different for different types of projects, and may look different in different organizations.
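One way to picture this tagging is a simple lookup from project track to the gates it must pass. The mapping below is a hypothetical sketch: the strategic and pure-agile rows follow the text above, while the fast-track row is purely my assumption for illustration:

```python
# Hypothetical mapping from project track to required governance gates.
GATES_BY_TRACK = {
    "strategic":  ["Gate 1", "Gate 2", "Gate 3"],  # full gating, formal PM
    "pure-agile": ["Gate 1"],                      # then straight to technical design
    "fast-track": ["Gate 1", "Gate 3"],            # assumption: spend approval only
}

def required_gates(track):
    """Return the gates a project of the given track must pass."""
    return GATES_BY_TRACK[track]
```

The specific tracks and gates matter less than the principle: the governance overhead is decided once per project type, not renegotiated per project.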

Agile is a more efficient approach – if it makes sense for the type of project you are managing.

Agile for the wrong reasons: communication

I’ve seen several organizations adopt agile development processes for what I would consider the wrong reasons.

In some cases, organizations get frustrated with a waterfall approach because they end up doing so much work in isolation, then throwing it over the fence to the next link in the development chain. Developers don’t participate in the requirements phase of the project, and refuse to start coding until requirements are fully complete. The business may wait months before seeing the product for the first time. If the business claims the results were not as expected, the developers blame it on poor requirements. Management eventually becomes aware that there’s a communication problem. They have heard that there’s a lot more cross-functional communication when using agile development processes, so: out with the old waterfall methodology, and in with Agile! Now all our problems are solved!

This is a very real, very serious, and, unfortunately, very common problem. Many organizations make the decision to move to agile development purely to fix a communications problem. This is a terrible reason to move to Agile. There may be many good reasons to adopt more agile processes at your organization, but getting people to talk to each other isn’t one of them. There’s no reason in a waterfall development methodology that a developer can’t talk to a business analyst, and vice versa. If your company is strict about logging all time against project tasks, as might be required when your revenue model involves billing customers for hours, then you may have to add tasks to your plan for this communication, but the exact same kind of cross team communication can occur in a waterfall project as in agile methodologies.

If your team isn’t communicating, it is unlikely that moving to an agile development process alone will address the underlying problem.

There is an episode of the Star Trek: The Next Generation TV series titled “Hollow Pursuits.” At the beginning of this episode, an engineer named Barclay is having trouble fitting in with the elite Enterprise crew. His academy scores indicated that he had the aptitude, but on the Enterprise he constantly appeared hesitant, nervous, and intimidated. The chief engineer decided to have the young man transferred to another ship where the pressure to perform would not be so high, and frankly, Barclay appeared relieved. However, Captain Picard refused to approve the transfer. “That sort of thing might be acceptable on another ship, but on this ship, you deal with your own problems, you don’t pass them on to someone else.”

Ditching your current development process and moving to Agile purely to force your people to talk to each other is just passing your problems on to someone else instead of dealing with them. If your people aren’t communicating, odds are good there are other issues you need to deal with.

Two kinds of people – process-oriented versus innovation-oriented

There are two types of people in this world: those who would rather follow a predefined process and those who would rather find their own path. You need both types of people on your team.

Some people are most comfortable when given well-defined rules and a process to follow. They are happiest when they can come to work each day and carefully check off each box in their process. It’s OK if things go wrong, as long as there’s a process for handling that. When new challenges arise for which there is no existing process, they tend to get very stressed, locking up until a new process can be defined to deal with the situation.

Other people are most comfortable when every day is unique and different than the day before, presenting new challenges that must be overcome with innovation.

Some roles really need process-oriented people, including security provisioning, change management, regression testing, and production support. These are areas where there very much needs to be a well-defined process that’s followed every single time.

There are other areas, like strategic planning or developing new product features, where there can’t be a process. Here, thinking “outside the box” is a good and necessary thing.

The problem comes when you have a person in a job role unsuited to their temperament. If you have a creative person in charge of operations, they’ll quickly tire of long, tedious checklists. They’ll believe that more time would be wasted following the process than if you just addressed any issues that pop up as they happen. They will alter their environment to minimize processes and maximize creativity. Despite their best intentions, this cowboy mentality is not a good thing in job roles such as production support.

You’ve seen this before. The person who’s supposed to just follow the process doesn’t, and all hell breaks loose. “You had one job…” “All you had to do…” This cowboy actually seems happy about all the excitement, in high spirits, suggesting all manner of innovative solutions. You almost think they broke things on purpose just to have something more interesting to do than to check things off on their daily list.

Other times, you’ll have a process-oriented person in a position such as software development that requires flexibility and rapid adaptation to new and changing constraints. Rather than just dealing quickly with the changing world, this person will call meeting after meeting seeking someone to create a process that can be followed. They’re trying to change their environment from one that requires innovation to one that has a formal process.

You’ve seen this one too. The person who’s responsible for addressing the issue schedules a meeting for everyone remotely connected to the problem area. The discussion just keeps rehashing the issue, what happened and why, but no progress is being made. That responsible person is beginning to look a little green and increasingly desperate. Finally, one person suggests a solution, and hope dawns once more in the eyes of the person responsible. What’s the first thing they ask? “Can you write that down and send it to me?” Right? The relief is immediate and obvious, which is strange, because the problem still exists. But now there’s a process to follow. The problem was never the problem; not having a process to follow was the problem.

A good architect is, by definition, a creative person. But a great architect is one that understands the value of process, and is willing to write all that creative stuff down in a way that process-oriented people can understand and follow.

Some of the biggest problems I’ve seen in IT have boiled down to the wrong person in a job. Most of the time, the person is a talented individual, trying to do their best, but unfortunately placed in a position that does not suit their temperament at all.

I once worked for a software development company where we calculated a project cost by estimating the time to code and test the project, then multiplying that estimate by a factor of fourteen to account for administrative overhead! In some departments, the overhead was a factor of twenty. A project that took half a day to code and test would cost the company two and a half weeks by the time you went through all the formal processes required by our Project Management Office (PMO).

I asked our PMO about this, and was told it was based on industry best practices. Finding that hard to believe, I asked to see those industry recommendations for myself. I was given several examples, one of which was a statement by Jack Welch, CEO of General Electric (GE) from 1981 to 2001. Mr. Welch, who was named “Manager of the Century” by Fortune Magazine in 1999, had once stated that if a manager at GE made a commitment to deliver a project within a certain timeframe, within a certain budget, at a given level of quality, and failed, but was able to show that they followed all the formal processes, then “no harm, no foul.” It wasn’t held against them and they were given another project. If, however, a manager made those same promises, and came in ahead of schedule, under budget, and with higher than promised quality, but failed to follow the documented process, then that manager was immediately fired with a black mark against their name, never to be re-hired at GE.

Our PMO organization used this and similar anecdotes to justify a process-centric culture. Follow the process and you’re safe from blame no matter what happens, but deviate from that process at your peril.

This anecdote really surprised me. Being an IT person, I’m not a huge follower of business news, but it was hard not to have heard of Jack Welch during those years. Our company had actually done some subcontracting work for GE. But the Jack Welch I had heard of was famous for fostering innovation and reducing bureaucracy. I heard stories of him automatically firing the bottom 10% of his managers every year, regardless of overall corporate performance, while lavishly rewarding the top 20%. He once said, “If you pick the right people and give them the opportunity to spread their wings and put compensation as a carrier behind it you almost don’t have to manage them.”42 That sounds like a man who is encouraging innovation and minimizing overhead, not a man implementing a process-driven culture.

Upon doing some research, I did find the speech that I believe our PMO department was paraphrasing. It was a speech on Six Sigma given at a university,43 and the context was a discussion of factory floors. GE, among other things, manufactured space shuttle parts. They did extensive time-and-motion studies of the factory floor assembly process, and had a very rigorous process that had to be followed every time, widget after widget, hour after hour, day after day to ensure the highest product quality. They didn’t want some factory floor worker deciding that screw 143-B didn’t really need to be tightened more than hand-tight.

That makes some sense, but it applies only in an environment where the work is exactly the same, day after day, year after year. Work like our security provisioning or regression testing. And I think you’ll agree that there should be consequences if a front line employee takes shortcuts with the security provisioning process, even if it worked faster this one time. But I suggest that on the other side of town from the factory floor, GE had research and innovation labs where GE employees were designing new products, facing new challenges every day. Those workers couldn’t be tied to a rigid process, but needed to be allowed the flexibility to quickly adapt to changing conditions and to decide what processes were and were not needed.

In our software development company, there were jobs that more closely resembled the factory floor: our call center, our HR department, our mailroom. However, there were other jobs that more closely resembled the research lab, including software development. In fact, you can think of the factory floor as operations, where the results of design and development are supported in production.

In my mind, treating software development like a factory floor job results in a process flowchart that would (and did) cover an entire wall. Each time a project failed, another step was added to the process to make sure that the same failure never happened again. Every single project had to follow the entire process, whether it made sense or not. I remember ordering a writing tablet (basically a big mouse pad) and having to wait more than two months while our technology team followed all the processes necessary to ensure that no sensitive data was being stored on the hardware. It was a mouse pad!

I’m not suggesting process is unnecessary in the development environment. Process is good, a necessary tool for giving management insight into project progress, but the process needs to provide more business value than it consumes in productivity loss.

This company where I worked had several software development managers who seemed to fail at every project they were given. They just weren’t capable of overseeing new development, recognizing obstacles quickly, and adapting the project to changing conditions. Yet often enough, it was the developers underneath the manager who received all the blame, while the manager was given yet another project to lead over the cliff. In my opinion, this was very nearly fatal to the company, adding an order of magnitude to the overhead of every project – projects which were our bread-and-butter. Our company was actually acquired by another company that wanted our product line. That acquisition resulted in extreme changes to our development methodology, most of which were sorely needed.

I see less extreme examples of this same death-by-flowchart, process-centric culture happening at other companies and consider it primarily a failure of middle management. In every case I’ve been involved in, this incremental growth of the development process flowchart was due to development managers who weren’t capable of managing their projects under changing conditions without an explicit process to follow. When their projects failed after rigidly following the current process, a post-mortem analysis was held, and a new step was added to the process which every project would have to include from that point on.

In my opinion, these managers were simply in the wrong position. The problems were being caused by a lack of creativity, foresight, and innovation, but they were being fixed by creating more complex, more rigid processes. No flowchart is going to give a solution to every possible software development obstacle. Software development managers must be capable of seeing what’s going on and taking whatever steps are necessary to steer around the issues. A software development manager is supposed to be doing more than making sure their staff is entering their time. They are supposed to be managing the development projects, not just the developers.

The manager doesn’t necessarily need to be fired, but may be much better suited to a position where there are more rigorous processes which must be followed every day – those factory floor jobs. These jobs aren’t second-string positions for people who don’t measure up. These critical IT functions demand hardworking, responsible, integrity-driven workers who will protect the company from risk.

I’ve also seen senior management try to “fix” this extreme process overhead by throwing away the entire process and implementing an agile replacement. Agile is an extremely powerful methodology ideally suited to many projects. There are many very good reasons to switch some kinds of work to Agile (see details on page 202), but “manager incompetence” isn’t one of them. For managers who need a process to follow, agile development is going to be terrifying. If your managers are failing because they aren’t flexible enough to manage projects under rapidly changing conditions, then you need to do the right thing and remove them from those positions, hopefully finding them a place better suited to their skills and temperament.

Agile for the wrong reasons: eliminating project overhead

This leads to another poor reason for moving to an Agile development methodology: to strip out administrative overhead. I mentioned earlier (page 206) having worked at a software development company that had such a complex SDLC that we would take our estimates for coding and unit testing and multiply them by a factor of fourteen in order to estimate the full cost of the project. A company whose lifeblood was software development kept its developers in meetings almost thirty hours each week.

The complexity of the project management flowchart had grown steadily over a very long period. A project would fail miserably, usually due to the leaders’ failure to detect and adapt to changing conditions, and, in order to prevent that from ever happening again, the PMO would add a new step to the flowchart: a new test, a new review, or a new signoff. Instead of dealing with managers who were not able to adapt to each day’s challenges, they made the SDLC process more and more complex, making every future manager pay the price of that one manager’s poor performance.

I’ve seen this pattern repeated at many development shops. A manager insists on following the exact same process that he followed on the last project, despite the new project being completely different. Letting people go isn’t an easy task, especially when that person has friends and tenure. However, refusing to recognize that the person isn’t suited for managing R&D-type projects (and instead adding yet another step to an already overwhelming project management process) isn’t the answer either.

This practice is, unfortunately, quite common. In my experience, many companies move to agile development simply because they know that they can’t live with their current administrative overhead, and they’ve heard that agile processes will eliminate it. Again, your organization may indeed benefit from moving to Agile, but not for this reason. If you don’t have the courage to deal with the ineffective managers, retooling them for agile projects isn’t going to help. Agile development calls for far more flexibility and foresight than the waterfall SDLC processes you’re leaving behind. You will eliminate the overhead, but if you don’t take care of the staffing problem, your projects will just fail faster and more efficiently.

I’m not suggesting that these managers need to be fired. Again, different people have different skill sets. A manager who is detail oriented, who has high integrity and a great work ethic, but who is really more comfortable following a well-defined process is ideal for managing your production support and operations, your security, and your change management. Putting a creative cowboy in those roles leads to disaster. However, putting a process-follower in charge of architecture or software development leads to failed projects and a heavy burden of administrative overhead.

Adopt an agile process if the work you do would benefit from that approach, but don’t retool your organization to Agile simply to solve communications issues or reduce process overhead.

Requirements gathering

You may recall from the discussion of the NIST framework that a hierarchy of requirements was recommended, with regulatory requirements at the top, followed by a handful of annual corporate goals set by the senior executives. Departmental VPs will set their own goals in support of the corporate goals, generally in the form of high-level projects, and so forth.

The bottom two layers, architectural review and technical details, are often labeled as the “design” phase of the development process, rather than the “requirements” phase.

The NIST requirements hierarchy looks like a very waterfall approach to requirements gathering, where each level of requirements must be completed before the next can begin. How valid is this approach to requirements gathering today, almost three decades later? It turns out that it is still very valid.

You aren’t going to have any influence on the industry, corporate, or even departmental levels of the hierarchy. What happens at those levels happens regardless of your development methodology and is beyond your control. Most of the different development methodologies still start with business project requests. The main differences in the various approaches are when and to what degree of formality the bottom three layers of the requirements hierarchy are implemented.

Even if you’re implementing a very loose, agile process, there are still business functionality requirements, architectural approach requirements, and technical requirements. They can be passed in formal documents, or they can be shared verbally as the business analyst and the developer sit in a cube iteratively testing changes, but at some level, all of these NIST hierarchy requirements still exist.

For our goal of managing IT complexity at the enterprise level, the most important area to examine is the functional (architectural) requirements. The business requirements and technical requirements are certainly important, but all the development methodologies have strong, though different, solutions in those areas. Regardless of development methodology, your architectural review and guidance is what will determine whether the project will integrate smoothly and efficiently into the corporate infrastructure. You need some kind of architectural review.

Implementing architectural review isn’t easy. There are many hurdles to overcome, including:

  • There aren’t enough architects to go around. Moreover, there are four domains, so you really need at least four of the scarcest resources in your IT infrastructure involved in every review.
  • Even if you can find the resources, in the real world, projects have almost unachievable deadlines, with little time to spare for architectural review.
  • Architectural review usually results in changes that make the project harder, meaning it will be more expensive and take longer.

That combination doesn’t paint a compelling case for architectural review. Yet this is the single most critical failure point in the requirements process at most companies, driving millions of dollars in expenses and lost revenue as short-sighted implementation decisions create inefficiencies and integration issues that compound with each passing year.

How you implement architectural review will depend on your development methodology, but there are a few key features you need to make sure your requirements governance includes, which are discussed below.

Make sure everyone understands the evolving long-term strategy

Most developers aren’t going to intentionally hijack your project and sabotage your strategic vision. The developers aren’t stupid; they’re just uninformed. They spend their days focused on project-level horizons that are weeks or months in length, not a strategic horizon that’s years away. Help them understand the strategic vision, and why that vision is a better place than where you are today – not just for the company, but also for them in particular. The antagonism between architect and developer is usually simply a matter of communication. Rather than hand down seemingly arbitrary mandates from on high, schedule regular meetings where you discuss the vision and solicit feedback. You have to be careful not to distract them too much from their day job, but these men and women need to be convinced you’re on the same team.

They’re also your pool of future architects, and keeping the architectural conversation flowing allows you to keep an eye out for those who really have the mindset necessary for the architecture role.

This natural tendency toward antagonism is one reason why I think it is always better to use matrixed architectural resources that are embedded in real development areas, not full-time architects sitting in an isolated ivory tower somewhere, working on a strategic plan while living in a vacuum unconnected to the reality of the business.

Provide well-documented and communicated standards

These take time to write, but not as long as it would take for an architect to review every single project for compliance with architectural concepts you haven’t bothered to write down. Every time you do review a project and find something wrong with the approach, you should consider adding it to your body of standards so you don’t have to explain it again on the next project.

This may sound as if it contradicts my earlier advice against adding process steps to your SDLC every time you find a new problem. The difference here is that everyone is required to follow every step in the SDLC, whereas standards are followed only where they apply. Standards don’t create unnecessary overhead for projects; they just give guidelines for things you were going to do anyway.

Documenting standards is always a good idea and worth the time, but it must be combined with good communication skills and a plan for continuous review. Most standards reflect a best practice captured at a point in time, under a particular environment and technological capability. As the environment and technology continue to change, you can’t let your standards be what holds you back. Architecture and governance frameworks like TOGAF and ITIL that provide the most detailed standards suffer the most risk of becoming outdated. Standards are a living, breathing part of your governance. Make sure to invest in keeping them alive and relevant.

Pick your battles

As with architectural review of software purchases, you’re going to have to pick your battles. Look at the project pipeline. Which projects are the most critical to the business, both for immediate operations and as a foundation for the future? Which of these projects have the most potential for architectural review to make a difference? Some very strategic projects may actually not have much architectural risk. Some projects that are less critical in the near term are actually laying down very critical infrastructure groundwork for the long term. Remember that you need to pick your battles based on providing the most value to the business, not based on architectural purity. This is one reason why the business architect is so critical on the EAG team. They are the one with the best understanding of the short and long-term business needs.

Prepare critical architectural approaches well before they are actually needed

Architects usually have some idea what key projects will be ramping up in the next six months, and whether any of these projects will hinge on new strategic infrastructure (information, applications, or hardware). This is an ideal time for the architects from all four domains to work together, without the pressure of project deadlines, to discuss the long-term strategy internally, get training and attend trade shows, and talk to peers, consultants, and vendors. Then use this information to thoughtfully prepare a strategic approach document well before the project begins. These documents will be directed at the most critical new infrastructure components, and may be the single best value that an architect can provide the company.

If an architect doesn’t begin working on the approach until a project is launched and technical resources are ready to begin, the project isn’t going to get the best the architect has to offer. Using the functional framework is a great way to identify these gaps in strategy, prioritize them, and assign them out to architects to work on. When you have this kind of time, you can assign a small group of architects the task of leading the approach development, and bringing it back to the architecture team for review and feedback. Ideally, once you decide on a framework and get the bulk of your governance in place, fifty percent of the EAG team meeting time is spent reviewing these fast-approaching strategic business initiatives. A good bit of the remaining time should be spent discussing how to handle the potential infrastructure challenges of emerging breakthrough technology.

If most of your architectural meetings are spent taking minutes of who is working on what back in their real jobs, you have a failure in leadership. Rather than providing valuable strategic guidance, you’re doing little more than distracting the company’s top resources from projects where they are sorely needed.

Perform the architectural review without becoming a bottleneck

One of the most common mistakes is to over-specify. Most architects came up the development path. Before they became an architect, they were top-talent project resources: data modelers, developers, and software and hardware admins. They’ll find it hard to simply flesh out the high-level architectural approach and move on. They’ll want to use their experience and skills to specify what are more properly technical requirements.

Too much detail is almost as bad as no architectural review at all. Trust your technical people to do their job. In truth, the architect may well be able to do a better job creating the detailed technical design than the lead developer on the project, but if you keep doing someone’s job for them, they’re never going to learn, and you’ll be stuck with it forever. Mistakes will be made – costly mistakes. However, they are less costly in the long run than having only one person who knows how to do the job. Remember all the costly mistakes that were made to get you where you are today, and show a little patience with the next generation.

A good architectural approach is seldom more than a couple of pages for most projects. For key projects implementing critical strategic infrastructure, the architectural approach may be ten or twenty pages per architectural domain, but those would have been developed far in advance. The goal is to document enough to keep the architecture on track, headed in the right direction. You aren’t trying to have the architect do all the technical work.

Consider adopting an industry standard application model

ITIL contains descriptions of a large number of business processes. Just as you can purchase commercially developed industry-specific data models for the information domain, some industries have developed detailed business operations models which can be used by the application domain. The TM Forum, for example, has a two-layer model for business processes: the TAM (Telecom Application Map) for high-level application functionality, and the Business Process Framework, also known as eTOM (enhanced Telecom Operations Map), at the more detailed process level. The eTOM model is like the information domain’s data model diagrams, but instead of mapping tables, fields, and relationships, it maps the business logic and business process flow in the application domain.

A model like this can give you a huge leg up on creating a well-tested list of application domain functions for your functional framework. An industry-recognized model such as this will facilitate interactions with vendors, streamline industry regulatory compliance, and ensure that your internal policies are comprehensive and your roles clear. It can also aid in the collection of business process requirements by leveraging time-tested industry process descriptions. Just remember the limitations of these models described in the discussion of the ITIL framework.

Gathering application requirements isn’t new. You already have processes in place that are working just fine to ensure efficiency and success at the project level. To lift your management of project requirements to focus on the success of the larger enterprise, make sure you consider adding the EAG controls mentioned above. Here again, a functional framework is a great tool for keeping all the IT functionality in mind, preventing you from slipping into a project-focused architecture mindset.

Developing

There are several different approaches to software development, some of which are discussed beginning on page 200. Those various methods are each the subjects of many fine books authored by some of the brightest minds in our industry. Rather than try to summarize the pros and cons of each here, I want to discuss two aspects of software development that you need to focus on to serve your company well:

  • How to integrate your existing software development functions with the other IT management functions for a coordinated, efficient, automated, enterprise-wide IT infrastructure.
  • How to think about applications and how you develop them in an increasingly distributed world.

Developing in an integrated functional framework

A functional framework is designed to keep you aware of the larger picture, an integrated series of IT functions working together in well-orchestrated, automated interactions. The framework serves as a reminder to consider all of the different IT functions that must be coordinated.

You don’t have a great deal of control over the integration capabilities of your legacy infrastructure, and when buying off the shelf you can only hope to find software whose integration capabilities meet your needs. However, when you are developing software solutions from scratch, there’s no excuse for not considering how the solution will have to integrate with all the various IT functions in the larger corporate environment.

Most of you are probably pursuing various security certifications for your infrastructure. One of the phrases you’ll hear a lot is, “security by design, and by default.” This means that security isn’t an afterthought, something that’s patched on after you build your solution. Security must be part of the design of every project, right from the beginning. As with any compliance audit, you can’t just assure the auditors that you think about security all your waking hours. You need to show them that your software development process includes security considerations in every step of the lifecycle, including design, development, testing, and implementation. Every step along the way, security is an integral, inescapable part of the process.

Not only do you have to show that it’s part of your process, you also have to show that you are following that process by handing over audit logs that are automatically captured by the security-related steps in your development process.

As an EAG team building the governance artifacts, you need to provide the policies, standards, system utilities, corporate repositories, and process templates to lead the projects into compliance, plus the collection and analysis of the audit records and metrics to ensure compliance and continually improve your processes. Your SDLC process documents need to spell out those security consideration points. Your project plans need to include tasks for those activities. Design those considerations into your development processes.
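As a hedged illustration of what capturing audit evidence “by design” can look like, here is a minimal Python sketch of a pipeline promotion gate that runs the required security checks and writes one audit record per check as a side effect of promotion itself. The check names and record format are assumptions for illustration, not any real tool’s API.

```python
import datetime

def promote_build(build_id, checks, audit_log):
    """Run every required check for a build; append one audit record per check.

    checks: mapping of check name -> callable(build_id) returning True/False
    audit_log: list that accumulates audit records automatically
    """
    passed = True
    for name, check in checks.items():
        ok = bool(check(build_id))
        audit_log.append({
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "build": build_id,
            "check": name,
            "result": "pass" if ok else "fail",
        })
        passed = passed and ok
    # Promotion proceeds only when every required check passed.
    return passed
```

Because the gate is the only path to promotion, the audit trail exists by default; no one has to remember to assemble evidence for the auditors after the fact.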

Security is certainly on everyone’s mind these days, but there are several other cross-functional integration points where you need to provide similar “by design and by default” governance:

  • Software development projects that promised a certain Return on Investment (ROI) should include, by design, the means of collecting and analyzing that ROI. Increased sales, reduced fraud, faster response times – whatever was promised must be measured. This is integrating the development of the infrastructure to support a business function with the tactical roadmap layer of your framework, where the function was justified.
  • Software that must meet certain Service Level Agreements (SLAs) for performance and availability should collect SLA metrics by design. This is integration of the software development function with other functions in your framework such as contract management and customer support.
  • By design, software should access data through loosely coupled DaaS services based on the enterprise logical data model, and should access data from the corporate system of record, rather than batch copying data to a locally-accessed copy. Spell out the activities that you want to occur during software development to ensure your developers are complying with your architectural strategy for the functionality they are developing. This is integrating the development function in your framework with the architectural strategic vision layer of that same function.
  • These are just a few examples. You’ll want to take a look across your functional framework and integrate into your software development processes the hooks to ensure “by design and by default” integration with framework functions such as data quality (use of master reference data, integration with QA workflows, etc.), data modeling (compliance with ELDM, correct system of record, no data duplication, etc.), and others. The functional framework gives you a simple way to ensure you consider the full spectrum of issues required to build a fully integrated, automated IT infrastructure in support of the business.
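To make one of these “by design” hooks concrete, here is a minimal Python sketch of SLA metric collection built into the code path itself via a decorator, so measurement is the default rather than a bolt-on. The metric fields and the in-memory sink are illustrative assumptions; a real system would feed a monitoring pipeline.

```python
import functools
import time

# Illustrative in-memory sink; a real system would feed a monitoring pipeline.
sla_metrics = []

def sla_tracked(operation):
    """Wrap a function so every call records an SLA metric by default."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            outcome = "failure"  # assume the worst until the call succeeds
            try:
                result = func(*args, **kwargs)
                outcome = "success"
                return result
            finally:
                # Runs on success and failure alike, so no call escapes measurement.
                sla_metrics.append({
                    "operation": operation,
                    "elapsed_ms": (time.perf_counter() - start) * 1000.0,
                    "outcome": outcome,
                })
        return wrapper
    return decorator

# Hypothetical business function, instrumented at definition time.
@sla_tracked("customer_lookup")
def customer_lookup(customer_id):
    return {"id": customer_id, "name": "example"}
```

Because the instrumentation is attached where the function is defined, a developer cannot forget to collect the metric at each call site.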

If you think about your standards, many of them take the form, “If you do this, do it this way.” Other standards and all of your policies will take the form, “Always do this.” Those “always” tasks are the ones you want to consider making “by design and by default.”

This has to be balanced against the aforementioned danger of a software development flowchart that contains thousands of steps and creates a drain on the project. Don’t just demand that developers add a step to their process; give them the tools, the APIs, the systems infrastructure, and the templates to make compliance easier than non-compliance. Make compliance automated and default, rather than manual. Make sure that any overhead you add to the IT management process is outweighed by the value it brings.

Developing for a distributed future

Software development is about to be transformed. Our infrastructure has suddenly fragmented and scattered to the winds. This is happening in all three IT domains.

  • Distributed Information (see Consuming, page 132)
  • Distributed Applications (see Trend: Disappearance of the network edge, page 222)
  • Distributed Technology (see Trend: The virtualization of hardware, page 239)

In one sense, these trends started several years ago and have been gaining more ground each year for some time. In another sense, we’re maybe a decade from the day when these trends will represent the majority, rather than the minority, of your infrastructure. The 2016 Gartner Hype Cycle for the Internet of Things44 predicts many of these distributed infrastructure components becoming mature and commercially viable within two to five years.

Yes, all this bears close watching, but how much should it really impact your development decisions today? After all, this is a business, not a science project. This sounds like a really interesting thing to play around with, but is that good stewardship of the business’s time and money today?

The viability tipping point is a decision each company will need to make for themselves. It’ll depend on many factors, all of which are in constant motion. My suggestion is to be cautious, but not blind. Using the functional framework, you should go ahead and start planning your strategy for how your infrastructure will evolve as these factors mature. On paper only, figure out as much as you can foresee. You’ll likely need to educate yourself in order to flesh out that picture. That’s fine – you have a little time. But start now.

As pieces of the architectural strategic plan picture start to fall into place, you can start to develop your tactical roadmap. Again, each step in the roadmap must provide value immediately. Are any of your major business applications in the cloud yet? This is a good time to get your feet wet:

  • Distributed information. Maybe you have a new analytics project that can benefit from one of the powerful hosted machine learning products, like IBM Watson. This is a good way of taking on cloud-based infrastructure without risk to the operational applications.
  • Distributed applications. Maybe you’re upgrading your HR software, and can consider a hosted solution. Nearly every vendor has a cloud-based solution now, with hundreds of active users. When thinking about our IT application infrastructure, we tend to forget about those HR and accounting apps, but they can be a smart way of moving to a mixed local and cloud solution without the risks that accompany early adoption. Since HR software is applicable to all industries, the software vendors have a lot of installs, and are looking for economies of scale. They can afford to invest more research and development in cloud-based development than vendors of industry-specific software.
  • Distributed technology. Maybe you’re overhauling your disaster recovery solution and can consider a hosted DR site. There’s a terrific use case for letting someone else manage the entire DR infrastructure and, as an added bonus, during your POC and annual DR testing, you get an excellent preview of the power and challenges of moving your current infrastructure to the cloud.

Each of these examples provides immediate business value while giving you a limited-risk sandbox to figure out issues like security, performance, integration, and support that accompany a cloud-based architecture.

Once you conquer a couple of projects like these, then you might begin to think about turning your development staff loose on a home-grown, cloud-based business application that provides immediate business value. The lessons you learned in the previous examples will serve you well as you try to build out products of your own.

A friend of mine working at a utility company told me that one of their first real forays into cloud-based development was implemented in lower environments (development and test) only. The nature of their core applications resulted in the need for up to six different test environments, used at various times for various purposes. Most of the time they needed only one or two. During a major system upgrade, they decided that it made sense to purchase cloud-based virtual infrastructure for these environments. They could be spun up when needed, and decommissioned just as quickly. The utility company didn’t pay for environments when they weren’t being used, and the hardware/software infrastructure updates and security patches were the headache of the cloud host, not my friend’s.

It’s still difficult to justify the monthly cost of some cloud-hosted infrastructure solutions for permanent production solutions, but it’s much easier to make the business case for solutions whose resource requirements fluctuate dramatically over time. The solution worked so well that they’re now exploring moving their DR solution to the cloud host as well.

I spoke with a peer at a conference recently who stressed the importance of a loosely coupled architecture that communicates via service calls over a standard IP network. When they moved one application’s environment to a cloud host, the tightly coupled, proprietary communications points were the main failure point. The business logic that had been implemented using a Business Process Management (BPM) engine orchestrating business services over an IP network worked flawlessly and performed so well that the testers of the system often forgot it wasn’t local anymore.

Among the tightly coupled components that caused problems was all the batch file data movement to instantiate local copies of data each night. The system still worked, but the advantage of a local copy of data on the cloud solution was more than offset by the degraded performance of all the large file movement. He found that in a cloud solution, it made far more sense to switch to data services against a single system of record. An infrastructure built for distributed applications worked best with an infrastructure built for distributed information.
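A hedged sketch of that shift from batch copies to on-demand data services: in the Python below, the DaaS call is stubbed with an in-memory dictionary standing in for the system of record, and the function names are hypothetical, but the shape of the change is the point. The application asks for the record it needs, when it needs it, rather than maintaining a nightly local copy.

```python
# Stand-in for the corporate system of record; in practice the lookup
# would be an HTTP service call over a standard IP network.
SYSTEM_OF_RECORD = {"42": {"account": "42", "status": "active"}}

def daas_get_account(account_id):
    """Hypothetical loosely coupled data-service call against the system of record."""
    record = SYSTEM_OF_RECORD.get(account_id)
    if record is None:
        raise KeyError(f"account {account_id} not found")
    return dict(record)  # caller receives a copy, never a shared local table

def account_is_active(account_id):
    # On-demand lookup: no nightly file movement, no stale local copy.
    return daas_get_account(account_id)["status"] == "active"
```

The per-call latency of a service request is usually well repaid by eliminating both the large file movement and the staleness of a locally instantiated copy.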

Your development team can handle the mechanics of developing, or you’ve hired the wrong team. As an architect, your job is to tell them what to develop, not how. If you want to serve your company well architecturally, you need to create a vision for a distributed future.

Change management

There are many fine change management tools available to help you automate the migration of software development projects from environment to environment with the appropriate approvals and back-out controls. These tools are also called software configuration management (SCM) tools. They often include features like source version control, workflow management, and test harnesses.

I won’t attempt to describe all the functionality of these tools. From an enterprise IT management perspective, what’s important is the integration of those tools with your other IT functions (such as security and testing) and your ability to leverage a single solution across multiple product lines.

In addition to all the requirements you would need to consider when building a departmental change management solution, you’ll need to make sure someone is asking the following enterprise-level questions:

  1. Can you use a single corporate change management tool for multiple application teams? Even if they work on different platforms? At the very least, all your mainframe development across all application development teams would use the same change management tool, though perhaps your Java developers might all use another. There are several change management toolsets out there that claim to support all these environments. Using a single toolset will reduce administration, training, and infrastructure costs. You may be able to justify a site license and save dramatically on licensing costs. Many audits these days require a review of your change management processes, e.g., to verify that no changes can be moved to production without approval. Having a single solution can reduce audit costs and reduce the risk of failure to comply. If you do need to implement new steps in your processes, they can be made in one location and affect multiple applications. A change management tool can thus be part of your enterprise regulatory and security compliance strategy.
  2. In an increasingly integrated world, your change management tool not only needs to work on multiple development platforms, but it may actually need to coordinate projects that span multiple platforms. You may have a mainframe application functionality change that includes Java services to expose the functionality to your ESB, and includes accompanying changes to reports in your enterprise business intelligence tool – all of which need to move up together. Can your change management solution handle this?
  3. Integrating application and information changes. A single project often involves both changes to code and changes to the data model. When you migrate from development to test, both types of changes need to migrate at the same time. If a problem is found in test, both need to be backed out and taken back to development. Does your change management solution force you to manually coordinate different propagation paths for different parts of the same project? How much time is wasted on this duplicate work, and how many errors are caused by the need for manual coordination?
  4. Likewise, a project may include changes to internal and external documentation. You need a change management system that can move these artifacts along with the code changes. This includes project documentation, testing documentation, data dictionary and data lineage information, end user installation and user manuals, marketing literature and more. Can your change management tool integrate with your various documentation repositories?
  5. What about test cases? Can your change management tool move the test data from environment to environment along with the code?
  6. Can your change management tooling be integrated with your project management system so that approval of a project in the project tracking system for one environment automates the migration to the next? Change management involves many signoffs by different roles at different points, and may involve a workflow management tool to route tasks and track responses. Ideally, this would be the enterprise-standard workflow management tool used across all IT functions.
  7. Can it be integrated with your defect reporting/tracking system so that each software release can automatically be tied to the defects it addresses, and the defect log updated with the status of the fix as it moves toward production?
  8. Does your change management automate the build of each environment, including both code and database changes? It is good practice to have separation of duties. Developers should not be able to change code in the test environment, because the migration to test is itself a test of the process that will be used to migrate to production. You aren’t just testing a code change; you are testing the propagation of a code change. But separation of duties doesn’t require human intervention. Automation is, in fact, preferred, as it’s more predictable, less prone to error, and assures an auditor that a process was always followed. A DBA, for example, shouldn’t have to be involved to make manual changes to the database during code propagation. These changes should be implemented automatically by software. The DBAs should review the automated change, and perhaps reject the way it was done, forcing the cycle to be repeated, but the change itself should be automated.
  9. Can your change management solution be integrated with your testing solution so that when a project is moved from development to test, an automated test cycle is run?
  10. Is your change management solution part of your disaster recovery planning, or will all development grind to a halt in the event of an emergency? Is your change management repository part of your regular backups?
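Several of these questions (2, 3, and 7 in particular) come down to treating a release as a single manifest that moves as a unit: code, schema changes, documentation, and linked defects all promote together. This is a minimal, hypothetical Python sketch of the idea, not any particular SCM tool’s API:

```python
from dataclasses import dataclass, field

@dataclass
class Release:
    """One unit of promotion: code, database migrations, documents,
    and the defects the release addresses all move together."""
    release_id: str
    code_artifacts: list = field(default_factory=list)
    db_migrations: list = field(default_factory=list)
    documents: list = field(default_factory=list)
    defect_ids: list = field(default_factory=list)

def deploy(artifact: str, env: str) -> None:
    """Stand-in for the real migration tooling (hypothetical hook)."""
    pass

def promote(release: Release, target_env: str, defect_log: dict) -> None:
    """Move every part of the release to the target environment in one
    step, then record the new status on each linked defect (question 7)."""
    for artifact in (release.code_artifacts
                     + release.db_migrations
                     + release.documents):
        deploy(artifact, target_env)
    for defect in release.defect_ids:
        defect_log[defect] = f"fixed-in-{target_env}"
```

The point of the sketch is structural: if the tool’s unit of work is the whole manifest, there is no manual coordination of separate propagation paths to get wrong.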

As you can see, your change management functionality must integrate into the larger enterprise-level plan to manage all the IT functionality. Change management has ties to project requests, bug tracking, information management, testing, disaster recovery, security, and more. Mature enterprise-level change management is difficult to achieve. Many companies find change management and testing the hardest parts of achieving CMMI45 certification.

Here again, the functional framework is a convenient way to see all the integration points at a glance, to help ensure your change management solution is fully integrated with all the other IT-supported functions.

Testing

Software testing is difficult to do well. I’ve seen many cases where the software testing area might as well have been sent home. You can easily work very hard at testing and still be wasting everyone’s time and money. Most of the time this occurs because everyone believes they know how to test, but very few people really do.

Most of the other software development processes at your organization are being done quite well, though perhaps not consistently across the enterprise. Software testing is the one function of the software development lifecycle where most of us are fooling ourselves into believing we know what we’re doing.

If you want to step up to more efficient, more productive testing across your enterprise, I would recommend you set up a corporate software testing center of excellence, and get them trained and certified. Get an independent audit of your testing processes.

ISO 9000 addresses quality management generally, while the ISO/IEC 9126 standard (now superseded by ISO/IEC 25010:2011) focuses on testing functionality (including accuracy and security), reliability, usability, efficiency, maintainability, and portability. Think of these as your dimensions of data quality, but for business logic and processes rather than information.

For testing security functions, you should check out the Open Source Security Testing Methodology Manual (OSSTMM). It is a peer-reviewed methodology for assessing operational security in data networks, telecommunications, wireless, physical security, and human security.

Of course, there are several stages of testing:

  • Unit Testing (UT). Performed by the development team in the development environment, focused on whether the code changes themselves function as designed.
  • User Acceptance Testing (UAT). Usually performed in a test environment, where the resource who initiated the project request will determine if the solution met expectations.
  • System Integration Testing (SIT). Usually a simulation of the production environment, where all processes are run, and regression testing performed to ensure the change didn’t have any unexpected impact on other functionality. Often referred to as the “model office.”
  • You may also have performance testing, penetration testing, and any number of other specialized test environments.
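The gating implied by these stages can be sketched in a few lines. The stage names and gate logic below are illustrative assumptions, not any specific tool’s behavior:

```python
# Ordered stages a release must clear before production (hypothetical names)
STAGES = ["UT", "UAT", "SIT"]

def highest_passed_stage(results: dict) -> str:
    """Return the last stage a release has cleared, given pass/fail
    results per stage; promotion to the next environment is gated on
    everything before it passing."""
    reached = "DEV"
    for stage in STAGES:
        if not results.get(stage, False):
            break
        reached = stage
    return reached
```

A release that passed unit and user acceptance testing but failed system integration testing, for instance, stays gated at UAT and never reaches production.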

This short bullet list is actually quite complex and, contrary to most development organizations’ beliefs, is something you’re probably not doing very well today, even within the scope of a single project.

That’s something I seriously urge you to look into, but it’s not in the scope of this book. Instead, we want to discuss the differences between testing done well at a project-by-project, or even information-system-by-information-system, level and testing that is coordinated across the enterprise. The challenge here is that our IT systems are, at the same time, becoming both more tightly integrated and more distributed.

In your production environments, the different information systems don’t function in isolation the way they did thirty years ago. Information and processing requests constantly flow between them in a complex web of activity. Even when performing testing within a single system, you have to test these interfaces, both outbound and inbound.

In the increasingly integrated world, you have to figure out how to coordinate this testing across your enterprise. For example, any decisions about masking data in lower environments must be made at the enterprise level, so that the masked values in one information system synch with the masked values in the systems that talk to it.
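One common way to keep masked values in sync across systems (an illustrative technique, not something prescribed above) is deterministic, keyed masking: every system applies the same keyed hash to the same source value, so joins across masked test environments still line up. A Python sketch:

```python
import hashlib
import hmac

def mask_value(value: str, shared_key: bytes, length: int = 12) -> str:
    """Deterministically pseudonymize a value. Because the function is
    keyed and deterministic, two systems masking the same source value
    with the same enterprise-wide key produce the same token."""
    digest = hmac.new(shared_key, value.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:length]

KEY = b"enterprise-masking-key"  # hypothetical enterprise-wide secret

billing = mask_value("jane.doe@example.com", KEY)
crm = mask_value("jane.doe@example.com", KEY)
assert billing == crm  # the two systems' masked values stay in sync
```

The key must be managed as an enterprise asset: if each team picks its own, the masked values diverge and cross-system testing breaks, which is exactly why the decision belongs at the enterprise level.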

Likewise, any decisions about making the lower environments a subset of production in order to speed the test performance must be coordinated across all the system testing environments at the corporate level through the EAG team. You need to come up with a long-term strategy and work to achieve it over time. This isn’t going to happen overnight.

To complicate matters further, in the future, many of the services you call may be hosted in the cloud by other companies who are unlikely to share your strategies for masking and for making subsets.

How are you going to coordinate upgrades? When one application changes, the impact can cascade throughout the enterprise. This is another good reason to design an architecture consisting of information systems that are loosely coupled to each other via services. If one application accesses another application’s database directly, then the two are tightly coupled, and very prone to failure if the data model changes. But if a services interface is used, then each application is insulated from internal changes in the other (as long as the API is preserved).
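The insulation a preserved API provides can be shown with a toy example: the internal data model changes between versions, but callers see the same interface and are unaffected. All names here are hypothetical:

```python
class AccountServiceV1:
    """Internal model: balance stored as dollars (float)."""
    def __init__(self):
        self._balances = {"A-100": 25.50}
    def get_balance(self, account_id: str) -> float:
        return self._balances[account_id]

class AccountServiceV2:
    """Internal model changed: balance now stored as integer cents
    (a data model change). The service preserves the same API, so
    every calling application keeps working without modification."""
    def __init__(self):
        self._balances_cents = {"A-100": 2550}
    def get_balance(self, account_id: str) -> float:
        return self._balances_cents[account_id] / 100
```

Had callers read the balance column directly, the V2 schema change would have broken every one of them at once.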

How are you going to coordinate maintenance windows across multiple integrated SIT environments? Actually, probably the same way you do in production. Unfortunately, production systems often have failover and redundancy that give them higher availability than lower environments.

As systems continue to grow in volume and velocity, how are we going to provide realistic lower environments in the first place? Does Google have a pre-production SIT environment? Does Amazon? Does eBay? Do they rebuild their entire infrastructure every night? If they don’t have a static image of the system, how do they know if a change worked or not? How do you regression test in a non-deterministic, distributed system where each test run happens over dynamically fluid information, application, and technology domains?

As software architecture changes, becoming increasingly distributed and outsourced, and the individual infrastructure components continue to grow in scale, we are going to have to re-think the way we test. That thinking is the responsibility of the EAG team, and should be part of your architectural strategy for the testing function on your functional framework.

Trend: Disappearance of the network edge

If you consider the implications of the increasingly distributed architecture described earlier, you may be struck by how this will affect application design in the years to come. In years past, we designed applications with three different levels of scope:

  1. By far, most applications were designed to serve the needs of the internal business. The application sat inside the firewall and was accessed only by some of the company’s employees. The applications were designed to support that volume of workload, and, while there was certainly user authentication, the internal network was considered pretty safe.
  2. A far smaller subset of applications exposed very modest functionality to external customers outside the firewall. The company likely had a much larger number of customers than employees, so these applications had to scale higher, a goal made possible mainly by the limited processing required to support the minimal feature/function exposed through these customer portals. Authentication in this domain was much stricter, and the risk of breaches much higher.
  3. A very small set of generic, no-risk information might be supported on a public interface such as a corporate website. This site may have to support a user base even larger than the customer base, but in most cases, the information is basically static, the risk low, and the processing demand minimal.

In this scenario, there was a very real edge to our network, and most of our applications, both built and bought, operated strictly inside the firewall, in a relatively secure, relatively low-volume zone. The applications were designed for that environment.

Today this network edge that our applications have hidden behind is blurring and disappearing. Increasingly, our applications are moving across the firewall into the cloud, and the demand for more robust, interactive customer solutions is driving our customer base deeper into the territory once reserved for our internal apps. Our applications are increasingly integrated across the web, calling and being called by partner companies outside our firewall.

You can no longer afford to design applications the way we did twenty years ago. Applications must be loosely coupled via services, and they must be designed to support the much heavier demands of an increasingly integrated, yet geographically distributed world. From a scalability and authentication standpoint, can you really afford to develop under the pretense that your application will remain forever tucked away behind a protective firewall, accessed only by a few trusted users?

Two kinds of people – conflict embracers versus conflict avoiders

There are two kinds of people in this world: those that embrace conflict, and those that avoid it.

I don’t know of anyone who particularly enjoys conflict. I’m sure those people exist, but fortunately, I haven’t run across them. There are, however, people who thrive on conflict. They realize how uncomfortable people are with it, and have learned to use that to their own advantage.

During the cold war, the United States found negotiating with the Soviets to be very challenging,46 primarily because the two nations approached the table with such different mindsets. Because of the way our political system and four-year election cycles worked, American negotiators felt a lot of pressure to come home with some kind of result. The Soviets, on the other hand, were perfectly willing to walk away with nothing rather than give up anything important. While the American political system was built on democratic compromises, the Soviets viewed compromise as a weakness. Their attitude seemed to be, “What’s mine is mine, and what’s yours is negotiable.” Due to the differences in political planning horizons, national objectives, and the relative value placed on keeping secrets versus transparency, negotiations were tense, often baffling affairs. One reason many political historians believe the Soviets did so well in these negotiations was that the Soviets understood how to use the stress of the negotiating table to their advantage. An American negotiator would feel more and more stress building throughout the meeting to come home with something to show. The Soviets knew this, and would patiently stonewall; the Americans would put more and more on the table to make their offer more tempting. The Soviets knew that the longer they waited, the sweeter the deal would be.

This is how I see the people who thrive on conflict. They’ll stir things up, causing a great deal of stress in those who just want to avoid conflict, knowing that they’ll be offered increasingly desperate solutions. The more stress they cause and the longer they can keep it up, the better their position will become. All they have to do is generate conflict, and their opponent will be willing to agree to almost anything in order to end it. I’ve worked with several people like this over the years: abrasive and deliberately belligerent, professionally unpleasant people who love to stir things up. This kind of conflict is poisonous. I don’t care how good these people are at their job, if they don’t share your corporate values, they have to go.

On the other hand, avoiding conflict is also bad. Conflict happens even when neither side is intentionally provocative. We all need to be able to remain professional, but supervisors and managers have a special responsibility to act as a point of escalation and resolution. In my experience, the hardest working person on the team is the department administrative assistant, who probably has a lower salary than anyone else on the floor. Most of the time a supervisor or mid-level manager gets paid pretty good money for how hard they actually have to work. Part of the reason they get paid that salary is that there are times they have to step up and make the difficult decisions, or take care of things that are getting ugly, or enforce unpleasant corporate policies. They may not love conflict, but they are literally being paid to deal with it.

Companies that promote staff into supervisor and manager positions based purely on tenure often face the problem of having turned a very good front line employee into a very poor supervisor. Skills as a developer have little to do with skills as a supervisor. If someone really doesn’t want to deal with conflict, that’s fine, but in that case they shouldn’t take the supervisor job. If you cash the paycheck, the company has a right to expect that you will do the job. Saying “no” to a supervisor position because you know you will not be able to handle the conflict resolution is quite brave. Saying “yes” and then avoiding every possible conflict even when the result damages the company is cowardly. If you have a coward in a management or supervisor position, you need to cut them loose immediately and replace them with someone who will do the right thing for the company, even when the process is unpleasant. There is no place for cowards in an enterprise architecture and governance program.

Problems don’t just go away because you avoid them. They get worse. While not enjoyable, conflict brings opportunity. If you keep your head, remain professional, and don’t compromise your decisions solely to make the problem go away, conflict can actually be a means of making progress much more quickly than would otherwise have been the case. Things that might have lain festering in the dark for years will be brought out into the light where they can be dealt with. You’ll probably never enjoy the process, but you will learn to use conflict to move things forward.

Software services functions

In the application domain of the functional framework, we have discussed two functional areas: the software infrastructure lifecycle, and the software development lifecycle. Occasionally, you will find a need for a third application functional area, software services.

As with information services, there’s no need to call out each software service, as long as they fall under the architecture and governance of the software infrastructure lifecycle. Most of your applications can be considered services in the ITIL sense, regardless of whether or not they were developed with a service-oriented architecture. That doesn’t mean you need to call each out as a separate function in the framework. The goal of the functional framework is to simplify the complexity of managing IT functionality on behalf of the business, not to make it more and more complicated.

If the release management, security, and operational support for a software application or service fall under the policies, standards, processes, and roles built into the software infrastructure lifecycle, then it’s counterproductive to call an application out as its own service function, with its own requirements for separate architectural strategy and roadmap, and separate application governance policies and processes.

If the application does have different policies and processes from your other software, you have to ask yourself the question, “Should it?” In most cases, the answer should be, “No.”

The few applications that I’ve seen really justified as separate services have always fallen under the heading of “systems functions.” These are services written to centralize the functionality that exists in many applications.

In the early days of corporate computing, applications were created as monolithic infrastructures, containing all the software services they needed, because, frankly, those services didn’t exist anywhere else. Every application would have its own database management system (DBMS), its own report writer, its own customer management system, and its own print management system, all unique and different. Since all their internal components were dedicated, these business applications could be managed as completely isolated units of functionality, with no components shared with other applications.

Over time, as technology improved, we developed the capability to take some functionality out of the individual solutions and manage it at the enterprise level across all solutions. In the information domain, customer data was removed from individual applications and managed in a cross-enterprise CDI/MDM (Customer Data Integration/Master Data Management) system. In the application domain, enterprise print management solutions replaced application-specific solutions. In the technology domain, Storage Area Networks allowed the creation of virtual dedicated disk storage out of an enterprise pool of disk resources.

It would be an unusual application today that included an internal, custom database management system (DBMS). That’s really the point. In the old days, the DBMS was considered part of the business software. Your account management software would have its own embedded, proprietary code to read and write its data to disk. Today, the DBMS is considered a software support system and managed separately from the primary business application software. These are almost always third party infrastructure products, and in fact are often managed by the technology domain.

Even in this day and age, though, you may still find yourself occasionally creating cross-application system service applications. Internally developed systems-infrastructure applications are a different breed of application than your core business application functionality, and are one of the few good cases to call out as application functions of their own. Enterprise-level system applications often have different policies, standards, processes and roles than business applications.

As enterprise assets, you may wish to call these services out separately, so that the unique architecture and governance of these cross-functional services don’t get buried within the functions that make use of them. Another way of thinking about this is to consider them “systems” functions. Most large companies have many “software” developers, and a small handful of “systems” programmers whose job it is not to write software for the business, but to write software utilities and APIs for use by the other developers. These “systems” applications are not business-facing, but are used internally by IT to help make the business facing application development more consistent and efficient. If the development and support of these systems applications is significantly different from the development and support of normal applications, you may want to have them operate under different policies, standards, processes, and roles. In that case, you would want to consider them a separate function in your functional framework. If the policies and standards aren’t significantly different from the rest, then there’s no need to needlessly complicate the framework by calling these out as separate functions. The idea is to keep the high-level framework as simple as possible.

Contact management

One example of an internally developed, enterprise-level systems application I’ve worked with was a system to integrate contact information. In one sense, this was an extension of MDM, but at the business logic level.

Many applications contain customer contact information, including things like:

Preferred contact information:

  • Channel information – email address, phone number, and physical address
  • Preferred channel – email, text, phone, postal
  • Preferred contact language
  • Time of day (for phone and text)
  • Opt in/out information

Offer management:

  • Actions
    • Campaigns and offers (generated by campaign management)
    • Real-time offers (generated at time of call)
    • Event-driven actions (e.g., welcome letters for new customers)
    • Other actions (e.g., a note asking the customer to confirm their email address)
  • Responses
    • Rejections (may terminate the offer, or may simply suspend it for a time)
    • Acceptance
    • Other responses (e.g., a customer-provided email address)

Contact notes:

  • Call center notes
  • Emails from customer
  • Social media posts
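As a sketch only, the preference portion of such a contact record might be modeled like this; the field names are illustrative assumptions, not the actual system’s schema:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional

class Channel(Enum):
    EMAIL = "email"
    TEXT = "text"
    PHONE = "phone"
    POSTAL = "postal"

@dataclass
class ContactPreferences:
    """One customer's preferred-contact information, centralized so
    every calling application sees the same answer."""
    email: Optional[str] = None
    phone: Optional[str] = None
    postal_address: Optional[str] = None
    preferred_channel: Channel = Channel.EMAIL
    preferred_language: str = "en"
    call_window: Optional[str] = None  # e.g. "18:00-20:00"; phone/text only
    opted_out: set = field(default_factory=set)  # channels the customer refused

    def may_contact(self, channel: Channel) -> bool:
        """Honor opt-outs before any campaign, offer, or action fires."""
        return channel not in self.opted_out
```

Centralizing even this small structure means an opt-out recorded in the call center is respected by campaign management and every other channel.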

Customers get very frustrated when they call in and get transferred around. It’s even worse when the customer is told over and over by each successive transfer that they have been “specially selected for an exciting new limited-time offer.” Or when a customer calls in about an offer they received in the mail, and the call center has no idea what they are talking about.

We had this problem at a company where I worked. Our business architect asked us to design an integrated solution that could (like the DBMS of old) completely replace the application-by-application logic that had grown over the years. Yet the solution had to be integrated. We didn’t want the call center staff to open yet another application window; the contact management functionality needed to be embedded in the existing call center application interface.

The architectural vision for this system function was much different from that for our core business applications. The governance policies, processes, and roles were also much different. Instead of supporting business users, we were supporting business applications. Licensing and security were quite different from those of a customer-facing application. Service level agreements for supporting applications were materially different from those for supporting customers. The development and support roles were not embedded in any one information system, but were instead a pool of matrixed developers working outside of the core information systems. Upgrading a systems function is much more complicated than upgrading an application embedded in a single information system.

This is a case where it might make sense to call out the application as a separate software function rather than simply one more business application subject to the software infrastructure lifecycle functions.
