Chapter 12. Sage Musings

In deep meditation ... can I reach me in a parallel universe?

In the din and bustle of life’s frantic pace, we cannot seem to slow down, take a step back, and reflect on the experiences we gather. Nor can we always make the effort to look for avenues to share our invaluable experiences with the broader community and foster a collaborative ecosystem of knowledge sharing. I am guilty of these vices myself; I have not reflected or shared nearly as much as I would have liked to. I wonder how much of a prisoner of events I have become and whether I can own and drive my own action plans.

This chapter, albeit a short one, is my effort to practice what I would like to preach. That is, my goal is to share some real-world experiences that I find helpful in grounding myself in times when I fail to see the bigger picture during the madness of project execution.

I humbly share some of my revelations with you and my architect colleagues, to whom I remain forever grateful. My learnings and experiences are not elaborate and extensive; rather, they highlight some bits and pieces of over-a-drink musings!

Agility Gotta Be an Amalgamate

We are now at a point where the industry is convinced that there is enough value in applying agile principles to software development for companies to consider the adoption of agility as an enterprise manifesto.

We tend to develop our own views of agility and how it may be incorporated and adopted into IT. No two individuals I have talked to have the same point of view! I have started to wonder whether it is about time for us to get to a simple and crisp viewpoint that a team can agree on and hence simplify putting agile disciplines into action.

Being agile is as much a cultural statement of intent and mindset as it is a matter of having an underlying IT framework to support its instantiation.

The culture of agility, in my experience, can be boiled down to four fundamental statements of intent:

• Clarity over uncertainty

• Course correction over perfection

• Self-direction over command-and-control teams

• Talent density over process density

Clearly defined, well-documented, properly understood, and appropriately communicated project objectives are worthy of being pinned to the walls of IT project team members. Reading them out loud before the start of the day often helps in clearing noise from the head and focusing on working toward the stated objectives. I believe that every single team member has an equal right to stand up and question any deviation from the stated project intent. Setting clear, precise, and objective goals goes a long way. Look at what the Fitbit did for me: I get out of my seat and walk around every now and then just to reach my daily goal of 10,000 steps!

You cannot strive for perfection; expecting that every single project artifact will be on target in its first incarnation is unrealistic by the very nature of this thinking. Instead, an environment that fosters quick learning and prototyping and that does not penalize failure but rather encourages failing fast and correcting course promotes a dynamic, fast-paced, and self-driven project setting, one in which team members thrive and perform better.

A project environment in which team members believe and practice course correction over perfection automatically builds self-driven teams. As a result, team members are not only crystal clear in their understanding of the project objectives but also know how to prototype, learn, course correct if required, and bring innovation and dynamism in attaining their project goals. Such teams do not require much hand holding or commanding and controlling of their work activities; micromanaging them usually proves to be a deterrent.

Organizations these days are more geographically distributed than ever. We see projects whose requirements are understood and documented in one country while their development is shipped to another; some organizations even go to the extent of shipping testing phases to yet another country. Projects with such a clear delineation between team activities often end up building isolated teams, and hence skillsets that become highly specialized. The project’s resource and skill profiles become waterfall driven (that is, much like the sequential and often specialized tasks in a project plan, skillsets too become specialized, with team members entering and exiting project phases)! One aspect of infusing agility into IT projects is to cross-train individuals to pick up adjacent skills. Consider a scenario in which the same team members who gather the requirements also perform system testing. Such cross-training not only helps team members become multifaceted but also builds a knowledge continuum. I have seen teams that focus on talent density, through team colocation and cross-training, be more successful in their project execution than their counterparts.

In my experience, while the culture, mindset, and execution modus operandi are necessary, appropriate measures should also be put in place to convert agile thinking into agile deliverables. One of my colleagues once pointed out that there is often a tendency to treat agility as being different from DevOps; agility is used to deliver business value, not just IT projects. Commensurate tooling and infrastructure support, fostering iterative and incremental software system development, is critical if you want to harness tangible outcomes from agile adoption in IT projects. Management ought to invest in setting up such a framework, with individual project teams not only leveraging its capabilities but also empowered to customize it to better fit their development methodology. Some of the tooling infrastructure aspects may include

Environment Setup (Development, System Test, Production)—These elements may be leveraged by similar projects on a shared platform; for example, Docker containers or virtual machines in the cloud.

Test Automation Engine—This tool supports continuous testing of regularly developed code.

Automated Build Engine—This tool supports and fosters continuous integration of new code with existing system features.

Automated Deployment Engine—This tool supports continuous deployment and testing.
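To make the roles of these engines concrete, here is a minimal sketch of the fail-fast contract that ties them together: a pipeline driver that runs build, test, and deploy stages in order and stops at the first failure so that a broken build or failing test can never reach deployment. The stage names and stubbed stage functions are hypothetical placeholders; a real setup would invoke the actual build, test-automation, and deployment tooling.

```python
# A minimal sketch of an automated build-test-deploy pipeline driver.
# The stages below are stubs standing in for real tooling (assumption:
# each stage function returns True on success, False on failure).

def run_pipeline(stages):
    """Run stages in order; stop at the first failure (fail fast)."""
    results = []
    for name, stage in stages:
        ok = stage()
        results.append((name, ok))
        if not ok:
            break  # a failed build or test must block deployment
    return results

stages = [
    ("build", lambda: True),   # e.g., compile and package the code
    ("test", lambda: True),    # e.g., run the automated test suite
    ("deploy", lambda: True),  # e.g., push to a target environment
]

print(run_pipeline(stages))
# prints [('build', True), ('test', True), ('deploy', True)]
```

The design point is simply that deployment is gated behind integration and testing; swapping a stub for a real command keeps the same contract.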

I submit that the infrastructure framework for agile development is an amalgamation of the mind (culture and mindset) as well as the means (rapid development, testing, and deployment tools) to realize its true benefits.

Traditional Requirements-Gathering Techniques Are Passé

For business analysts, the process of gathering requirements and formalizing them into painfully long documents has grown trite over the past few decades. We generate reams of textual documentation and package it up into use cases and functional specifications.

What has struck me in the past few years is that this traditional approach to gathering requirements is not as effective in the present-day setting, wherein mobility and mobile applications have become the de facto standard for humans to interact with machines and systems. In a few software development experiments that I have personally undertaken, team members were encouraged to assume that technology has no bounds and can do anything and everything. Team members were then asked to engage with the user community; that is, the people who would be the real users of the system. The engagement sought simple, objective outcomes:

• How do the users like to interact with the system?

• What information would they like to have available to them?

• In which ways should the information be rendered and viewed?

The focus changes from textual documentation to information visualization and user interaction through intuitive infographics. A premium is placed on the intuitiveness and innovativeness of the visual rendering and on elements of human psychology as they pertain to how visual processing of information triggers synaptic transmission. Easing neurotransmission in the human brain is a natural way to increase human acceptance—in this case, intuitive adaptation to the IT System, with user acceptance a given before the system is even constructed! (Yes, I know what you’re thinking about nonfunctional requirements, or NFRs; let’s keep that issue aside for a moment.)

Design thinking advocates such a philosophy. Apple, as an enterprise, practices such a philosophy; the world’s acceptance and usage of its products is a standing testimonial!

Next time you come across an IT System that you have to architect, try considering the design-thinking approach.

The MVP Paradigm Is Worth Considering

If you are adopting agile development practices, shouldn’t the product being built ship sooner? After all, you are being lean in your execution, using prioritized epics and user stories, and managing your backlogs efficiently. (Epic, user stories, and backlog are foundational concepts in agile methodology.)

In my experience, it is critical to think of a product or a project as having continuous releases. Traditional product release cycles have been six months to a year. Although the cycles need to be shortened (no one waits that long these days!), the principle is very much applicable. I have seen that a six-week release cycle is a nearly ideal target. However, there are some questions and challenges:

• What would we be releasing?

• What would the first release look like?

• Who would it cater to?

This is the point where the concept of an MVP, or what I call, in this context, the Minimal Valuable Product, comes in. The definition of the MVP should take center stage. An MVP comprises the leanest set of product features that should be packaged and made available. What counts as leanest can be dictated or influenced by different factors, most of which are business rather than IT imperatives and are usually value driven. Here are some of the dictating factors I have come across:

• Establish a presence in the marketplace as the first mover.

• Establish a presence in the marketplace as one who is not lagging behind in the industry.

• Develop a set of features that have an immediate effect on either decreasing operational costs or increasing revenue.

• Enable a particular workforce (certain set of user personas) that has a compelling need for some features—for example, those users who have been taking significant risks in their decision making and traditionally have had to face tough consequences.

Features and capabilities are only as good as the means through which users interact with a system to exploit and leverage them; this is where the design-thinking paradigm assumes utmost significance, along with the standard analysis around data and integration, of course. Be sure to drive it through design thinking!

An objective focus on deciding the MVP feature list and its corresponding user interface drives such behavior and increases the chances of getting something business value driven out the door and into the hands of practitioners and users as early as possible.

Subsequent iterations will obviously follow the MVP!

Try it out if you like, and if the opportunity presents itself, lead by using an MVP paradigm. Or better still, create the opportunity!

Do Not Be a Prisoner of Events

As a software architect, as you make projects successful, you will increasingly attract the attention of your organization and become the cynosure of its eyes. It is human instinct to gravitate toward success. You will be in demand and hence pulled into multiple project engagements and strategy discussions.

As I reflect on my personal experiences, I have repeatedly stumbled upon the same realization regarding my valiant attempts to execute on multiple fronts simultaneously, making every effort to satisfy multiple parties and juggle (possibly disparate) activities all at the same time.

Popular wisdom advocates focusing on one task at hand and executing it with your best effort. If you are bored working on one major activity, it is okay to work on two (at most) activities simultaneously. Some people find it refreshing to have a change. However, in general, the cost of context switching is very high and often proves to be detrimental.

Time will inevitably prove to be your most precious resource: something you will increasingly be chasing. However, you will be chasing it in vain if you try to address too many tasks and satisfy too many groups. You will end up being just a prisoner of events. If you cannot manage your time effectively, people will dexterously take a stranglehold of it.

I have finally learned to say no, to stop spreading myself too thin into multiple areas, and to be objectively focused in a prioritized manner. Being able to manage your time requires you to start by taking possession of your time!

Predictive Analytics Is Not the Only Entry Point into Analytics

Many organizations and consulting firms advocate that the primary entry point into the analytics discipline is through predictive analytics—the power to predict some critical or important event in the future. Most organizations start off with data mining and data science activities to find that elusive nugget of information that presents significant business value. We are often led to believe that developing a powerful predictive model is the ultimate goal, and hence, building such a model is the point where all our analytics endeavors and engagements should begin. While I have no reservations against embarking on the journey with predictive analytics, a few of my observations from the field are worth sharing:

• Building predictive models is not easy. Achieving predictive power good enough to inspire the organization to believe in the model’s prophecies is neither guaranteed nor easy.

• Predictive models commonly must deal with issues around data availability and data quality; that is where the majority of the time is spent, rather than on building the model itself.

• Predictive analytics may not be the quickest path to business value.

My experience leads me to believe that, at least in some industries more than others (for example, in the industrial and manufacturing space), it is critically and equally important to harness the potential of operational, or real-time, analytics. The point is to monitor operational assets in real time, generate key performance metrics, and render them as intuitive, interactive, real-time infographic visualizations. Operational analytics often serves as the key to optimizing an asset’s overall equipment effectiveness (OEE). Also, in many cases, it may turn out to be easier to generate key performance indicators (KPIs) and focus on their interactive real-time user experience than to churn through months’ and years’ worth of data to generate a truly insightful predictive model. Some of the reasons may be, but are not limited to, the following:

• Computing analytical metrics (for example, KPIs) in real time is definitive (that is, formula driven) by its very nature.

• Generating key metrics in real time offers the advantage of taking corrective actions as and when some critical operations are underway; that is, actions can be taken at the point of business impact.
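The formula-driven nature of such metrics can be illustrated with OEE itself, which is conventionally computed as the product of three ratios: availability, performance, and quality. A minimal sketch follows; the shift readings passed in at the end are hypothetical values chosen purely for illustration.

```python
# A minimal sketch of a formula-driven KPI: overall equipment
# effectiveness (OEE), conventionally defined as
#   OEE = availability x performance x quality.

def oee(run_time, planned_time, actual_output, target_output,
        good_units, total_units):
    availability = run_time / planned_time       # fraction of planned time actually running
    performance = actual_output / target_output  # actual vs. target output rate
    quality = good_units / total_units           # fraction of defect-free units
    return availability * performance * quality

# Hypothetical readings for one asset over a single shift.
print(round(oee(run_time=420, planned_time=480,
                actual_output=950, target_output=1000,
                good_units=930, total_units=950), 3))
# prints 0.814
```

Because the computation is purely formulaic, such a KPI can be recomputed on every incoming reading and pushed straight to a dashboard; no historical training data is needed, which is precisely why this entry point into analytics is often quicker to stand up.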

So, while this takes nothing away from predictive analytics, we should also consider, as an example, the value proposition of real-time operational analytics as an equally powerful and value-driven entry point where it is applicable in the industry context.

Leadership Can Be an Acquired Trait

As the adage goes, leaders are born. And they very well are. However, The Almighty is not too generous in encoding that into the genetic blueprint of all worldly entrants! So does that mean that the common man, like you and me, cannot become a leader? I used to think along similar lines until I picked up a Harvard Business Review article on leadership (Goleman 2004).

The summary of the message was that a leader must possess (if born with) or acquire (for someone like me) five essential leadership traits: self-awareness, self-regulation, motivation, empathy, and social skills. The article qualifies self-awareness as being aware of your own emotional traits, strengths, and weaknesses, along with being clear on drives, values, and goals; self-regulation as being in control of your impulsive reactions and redirecting them toward positive actions; motivation as your drive to achieve something worthwhile; empathy as your keenness to be compassionate about others’ feelings when making a decision; and social skills as your ability to manage relationships effectively enough to seamlessly influence them to move in a desired direction.

The ability to practice these leadership traits, even after being thoroughly convinced and believing in them (as I have been), requires exercising a good dose of conscious free will. Since I was not born with natural leadership qualities, I had to consciously practice exercising these traits—some in a planned manner and some in a more ad hoc manner. Conscious exercising morphs into default and instinctive behavior, which becomes second nature—the power of habit!

Leadership traits can indeed be developed through conscious practice. As an architect, you are looked upon as a technical leader. You are expected not only to lead software development but also to engage in C-level discussions. You are the upcoming chief architect, chief technologist, or CTO; the leader in you will be called upon to shine in resplendence sooner rather than later!

Technology-Driven Architecture Is a Bad Idea

IT projects and initiatives have many different points of initiation. Some start purely from a set of business drivers and goals that require an IT System to bring them to fruition; others are incubated within IT as IT-driven initiatives, often with good ideas and intentions, but not always! I have seen many projects initiated in IT with the intention of trying out new things: new technologies in the market or some cool aid that someone may have read up on or thought was cool. Beware of the last type (that is, the Kool-Aid drinkers) and ensure that any such initiative can be directly traced back to a business sponsor with clearly defined business objectives that the initiative is expected to fulfill.

Many technical teams start by declaring the availability of a technology platform on which the system ought to be built. Architecture constraints and considerations are laid out based on the requirements, specifications, and constraints of the technologies that have been declared. In such settings, when business requirements are gathered, impedance to acceptance creeps in from the technical teams: technology constraints need to be violated to meet the requirements, for example, or the technology or product components may not be able to satisfy the required capabilities.

I’m happy to share a couple of my own experiences, illustrated in a short-story style:

• A business intelligence (BI) reporting tool was chosen as part of a technology stack before we understood the type of reporting the system would need. When the requirements were gathered, there emerged a class of visual widgets that needed to visualize operations data in real time, that is, as the data was flowing into the system; more and more of the visualization expectations started falling into this category. The BI reporting tool supported widgets that could render and refresh the user interface only by periodically querying a database; it had no capability to render and refresh widgets from data streaming into the system. Big problem, bad idea! The team had to perform a deep technical analysis to arrive at the conclusion that an alternate visualization technology would be required to support the business needs. Declaring an existing BI reporting tool to be the technology of choice was not a good idea after all.

• An enterprise chose a Hadoop platform with the expectation that it would satisfy all analytic needs and workloads. When the enterprise needed to develop a couple of complex predictive models for its manufacturing assembly line, the data scientists were put to the task of building the models and running them against the data in the Hadoop cluster. Surprisingly enough, the queries required to train the statistical models took an inordinate amount of time to run. It took a lot of time, and multiple layers of frustration, disgust, and finger-pointing contests, to finally figure out that the chosen Hadoop platform could not run complex queries on petabytes of data with adequate performance. Big problem, bad idea! The team had to go back to the drawing board before finally figuring out that a database management system would be required to provision the relevant data sets needed to build and train the predictive models.

When you confront such scenarios, working through them is not only frustratingly painful but also quite detrimental to the image of IT in the eyes of the business. As an architect, you need to be aware of such scenarios and speak up with confidence and conviction to not put the technology before the business needs. This is why it is critically important to start with a functional model of the solution architecture and align the functional needs to the technology capabilities and features. Yes, you can develop the functional and operational model in parallel; however, you should never declare the technology stack before the vetting process is completed to your satisfaction.

You may get lucky a few times, but only so many. The pitfalls are significant enough to warrant keeping an eye out for them!

Open Source Is Cool but to a Point

One of the best things that could have happened to the IT industry is the proliferation, acceptance, and use of open source technologies. Consortiums such as the Apache Foundation and companies such as IBM, among others, innovating and then donating technologies to the open source community have remarkably transformed the nature of software development. Technology has reached the hands of the Millennials (that is, the new generation) far more ubiquitously than we have ever seen before. For example, the ten-year-old child of one of my colleagues competed in an advanced programming contest, built a JavaScript-based web application, and won the first prize!

Open source has fueled tremendous innovation in IT. Many organizations have embraced and adopted complete open source technology platforms. I play around with quite a few open source technologies on my own laptop and find them fascinatingly powerful.

However, there is one word of caution that I must not hesitate to throw out. While open source technology is fantastic for prototyping a new or innovative concept, fostering a prove-out-quickly or fail-fast paradigm, a careful, well-thought-out technical analysis needs to be performed to ensure that applications built on open source technologies can be tested and certified as enterprise strength.

Let me share one of the examples I have come across:

• An innovative simulation modeling application that addresses a significant problem in the industry was built on an open source database engine (its name purposely obscured). While the system was powerful in demonstrating the art of the possible to multiple potential customers, it hit a snag when it was time to implement the system for a very large customer. The sheer weight of the data rendered the core simulation algorithms nearly useless because the open source database engine could not keep up with the query workloads in time for the simulations. The entire data model and data set had to be ported onto an industrial-strength database engine (one with database parallelization techniques, among other features) to make the system functional for the enterprise.

As an architect, you need to carefully analyze the use of open source technologies before formalizing them for enterprise-strength applications. As massively powerful as these open source technologies are, not all of them may be able to run enterprise-strength applications while supporting the expected nonfunctional requirements and metrics.

Write Them Up However Trivial They May Seem

You may find yourself doing some fun programming experiments that are so interesting that you just cannot get your mind off them until you’re done. At other times, you may get stuck solving a problem that needs to be addressed in order for the proposed solution to be declared viable. Such problems can manifest as programming, configuration, or design problems; they are problems nonetheless.

You may invest some intense, long, often-frustrating hours before you finally and inevitably solve the problem. Now that the problem is solved, the dependent tasks that were stalled get to move again. Now what? Of course, you need to move on to address the next problem at hand. However, what if you first ask yourself, “How difficult was it to solve the problem?” More often than not, the answer you hear from inside is that it was easy, something quite simple in the end.

Let me share a personal story with you. One day, some 15 years ago, one of my senior colleagues asked me how I finally solved a specific problem on which the team had been stuck for more than a week. The problem was how to configure a J2EE application server to work with a directory server for security (that is, authentication and authorization of users into an enterprise portal). I explained to my colleague that solving the problem ended up being quite simple, and I laid out the steps I took to finally get it done. He listened to me quite intently and then asked me: “Why don’t you write it up as an article?” I thought he was crazy to ask me to write an article on this topic and publish it in a technical journal. He insisted that I write it up just the way I had explained it to him, and although I did not believe in its value, I went ahead and did it to gain some credibility with him.

The article got published in a technical journal as my first ever technical publication. It is hard for me to believe how, even today (although much less frequently than it used to be), I get emails and inquiries from IT professionals all over the world. They tell me how the article helped them to get ideas to solve similar problems they were faced with.

I had come to a startling realization: no matter how trivial you may think solving a particular problem could have been, there may be many individuals who are stuck with the same (or a similar) problem and who would benefit tremendously from your experiences.

I never stopped writing from the day I had that realization. Knowledge only grows if you share it with others. If you write and publish, not only will you be known and sought after, but also there will be a growing user community who will follow you. And in today’s ubiquitous socially networked world, you don’t even have to write a 10-page article to share your knowledge; just tweet it!

Think about some of the problems that you solved, restructure your solution in your mind or on paper, and then write it up. Publish it!

Baseline Your Architecture on Core Strengths of Technology Products

As a part of developing and defining a system’s architecture, you will, at some phase, have to choose the appropriate technologies: middleware products and platforms, hardware, and networks, among others.

Choosing the right or most appropriate technology can be a challenging if not daunting task. Competing vendors may have similar products, each touting why theirs is better than the rest. Competition is steep, and vendors are often forced to add capabilities to their products just to answer affirmatively, “Yes, we do it too!” One of the challenges for architects and technology decision makers is to assess and evaluate vendor technologies so as to differentiate the features that form the core and foundational elements of a product from the features that are mere add-ons or bolt-ons intended to keep the product on par with competing offerings.

In my experience, it is always safe to choose a vendor that focuses on its core product strengths instead of trying to support a multitude of other features that do not really form the core product. While you are architecting a solution, it is even more important to base that solution on the core product strengths and not try to use each and every feature just because they exist. An architecture that is built on the core strengths of a set of technology products, along with a sound integration architecture facilitating data and information exchange between them, would inevitably be more robust and scalable than one in which all product features are used just because they are available. As an example, if you are evaluating a vendor technology to decide on a real-time stream computing engine, try to focus on its ability, scalability, and versatility to ingest data in volume and variety and from multiple data sources instead of focusing on a feature that states it also does predictive modeling!

Summary

I wish I could summarize a chapter wherein I took the liberty of sharing some of my experiences and reflections. There is no summary to them; they can only get more elaborate.

The only thing I may say is that it is important to take a step back once in a while, reflect on some of the experiences that you may have gathered or some nugget of wisdom you may have stumbled upon. Sharing your hard-earned experiences and wisdom with your colleagues and with the community at large is as philanthropic as it is satisfying.

I can only hope that you subscribe to this thinking and build your own fan following!

References

Goleman, Daniel. (2004, January). “What Makes a Leader?” Harvard Business Review. Retrieved from http://www.zurichna.com/internet/zna/SiteCollectionDocuments/en/media/FINAL%20HBR%20what%20makes%20a%20leader.pdf. This article illustrated the five traits of leadership that I mentioned.
