Category Archives: Software Development

Why do user stories make sense?

A user story is a popular mechanism used by most agile methodologies to communicate user requirements. Actually, this is only partially true. User stories are meant to be placeholders that initiate an early conversation between the product owner and the team about key hypotheses, so that those hypotheses can be quickly validated to refine, confirm, or reject our understanding of what the customer really wants. To that end, user stories are nothing like the requirements of yesterday – they are not even remotely meant to be complete or comprehensive.
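A user story is commonly written in the “As a &lt;role&gt;, I want &lt;goal&gt;, so that &lt;benefit&gt;” template. A minimal sketch of that structure in Python – the class, field names and the example story are purely illustrative, not taken from any particular tool:

```python
from dataclasses import dataclass, field

@dataclass
class UserStory:
    # The three classic parts of the "As a..., I want..., so that..." template
    role: str
    goal: str
    benefit: str
    # Acceptance criteria capture the hypotheses to be validated in conversation
    acceptance_criteria: list = field(default_factory=list)

    def card(self) -> str:
        """Render the story in its canonical one-line card form."""
        return f"As a {self.role}, I want {self.goal}, so that {self.benefit}."

story = UserStory(
    role="frequent flyer",
    goal="to rebook a cancelled flight from my phone",
    benefit="I can recover my travel plans without calling support",
)
print(story.card())
```

Note how deliberately thin this is: the card states intent and leaves everything else to the conversation between the product owner and the team.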

The reason user stories (and not ‘well-formed’ requirements, if such a thing is even possible) make sense is that we human beings are not experts at following the written word, and are likely to misinterpret requirements even when they are explicitly written down – especially when the communication is a one-time event and not an ongoing process. (If you don’t believe this, just visit this interesting site that celebrates real-life examples of how something as simple as icing on a cake can go horribly wrong despite really simple and absolutely clear instructions!) And we are talking about building complex, mission-critical systems that must operate 24/7 with top speed and efficiency. If only the written word could be consistently understood by the receiver as intended by its author…

On the other hand, when there is a continuous two-way dialog between the product owner and the agile team, such purposeful brevity leads to curiosity, which in turn leads to a shared understanding that keeps improving as the product is developed incrementally and receives periodic user feedback.

User stories reflect the reality that we might not have 100% clarity about the requirements on day one, but that with constant dialog, we can improve our shared understanding of the customer’s wants and needs.

The key benefit of user stories is this: instead of waiting for weeks (even months!) for a bloated and largely unprioritized PRD that is supposed to contain fully-baked requirements while the development team sits idle, identifying the highest-priority user stories and using them to develop a prioritized product backlog allows teams to get started much earlier and to build features that can be put in front of customers for real feedback (as opposed to individual ‘opinions’ that might or might not guide us well in an uncertain environment). This matters more now than in the past, when the world was a little less complex and much more underserved. I purposely use the term ‘underserved’ because it is not that people have suddenly become much more demanding about what they like; they were simply told there were exactly three carmakers to choose from, or just two operating systems, and so on. With the rapid advancement of computing paradigms and the constantly falling cost of ubiquitous communication devices, people suddenly have the opportunity to demand what they want. Hence, the classical ‘forecast’ models of requirements elicitation, design or production (at least in the manufacturing sense) don’t work as effectively as the newer ‘feedback’-driven models, which develop key hypotheses (not hard requirements carved in stone) that can be quickly and cheaply delivered as ‘done’ features so that customers can tell us whether they like them. Based on such valuable feedback, the team can iterate to refine the feature, or pivot, as the case might be. In the past, this might take the entire product gestation cycle, by which time many, if not most, things would have changed significantly, with the entire development cost sunk by then – not to mention the complete loss of any window of opportunity.
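One simple way to turn a pile of stories into a prioritized backlog is to rank them by value for effort – a crude cousin of weighted-shortest-job-first. The sketch below is purely illustrative (the stories and scores are invented), just to show the mechanics of ordering a backlog:

```python
# Each backlog item carries a rough business-value score and an effort
# estimate; the numbers here are made up for illustration.
backlog = [
    {"story": "one-click checkout", "value": 8, "effort": 5},
    {"story": "guest login",        "value": 5, "effort": 2},
    {"story": "dark mode",          "value": 2, "effort": 3},
]

# Highest value-per-unit-effort first: cheap, valuable stories float up,
# giving the team something useful to ship (and validate) early.
prioritized = sorted(backlog, key=lambda s: s["value"] / s["effort"], reverse=True)
for s in prioritized:
    print(s["story"])
```

Whatever scoring scheme you use, the point is the same: the team starts on the few stories most likely to produce customer feedback, rather than waiting for the full requirements document.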

As Facebook says – done is better than perfect. User stories allow developing a small slice of the product without perfecting the entire product, and they facilitate the process of validated learning that eventually helps develop a product with much higher chances of meeting customer needs.

In today’s world, you probably don’t have the luxury of investors waiting multiple years for an exit, or customers waiting several quarters for a perfect product when the competition is willing to serve them faster (even letting them lay their hands on an earlier version and give feedback, thus making customers co-creators of the product), or employees working month after month without any meaningful feedback from customers. And we know that the absence of feedback eventually erodes trust. Incremental product development can help you bridge such a trust deficit by delivering the highest value to customers early and periodically, and user stories might help give your project a quick start.

Role of Integrative Thinking in Project Management

I just finished reading the brilliant “The Opposable Mind” by Roger Martin. He introduces the notion that we all possess a so-called “opposable mind” with the amazing capability to simultaneously hold two contradictory views about a problem.

The conventional wisdom is to try to find a via media, but that is perhaps meekly surrendering to complexity by taking a shortcut to a suboptimal solution. He argues that some of the most exceptional leaders do not succumb to the obvious “either/or” thinking but rather work patiently to synthesize the best from both opposing views into a solution far superior to either. He calls it “integrative thinking”.

Martin writes, and I quote:

“…the leaders I have studied share at least one trait, aside from their talent for innovation and long-term business strategy. They have the predisposition and the capacity to hold two diametrically opposing ideas in their heads. And then, without panicking or simply settling for one alternative or the other, they’re able to produce a synthesis that is superior to either opposing idea. Integrative thinking is my term for this process – or more precisely this discipline of consideration and synthesis – that is the hallmark of exceptional businesses and the people who run them.…Human beings, it’s well known, are distinguished from nearly every other creature by a physical feature known as the opposable thumb. Thanks to the tension we can create by opposing the thumb and fingers, we can do marvelous things that no other creature can do – write, thread a needle, carve a diamond, paint a picture, guide a catheter up through an artery to unblock it. All those actions would be impossible without the crucial tension between the thumb and fingers.…Similarly, we were born with an opposable mind that we can use to hold two conflicting ideas in constructive tension.”

The book is a treasure trove on how to develop integrative thinking, but I was struck by how relevant it is to each of us at all levels, not just to leaders at the top. The field of project management is all about making sensible but often hard trade-offs. Effective management of the Project Management Triangle is literally the holy grail of project management – one that is not only the nemesis of every project manager but also, in fact, the genesis of modern project management. In layman’s terms: of Scope, Cost and Time, you may pick any two, but you can never meet all three at the same time. So, if the requirement is to build a replica of the Eiffel Tower in just three months and for ten thousand dollars, then yes, it can be built, but it might look more like a school project. If it is a fixed-scope project, like migrating all taxpayers to new software by the close of the current financial year, then one might need to look at the costs involved, without which either the scope or the time might not be possible to meet.

However, what ends up happening in many situations is that a project still ends up taking the money that it takes, and yet the scope delivered is too little, too late!

If one were to look at modern project management (though most of us in the software industry would probably call it ‘traditional’ project management), I think we just don’t have any similar notion of integrative thinking. The only treatment I have seen is perhaps in the Theory of Constraints, where the approach is more holistic.

In software development, conventional project management is modeled on the industrial model of a production process – the so-called waterfall. In a linear, sequential flow executed by silos that each specialize in a single functional area, there is a natural tendency to optimize one’s own area of responsibility. For example, a design team might be most concerned with how elegant their design is, and whether it addresses future issues of maintainability or portability. Similarly, the high wall between developers and testers is industry-famous – a major reason why developers don’t trust testers (“they will come up with only Sev 3 bugs and kill us with volumes towards the release, and expect us to fix them all before GA”) and, in turn, testers don’t trust developers (“they can’t write clean code”, “if only they did better code reviews and unit testing and found all code-level defects, we could do such a better job of finding all Sev 1 bugs and focus on performance and reliability aspects”). Clearly, such a process breeds a mutual trust deficit, and narrowly focused project roles simply perpetuate it further. What happens is the tragic story of optimizing the parts at the cost of sub-optimizing the whole – a completely anti-lean approach!

Agile software development (the concept, not a particular methodology) helps by eliminating such artificial boundaries that force “either/or” thinking. Instead of taking a monolithic approach to meeting 100% of the project triangle, it offers to simultaneously meet part of the scope at part of the cost in a fraction of the time that would normally be needed in a waterfall model. However, having only recently been introduced to this concept of integrative thinking, I am still mulling whether the agile philosophy fully addresses it; on the surface, it doesn’t look like it. But I have been proven wrong in the past, so we will see…

Project Management vs. Program Management

Program management is often seen as the next logical step for seasoned project managers looking to take on bigger challenges. While project management is mostly about managing within the boundaries of a project and gatekeeping it against anything and everything that threatens the status quo, program management is typically all about breaking those very boundaries and managing across them by taking up anything and everything that threatens the status quo. In this post, I will examine how the two differ in their approach to two important aspects – scope management and people management.

Scope Management

As I mentioned above, a project’s success depends on its ability to retain focus against all odds. Once the project scope is defined, its estimates made, resources allocated and commitments made, the project manager is pretty much focused on gatekeeping everything else out of scope, lest the project’s success be threatened. While most projects have a CCB, or Change Control Board, with some level of formal authority, there is still a tendency to bulk up the entire requirements in a first pass and do as much as possible in one breath, irrespective of how much time it takes and whether the final outcome is acceptable to the customer. While most of the traditional world still likes this model for various reasons, the software development community identified it as a bottleneck and created the suite of so-called ‘agile methodologies’ that exploit software’s ability to incorporate late changes to specs without seriously endangering the project or its deliverables. Still, at a high level, a project must work around a reasonably stable set of requirements to ring-fence itself against potential changes to the ‘core’ of the project – the premise being: how can you build a successful product on a shaky and wobbly foundation? After all, don’t we pay product managers to do a better job of defining those requirements upfront, rather than changing their minds later in the project and calling it a customer change request to cover up what they failed to think of in the first place! Surely everyone understands changes to workflow or bells and whistles, but I am talking about the core architecture – the fundamental DNA of a product that must be understood before any further allocation of time, money or resources is made. So, clearly, scope is sacrosanct to a project.

A program is a different beast. As the highest-level body chartered to translate an organization’s strategic intent into reality, it can’t box itself inside any boundaries of defined or undefined scope. Anything that could impact a program’s ability to accrue the full ‘benefits’ envisaged from it must be taken up. While in theory a program must have a defined scope to plan its activities and resources around its deliverables, in practice it is not so trivial. Any reasonably large program has a sufficient number of moving parts, uncertainties, conflicting requirements and rapidly changing priorities. It is very typical in a program for the component project managers to carve out their pieces very sharply. A less-reported fact of life is that developers’ motivation to work with newer and sexier technology often dictates the choice behind a project taking up (or refusing to take up) a given problem. Program organizational structure and governance play a very important role in ensuring that component projects are not only cleanly defined but also identify inter-group dependencies and secure commitments to address them. To that end, a project might safeguard itself by rejecting an inter-group request, but eventually the program needs to address it! Similarly, a project manager might complete the work within her boundaries, but it takes much more for the program to be ‘done’, let alone be successful. I have seen many situations where project managers were so focused on their projects that they wouldn’t recognize that unless the program was successful as a single entity, their individual progress was meaningless. This can get aggravated when teams are geographically or organizationally dispersed. However, none of this can hide or discount the fact that a program is only as successful as its ability to influence things that might not be in its line of control but whose impact is definitely in its line of success, especially if those things were to backfire.

While a project is like a fortress that must protect itself against all invasions to survive and eventually be successful, a program is more like a university, a rose garden, or a mission – they all deal in ‘soft power’ and maximize the ROI of their mission by keeping their doors open and by teaming up with their potential adversaries.

People Management

A project manager in a functional organization wields significant ‘positional power’ and thus has a very high ability to influence team members’ behavior. While today’s organizations are highly flat and democratic, there is still an asymmetric balance of power, for the manager who writes your focal review and annual salary revisions also potentially has the power to decide when you should update your resume! Sure, we have come a long way in management-worker power-sharing since the Taylor and Ford era, but make no mistake: there is no such thing as a perfectly symmetric world where a manager and her team member have equal rights. Thus, a project manager has a much higher ability to define the work and assign people to it as she sees fit. She also carries a much higher level of responsibility for the training and career development of her team members, and, being the closest face of management to the team, she is the official spokesperson of upper management. A team will likely listen to her more than to the CEO. She must balance two opposing sets of expectations that, if not aligned properly, can set the project on fire. To that end, people management for a project manager must be one of the toughest jobs.

In sharp contrast, a program manager manages at the boundaries of the participating organizations in a highly matrixed environment, and hence must manage by influence rather than by formal authority. In a software team, a program manager needs to get Development, QA, Product Management, Usability, Documentation, Marketing, and several other functions on the same page. In most organizations, these are organized along functional lines and hence report to a solid-line manager in the same skill-set pool rather than to the program manager. Given that many of these resources might be timesharing on a program, their eventual loyalties remain with their respective line managers, and hence a program manager can rely only on collaboration and influence as the key means to get everyone on board. In some cases, there might even be a conflict between a component’s goals and the program’s goals, and if such issues remain unresolved, the only recourse to make the program successful might be to move the problem up the chain of command. Still, there is big value in managing large and complex endeavors as a program, for it allows an organization to manage its resources, inter-group dependencies and conflicts in a more systemic and transparent manner. Some of the best program managers I have seen were not the ones who stopped at managing the interfaces; they went over and above what their jobs required. They established direct contact with key team members in component projects and created an alternate informal channel to validate project risks and plans, and to feel the pulse of the organization. They would do it very unobtrusively, without creating any friction with the line managers.

How does your organization view these two important functions?


What are agile practitioners thinking in 2011?

Shortly before the year-end holidays, a few of us from product development companies got together to discuss how software development process philosophy and methodology is changing. We had representation from global embedded companies, healthcare products, the internet domain, automotive, semiconductor – you name it. What was interesting was that irrespective of the type of software product being created, some common themes emerged in terms of practices that seem to make sense:

  1. Continuous integration: integrate early, integrate often seemed to be high on the priority list for most.
  2. Test automation was considered an equally important agile practice.
  3. Collocating cross-functional teams was found critical for end-to-end product development.
  4. Being nimble about requirements was critical not just for customer-facing products but even in the enterprise world.
  5. Some (many?) engineers are not used to reporting daily – they think it is an unnecessary intrusion into their work, and perhaps even a productivity impediment! Making them think otherwise is an important culture change.
  6. Techies don’t want to become scrummasters – they would rather work knee-deep in coding! Some companies tried to hire ‘scrummasters’ – career project managers without specific domain skills – but that was not very successful. The challenge is to find the right people to play the scrummaster role.
  7. The daily stand-up seems to be a great way to make sure teams stay on the same page. I recounted my own experience – way back in ’98 at Philips, we used to have daily stand-ups and they were highly effective.
  8. Simulating interface components was a high need for companies in a hardware-software co-creation cycle. This became especially critical because the hardware was generally never available while the software was being written.
  9. You can’t work from home in a truly agile world – you have to co-locate teams. This seems to be a side-effect of using agile: teams have to work closely, and for things like daily stand-ups everyone should (preferably) be in the same room. However, given the realities of modern life, working from home is not only inevitable a few days a month but might actually be a productivity booster (ask those of us from Bangalore!). So a rigid agile discipline seems to be at odds with people’s ability to balance work and life.
  10. A blank sprint after 3-4 sprints is a commonly used practice. People felt back-to-back sprints would fatigue the teams and make the work monotonous – hence a blank sprint, which is simply another timebox with housekeeping activities, vacations, etc.
  11. There seems to be a growing chorus for having internal agile coaches – however, no one in the group was using them. There is a general disdain for external coaches who might only give bookish prescriptions – after all, you need someone within the system to own the action items, not someone who simply makes PowerPoints of the status. There was a great discussion about how such agile coaches can’t simply be single-axis professionals – whether process guru, people manager, or techie. Rather, they need to be a bit of everything and then some: project manager + process guru + people coach + techie + communication expert + …
  12. What is the role of the manager in an agile world? This question has never been addressed well by the agile community. How do people grow their careers in an agile world? In the traditional system, good or bad, there is a hierarchy to aspire to, but how do you acquire the different skills that prepare you for higher-level responsibilities? If scrummaster is the closest role to a manager, then what next? Scrummaster of scrummasters? There was no clear direction, nor best practices that have stood the test of time.
  13. Pretty much no one believed that a bookish or biblical approach to agile is the right thing – everyone seems to tailor agile to their unique combination of business needs, nature of products, and culture.
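Continuous integration and test automation (the first two items above) hinge on fast, self-checking tests that a CI server can run on every commit. A minimal sketch in Python – the function under test and its tests are invented for illustration; any framework (pytest, unittest) would do, but plain asserts keep it self-contained:

```python
# A tiny "unit under test": parse a semantic version string into a tuple.
def parse_version(text: str) -> tuple:
    major, minor, patch = text.split(".")
    return int(major), int(minor), int(patch)

# Self-checking tests: they pass silently or fail loudly, which is exactly
# what an automated build needs - no human inspecting the output.
def test_parse_version():
    assert parse_version("1.2.3") == (1, 2, 3)
    assert parse_version("10.0.1") == (10, 0, 1)

test_parse_version()
print("all tests passed")
```

The value is cumulative: once every change triggers such a suite automatically, “integrate early, integrate often” stops being a slogan and becomes a safety net.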

This was definitely an interesting session that gave us an opportunity to take stock of some of the things that are working or not working. We also discussed the fact that agile really addresses only a subset of the entire business problem – to address the whole, we need to embrace systems thinking and lean thinking.

What are you thinking in 2011?

Why are we in this mess?

Prior to the industrial age, the world was essentially an agrarian and trading economy. Production methods were often a craft and a closely guarded secret, fiercely protected within a family and handed down from a master craftsman to his sons. With no machinery for mass production, pretty much every product was handmade and unique, perhaps also customized, for its intended user. The industrial revolution made mass production and rapid movement of goods possible and, among other things, catapulted Britain to the forefront of global economies. Gutenberg’s printing press was perhaps the first mass production system built by man. Subsequent inventions like the harnessing of steam power made railways and spinning machines possible, and other advances in iron founding and chemicals pushed the envelope. However, many of these advances were limited to Europe, and even more so to the UK, which thrived on them and remained the economic (and imperial) superpower well into the start of the twentieth century.

By the start of the twentieth century, however, America too woke up to industrial advancement and contributed some of the most important innovations that still touch our lives. Eli Whitney’s pioneering work on ‘interchangeable parts’ for his now-classic cotton gin introduced the concept of modular design. Frederick Winslow Taylor’s groundbreaking work on scientific management followed, leading to the concepts of standard work and division of labor (even if somewhat questionable and controversial in today’s context) and creating the foundation for Henry Ford to envisage a mass production system with a moving assembly line, where finished goods could be assembled from standard parts by semi-trained operators – e.g. Ford’s most famous Model T car in black (“Any customer can have a car painted any colour that he wants so long as it is black”).

In essence, a production run – say, in automotive parts production or on a car assembly line – is the repetition of a process that produces similar (or similar-looking) objects. Once the ‘process’ is ‘designed’, the job entails repeating the process until the desired number of objects has been produced. Clearly, the faster you produce those objects, the sooner you can put them up for sale and start making money. The more you produce, the more you are able to amortize the capex and lower the per-unit price over the long run. Intuitively, if you have to produce objects that are exactly alike in properties, shape, size, color or any other physical attribute achieved by the production process, your production machinery needs to be ‘programmed’ only once and re-used many times. So, if you run the paint shop (which seems to be the largest bottleneck, time-wise, in a modern production setup) and need to produce purple-colored chassis, then once you set the process and stock the paint to the desired levels, you are pretty much all set. Now imagine you have to produce the first 200 cars in purple, the next 30 in wine red, and the next 70 in pearl white. Surely there needs to be some way (manual or otherwise) to alter the production process to suit such a job order. Similarly, instead of producing 300 sedans, if you have to produce a mix of automobiles – say, 50 SUVs, 50 compacts, 150 sedans and 50 hybrids – your process will have to differ from one that just produces 300 sedans in a single run. While the customer desires options (don’t we all?), the manufacturer incurs additional time, money and resources in creating those options. From the manufacturer’s point of view, producing each piece exactly like the last makes such great economic sense that he can create huge economies of scale, made possible by the principles of mass production.
It simplifies the machines (and machine operations) required in plants, it standardizes the components required, there is no downtime to alter the production process, people don’t have to be retrained every now and then on different types of products – and all this makes the entire production process very ‘controllable’ from a throughput and quality perspective, and hence highly predictable. Elaborate statistical charts can be created, based on prior experience, showing how much time a given production run takes, how many people and how much material are needed to meet a given production target, and what levels of quality can be achieved.

After WWII, an economically broken Japan, trying to rebuild itself, threw its hat in the ring and set out to Europe and the US to learn the principles of mass production. For reasons that are well-researched and well-chronicled in books starting with “The Machine that Changed the World” by Womack et al., companies like Toyota created a brand-new way of production that focused on just-in-time flow, as opposed to sweating the production machinery as the way to achieve economies of scale. But we are getting ahead of ourselves here, so let’s set that aside for a while (the world would not ‘discover’ these techniques until the late 80s or early 90s).

Around the same time, computer science was born, and the earliest software started to be written. Writing software was an extremely hard job, what with the complexities of huge (and very costly) machines. Software had to be written in so-called low-level or machine language and required a very high level of cognitive ability. With large endeavors, software creation soon became a techno-managerial problem involving several dozen or even hundreds of people. However, unlike the semi-trained operators in Ford’s factories, these were highly educated people – perhaps the first generation of knowledge workers of the digital economy.

At that time, there were perhaps two best-known methods of organizing men and machines: the military, where men were employed in the thousands, and mass production, where machines were deployed. So what happened next was only the most logical extension of that thinking: software was treated as a ‘production’ problem, and the techniques of mass production were used to develop a ‘waterfall’ model that allowed enough (?) time and effort at the start of a project to gather all the requirements and do the ‘design’; the rest of the development was construed as ‘production’, and hence the principles of mass production could be deployed. To organize the people, the traditional command-and-control model was perceived as the best way to separate the decision-making executives from the heavy-lifting workers.

Today we debate and criticize these as the worst things that could have happened to software development, but I don’t want to be unfair to what was done more than five decades ago based on the state-of-the-art knowledge, tools and practices of the time. I am sure people back then also wanted to do the best thing – as people today claim to :).

This was a short overview of how some important advancements shaped the thinking behind software development (and there are many more that have influenced our world). In the next article, we will discuss how this actually impacted software development, both positively and negatively.

Our methodology is 100% pure, our result is another thing!

What is worse than anarchy? You might say that is the absolute abyss, but I think blind allegiance is even more dangerous (and that includes following the letter while tweaking the spirit – things like ‘creative accounting’ or its parallels in every field). Anarchy at least allows things to become ‘better’ in order to survive – whether it is the ideology, the resistance, or even muscle power, or any other ill (and hopefully, at some point, the social forces of creative destruction take over). But a land where unquestioning compliance and blind allegiance rule the roost is, IMNSHO, like a terminal patient taken off ventilator support. When people are on their deathbed, they don’t regret the things they did but rather the things they did not do!

In project management, life is no less colorful. We have process pundits (read “prescription police”) shouting from the rooftops with a megaphone about how the heavens will strike them dead with lightning if they ever so much as stray from the ‘standards’. When projects are postmortemed, we don’t often ask why the project did what it did, but why it did not do the things it did not do. And quite often, you find the answer in the map itself – because the map did not factor in the conditions actually encountered on the terrain, the blind followers just followed the Pied Piper and danced their way into the river of death. What a terrible waste of human talent.

Why do we get so stuck on methods that our result-orientation takes a back seat? I think there might be many answers, but a few deserve mention:

We fall in love with ‘our’ methods

Let’s face it – introducing a new methodology in an organization, or a team, is not easy. People have their favorites, and it is already a fairly serious political game to get such groups to a decision (whether or not there is consensus). After you have sold the executive sponsor on a particular methodology, you now have to somehow sell it to the middle managers and the teams – without their support, you are toast. Sometimes you might have legitimate power (“position power”) to thrust the decision down their unwilling throats, but most often that doesn’t work anymore (even in jails!). So you have to use your charm offensive to make the other stakeholders believe that this methodology is god’s greatest gift to mankind, and that it will take their next project to unprecedented glory. If only that were half-true! What follows next is a quintessentially Bollywood storyline – the project is in tatters, and the project manager and the teams are exhausted, some even ready to bail out. You go and put some more pressure on them, ask for weekly reports to be given on a daily basis, and take away even more productive hours in endless meetings discussing yesterday’s weather. In short, we do everything but consider that our methods might be wrong…perhaps…for a change? Most people don’t get it, but sometimes the smarter ones do. By then, though, they are so neck-deep in love with their methods that they can’t afford to take the most obvious and only correct decision – take a U-turn and admit their mistake. Our professional ego checkmates us. Falling into a bad relationship is bad; staying in it even after realizing one’s folly is worse.

We believe blindly in marketing claims

These days, there are dozens of marketing gurus, each extolling the virtues of their methods. In a way, it is like entering a village market – each vendor sells the same portfolio of vegetables, but each has to ‘differentiate’ from his neighboring vendor, and hence some make wild performance claims, quite often unbacked by any reliable data, and almost always trusted by gullible buyers at face value. So, some of us just go by the tall claims in glossy marketing brochures. You don’t believe me? Then tell me how else you can explain knowledgeable investors getting duped by Madoff’s elaborate Ponzi scheme in this day and age! Tell me how most banks fell for easy money in the subprime market. And if we were to assume, for a moment, that everything these marketing brochures said was indeed true, how come the rest of the world had not quite figured it out already? I think intelligence is not the elite preserve of a learned few – the rest of us lesser mortals also deserve enough professional respect to be trusted to figure out what works and what doesn’t. In a previous blog, I did an interesting analysis of one such marketing claim: “Blame your flaccid developers and your flaccid customers for your poor quality products!”.

We are too scared to experiment and take sensible risks

This is a real classic. Many organizations, certainly many more than we think, are so stuck in ‘command and control’ that they create a system where compliance is rewarded and creativity is shunned. This might work well in some industries or companies, but it certainly doesn’t work for most of us. The irony in such companies is that the outright disdain for new ideas might not start and end at the top – depending on how deep-rooted the indoctrination is, it might run through every employee at every level. In their zeal to display unflinching loyalty and to project themselves as corporate citizens second to none, many employees (and many managers) simply abort new ideas because those ideas might threaten the status quo and make them look bad in front of the powers that be.

Because everyone else is doing it

This is groupthink at its best. Just because every Tom, Dick and Harry about town is doing it, I must also adopt (or simply continue following mindlessly) the latest management fad. For example, we have all heard the (often unverified) stories of how investors in the Bay Area during the dotcom boom would only look at business plans that had an India component. Just because everyone is doing offshoring is not a good enough reason for me to do it too. Similarly, just because everyone I know is into Agile, does that make a strong case for my business as well? I think we should stop and think before succumbing to trade journals that routinely publish such forecasts and doomsday prophecies.

We are looking for easy cookbook solutions

Let’s accept it – some people are just plain lazy, or just too risk-averse to do any meaningful research on what works for them. Instead of investing in figuring out what’s best for them, they look for some quick wins, some jumpstart, some fast food, some room service, some instant karma. They believe they can learn from others’ mistakes (which is definitely a great way to learn – certainly a lot better than not learning from anyone’s mistakes!), and sometimes they might succeed, if they have copied the right recipe. But very often, that only results in the wrong meal served to the wrong people at the wrong time. What people don’t realize is that every problem is cast in a different mold; whatever anyone says, there is simply no way to airlift a solution, drop it onto a second problem in its totality, and expect it to work. Likewise, there is no way a cookbook solution will work. For example, Agile was designed around small collocated teams with such intense internal communication that trust replaces formal contracts and conversation replaces documentation – they thrive on that high-energy environment. But, sadly, simply to prove that Agile can fix any problem in the world, management consultants have stretched (should I say ‘diluted’?) those very Agile principles, and now try to force-fit those methods onto distributed, large teams that often have a heterogeneous ‘salad bowl’ culture rather than a homogeneous ‘melting pot’. So you land in the situation where, just because it worked for some random cases, some among us naively believe it could work for our random case too. Does it get any smarter than this 🙂

We believe compliance leads to repeatable success

Standards are often treated like insurance against catastrophes – both the termite and the tornado types (think of tornadoes as something that comes out of left field and kills a project overnight, like a meteor hitting a neighborhood; think of termites as slow and steady erosion that goes on and on, undetected, until the damage is complete, comprehensive and beyond damage control). Standards ensure that no one deviates from the beaten track and tries something adventurous (without getting rapped on the knuckles), because that could lead to unpredictable results. And we all look for standards and certifications – ISO-this and ISO-that, CMM and CMMI (and the subsequent obsession with all other 5-level models), Scrum, PMP, PRINCE2, MBA, PhD, IEEE…so much so that even in the field of creative arts, there are schools that specify what creativity is allowed and what is out! There are success formulas for Bollywood movies – rich girl meets poor boy, and the anti-social forces strike the boy’s family; eventually, our hero saves the day. Similarly, in Hollywood movies, it has to be the hero saving the nation from an external threat (very often coming from space). In software development, ISO championed the cause of non-negotiable compliance, and blind CMM practitioners only perfected it. Agilists were born out of the promise to create a world free of compliance, but it seems they too have ended up growing their tribes with their own mini-rules that give instant gratification through useless things like the Scrum Test, to massage one’s ego that my Scrum is more Agile than your Scrum! So, if you achieve a certain score on the Scrum Test, you are almost ‘guaranteed’ some x level of productivity improvement! Does that remind you of yesteryear, or does it make the future look even scarier?


Human behavior never ceases to amaze. For every rational thinker, the world seems to produce thousands of blind followers. Our schools and colleges teach us knowledge, but they don’t always teach us how to convert it into wisdom. So, when we reach the workplace, we are deeply apprehensive of trying out new stuff. We are excellent followers, but we shudder at the mere thought of questioning the status quo. We often behave like the monkey whose hand is stuck in the cookie jar but who refuses to release the cookies even though he knows the only way to extricate his hand is without them. When workplaces ignore result-orientation and worship only compliance, the story gets murkier still. Think of a state where compliance is handsomely rewarded and questioning it is seen as a full frontal attack, and whose timid citizens are only too happy to oblige. They think a lifetime of blind obedience to methodology is far superior to a moment of experimentation, even if it leads to bad results.

After all…our methodology is 100% pure, our result is another thing!

(Inspired by a slogan on a tee: Our vodka is 90% pure, our reputation is another thing. Very inspiring, indeed :))

Codifying Agile Skills or creating more checklists ?

Did you check out the Agile Skills Project yet? It seems to be a new and interesting initiative to “… establish a common baseline of the skills an Agile developer needs to have, including a shared vocabulary and understanding of fundamental practices”.

They talk about an Agile Skills Matrix that has seven essential skills, or the Seven Pillars, organized into five skill levels.

Seven Pillars include:

  1. Product: Agile teams must share a vision of what they are working to achieve; the business problem they are trying to solve.  When a developer understands the problem domain, they can help the Product Owner evaluate upcoming features.  Understanding of the product is the first step to creating a fully cross-functional team.

  2. Collaboration: Teamwork is the heart of Agile software development.  A truly collaborative team shares all its information freely.  They share their individual knowledge and skills by working closely with their teammates.  Teammates who fully rely on one another become much more effective as a whole, yet simultaneously increase individual skill levels.

  3. Business Value: The purpose of any software development effort is to create features that deliver business value.  A good Agile developer will focus on delivering that value.  They do not add complexity just to make their program ‘cooler’.  Instead they will base their work on business priority.  They produce a steady stream of running, tested software with real business value.  They will build only what is needed to solve today’s problem, knowing that building too little or too much is waste. 

  4. Supportive Culture: Any highly productive enterprise is founded in learning. Practitioners in an effective agile team view everything they do as an opportunity to learn. Every step is an experiment intended to make real progress, and to clear our vision of what to do next. We accept and embrace the fact that when a task is done, we’ll see better what we should have done. That helps us see what to do next.

  5. Confidence: Those on an Agile project strive to know the state of their code and the state of their project.  They do not share their work until they can prove it functions properly and is well designed and implemented.  They report progress based upon the actual rate at which fully tested features of real business value are being created.

  6. Technical Excellence: Developers understand, and choose from, many possible technical ways to satisfy business needs–choices that reflect a craft that balances design, use, and support.  They provide the technical underpinnings that enable us always to move forward at a steady pace.  They do this using principles of truly simple design, combined with a grasp of technical debt and the means to keep it under control. They use the best techniques for keeping the design under control without excessive work or rework.

  7. Self Improvement: An Agile team member seeks new ways of doing things, while keeping existing skills as sharp as possible.  They know that ours is an ever changing world and they strive to be prepared to take advantage of anything new.  They are both introspective and aware of what’s going on around them, be it in the team, or the larger business context. They will take action to fix things that aren’t right, and to help those who are in trouble. 

And the skill levels are:

  1. Learning: Individuals at this level have been exposed to a skill, but do not yet have firsthand experience with it.  This may come through reading or conversation, through a formal class or casual presentation at a local user’s group. A training course should provide at least this level of attainment.

  2. Practitioner: At this level, one has practiced the skill in a ‘safe’ environment.  They may have taken a course with hands-on exercises, or recreated the examples from a book.  The Novice level represents the first big step from abstract to concrete.  A wise organization will not let a Practitioner loose on their own.  They do not yet know what they do not know. They do not yet know what they do not know how to do. A training course with a sufficient hands-on component could provide this level of attainment.

  3. Journeyman: A Journeyman is known to possess the specific skill.  They have demonstrated a practical knowledge of the skill in various environments.  They can work on their own, or to increase the competence of a team.  A Team member of a lower skill level will learn from them, a team member of a higher skill level will recognize their expertise. One cannot be taught to be a Journeyman, one can only learn it.  Regardless of the skill category, maintaining Journeyman status, one must practice their current techniques and seek out new and better ones.

  4. Master: A Master must possess unquestioned competence in the particular skill.  They know it cold; it is second nature to them. But, it does not stop there.  To be accepted as a Master, they must also accept the additional job of bringing up the skill level of their team mates.  They do this in several ways. They serve as an example of proper practice.  If you want to know how it is done, observe a Master.  They partner with other team members and actively share their knowledge and experience. They seek out teachable moments.  When asked a question, they prefer to engage in a dialog that will lead the questioner to an answer, rather than making a pronouncement.

  5. Contributor: To be a Contributor, one must have added to our community’s understanding of how to practice our craft.   Contributors bring new ideas to light.  They develop and evaluate new techniques.  They are the silverbacks and young turks, who advance the state of the art.  They may be seen as today’s heretics.
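The matrix these two lists describe is essentially seven pillars crossed with five ordered levels, which suggests an obvious self-assessment structure. Here is a minimal, purely illustrative Python sketch of that idea; the pillar and level names come from the lists above, but the function and its API are my own invention, not anything the Agile Skills Project defines.

```python
# Hypothetical model of the Agile Skills Matrix: seven pillars, each
# rated at one of five ordered skill levels. For self-assessment only.

PILLARS = [
    "Product", "Collaboration", "Business Value", "Supportive Culture",
    "Confidence", "Technical Excellence", "Self Improvement",
]
LEVELS = ["Learning", "Practitioner", "Journeyman", "Master", "Contributor"]

def weakest_pillars(assessment):
    """Return the pillars rated at the lowest level in a self-assessment."""
    ranks = {level: i for i, level in enumerate(LEVELS)}
    lowest = min(ranks[level] for level in assessment.values())
    return sorted(p for p, level in assessment.items() if ranks[level] == lowest)

# Invented example: a developer who rates themselves Journeyman across the
# board except for one pillar, and wants to know where to focus next.
me = {pillar: "Journeyman" for pillar in PILLARS}
me["Confidence"] = "Practitioner"
print(weakest_pillars(me))  # -> ['Confidence']
```

Used this way – privately, to pick the next pillar to work on – the matrix stays a learning aid rather than a ranking device, which matters for the argument that follows.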

I think this is a good effort to break down the craft into somewhat more tangible traits that developers need to acquire, thus making the body of knowledge available one step at a time. Further, it is refreshing to see things other than technical excellence being part of the Seven Pillars. I have long believed that sociology should be part of the computer science curriculum, now that software development is a serious team effort whose eventual success is influenced more by the human dynamics in the team than by individual programming effort alone, however supreme and important that might be.

However, all good things have a shelf life, and all good ideas become fertile ground for the next wave of the consulting and assessment industry to take off (and the Agile Skills Project makes no secret of its desire to support any effort to certify developers, even though it won’t officially endorse or condemn any particular certification). I suspect it won’t be long before people are certified on this (yet another!) 5-level scale of individual proficiency. Look at how successfully we have reduced Scrum to a series of checklist items in the so-called Nokia Test. Then there is the perpetual debate raging in the Agile / Scrum community on the merits and demerits of certification, with everyone sort of nodding that certification can’t guarantee competence, and yet everyone continuing to offer some form of it. I even suspect this might soon find its way into job descriptions. However, my bigger question is: will having a bunch of certified developers on your next Agile team ensure, or at least improve, the chances of success? Are we unwittingly creating a class hierarchy in Agile teams, where developers must be of a certain class (or caste, if you like) to be part of the team? How does one judge things like ‘Confidence’ or ‘Self Improvement’? Does a high rating guarantee domain competence, or portability of skills? What is better in a five-person team: 5 Masters, or 3 Masters and 2 Practitioners? Does someone at the ‘Learning’ skill level stand a chance of being in a team of ‘Contributors’ without adversely impacting team productivity? Further, this is by no means an exhaustive checklist of critical success factors, and hence, chances are that people will fill up some checklists, declare themselves 86% compliant (on some arbitrary and controversial scale), and their managers will expect the next project to be a resounding success. However, you and I know better than that :).

I think these are highly contentious issues that could further create more checklists and thus degenerate a good intent into a big mess.

The best way to use this tool is for self-assessment: set personal targets and acquire higher levels of competency as you go. Any effort to use it as a yardstick to measure and compare individuals is, IMHO, against the basic ethos of any human endeavor, let alone software development. So, shun checklist-driven software development.

Are you solving the wrong project management problem ?

I just read a book titled “The eMedha Paradigm – A Project Manager’s Billion Dollar Odyssey” and felt terribly disappointed and shocked.

The author paints a make-believe world in which a sadistic CEO does insider trading and makes his kith and kin richer, while his technically incompetent, control-freak and sexually-deprived project manager has a field day sinking the project. Team spirit is in tatters, but because of a three-year job bond, the members can’t leave their jobs just yet. Sales has promised to deliver the project in one-third the required time, and now the customer is shouting from the rooftop about grand promises that remain grossly unmet. In short, every real-world ill happening in every permutation and combination at the same time. While this might not be entirely implausible, I have yet to encounter such a worst-case view of the real world. This is such a picture-perfect scenario – can you think of anything else that could possibly go wrong in it?

The best is yet to come. An honest professional on the client side, Kalpana, with no significant credentials in getting a team out of such a worst-case mess, enters the scene thanks to her scheming manager, and receives an anonymous mail from one of the team members about all that ails the project. While she is en route to India, thinking about it at 40,000 feet, she has an encounter. Not a small encounter, mind you, but The Encounter. God and his heavenly assistant (literally and figuratively, we are made to believe) Kamayani are experts in a nondescript technique known as ‘eMedha’ that has the potential to transform any toddler into a veteran project manager. Even though these techniques are so obvious (or so the author would have us believe), for some reason our knight in shining armor Kalpana doesn’t know these old tricks, and needs divine intervention to bestow that common sense on her.

The endgame is not difficult to predict. Our legend-in-the-making heroine takes the newfound wisdom and changes everything in just two weeks. A Bollywood-style happy ending.

So, what’s wrong with this story? After all, isn’t this the cool stuff dreams are made of – a magic wand to wave and a magic mantra to chant, and lo and behold, the world becomes a great place to live. Instantly. Painlessly.

I think everything is wrong with it. For one, the author thinks most software development (still) happens a la ‘Modern Times’ – command and control, incompetent and indifferent management, helpless and desperate team members, lofty promises…the list goes on. I realize there are no perfect workplaces or perfect teams, and reality often leaves much to be desired, but I would be greatly scandalized if workplaces such as the one depicted in this story existed in the software industry. But let’s accept, for a moment, that there is indeed one such workplace. You now have an oversimplified model that trivializes the entire solution into a series of checklist-style action items to fix this worst-case problem – all in under two weeks. No doubt those action items will give you great quick wins (especially since the situation is so bad that team performance is at rock bottom), but the author gravely confuses low-hanging fruit with the real issues in software project management. It can also send a very wrong message: not only is there a one-size-fits-all solution, even a dummy can apply it. When was the last time you were sold such snake oil?

The hygiene issues are very different from the fundamental issues of what software project management is all about. I don’t think there are too many workplaces and teams left in this world with basic hygiene issues. To me, that sounds like a coal mine or a scrap yard in a developing country – and even there, I doubt workers would put up with such control crap. These are simply not the real problems that we know.

How about dealing with the real issues of teams instead: a highly technical, young, assertive, choosy, achievement-oriented and mobile workforce that is not shy of confronting its manager when he or she is not quite right, or of packing its bags and leaving when its talent and effort are not respected? A workplace where team members don’t feel threatened, but rather enjoy working on problems that challenge and stretch them. A workplace that creates a conducive atmosphere for teamwork. A customer who demands nothing less than a Nobel Prize-winning effort, yet realizes that there are inherent complexities in the task that lead to unreliable delivery estimates. Technology that threatens to self-destruct every few months, only to lead the way to something new, hopefully better, and a little more complex than the last time. That is the real world, and the game of project management begins on this pedestal. How do you deliver a software project with so many moving parts?

To me it is clear that the bar has risen higher, much higher. All the low-hanging fruit has been plucked. There is no scope for shortcuts, nor any use for snake oil. Success won’t come by applying quick-win suggestions. If hygiene is a problem, first fix that, and don’t confuse it with project management (even though, I must concede, those issues might be included in the all-inclusive, ever-broadening definition of project management). With due humility, if hygiene comes across as a problem, it is probably the small tip of the giant iceberg known as ‘Culture’, and as we all know, cultures don’t change overnight. Yes, they can be influenced, even adapted in a small tribe, but never changed irreversibly by a local improvement action. I applaud all such efforts, but we must understand that no amount of localized quick-win effort will lead to radical change. A project manager can only do so much, and assuming that this is a scalable process is like extrapolating from a single-point sample.

Are you solving the wrong project management problem ?

PS: I have nothing personal against the author of this book :). I just feel the software development community has been taken for a royal ride with so many silver bullets that I look at every prescription with due suspicion and professional contempt. The purpose of my blog is to share ideas for people to think about what is being sold to them (and not to sell shrink-wrapped solutions, especially the one-size-fits-all types), and hence this constructive criticism.

Does your project love cockroaches ?

Cockroaches just love projects. Several projects have loads of them, but most projects have at least a few. Software projects are notorious for breeding cockroaches that are not only hard to spot but even harder to exterminate. They are a project manager’s worst nightmare, and despite our collective advances in project management theory and experience, we are still answerless in the face of the collective might of those otherwise innocuous-looking pesky little creatures that simply turn the best-laid plans and intentions into history.

What are these cockroaches? I am not referring to the ugly kitchen pest that refuses to die (it is rumored to be the only life form capable of surviving a nuclear fallout), but to the Cockroach Theory, defined as:

A market theory that suggests that when a company reveals bad news to the public, there may be many more related negative events that have yet to be revealed. The term comes from the common belief that seeing one cockroach is usually evidence that there are many more that remain hidden.

For example, in February 2007, subprime lender New Century Financial Corporation faced liquidity concerns as losses arising from bad loans to defaulting subprime borrowers started to emerge. This company was the first of many other subprime lenders that faced financial problems, contributing to the subprime mortgage meltdown. 

In other words, the fact that one subprime lender (one cockroach) faced financial problems indicated that many other similar businesses were likely to face the same issues.

In my experience and learning, the real problem is not so much having a cockroach (or a few of them) in a project. The real issue is the initial trivialization of the sightings of the first few cockroaches, until they have had enough time to breed and become a problem so large that there is no easy way to walk away from it.

When the project architect leaves halfway through the project, it might not simply be a human resources backfill problem. If all you do is find a replacement, you miss the other cockroaches (the architect is not the cockroach; his mid-project departure is). They could include things like impossible deadlines that are simply not going to be met despite all the hard work; unreasonable constraints placed on team or project resources; slow and indecisive leadership by the product owner, leading to wasted cycles and subsequent rework; or a bug list that grows even when no testing is being done (ok, that’s a poetic exaggeration, but you get the point). Similarly, if the project misses its alpha date, it might not always be because of a few showstoppers. There might be other cockroaches lurking in the dark, waiting to pounce on the pizza bones left by the team working late nights, as soon as the lights are dimmed.

The most common response to a project cockroach is denial. Check the data again. That new timesheet tool is still unreliable. Run the tests again…and so on. We try almost every excuse invented by mankind to avoid confronting the cockroach (the “problem”). We fail to recognize the iceberg model – what meets the eye is only 10% of what is waiting to drown us. So we gloss it over. Let’s give the customers a workaround. Let’s backfill the architect from outside. Make timesheet reporting mandatory for any task over 15 minutes. Get a new tool to do automated pass/fail test status reporting…and so on. Nowhere do we try to find the source of this solitary cockroach (to be honest, he is your project’s best friend, because he is telling you there are more of him somewhere deep inside the project).

Sometimes we kill him by brute force, blissfully thinking that’s the last of him. But alas! If only this were even half-true. Those who do any amount of kitchen cleaning know better 🙂

Each of these methods only gives the cockroach factory more time to grow bigger, stronger and, eventually, unignorable. Unfortunately, by then the rot has spread to the entire empire, and there is only so much a project manager can do. The army of cockroaches has spread to multiple failure points in the project, and no matter how well you manage one or two of those failure points, you can never handle all of them. By the time you get to the last of these cockroaches, several new ones have cropped up.

Does it sound like a typical day at work? If not, you are doing a few right things that we all could learn from. But if this is anything like your typical workday, you might want to stop firefighting (or rather, exterminating cockroaches) and think for a while about what’s happening around you. Instead of denying the existence of a cockroach, trivializing its potential, or brushing it away as a one-off event, you might want to start peeling the layers to find out why and where this cockroach came from. Try the approach known as Five Whys, developed at Toyota, which is all about commonsense wisdom. Don’t stop until you have found the root cause of the problem – it is at this point that you can decide what to do next. If the trail of questioning leads to a dark alley infested with many other cockroaches, you probably know what the right thing to do is. By stopping at any of the earlier levels, you might never get to the real issue.
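Mechanically, Five Whys is just a chain of cause-finding questions: each answer becomes the next question, until no further cause is known. A minimal Python sketch of that drill-down is below; the project data in it is entirely invented for illustration, and real Five Whys sessions are of course conversations, not lookup tables.

```python
# Illustrative Five Whys drill-down: follow the chain of "why?" answers
# from a visible symptom down to the deepest known cause.

def five_whys(problem, explain, max_depth=5):
    """Return the chain of causes from `problem` to the root.

    `explain` maps each symptom to its immediate cause; the chain stops
    when a cause has no known explanation, or after `max_depth` hops.
    """
    chain = [problem]
    while len(chain) <= max_depth and chain[-1] in explain:
        chain.append(explain[chain[-1]])
    return chain

# Invented example: tracing a missed alpha date back to a staffing decision.
causes = {
    "missed alpha date": "critical bugs found late",
    "critical bugs found late": "no testing until code-complete",
    "no testing until code-complete": "testers assigned to other projects",
    "testers assigned to other projects": "staffing plan ignored QA needs",
}
print(five_whys("missed alpha date", causes)[-1])
# -> 'staffing plan ignored QA needs'
```

The point the sketch makes is the one in the paragraph above: stopping at “critical bugs found late” and filing more bugs treats a symptom, while the root cause sits several “whys” further down.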

What else can you do? Think of rewarding the folks who don’t just complain, but come out with real issues and concrete suggestions on how to fix them. Take another look at your status reporting: is it just about reporting yesterday’s weather, or is it more like a weather guide for the coming week? Are your team members more likely to raise their hands, report an issue and happily volunteer to fix it, or are they more likely to paper over the issue with a quick fix and let someone else worry about it? Do you reward the early failures in your projects that help you learn, or do you use early successes to build political support for your project even if they lead to faulty assumptions downstream? Do you manage your customer, or the other way round? Do you see your upper management as a project resource you can enlist as required, or more like stakeholders to be satisfied at any cost? Are you, or your team members, more likely to compromise on ethics for short-term gains on the project, or do you prefer the old-school way of doing business with a conscience?

Will doing all this guarantee there are no cockroaches for, say, the next 6 months? No way. In life and in projects, there can’t be any guarantees. But doing some of these right things will create a climate where problems (cockroaches) are not ignored, where there is a systematic way to deal with first sightings, and where the messengers understand how the information they bring is of potential use to the project.

So, does your project love cockroaches ?


Software trends survey…

This is an interesting update on software trends:

A recent survey of more than 6,000 senior-level business leaders and software development executives found optimism for higher IT budgets and a preference for outsourcing, agile methods, and enterprise applications. In the survey, sponsored by software consulting firm SoftServe, 60 percent of respondents reported increases in 2009 IT development budgets, despite the uncertain global economic climate. Some 26 percent indicated that budgets had increased by more than 10 percent over 2008. Some 38 percent said they use some type of software development outsourcing. Of those, 67 percent used locations in India, followed by the emergence of Ukraine, China, and other Eastern European countries.

Nearly three-quarters (71 percent) listed new product or software development as a top priority, while cost- and expense-cutting followed with 51 percent, and improving usability ranked third with 49 percent. Forty-two percent favored agile methodologies as their chosen development model, with only 18 percent preferring the waterfall method. About one-third (36 percent) employed Capability Maturity Model Integration (CMMI) as their process maturity and quality model, with Six Sigma used by 25 percent. Some 62 percent of respondents were focused on enterprise applications, and 51 percent on Web-based applications.

Based on this survey, these are my observations:

  • Investments in IT budgets continue to rise, despite economic conditions. Given that any serious IT development is a long-lead-time endeavor, the motivation might be to sit in the labs and get stuff ready to ship just when the market comes back up.
  • 67% are using India as an outsourcing destination, which is a significant number, what with all the predictions of anti-outsourcing sentiment, shrinking cost arbitrage, or low design skills in low-cost destinations such as India. They can’t all be wrong at the same time. I think hard numbers are almost always the stronger evidence.
  • CMMI is (still!) not dead! If one-third of the industry uses CMMI, clearly there must still be benefits to it, and the non-believers might be well advised to stop harping on CMMI as a so-20th-century heavyweight, bureaucratic and document-intensive process and start learning how it adds value to the companies that still believe in it.
  • 42% favoring Agile methods is good news, and I guess the 18% favoring Waterfall is the last post of resistance that will unfortunately continue to wiggle its severed tail for many, many years to come. I am more interested to know how these 42% of companies / professionals use Agile and how effective it is for them. If the only reason they are doing Agile is that it is the new ‘in thing’, and they have no clue how effective it is for them, I would argue that those 18% of Waterfall loyalists are perhaps better off than the neo-Agilists, because they probably have a strong enough reason to cling to their bad Waterfall ways – one of them could even be that it works for them! After all, there is no such thing as one single right way to Software Nirvana, and no single method can claim a monopoly over all the best practices 🙂
  • I could not understand how some 25% were using Six Sigma in the same breath as the 36% using CMMI as their process maturity and quality model. I think Six Sigma is not a ‘model’ in the same sense as CMMI (or even Agile, for that matter). Secondly, if you are Motorola making cellphones (perhaps not anymore), then six sigma production means fewer than 3.4 defects per million opportunities. If you are using CMMI for software development, you know that you are using a set of staircased practices (at each maturity level) that guide you on ‘what to do’ at each step. I can also understand using Six Sigma principles and methods to measure and statistically manage processes like a peer review or a testing process, but I don’t understand how Six Sigma qualifies as a full-lifecycle quality model in the same league as CMMI. Or am I completely missing the point?
  • I am surprised to find no mention of Lean, unless the Agile umbrella includes all shades of original Agile methods, as well as Post-Agile methods.
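
The Six Sigma figure mentioned above is usually stated as DPMO – defects per million opportunities – where the six-sigma level (with the conventional 1.5-sigma shift) corresponds to 3.4 DPMO. A minimal sketch of that arithmetic, with invented numbers purely for illustration:

```python
def dpmo(defects, units, opportunities_per_unit):
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

# e.g. 17 defects found across 5,000 phones, 100 checks per phone
print(dpmo(17, 5_000, 100))  # 34.0 -- an order of magnitude above the 3.4 DPMO target
```

The point of the contrast stands: DPMO is a counting measure of process outcomes, while CMMI is a staged model of practices, so comparing adoption percentages of the two mixes categories.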

All surveys are sampling exercises and must be discounted for their inherent flaws, design constraints and execution issues. Despite this, it is always interesting to analyse trends over a period of time, as they indicate a movement and help us assess the wind direction and speed.

Addressing the issue of “social loafing” in large teams

Large teams might be inevitable in certain large endeavors, but there are several benefits of small teams. A small team can build and maintain a strong culture and a character that gets better with time. Small teams quickly learn the invaluable skills of teamwork and interdependence that lead to higher efficiencies, while ensuring that individual team members don’t end up competing against each other but rather collaborate on the common objectives. Small teams also mean small egos 🙂

One of the biggest motivations of making smaller teams is to provide higher levels of transparency and task accountability to individual team members. A large team tends to hide inefficiencies, both of its structure and of its people. One particular problem in a large team is the problem of “social loafing” – something that is perhaps best described in this poem by Charles Osgood:

There was a most important job that needed to be done,
And no reason not to do it, there was absolutely none.
But in vital matters such as this, the thing you have to ask
Is who exactly will it be who’ll carry out the task?

Anybody could have told you that Everybody knew
That this was something Somebody would surely have to do.
Nobody was unwilling; Anybody had the ability.
But Nobody believed that it was their responsibility.

It seemed to be a job that Anybody could have done,
If Anybody thought he was supposed to be the one.
But since Everybody recognized that Anybody could,
Everybody took for granted that Somebody would.

But Nobody told anybody that we are aware of,
That he would be in charge of seeing it was taken care of.
And Nobody took it on himself to follow through,
And do what Everybody thought that Somebody would do.

When what Everybody needed so did not get done at all,
Everybody was complaining that somebody dropped the ball.
Anybody then could see it was an awful crying shame,
And Everybody looked around for Somebody to blame.

Somebody should have done the job
And Everybody should have,
But in the end Nobody did
What Anybody could have.

This is a great description of how so many ‘obvious’ things don’t get done – due to miscommunication, misunderstanding, wrong assumptions, or sometimes just shirking responsibility. One of my best personal examples is working for a community team as a volunteer. I believe volunteering for a community is the greatest way to hone one’s teamwork – if you can get people who are not motivated by money or power or promotions to work together on a job, you can do anything! So, I was part of this team of a really nice bunch of 15-odd people that was required to serve 400+ families. This so-called executive committee was required to plan social events for the community. What I found was that 80% of the people on this team were there just for the meaningless social prestige. Over 90% of the work was done by just one individual (I was doing another 5%, and the rest of the entire team did the remaining 5%). Those 80% were otherwise regular, nice people, but as part of this large team, they could not be counted upon to deliver the goods. We organized some wonderful events, but it was mostly two or three of us doing the maximum running around, with the rest of the team just travelling First Class. After a few months, I was ready to quit (and I eventually did quit that team at the next team election). What I find is that the new executive committee has a very similar effort distribution – so it was clearly not me who was the aberration 🙂

In my professional experience, I have been involved in some really large software teams, up to 190+ people on a single product. While these efforts were large and simply required that many people, we used small teams, of not more than 7-9 people each, to divide and manage the work. Each of these programs was divided into a number of such small project teams, each a self-contained and autonomous unit that could deliver its functionality with minimum external dependency. Small teams have fewer communication paths, and allow the fostering of meaningful teamwork rather than poisonous politics. They are also a great way to groom technical leadership and managerial expertise in the teams. A large team is no fun for people to volunteer and train for roles and tasks that require building special skills. But a small team must often replicate several skills in each team, and hence is a great way to groom future leadership, apart from acting as a derisking strategy to counter the impact of attrition. Agile practices advocate small teams for achieving high team throughput, and the issue of social loafing is indirectly addressed by things like daily scrum meetings, where social shame and the feeling of letting down the team ‘force’ team members to get their act together. An excellent discussion of social loafing can be found here.
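
The “fewer communication paths” point can be made concrete: a team of n people has n(n-1)/2 possible one-to-one communication paths, so paths grow quadratically with team size. A quick sketch (team sizes picked to match the numbers above):

```python
def communication_paths(n: int) -> int:
    """Number of distinct pairs in a team of n people: n*(n-1)/2."""
    return n * (n - 1) // 2

for size in (7, 9, 190):
    print(size, communication_paths(size))
# a 7-person team has 21 paths, a 9-person team 36,
# while a single 190-person team would have 17,955
```

This is why splitting a 190-person program into autonomous 7-9 person units changes the communication problem qualitatively, not just incrementally.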


Social Loafing is a real team dysfunction not restricted to a given country, culture, society, industry or team size. It has been observed in all types of societies and all kinds of groups. Making the team size small is one of the ways to address social loafing. In the context of software teams, everything depends on how individuals make commitments and live up to them. And hence, it becomes extremely important for a manager to be aware of this team dysfunction and evolve strategies to deal with it. Having a small team with clear roles and responsibilities, and setting common standards for work evaluation are some of the ways that can reduce the extent of social loafing in teams and improve morale and team productivity.

How Lean thinking improves productivity in software teams?

At its core, productivity for a software team (often wrongly termed programming productivity, because software development is much more than mere programming) looks like a great idea. In its simplest form, it compares the output of the team (amount of useful and usable software created, amount of unnecessary rework done, number of defects produced, etc.) to the input (time, effort and resources invested in producing software), factored by the uniqueness of the given software endeavor and other external environmental factors (complexity of the software being produced, impact of team size, nature of team spread and its familiarity with the problem at hand, problem domain, and so on). Unless you are willing to discount software development as a non-business-critical activity undertaken purely for the labor of love that should not be ‘measured’ lest it dilute the very ethos of software development as a creative cognitive endeavor, one might agree, albeit reluctantly, that some measure of how well we are doing is clearly in order. Can you think of a single business activity where such a measure should not be taken?

In the last few decades, researchers and statisticians have tried to create quantitative measures of productivity. However, due to an often unilaterally perceived, inherent ‘uniqueness’ of every software endeavor, practitioners have consistently rejected such measures, contending that there are either far too many assumptions in their definition, far too many variations in practice, or both. Instead of focusing on the ‘vital few’ commonalities, practitioners have unfortunately rejected productivity measures in virtually all forms by dwelling on the ‘trivial many’ differences. What the industry could have gained from a coarse-grained definition of productivity was unfortunately lost in the pursuit of decimal-digit precision in productivity measures.

In this blog, I will discuss the subject of productivity from a conceptual plane, and explore how Lean thinking helps us understand it little better by establishing a correlation between wastes in software development and productivity of the team. Lean thinking is much more than identification and elimination of the seven types of wastes, but I will only take up the issue of wastes in this blog.

In the simplest form, we can call a process or an activity productive if it creates an output higher than its input. Though the input and output are often in two different and inequitable currencies, we are still mostly able to compare them. What that means at a conceptual level is that:

  • If Output < Input, we can say that the productivity is low
  • If Output > Input, we can say that the productivity is high
  • If rework is high, we can say that the productivity is low
  • If rework is low, we can say that the productivity is high 
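
To make the conceptual view above slightly more concrete, here is a deliberately coarse sketch; the units (story points, effort hours) and the numbers are assumed purely for illustration, not proposed as an industry metric:

```python
def output_per_hour(output_units, effort_hours):
    """Useful output produced per hour of total effort invested."""
    return output_units / effort_hours

def rework_ratio(rework_hours, effort_hours):
    """Fraction of total effort spent redoing earlier work."""
    return rework_hours / effort_hours

# e.g. 120 story points delivered with 400 hours of effort, 80 of them rework
print(output_per_hour(120, 400))  # 0.3
print(rework_ratio(80, 400))      # 0.2 -- a fifth of the effort produced nothing new
```

Even a ratio this crude gives two teams a shared starting point for asking why output is low or rework is high, which is exactly what the generic statements above cannot do.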

These are highly generic views of productivity that most of us understand, but unfortunately they are not very useful for understanding the factors that cause an output to become low, or the rework to become high. Developers might give one view, and testers another, as to why the output is low. There is no common vocabulary that can explain at a fundamental level what is causing the productivity to dip. Context-specific variations (like differences in programming environment, perceived differences in complexity, developer competency, tester efficiency, etc.) might be too many to form any meaningful conclusions from.

Lean thinking offers a more specific understanding of the wastes that typically make a process costlier (in terms of time or effort or both) than it ought to be. Lean identifies seven types of waste in a manufacturing setup. These wastes introduce avoidable costs, and Toyota pioneered a production system and an organizational culture that seeks to systematically eliminate the root causes behind them. Software practitioners, led by Mary and Tom Poppendieck, have explored borrowing from Lean thinking in software development over the last couple of years. Let’s examine how Lean encourages elimination of the following seven types of waste in software development, and how each impacts the productivity of a team:

  1. Overproduction: Implementing a feature is an irreversible decision that takes a lot of time and effort from the product engineering team, and delivery of a feature must wait until the next release cycle (whether the big-bang release in conventional software development, or the next real shippable release in agile development). If there are extra features in the software that remain unused because they are not of high priority, but the engineering team is spending effort to develop and test them anyway, that is probably lowering the productivity of the team, which could have used its effort elsewhere to develop other important features. The customer does not benefit from such overproduction of features either, because he must not only wait for the desired features to be delivered, he must also pay the delivery team for those unimportant features that are not being used.
  2. Inventory: Anything that has not yet been delivered to the customer is work-in-progress (WIP), or inventory. The higher the inventory in a system, the higher the number of resources locked up and unavailable for any other activity. Even work that has been completed but is still sitting on the shelf is inventory that is not creating any tangible value for either the team or the customer. If there are too many requirements yet to be implemented, the requirements analysis, design, development and testing happen without any of that effort being delivered to the customer, increasing the work in progress and thus lowering productivity at a business level. Even if a critical feature has been completed but is waiting simply because several other not-so-important features must be included along with it in the next release, it too adds to inventory, because that important feature is not immediately delivered to the customer.
  3. Extra Processing: If the software is implemented in a complex manner that requires extra processing (e.g., a design that requires more storage, or duplicates data or code needlessly), it is either increasing the effort required to develop and test the software, or increasing the usability complexity, or both. Either way, it is lowering the productivity. However, the opposite of extra processing is not ‘clever’ programming, which might solve the problem in the short run but create maintenance problems in the long run.
  4. Motion: If information is not readily available to the engineer, he/she has to spend unnecessary and avoidable time and effort fetching the required information (which could be anything – clarification about requirements, the latest design specs, or locating the correct module from another team for integration), effectively lowering the productivity.
  5. Defects: Any defect that is passed on to the customer requires the engineering team to once again open up the code and spend effort fixing the problem. Since no new output is generated in this process – the only output is restoring functionality that should not have been broken in the first place – the net productivity over the entire product engineering lifecycle is adversely impacted.
  6. Waiting: Perhaps no other project resource is as irrecoverable as lost time. Brooks’s Law reminds us how difficult, if not impossible, it is to recover a delay in a project. Waiting for any resource for any reason creates unpredictability in software development, apart from creating activities of zero or low output, which again reduces the productivity.
  7. Transportation: Handoffs are perhaps a Fordist legacy to waterfall development, where a team of business analysts would complete their work in isolation and hand over the specs to the dev team. The dev team in turn would work in isolation and pass on the software to the test team for testing, and so on. If all these teams work in silos, such isolation often results in information distortion (or loss) downstream, which can mean missed requirements, misunderstood requirements, unclear design, and so on. All of these will eventually require extra (and unplanned) effort to implement the right functionality or fix the broken functionality, lowering net productivity over the entire engineering lifecycle.
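
One way Lean practitioners make these wastes visible is flow efficiency: the share of a work item’s total lead time that was actually value-adding, with the rest consumed by waiting, handoffs and rework. A minimal sketch, with invented stage names and durations:

```python
# Each entry: (stage, hours, value-adding?). Numbers are illustrative only.
timeline = [
    ("analysis",         16, True),
    ("waiting for dev",  40, False),  # Waiting
    ("development",      24, True),
    ("handoff to QA",     8, False),  # Transportation
    ("bug-fix rework",   12, False),  # Defects
    ("testing",          10, True),
]

value_hours = sum(h for _, h, adds_value in timeline if adds_value)
total_hours = sum(h for _, h, _ in timeline)
print(f"flow efficiency: {value_hours / total_hours:.0%}")  # 45%
```

Tagging the non-value-adding stages with the waste categories above is precisely the technology-independent vocabulary the prose argues for: it names the systemic losses without blaming any individual.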


As we can see, analyzing wastes in software development using Lean thinking provides us a technology-independent and function-neutral view of the inefficiencies in the software process that can be used to achieve substantial improvements in team productivity. None of these ‘attack’ an individual for lower productivity; they only look at the systemic deficiencies that must be eliminated for a team to perform at a higher level. However, of what possible business value would the elimination of such wastes be if we can’t picture the ‘before’ and ‘after’? It is extremely important to assess the state before and after introducing a change – how else do we know if our efforts are yielding the desired results? So, measuring productivity in some absolute numbers is the feedback that tells us whether an improvement is happening in the right direction (or not).

Lean thinking or not, do you measure productivity of your software teams? If yes, what for, and if not, why not?

Why Agile doesn’t sell with Management?

The Agile thought movement has been around in different stages of evolution (and, not to mention, different vocabulary from different preachers and schools of thought) since the 1960s – perhaps it always co-existed with its better-known but half-effective cousin, Waterfall, all those decades, without ever becoming a mainstream / preferred method of solving complex problems. The declaration of the Agile Manifesto in 2001 was a watershed event, and the body of literature that has grown since then is simply mindboggling. There is a huge experience base on various Agile methodologies and methods that seems to be gaining firmer ground globally with every single passing day.

Divide and Rule

There can’t be any doubt that the fundamental idea of approaching a problem in smaller pieces is great – it is logical, it is result-oriented, it is economical and it is highly intuitive. After all, the Imperial British conquered half the world with their brilliantly cunning doctrine of ‘divide and rule’. It allows the human mind to focus on what is ‘visible’ here and now, as opposed to worrying endlessly about the entire length and breadth of the problem, which may never be crisply definable, or might require a disproportionately large amount of time (and effort) to define clearly – by which time it is quite likely that the problem has undergone changes due to natural ageing or changes in the environment around it, or an improved clarity into the problem has given rise to another thought process about its solution, or both. Frankly, neither of these is bad, in the sense that it is always better to have a clearer definition of the problem, or a clearer definition of the solution. The only problem is that it takes an inordinately large amount of time to get there, with no mechanism to know if what is being considered will actually work as envisaged. While executing, one could encounter several more unknowns along the way that force changes to the laid-out strategy. However, when you slice the problem into meaningful increments, you allow the solution to grow, one slice at a time. Of course, you still need a strategy to accomplish the top-level vision – but the working plans must not attempt to address that entire strategy in one single pass. There must be a mechanism to (a) break down a problem into meaningful slices that (b) can be accomplished individually and incrementally to (c) make definitive progress towards the eventual goals (d) without unduly front-loading the plan with later-day uncertainties.
Agile encourages establishing a beachhead and then reinforcing it, slowly and steadily as opposed to landing in the middle of a battlefield and getting fired at from all directions. As a concept, this is splendid.

Sensible Trade-offs

It also allows a sensible trade-off between cost (resources), time and scope of work to be performed – the good old ‘triple constraints’ of any project. A small team could be highly efficient, and could easily be composed of experts who don’t have to worry about a lot of non-work-related problems that will eventually befall any larger team. However, a small team can only achieve so much in a given time period. So, if you want to accomplish more work than what a bunch of experts can undertake at a time, you either must be willing to have a larger team deliver in the same short time period, or allow the highly efficient team to work for a longer period. There is enough historical evidence to suggest that small teams are the best at solving just about any problem that requires intense human interaction and the use of individual human cognitive skills. However, not every large task can wait for a small team to complete it over a prolonged time (as also attested to by Fred Brooks in his classic, The Mythical Man-Month). Commercial pressures (“time to market” to meet the shopping season and beat the competitor), rapid obsolescence of technology, fluctuating customer needs in the short term and uncertain preferences over the long term, etc., are just some of the market-driven reasons why running a long development-cum-release cycle is often a bad idea. Add to that indirect factors like manpower attrition risks over the long term, an uncertain economic situation in the coming quarter, etc., and you have a complete set of risks that have nothing to do with the problem at hand! However, when we work with long-cycle single-pass development, we tend to create a Frankenstein of project management, because most such problems will now have to be ‘supported’ just because of choosing a particular problem-solving method! Mind you, most of these were not problems originally stated among the technical challenges; we piled them on because of our management decisions.
Developing products incrementally allows us to filter out such indirect risks and focus only on the core risks related to the problem at hand.

Short feedback cycle

Agile promotes ‘doing it now’ over ‘deliberating it forever’ and relies on continuous feedback on working software as a faster and more reliable means of improving software than a rather late integration and a back-loaded testing cycle that is always ‘too little, too late’. Agile makes it possible for teams to develop (and deliver) software faster, thereby providing early visibility to customers by way of ‘working software’, and to get early feedback on important aspects like functionality, usability, scalability, workflow, and so on. Customers are happy to see working software because they are mostly used to documents that don’t mean much to them. Management is happy because the customer is happy, and the teams are happy because they get immediate feedback on their work that allows them to improve their products. There is clearly merit in having a short feedback cycle.

Painless Change

Agile handles changes elegantly throughout the development cycle. Short of handling a mid-flight change, it allows just about any change to be introduced at any stage of development. If we believe that it is beyond human capability to correctly specify every single requirement of a system unambiguously – not just in one go, but ever – then this tweak of handling changes on the go makes very smart sense. As we have seen from RUP, the ‘phases’ of a project don’t really mimic the engineering activities undertaken at a given point in the project lifecycle – it might well be a case of shifting weights in each of the phases, but it can never be the case that we could be done with all requirements in any phase and then move on (if that were the case, Waterfall would be a little more effective, perhaps). Of course, this strategy of handling changes flexibly means that there is never a clear and firm requirements baseline, and hence it might be near impossible to define a fixed-price work contract (or an outsourcing contract), but if the customer and the delivery organization both understand the new way of doing business, it will come out just as good, or better. The point is, a work contract exists to support the project, not to dictate how a project must handle changes. If change is the fundamental nature of the beast, we had better have contracts that recognize this hard truth and allow both parties to engage in a win-win relationship, rather than a contractual relationship that frustrates everyone.

Team Ownership

Traditional teams have long followed the model of ‘division of labor’, or some such variant. In a small team, everyone does everything, but the larger a team grows, the greater the need for ‘vertical differentiation’ and ‘horizontal differentiation’, which allow people to make decisions for themselves commensurate with their functional skills and their level of competency in them. Again, this is not bad, for we can’t confuse ‘desire’ with ‘competency’. So, in theory, the traditional distribution of responsibility works well, and perhaps worked well in yesteryears (or maybe it didn’t, but that is not the point here). It certainly doesn’t work anymore. The historical advent of ‘trade unions’ in traditional industrial environments was largely motivated by the desire to participate in the management process, and we see similar traits in the knowledge industry as well. I think this is a positive thing: it allows ‘workers’ to become better informed, and hence better engaged in the decision-making process, with better ownership and accountability for the results. Agile has done tremendous service in institutionalizing this very important issue.

Agile improves team ownership of tasks and creates ‘self-organizing’ teams. A team that collectively makes its decisions is more likely to own them and support its team members in realizing their individual goals that add up to the team goals. A ‘mutually exclusive and collectively exhaustive’ cross-functional team will also be highly interdependent on its members to achieve their shared goals, and hence likely to have a healthy mutual respect. All of these enhance a team’s sense of pride and ownership. While I strongly believe a mere adoption of Agile practices is not likely to make teams self-organizing overnight, it is still a tremendous improvement on the traditional management structure.

So then, what is the problem?

These are great strategies for making software development more meaningful and value-adding to customers, and less painful to developers. Still, how come most companies (read ‘managers’) have not warmed up to the idea, let alone adopted it? In a recent blog, Allan Kelly says the adoption of Agile is probably in the range of 5%-15%. I would imagine the number closer to the lower bound than the upper, and even then, I am a little sceptical about its real implementation and actual effectiveness. How do we know that the Agile implementation in those companies is not limited to peripheral pilot projects? How do we know that when a 10,000-person company says they have adopted Agile, they are not saying that only some projects use Agile? Joe Little maintains a list of firms practicing Scrum, clearly the favorite Agile methodology, but there are just 160+ companies in it. Certainly, many companies doing Agile are not listed there, but I also know of companies on that list who just pay lip service to Agile. There have also been recent discussions on implementations of Agile in large companies on some Agile / Scrum lists, and even though the debate is inconclusive (which is not so relevant), the fact remains that there are real issues in the adoption of Agile in the larger world. My question is, why is it so?

I think there are several reasons, and in this blog I will not include the most favorite punching bag 🙂 but rather focus internally. I believe that the way the Agile buffet has been laid out, it offers considerable challenges that self-limit any further growth, at least in its current avatar. For one, Agile is clearly portrayed as anti-establishment. Now, this is not necessarily bad, but an outright contempt of management – so much so that managers are not even widely considered in the category of ‘chickens’ – in my view makes ‘management’ uninterested in knowing anything further about it. If the so-called command and control was one-sided (towards management) and bad, Agile is equally one-sided (towards teams) and equally bad. I am pro-empowerment, so this statement needs to be understood well before I am misunderstood. There is all merit in the team owning its own decisions and being accountable for them, but the team also owes it to management to be answerable and accountable for its decisions, progress and results. In traditional projects, management had a grip on the overall project scope and schedule, even if there was an element of uncertainty in those commitments. In Agile projects, we took away the big picture, denied management the sense of control (again, neither good nor bad – just stating the facts as they are) and, worse, we are in no better condition to give them a sense of the overall project. Yes, we have a better grip on the current iteration, but are we any better informed, with a similar margin of error as before, in predicting the overall project? I doubt it.

So, if the result of an Agile adoption is that management is not involved, gets no better commitment from teams on the final project delivery, is kept away from periodic progress, is not allowed to be part of project meetings, and has no say on the work being undertaken or dropped from a sprint without any prior two-way discussion, I don’t see how that same management is going to bless the adoption of Agile in its teams. If it doesn’t give a manager better confidence in his business’s collective ability to solve problems, however much it might help his teams, I don’t think any human being is going to be enthusiastic about it. So, it is not surprising that Agile adoption continues to be so low, even when the benefits clearly outweigh the costs of adopting Agile. I think the bottleneck is not management; we are the bottleneck. We have failed to include management as part of the solution team. We just conveniently decided (unilaterally, I must add) that management is the classic villain who must be kept away if the team is to achieve something worthwhile.

I think in our zeal to fix the ‘problem’, we just let the pendulum swing to the other side, and it must eventually settle down somewhere in between to be of use, because that is the real world. Perched at an extreme, it has no value, because the positions it must take to stay there are not closely related to the real world. I believe that a New Agile will evolve (in my view, it is already here – it is just that some people are so steadfastly holding on to the remnants of the old Agile) to become a better fit for the real world and include all parts of the workforce as respectable members of the solution team. The smart among us will freely drop the dogmatic aspects of Old Agile and quickly adopt things that work for them, whether prescribed or not.

We already see people experimenting with Lean, Kanban, Scrumban…the smart ones will not limit their potential with someone else’s prescription.

The rest will perish together.

Blame your “flaccid developers” and your “flaccid customers” for your poor quality products!

This is the text from a recent announcement for a course by Ken Schwaber on “Flaccid Scrum – A New Pandemic?” (text underlining is mine):

Scrum has been a very widely adopted Agile process, used for managing such complex work as systems development and development of product releases. When waterfall is no longer in place, however, a lot of long standing habits and dysfunctions have come to light. This is particularly true with Scrum, because transparency is emphasized in Scrum projects.

Some of the dysfunctions include poor quality product and completely inadequate development practices and infrastructure. These arose because the effects of them couldn’t be seen very clearly in a waterfall project. In a Scrum project, the impact of poor quality caused by inadequate practices and tooling are seen in every Sprint.

The primary habits that hinder us are flaccid developers and flaccid customers who believe in magic, as in:

Unskilled developers – most developers working in a team are unable to build an increment of product within an iteration. They are unfamiliar with modern engineering and quality practices, and they don’t have an infrastructure supportive of these practices.

Ignorant customer – most customers are still used to throwing a book of  requirements over the wall to development and wait for the slips to start occurring, all the time adding the inevitable and unavoidable changes.

Belief in magic – most customers and managers still believe that if they want something badly enough and pressure developers enough to do it, that it will happen. They don’t understand that the pressure valve is quality and long term product sustainability and viability.

Have you seen these problems? Is your company “tailoring” Scrum to death? Let Ken respond to your issues and questions!

Ken will describe how Scrum addresses these problems and will give us a preview of plans for the future of the Scrum certification efforts.

Here are my observations and thoughts from this synopsis:

  • line 2: what does “when waterfall is no longer in place” mean? So, when waterfall was still in place, there were no issues??? Somehow, one gets the feeling that all the problems came to light only when waterfall was “no longer in place”… so, why not put waterfall back in its place 🙂
  • line 5-6: In a Scrum project, the impact of poor quality caused by inadequate practices and tooling is seen in every Sprint? …in every Sprint?… now, wait a minute… I thought a Scrum project did not have any of these problems because there were no inadequate practices or tooling issues. And you certainly don’t expect to find such issues in every Sprint, or do you?
  • line 7: OK, so now we have an official reason for all the ills of the modern world: “flaccid developers” and “flaccid customers”. Wow! I am not sure that finger-pointing and squarely blaming teams and customers, without giving them a chance to even speak for themselves, is the best way to build trust with either. And I thought Scrum was cool about trusting developers, not fixing blame on individuals, interacting with customers… but flaccid developers and flaccid customers??? Does it get any better than this?
  • OK, I will probably agree with ‘unskilled developers’ in the context of building an increment of product in an iteration, but what the hell is an ‘ignorant customer’? Try telling a customer that he is an ignorant customer… Scrum or no Scrum, you are guaranteed some unsurprising results! And which customer, paying through his nose, waits for the “slips to start occurring”? In these tough economic times?
  • Is your company ‘tailoring’ Scrum to death? I thought there were only two shades of Scrum: either you did Scrum-by-the-book or you did not. Since Scrum is the modern-day Peter Pan that refuses to grow up, we have only one version of Scrum to play with, unless some group of people decides that Scrum can now grow to the next version (perhaps more because of commercial interests than anything else, whenever that happens). So, how can one ‘tailor’ Scrum? By any stretch of imagination, that is NOT Scrum. I mean, that is not allowed! Right? So, why are we discussing it and wasting time? We might actually be acknowledging, and unknowingly legitimizing, a secret world out there where Scrum can be ‘tailored’ and still be called Scrum! That is sacrilege!

I always hate marketing messages that overpromise miracles, offer snake oil, belittle the intelligence of people, ridicule people… why can’t people simply stick to facts and figures? Why don’t we talk in a non-intimidating language that encourages people to look up to what is being talked about? And in the context of the current subject, I doubt the Scrum community is going down the right path if its next target is developers and customers. One of them does the work and the other one pays for it. Who gives us the right, or the data, to talk about either of them in absentia? Customers are customers, and even if they are irrational, they are still the customers; whether paying or not, they alone are going to decide how they will behave. Yes, we might not like it, but is ridiculing them as ‘ignorant customers’ going to turn things in our favor? Other industries like retail, hospitality, and transportation have centuries of collective experience in managing customers, yet they never break the single most cardinal rule of customer service: the customer is always right! Yet the first chance we software developers get to explain a poorly designed or bad-quality product, we right away blame the customers for it! As if that were not enough, we then look inwards and blame the developers! GREAT!

I think I will just develop software without blaming anyone else for my mistakes and limitations 🙂 

When are you planning to fail ?

Yes, you read it right…when are you planning to fail ?

In a world where an insatiable hunger for ‘success’ is an obsessive-compulsive disorder (OCD), we don’t think about ‘failure’ much. It is shunned, scoffed at, systematically eliminated (or at least mitigated), avoided, bypassed, ignored… everything but embraced with an open mind and open arms. All management ideas are directed towards the age-old wisdom of “if you fail to plan, you plan to fail” and not towards something like “what doesn’t kill you only makes you stronger”. All project management philosophies are centred around safeguarding projects from any possible failure… but has that stopped projects from failing grandly? Every risk management action is aimed at making the project safe from failing… and yet we still see so many projects biting the dust, struggling for survival. Failure appears to be a social embarrassment best avoided in dinner-table conversations. New-age entrepreneurship, especially in the Internet world, has done a lot to remove the stigma that eventually attaches to people associated with any well-known failure, but in our everyday lives we still continue to play safe – rather, extremely safe.

Of course, I am not talking about breaking the law and driving without seat belts on, or driving in the middle of the road jeopardizing everyone’s life. I am talking about thinking like Fosbury and challenging the established way a high jump is done, even at the risk of failing, because what you are about to propose hasn’t been tested and certified to succeed. I am talking about taking those small daily gambles that strengthen you when they fail. I am not interested in the small daily gambles that are supposed to strengthen you when they succeed – honestly, they don’t teach you anything. In fact, those small successes might limit your ability to reach for higher skies, because you remain contented with those sweet-smelling early successes. In my view, people who don’t want to risk gentle failures must prepare themselves for grand failure!

The word ‘fail’ is such a four-letter word that it evokes very strong emotional responses. In an achievement-oriented society and a success-intoxicated corporate culture, fear of failure drives people to seek safer havens. When choosing what subjects to take in college, we ‘force’ our children to take the ‘safest’ subjects – the ones that have maximum job potential and ensure maximum longevity in the job market. (I agree that ‘force’ is not quite the right word, and I don’t use it in its literal sense here. The ‘force’ can come from parental expectations, societal pressure, peer pressure, the ‘coolness’ of a job, the perks of the job, etc.) Traditionally, those subjects have been Engineering and Medicine, and anything else that ensured a government job (in India, that is – and I am sure every country has had its own fixations at different times). So the foundation of seeking safety from failure gets laid right at the start of one’s career (well, in my view it is erected right after birth and is still curing by the time we start our careers, but that’s for another blog post). After graduating, there is once again a massive de-risking operation: find some company that has a ‘big’ name (even if one is doing a fairly mechanical job there). Basically, trade any hope or ambition to do something new and creative in life for rock-solid job safety in a mundane assignment! As a rookie, you then become a link in an enormous chain, where your job is dictated by volumes of SOPs (Standard Operating Procedures) that trade your urge to experiment and innovate for a higher ‘perceived’ safety of the given process. The logic being: this is the way we know it has been done before, and since it worked the last time, we expect it will always work, and hence this is the standard procedure. Wow!

You then become a manager and have to run a project. You have the ‘process police’ breathing down your neck, challenging every single thought, let alone decision, that deviates from those SOPs. You want to try an innovative recruitment… no, that is too risky. Why not first prototype this sub-system… no, finance won’t allow that, because we can’t bill the customer when nothing tangible has been delivered. How about a 200% make-or-break bonus for this team that is up against Mission Impossible… no way! We will have mutiny in the other projects. How about this… and how about that… Finally you give up and give in… and the project somehow starts. Most optimizations, if any, have been seriously watered down by now, and are highly local at best. The result is predictable: no substantial improvement over the previous project at best, and an utter fiasco at worst.

Let’s pause for a moment and quickly run through the script here. We are taking every beaten path that has individually been successful in the past (or so we have been told), and yet there is no guarantee of success on its next run. In each of these situations, the project manager faces the music – after all, the process did work the last time, so if it is not working this time, it must be a capability issue (or, even worse, an attitude issue) with the manager. Being at the short end of the stick and facing an imminent loss of professional credibility, the manager pushes his team to burn themselves out over really late evenings and long weekends and somehow gets past the finish line… but not before incurring emotional and professional scars, not before pruning some of the original features, not without some ‘known’ bugs in the release, and definitely not before going way over the original budget.

What went wrong? We did everything to prevent ‘failure’, but we still failed to deliver a decent product on time and within budget (even if overtime is unpaid, it still means shooting past the budget), blah, blah, blah…???

This is what went wrong: in order to safeguard our project from any (all?) possible failures, we fortified it. We poured money like water to avoid being mousetrapped. We used innovative processes that allowed us to take baby steps and gain early successes. Those early successes, after all, helped us validate our assumptions, and only then did we go all out. And yet? In this process of scaffolding the project, we actually erected artificial life-support systems to make the project stand on its feet. Most assumptions were tested in isolation for their validity under ‘standard test conditions’. Unfortunately, those early successes gave the illusion that everything was fine, even though several chinks in the armor had not been fully exposed. The absence of ‘gentle failures’ early in the project lifecycle ensured that the team thought they were on rock-solid footing. In reality, however, we could never fully guarantee that, especially when those early cycles were run with the intent to make something work, not to make something fail! Imagine you were testing a tool to detect landmines. For acceptance testing of the equipment, you operate it under ‘specified test conditions’ and accept it. However, the enemy won’t lay mines under those very standard test conditions! Chances are extremely high that the enemy’s plan will fail your equipment. I am only asking you to fail it yourself before someone else does it for you.

Here is something you might want to do on your next project. Identify all possible and potential ‘points of failure’ in the project (there is always more than one). Challenge everything and every place where Mr. Murphy can strike. Design the project plan to fail – fail softly, fail early, and fail as often as required – to ensure there is no grand failure (or at least a significantly lower chance of one). Design your test criteria such that success is measured by how many of those assumptions you have managed to fail. Cull the important lessons from those failures, and now build your project plan to avoid grand failures. Of course, you won’t completely eliminate grand failures, but you will have moved a couple of steps closer to avoiding them or minimizing their impact, compared to what an early-success approach would have given you.
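In software terms, this idea of measuring success by how many assumptions you manage to fail looks a lot like adversarial or failure-injection testing. Here is a minimal sketch in Python – the function and the list of assumptions are hypothetical, invented purely for illustration. Instead of only exercising the happy path, we enumerate the assumptions a toy parser makes, attack each one with an input chosen to violate it, and record which assumptions fail gently (a clean error) versus which silently survive:

```python
def parse_positive_int(text):
    """Toy function under test: assumes well-formed, positive numeric input."""
    value = int(text)
    if value <= 0:
        raise ValueError("expected a positive integer")
    return value

# Each entry pairs an assumption with an input chosen to violate it.
attacks = [
    ("input is numeric",        "forty-two"),
    ("input is positive",       "-7"),
    ("input has no whitespace", "  42  "),  # int() actually tolerates this
    ("input is not empty",      ""),
]

def run_attacks():
    """Split assumptions into those that failed gently and those that held."""
    broken, survived = [], []
    for assumption, evil_input in attacks:
        try:
            parse_positive_int(evil_input)
            survived.append(assumption)  # assumption held; we learned nothing
        except ValueError:
            broken.append(assumption)    # gentle failure: a limit discovered
    return broken, survived

broken, survived = run_attacks()
```

Here the ‘score’ of a test run is the `broken` list: every assumption we manage to break early and cheaply is one less chance for Mr. Murphy to break it for us in production.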

So…when are you planning to fail ?