Category Archives: Scrum

Why is your agile still a lot like dogma on steroids?

I continue to be amazed (though ‘shocked’ and ‘dumbfounded’ would be more appropriate words here) by the amount of dogma in agile circles. Do this! Don’t do this! Wasn’t agile meant to liberate us from the tyrannies of the so-called big, monolithic, non-agile white-elephant processes, and create a more nimble mindset, a flexible culture and an adaptive process framework where ‘inspect and adapt’ was valued more than ‘dogma and prescription’? I sometimes think the poor old waterfall, for whatever it was worth (and I do believe it was worth a lot more than most people are willing to give it credit for!), was more open-ended and invited innovation simply because it was not perfect: it was clumsily built for a rainy day, and depending on how sunny your day actually was, you were allowed (rather, encouraged and expected) to pack only as much ration as you felt might be needed for the jungle trip.

For example –

  • You could have had any team size.
  • You could choose to locate your team members anywhere they wanted to be.
  • You could tailor your process in whatever way that made sense for you.
  • You could choose to slice your functionality in whichever way made sense.
  • You could stagger the releases in whichever way you felt offered better yield.
  • and so on.

In several ways, such innovation required one to master the nuances of software development. And to those who would blame the method for the failure of such projects, my response would be – when there is so much flexibility in the system, why blame the system for being so ‘rigid’?

I have experimented more with waterfall methods in my career than with agile methods (which I also continue to do, much to the purists’ chagrin :)), and here’s a small list of key experiments that I remember doing – something that gave me immense joy because I had the liberty to try out stuff and see if it solved my problems better –

  • In 1995, when we realized that we were in a technologically evolving and complex domain (Asynchronous Transfer Mode switches), we didn’t build castles in the air with the ‘non-negotiable’ waterfall-based product development process that the company had mandated, but decided to build an early prototype that would allow us to validate some key assumptions about our architecture. Yes, the company’s process didn’t support us, but yes, we broke the rules :).
  • In 1997, when we ‘discovered’ that standard waterfall wouldn’t help us speed up the development cycle while we waited for the previous phase to complete, we didn’t blame the process for it; we simply ‘invented’ the sashimi model and kept going.
  • In 1998, when conventional estimation didn’t work out in a domain that was completely new and unknown to us (digital set-top boxes), I was not obligated to follow some obsolete standard process (even though we were a CMM Level 5 company), but was encouraged to try out estimation based on complexity weights, using methods like PROBE, to mitigate the estimation risks.
  • In the same project in 1998, when the project’s technology was new to us, I was able to home-brew and define a process with five increments that recognized the experimental nature of the problem we were solving and the learning curve of the team, rather than sticking to a one-size-fits-all single-release process.
  • During 2000 to 2003, I liberally experimented with waterfall methods to build teams that delivered large products in the telecom and datacom domains with high success rates. At one time, I had 190+ engineers in my team on a single product, organized around 14 parallel projects running on a common timeline, and we delivered the product on time in the complex 3G softswitch space. Yes, all in waterfall :). At that time, we were ranked last in the global market. Today that company is the global leader in that space, and I can proudly say some of our efforts were behind that turnaround.
  • In 2004-05, I experimented with our conventional enterprise service-pack release model by liberally adding the weekly cadence from Gilb’s Evo process to create a weekly delivery model, and by accidentally stumbling on the concept of limiting work in progress, we created one of the world’s first kanban implementations without knowing kanban – to be fair, it didn’t exist at that time – all without any prescriptions, just with a liberal dose of enthusiasm and an undying spirit of experimentation.
  • …and the journey continues.

And what has been my take on agile theory and practice? Not so open to experimentation or innovation. Sad, but true. Take some simple things for a start:

  • Agile methods recommend a small team size. That’s good common sense, backed by scientific studies and anecdotal data from ages, and it is generally good advice. What’s not so good is when we then insist that agile teams can only be within a certain numerical range, and any team size beyond that is blasphemy! In fact, my extreme view is that the best team size is the one you have right now, not an ideal something from the literature, however much backed by data that might be! In several ways, it is the same as the ideal body weight – most of us will never have it, but what we have ‘here and now’ is the most important number to start with. So, why waste time building an ideal team and lose all the precious learning opportunities in that process? If my team has a true ‘inspect and adapt’ DNA, irrespective of where I start, I will get to the finish line. Somehow. Isn’t that more important, and more truly agile, than finding the perfect take-off point?
  • Take user stories. The notion of moving to user stories makes a lot of sense given the constant pace of change in the world around us. PRDs could never cope with documenting such copious amounts of detail – and if anything, they would only succeed in documenting the history of what the customer wanted a year back! Now, on one hand we want our user stories to be ‘negotiable’ (from the acronym INVEST) so that we can create meaningful conversations between the product owner and the development team. This again makes a lot of sense in an imperfect world where documenting every single requirement with its myriad corner conditions might be practically impossible, and has diminishing rewards beyond a point. So, if we can create a quick and cheap way to get started, and have both the process and the humility to let the development team come up with more questions and options, then this premise holds very high promise. However, in the same breath we are told that some things are non-negotiable: for example, the Scrum process that we want the team to follow must be followed exactly as prescribed. Why is that? If Fosbury had listened to the prevailing wisdom on the best way of high jumping, he might never have broken the proverbial sound barrier in high jumping.

Hey… what happened to the big promise of the team being allowed to figure out its own ways and means? Once we ‘tell’ them, shouldn’t we step aside and let the team find its true north? Do I hear you mention the ‘Shu-Ha-Ri’ thingy? Do yourself a favor – go and find a student (even better – try it on a second-grader) and keep telling them that they are still a ‘Shu’ and hence must obediently listen to whatever you are telling them. They are supposed to follow your instructions to the hilt and not even think of wavering a bit. Good. Now take a deep look at their reaction. Count yourself lucky if they choose to ignore you and move on, for there are far more violent ways they could have chosen to respond to such dogma. In short – this is not the time and age for dogmas. Kingdoms, colonies and communism are all long dead. Accept it and change your own coaching methods, if you want to be counted.

To me, agile is a state of mind that tells me how to proceed in an imperfect world – not to somehow make a ‘perfect’ world first and then proceed. To me, a successful agile implementation is not about finding the perfect team + perfect process + perfect customer + perfect timeboxing + perfect sprint planning + perfect retrospectives + perfect product owner + perfect scrummaster + perfect engineering practices + many more perfections = perfect landing. To me, successful agile means starting with the team that you have at hand, with the process that you have, under the constraints that you have, with the requirements that you have, on a best-effort basis, and with many such real-world realities – working under a mindset of taking things one after the other and improving the journey, in the hope of reaching the destination better off than without such effort. Remember, we are being melioristic, not idealistic. We are being adaptive, not laying down pre-conditions for take-off. And in that pursuit, the most important guide for decision-making is our own judgment. Everything else is just that – everything else; while it might work at times, it might not work at other times. So, like the Swiss Army manual that says – when there is a gap between the map and the terrain, trust the terrain – go ahead boldly and experiment. In the worst case, you will lose some time and dollars, but if your DNA is built on the premise of self-improvement, you will quickly recover and eventually find your own path. And if you are not able to ‘find’ it, you will ‘build’ it. Even better…

When does experience get ossified into dogma?

In many ways, there were no royal guards so zealously guarding the waterfall model, and that made it sexy enough to be experimented with and experimented upon. On that same scale of flexibility, I don’t find agile methods sexy enough. They appear to be a lot like dogma on steroids. And I think that’s a serious problem.

Is your agile still a lot like dogma on steroids?

Why do user stories make sense?

A user story is a popular mechanism used by most agile methodologies to communicate user requirements. Actually, this is only partially true. User stories are meant to be placeholders to initiate an early conversation between the product owner and the team about key hypotheses, so that those hypotheses can be quickly validated to refine, confirm or reject our understanding of what the customer really wants. To that end, user stories are nothing like the requirements of yesterday – they are not even remotely meant to be complete or comprehensive, and so on.

The reason user stories (and not ‘well-formed’ requirements, if that is ever possible) make sense is that we human beings are not experts at following the written word, and are likely to misinterpret requirements even when they are explicitly written down, especially when the communication is a one-time event and not an ongoing process. (If you don’t believe this, just visit the interesting site http://www.cakewrecks.com/ that celebrates real-life examples of how something as simple as icing on a cake can go horribly wrong despite really simple and absolutely clear instructions!) And we are talking about building complex mission-critical systems that must operate 24/7 with top speed and efficiency. If only the written word could be consistently understood by the receiver as intended by its author…

On the other hand, if there is a continuous two-way dialog between the product owner and the agile team, such purposeful brevity leads to curiosity, which in turn leads to a constantly improving shared understanding that gets refined as the product is developed incrementally and receives periodic user feedback.

User stories reflect the reality that we might not have 100% clarity about the requirements on day one, but with constant dialog, we might be able to improve our shared understanding of the customer’s wants and needs.

The key benefit of user stories is that instead of waiting for weeks (even months!) for a fully-bloated and rather unprioritized PRD that is supposed to contain fully-baked requirements while the development team sits idle all that time, identifying the highest-priority user stories and using them to develop a prioritized product backlog allows teams to get started much earlier and begin developing features that could then be put in front of customers to get real feedback (as opposed to individual ‘opinions’ that might or might not best guide us in an uncertain environment). This matters because in the past the world was a little less complex and much more underserved. I purposefully use the term ‘underserved’ because it is not that people have suddenly become much more demanding about what they like or don’t like; they were simply told there were exactly three carmakers to choose from, or just two operating systems to choose from, and so on. However, with the rapid advancement of computing paradigms and the constantly falling cost of ubiquitous communication devices, people suddenly have the opportunity to demand what they want. Hence the classical ‘forecast’ models of requirements elicitation, design or production (at least in the manufacturing sense) don’t work as effectively as the newer ‘feedback’-driven models, which allow for developing key hypotheses (and not hard requirements carved in stone) that can be quickly and cheaply delivered as ‘done’ features so that customers can tell us whether they like them or not. Based on such valuable customer feedback, the team can iterate to either refine the feature or pivot, as the case might be. In the past, this might have taken the entire product gestation cycle, by which time many, if not most, things might have undergone significant changes, with the entire development cost sunk by then, not to mention the complete loss of any window of opportunity.

As Facebook says – done is better than perfect. User stories allow developing a small slice of the product without really perfecting the entire product, and facilitate the process of validated learning that eventually helps develop a product with a much higher chance of meeting customer needs.

In today’s world, you probably don’t have the luxury of investors waiting multiple years for an exit, or of customers waiting several quarters for a perfect product when the competition is willing to serve them faster (even letting them lay their hands on an earlier version and give feedback, thus making customers co-creators of the product), or of employees working month after month without getting any meaningful feedback from customers. And we know that the absence of any feedback eventually leads to an erosion of trust. Incremental product development could help you bridge such a trust deficit by delivering the highest value to customers early and periodically, and user stories might help you get your project off to a quick start.

What are agile practitioners thinking in 2011?

Shortly before the year-end holidays, a couple of us from product development companies got together to discuss how software development process philosophy and methodology are changing. We had representation from global embedded companies, healthcare products, the internet domain, automotive, semiconductor – you name it. What was interesting was that irrespective of the type of software product being created, some common themes emerged in terms of practices that seem to make sense:

  1. Continuous integration: integrate early, integrate often seemed to be high on the priority list for most.
  2. Test automation was considered an equally important agile practice.
  3. Collocating cross-functional teams was found critical for end-to-end product development.
  4. Being nimble with requirements was critical not just for customer-facing products but even in the enterprise world.
  5. Some (many?) engineers are not used to reporting daily – they think it is an unnecessary intrusion into their work, and perhaps even a productivity impediment! Making them think otherwise is an important culture change.
  6. Techies don’t want to become scrummasters – they would rather be knee-deep in coding! Some companies tried to hire ‘scrummasters’ – career project managers without specific domain skills – but that was not very successful. The challenge is to find the right set of people to play the scrummaster role.
  7. The daily stand-up seems to be a great way to make sure teams stay on the same page. I recounted my own experience – way back in ’98 at Philips, we used to have daily stand-ups and they were highly effective.
  8. Simulating interface components was a big need for companies with a hardware-software co-creation cycle. This became especially critical as the hardware was generally not available while the software was being developed.
  9. You can’t work from home in a truly agile world – you have to co-locate teams. This seems to be a side-effect of using agile for teams – since teams have to work closely, and for things like daily stand-ups everyone should (preferably) be in the same room. However, given the realities of modern-day life, working from home is not only inevitable for a few days a month but might actually be a productivity booster (ask those of us from Bangalore!). So, a rigid agile discipline seems to be at odds with people’s ability to balance work and life.
  10. A blank sprint after 3-4 sprints is a commonly used practice. People felt back-to-back sprints would fatigue the teams and make the work monotonous – hence a blank sprint. A blank sprint was simply another timebox with housekeeping activities, vacations, etc.
  11. There seems to be a growing chorus for having internal agile coaches – however, no one in the group was using them. There is a general disdain for external coaches who might only give bookish prescriptions – after all, you need someone within the system to own the action items, not someone who simply makes a PowerPoint of the status. There was a great discussion on how such agile coaches can’t simply be single-axis professionals – whether process guru, people manager, or techie. Rather, they need to be a bit of everything and then some more – project manager + process guru + people coach + techie + communication expert + …
  12. What is the role of the manager in an agile world? This question has never been addressed well by the agile community. How do people grow in their careers in an agile world? In the traditional system, whether good or bad, there is a hierarchy to aspire to or grow into, but how do you acquire the different skills that prepare you for higher-level responsibilities in your career? If scrummaster is the closest role to a manager, then what next – scrummaster of scrummasters? There was no clear direction, nor best practices that have stood the test of time.
  13. Pretty much no one believed that a bookish or biblical approach to agile is the right thing – everyone seems to tailor agile to their unique combination of business needs, nature of products, business culture, and so on.

This was definitely an interesting session that gave us an opportunity to take stock of some of the things that are working or not working. We also discussed the fact that agile really addresses only a subset of the entire business problem – to address the entire problem, we need to embrace systems thinking and lean thinking.

What are you thinking in 2011?

How do you schedule tasks in a project?

How do you decide which tasks to schedule first: the complex ones or the easy ones? The short ones or the long ones? The risky ones or the sure-shot ones? Most often, this task sequence is determined by hard logic, soft logic, or some other external constraint. But how do you decide when there are no such constraints?

If we let risk drive the project lifecycle and scheduling, then it is natural to expect high-risk tasks to be tackled at the start, so that we are systematically driving down risk in the project and achieving higher certainty as we get closer to the end. Conversely, it seems inconceivable that someone would cherry-pick the easy tasks first and leave all the high-risk ones for the end! Clearly, that is setting up the project for a grand finale of sorts!

Could complexity be a good measure then? What would happen if we took the highly complex tasks first? Surely, that would lead to tackling some of the most difficult problems first, and there might also be a high level of correlation between complexity and risk, so this approach is also likely to significantly lower a project’s risks. However, not all tasks are created equal. It is likely that the high-risk tasks are not the longest, so while the project makes appreciable gains in lowering risk, it doesn’t make a whole lot of progress.

Agile approaches, Scrum in particular, rely on driving the tasks that make the most sense to the customer – tasks that deliver maximum value to the customer. However, there is no guarantee that this will help lower the project risks or manage the schedule better. Similarly, Kanban helps manage tasks but doesn’t really spell out how they should be scheduled.

So, what is the best approach?

I read an interesting blog post, The Art of the Self-Imposed Deadline, and I especially liked this passage from it:

3. Avoid the curse of the “final push.” Scope and sequence a project so that each part is shorter than the one that precedes it. Feeling the work units shrink as you go gives you a tangible sense of progress and speeds you toward the end. When you leave the long parts for last, you’re more likely to get worn out before you finish. Besides, if you’re “dead at the deadline,” those other projects you’re juggling will stagnate.

This seems like a very practical suggestion – often the tendency while scheduling tasks is to ‘cherry-pick’ – we tend to pick up tasks that people favor, or that appear easier to take up. However, very often, we end up taking more time, and the project gets delayed. If tasks are scheduled such that the big/complex ones come first, that might mitigate a lot of the risk of last-minute issues (due to underestimation, integration issues, etc.). However, I am not 100% sure that scheduling tasks purely on a “longest-task first” basis is the best policy to lower risk in a project. Shouldn’t we be using a combination – the top-risk items that are also the longest?
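To make that idea concrete, here is a minimal sketch (in Python) of the kind of combined ordering hinted at above – ranking tasks by a simple risk × duration score so that long, risky items surface first. The task data and the scoring rule are purely illustrative assumptions, not a prescription.

```python
# Illustrative only: rank tasks by a combined "risk x duration" score so that
# long, high-risk items get scheduled first. The task data and the scoring
# rule are made-up assumptions.

tasks = [
    {"name": "payment gateway integration", "risk": 0.8, "days": 10},
    {"name": "report export",               "risk": 0.2, "days": 3},
    {"name": "new pricing engine",          "risk": 0.9, "days": 15},
    {"name": "UI polish",                   "risk": 0.1, "days": 5},
]

def schedule_score(task):
    # Higher risk and longer duration both push a task earlier in the plan.
    return task["risk"] * task["days"]

for task in sorted(tasks, key=schedule_score, reverse=True):
    print(f"{task['name']:32s} risk={task['risk']:.1f} days={task['days']:2d}")
```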

I haven’t used this approach yet, but it seems like something to try. How about you – what’s your favorite way to schedule tasks?

Your estimates or mine ?

For decades now, the project management world has been divided between top-down estimation and bottom-up estimation. While a top-down approach has its limitations, it is perhaps the only way to get meaningful estimates at the start of a project. A bottom-up approach might be a great way to get more accurate and reliable estimates, but you might have to wait for the problem to be whittled down to a small enough level to get such reliable estimates. Both are required for comprehensive and useful project planning, but unfortunately, most people favor one over the other, with Agilists abandoning top-down in favor of the highly accurate but low-lookahead bottom-up methods.

Top-down estimation is a great way to abstract the problem without worrying about its nitty-gritties, and to come up with workable estimates when there is no other source of better numbers. At the start of a project, when making a realistic WBS is a remote possibility, the only way one could get some estimates is through top-down estimation techniques such as:

  • Expert Judgment
  • Wideband Delphi Method
  • Analogous Estimation
  • Parametric Estimation

Estimates represent a range of possibilities

 

These techniques allow an ‘order of magnitude’ estimate to be made available to set the ball rolling. We know that such estimates suffer from the cone of uncertainty, but it is better to get an imperfect estimate now than a perfect estimate much later in the project – which we will still need – the point is that we also need some estimate here and now!
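As a purely illustrative sketch of what such an early, range-based top-down estimate might look like – a parametric rule (effort = size × productivity rate) widened by an uncertainty band – consider the following; every number in it is an assumption:

```python
# Illustrative only: a parametric, order-of-magnitude estimate quoted as a
# range rather than a single number. Size, productivity rate and the
# uncertainty band are all assumptions.

estimated_size_kloc = 40        # rough size, e.g. by analogy with a past product
person_days_per_kloc = 12       # assumed historical productivity rate

nominal_effort = estimated_size_kloc * person_days_per_kloc    # 480 person-days

# Very early estimates are often quoted with a wide band (the cone of
# uncertainty); 0.25x to 4x is a commonly cited band at project inception.
low_effort, high_effort = 0.25 * nominal_effort, 4 * nominal_effort

print(f"Nominal: {nominal_effort:.0f} person-days "
      f"(range: {low_effort:.0f} to {high_effort:.0f})")
```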

Another key reason why we must use such top-down techniques at the start is that if we jumped straight into bottom-up estimation, we might face challenges because:

  • Details about the requirements are still emerging, and hence likely to get refined as our understanding of the problem gets better
  • Requirements might keep changing, thereby changing the scope of work all the time
  • Too many low-level details could introduce too many moving parts into the scope of work, forcing one to skip the “vital few” and start focusing on the “trivial many” – clearly not a prudent approach at that stage

So, we need to accept the ground conditions, take those high-level estimates with a boulder of salt, and keep moving. As we get deeper into the project, we decompose the problem better and have a more granular WBS that helps us zoom into the problem and get more refined estimates. At that point, a bottom-up approach is much more relevant, as it allows us to plan and track the work much better.

Agile methodologies clearly favor bottom-up estimation methods, and the idea is to slice every task on the project into ‘stories‘ that could ideally be done by an individual team member within a couple of days. That helps you win the daily battles but takes your sight off the big war that you must win to eventually succeed. We can’t really scale these stories up for large projects, even if we assume that every product scope could actually be broken down into meaningful stories of that grain size. So yes, in theory, that’s great. But the real world is not obligated to obey the theory!

A prudent approach is to adopt rolling-wave planning – start with a big grain size when details are not very clear and gradually move down to a smaller grain size as you get closer to the task. However, you don’t just abandon the overall planning simply because you are now doing task-level estimates and planning! On the contrary, it is even more important to keep track of dozens of small-task estimates because, to quote the great Fred Brooks in The Mythical Man-Month, it is not the tornadoes but the termites that eat up your project’s schedule! So, a delay of one day here and one day there, repeated for a couple of tasks over the next few sprints, and you are suddenly sitting on a month-long delay in your overall project. You might be hitting each timebox, but that’s one problem with timeboxes – they can give you a false sense of comfort that all is well, when clearly you are being forced to jettison things that you are not able to accomplish within a timebox. On the other hand, if you go by a task-driven schedule, you at least have a clear idea of how late you are in completing that task.

A frequent criticism of top-down estimates is that they are at best wishful thinking, and at worst a highly unrealistic estimate forced upon a team against its free will. Again, we should be careful with such judgment, because the reality is not always what is presented to us, but rather what we do with it! So, if you take a top-down estimate as the word of god and make it ultra-resistant to change, then you have only yourself to blame when those estimates no longer reflect reality. They should be treated as a map and continuously refined with the data coming from the terrain. To that end, top-down estimates and bottom-up estimates play complementary roles, and an effective project manager blends them together. A good top-down estimate covers the ground fairly well, thereby allowing realistic bottom-up estimates to happen (by allocating them sufficient time and resources, meaning as close to the actuals as possible), and good bottom-up estimates at the task level help reinforce the assumptions made in the top-down estimates of the overall project.

So, at the end of the day, it is not about which estimation technique is superior. It is about using the most appropriate tool for every type of problem.

Next time, think again before you ask the question: your estimates or mine?

Do you follow Project Management ‘religiously’ ?

Project management is perhaps one of the most fiercely debated and grossly misunderstood disciplines in the software field today, so let me throw in a disclaimer first: if you are a small team of experts and/or people well known to each other (e.g., you have worked together as a team earlier), collocated, doing a lot of ‘creative work’ that can’t be very ‘accurately’ scoped, let alone managed, you will probably find the ideas of formal project management a huge overkill (in time, effort and money, and they might even seem to stifle creativity), and you might be better off considering ‘lightweight’ methods like Agile Project Management / Scrum in the context of software development (well, nothing stops you from deploying pieces of Agile / Scrum in a non-software context – it is based on common sense after all). Small projects and small teams can afford to define a process that exploits the tailwind:

  • A small and collocated team means more direct interaction among team members and lower management overheads in formalizing project communication (e.g., project status reporting, team meetings, etc.). This reduces the time it takes to collate, transmit and share important project information and also decreases the possibility of information distortion or confusion. A small team means there is a high signal-to-noise ratio in team communications, which also leads to better utilization of the time spent transmitting, receiving or digesting communication. Further, to schedule any event, one can always convene impromptu meetings around a whiteboard or the coffee table without worrying about people’s already double-booked calendars or finding a place large enough to fit the team for the next meeting, and so on.
  • Having a group of experts means there is a low(er) need for managerial supervision, and people are generally ‘hand-picked’ for knowledge, skills and abilities that are typically ‘mutually exclusive, collectively exhaustive’. This ensures tight interdependence and very high mutual respect among team members. There is less ‘competition’ among team members, and everyone knows that “united we stand, divided we fall”, which fosters team building.
  • For a small project, a large swing in any of the project constraints (time, cost, scope) could seriously and irretrievably push the project into a downward spiral. If the work is new to the team, it could spell bigger trouble, especially if there is no way to make mid-course corrections. Executing projects in an iterative fashion helps shorten the feedback loop and gives the team a chance to make the necessary corrections, improving its ability to stay on course and eventually hit the target. Additionally, if the customer is involved in these short iterations, the quality and value of the feedback could improve substantially, reducing the chance of making decisions that are hard to change later.

However, if your project is anything more than a handful of people, takes more than a few months and entails issues like external dependencies on multiple vendors and ISVs (Independent Software Vendors), substantial technical and managerial risks, subcontracting / co-development, etc., you might well improve your chances of success by using a formal project management methodology like PMBoK or PRINCE2. Agilists will disagree with me and argue that Agile / Scrum is suited for large projects too, but if you are new to Agile as well as to large projects, you might want to consider all the options and associated risks – don’t believe in what worked for others, it might or might not work for you (remember the golden rule – we are all different), and your company might have a different set of constraints and tolerances for performance.

The size, complexity, criticality and risk profile of a project, the organizational appetite for project risk and uncertainty, and the organizational culture generally determine the level of management control required, and hence one must always right-size the project management methodology to the context. Both PMBoK / PRINCE2 and Agile methods evolved in response to challenges seen in executing projects successfully. PMBoK / PRINCE2 took the view that projects most often failed for lack of the right levels of management oversight, planning, execution and control; Agilists felt the key reason was the fundamental nature of software as a ‘wicked problem‘ that just could not be managed using an obsolete waterfall model of developing software – it needed a better way to develop software in short iterations with very close-knit, self-managed teams.

Without getting my loyalties needlessly divided between these two approaches, I believe there are merits in both schools of thought. I refuse to believe that any non-trivial project can be managed by piecemeal iterations that give no commitment whatsoever on the overall project performance, nor any method to assess whether the overall project is indeed on the right track (even if the first few iterations achieve the desired ‘velocity’, what is the guarantee that subsequent iterations will move at the same pace?). On the other hand, iterative development is a highly intuitive and very effective way to learn from past experience to plan the next steps, especially on any non-routine project that is full of unknowns and assumptions. To me, these are ‘mix-and-match’ ideas that one should pick up and apply wherever required, without getting baptized into a school of thought. I find most process wars have taken on dimensions falling just short of religious wars – you have to belong to one of the camps, and must speak evil of the other camp to be seen as a law-abiding corporate citizen of your own. In life, we borrow ideas from all around us – our housing society, the professional societies we volunteer for, our kids’ playground, gardening, home improvement, nature, science, religion, other cultures – without necessarily converting to another faith or religion or society just because we like an idea we want to borrow. So why should that be required in project management methods?

I think this process war will be won by the pragmatists. Those who take a position at the extreme ends of the continuum and insist on always applying a particular style as the panacea might be in for a complete surprise, because no two problems were created equal. A prudent approach is to evaluate all options and then decide which is the best response to a problem, as opposed to blindly following a project management approach ‘religiously’.

 

Do you follow your project management approach ‘religiously’?

Blame your “flaccid developers” and your “flaccid customers” for your poor quality products !

This is the text from a recent announcement for a course by Ken Schwaber on “Flaccid Scrum – A New Pandemic?” (text underlining is mine):

Scrum has been a very widely adopted Agile process, used for managing such complex work as systems development and development of product releases. When waterfall is no longer in place, however, a lot of long standing habits and dysfunctions have come to light. This is particularly true with Scrum, because transparency is emphasized in Scrum projects.

Some of the dysfunctions include poor quality product and completely inadequate development practices and infrastructure. These arose because the effects of them couldn’t be seen very clearly in a waterfall project. In a Scrum project, the impact of poor quality caused by inadequate practices and tooling are seen in every Sprint.

The primary habits that hinder us are flaccid developers and flaccid customers who believe in magic, as in:

Unskilled developers – most developers working in a team are unable to build an increment of product within an iteration. They are unfamiliar with modern engineering and quality practices, and they don’t have an infrastructure supportive of these practices.

Ignorant customer – most customers are still used to throwing a book of  requirements over the wall to development and wait for the slips to start occurring, all the time adding the inevitable and unavoidable changes.

Belief in magic – most customers and managers still believe that if they want something badly enough and pressure developers enough to do it, that it will happen. They don’t understand that the pressure valve is quality and long term product sustainability and viability.

Have you seen these problems? Is your company “tailoring” Scrum to death? Let Ken respond to your issues and questions!

Ken will describe how Scrum addresses these problems and will give us a preview of plans for the future of the Scrum certification efforts.

Here are my observations and thoughts from this synopsis:

  • line 2: what does “when waterfall is no longer in place” mean? So, when waterfall was still in place, there were no issues??? Somehow, one gets the feeling that all the problems came to light only when waterfall was “no longer in place”… so, why not get waterfall back in its place 🙂
  • line 5-6: “In a Scrum project, the impact of poor quality caused by inadequate practices and tooling are seen in every Sprint”? … in every Sprint? … now, wait a minute… I thought a Scrum project did not have any of these problems because there were no inadequate practices and tooling issues. And you certainly don’t expect to find such issues in every Sprint, or do you?
  • line 7: OK, so now we have an official reason – “flaccid developers” and “flaccid customers” – for all the ills of the modern world. Wow! I am not sure that fingerpointing and squarely blaming teams and customers, without giving them a chance to even speak for themselves, is the best way to build trust with either. And I thought Scrum was cool about trusting developers, not fixing blame on individuals, interacting with customers… but flaccid developers and flaccid customers??? Does it get any better than this?
  • OK, I will probably agree with ‘unskilled developers’ in the context of building an increment of product in an iteration, but what the hell is an ‘ignorant customer’? Try telling a customer that they are an ignorant customer… Scrum or no Scrum, you are guaranteed some unsurprising results! And which customer paying through their nose waits for the “slips to start occurring”? In these tough economic times?
  • Is your company ‘tailoring’ Scrum to death? I thought there were only two shades of Scrum – either you did Scrum-by-the-book or you did not. Since Scrum is the modern-day Peter Pan that refuses to grow up, we have only one version of Scrum to play with, unless some group of people decides that Scrum can now grow to the next version (perhaps more for commercial interests than anything else, whenever that happens). So, how can one ‘tailor’ Scrum – by any stretch of imagination, that is NOT Scrum. I mean, that is not allowed! Right? So, why are we discussing it and wasting time – we might actually be acknowledging, and unknowingly legitimizing, that there is a secret world out there where Scrum can be ‘tailored’ and still be called Scrum! That is sacrilege!

I always hate marketing messages that overpromise miracles, offer snake oil, belittle people’s intelligence or ridicule people… why can’t people simply stick to facts and figures? Why don’t we talk in a non-intimidating language that encourages people to look up to what is being talked about? And in the context of the current subject, I doubt the Scrum community is going down the right path if its next target is developers and customers. One of them does the work and the other one pays for it. Who gives us the right, or the data, to talk about either of them in absentia? Customers are customers, and even if they are irrational, they are still the customers, and whether paying or not, they alone are going to decide how they will behave. Yes, we might not like it, but is ridiculing them as ‘ignorant customers’ going to turn things in our favor? Other industries like retail, hospitality and transportation have millions of years of collective experience in managing customers, yet they never break the single most cardinal rule of customer service: the customer is always right! And the first chance we software developers get to explain a poorly designed or bad quality product, we right away blame the customers for it! As if that is not enough, we then look inwards and blame the developers! GREAT!

I think I will just develop software without blaming anyone else for my mistakes and limitations 🙂 

PRINCE2 handles Project Tolerances better

Most project management frameworks and methods advocate (and actually require) ‘point estimates’ in planning and scheduling. By ‘point estimates’, I mean a ‘hard number’ that seems to be etched in stone, leaving no ‘tolerance’ or ‘leeway’ for the project manager and his team. Even though we all understand that estimates are never point estimates (and hence project commitments are never point commitments), we still expect a firm estimate (and consequently a firm commitment) from a project manager.

For example, a given feature must be delivered by 23-June, but there is no recognition of the fact that 23-June might still be a couple of months away, and several things could go wrong or change in the meanwhile, affecting the validity of this date. In real life, there are always tolerances – some allowable, some acceptable, some tolerable and some simply unacceptable. This is not limited to schedule alone; it also applies to cost, quality, scope, project benefits, etc. In some cases the tolerances are defined; for example, for project scope, requirements might be prioritized as ‘M-S-C’ (Must, Should and Could). In a previous organization I worked for, we had a great system for prioritizing all product requirements as ‘A’ (for the current version), ‘B’ (implement if time is available) or ‘C’ (long-term requirement, identified now just so that designers and developers are aware of how the system will evolve over time – so that, hopefully, they could make their designs future-proof). However, most product managers are unwilling to commit to such tolerances (or at least unwilling to acknowledge the existence of such scope tolerances), perhaps fearing that the development team will straightaway prune the list down to the bare minimum. Hence, in most organizations and projects, there seem to be two sets of commitments: one for the customer and one for the team. The set for the team is the more stringent one, but in their heart of hearts, upper management knows that not all of it will be achievable, and hence makes a slightly less stringent commitment (perhaps a more realistic one!) to the customer, thereby cushioning themselves against Murphy and the real world. The project team slogs for months, often spending long nights and weekends alone in the office just to try and meet the commitments given to them, hardly ever realizing that there is another, more realistic, set of commitments that doesn’t acknowledge the price the team members are currently paying for their commitment and effort.

Agile methods, and Scrum in particular, take a major step forward and acknowledge that long-term plans are prone to later-day variations (and thus a waste of time and effort for all practical purposes), and hence favor short iterations to minimize the possibility of a project drifting or overcommitting itself to impossible deadlines based on wild and optimistic estimates in long-range plans. But they go a little too far by simply accepting a world where no project tolerances are recognized, let alone identified, for a project or its iterations. When a team member comes back and says he needs more time to complete a task than originally estimated, the team simply makes suitable changes in the sprint burndown chart, without placing any upper bound on how much deviation from the estimate is permissible. As per the revised commitment, if the engineer is able to complete the task in the current sprint, well and good; but if for some reason he is not, the task simply gets pushed out to the next sprint. That may not necessarily be bad, but it definitely doesn’t place any upper bound on the allowances available to a project team, and hence might not find favor with upper management or product management in the long run.

PRINCE2 not only legitimizes the existence of tolerances at the project level, it also allows a project to pass a portion of those tolerances down into stage plans and team plans, thus giving the project manager and team managers a reasonable, mutually agreed leeway to manage their projects better (rather than, say, worrying about a 2-day delay they cannot avoid, or about a scope requirement that can’t be met because some peripheral feature won’t fit in the given time). The mere act of legitimizing project tolerances creates a healthy understanding among all stakeholders that in the real world there will be variations, due to factors that might not be known or fully understood, or that might simply change over time. A prudent project manager might not restrict himself to his current project management framework, and might want to look at what PRINCE2 has to offer in this regard.
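To make the idea of inherited tolerances concrete, here is a small, hypothetical sketch: project-level time and cost tolerances are apportioned to stages, and a forecast that breaches a stage tolerance gets flagged for escalation. The figures and the apportioning rule are illustrative assumptions, not a PRINCE2 prescription.

```python
# Illustrative only: project-level tolerances apportioned to stages, with a
# simple check that flags a forecast breach for escalation. The figures and
# the apportioning are hypothetical.

project_tolerance = {"time_weeks": 4, "cost_pct": 10}

stage_tolerance = {
    "stage_1": {"time_weeks": 1, "cost_pct": 3},
    "stage_2": {"time_weeks": 2, "cost_pct": 4},
    "stage_3": {"time_weeks": 1, "cost_pct": 3},
}

def within_tolerance(stage, forecast_slip_weeks, forecast_overrun_pct):
    """Return True if the stage forecast stays inside its agreed tolerance."""
    tol = stage_tolerance[stage]
    ok = (forecast_slip_weeks <= tol["time_weeks"]
          and forecast_overrun_pct <= tol["cost_pct"])
    if not ok:
        print(f"{stage}: forecast breaches tolerance -> escalate for a decision")
    return ok

within_tolerance("stage_2", forecast_slip_weeks=3, forecast_overrun_pct=2)
```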

How do you handle project tolerances ?

How do Agile Practices address the Five Dysfunctions of a Team?

Since time immemorial, ideas, objects and experiences of grand stature and lasting economic, social and emotional value have been created by men and women working together in teams. Granted, some extraordinary work in the fields of arts, philosophy and science was done by truly exceptional individuals apparently working alone, but I suspect that they too were ably supported by other selfless and unsung individuals (in the back office, perhaps) who all worked together as a team. From the great wars, social upheavals, political resistance, empire building, freedom struggles and the forming of nations and protection of their borders, to the creation of majestic wonders such as the Pyramids, the Taj Mahal, the Eiffel Tower, the Statue of Liberty, the Sydney Harbour Bridge or the London Eye, each one owes its creation and existence to teamwork. Of course, the scope of teamwork doesn’t exclude the simple, mundane, everyday things that are extremely important even though they never make a headline: an activity as routine as tilling the fields, planning a picnic or even a family function involves a team.

With teamwork having such a profound impact on our everyday lives, it is only natural to expect that the output of a team is directly affected by the quality of its teamwork. Unfortunately, good teamwork doesn’t happen by having the right intentions alone, or by leaving it to chance. Quite often, it doesn’t happen at all! The quality of teamwork is affected by various factors, such as the motivation levels of individual team members, the level of trust among team members, clarity of purpose, a uniform understanding of the goals, lack of resources, poor communication among team members, and so on. Thus, it comes as no surprise that appropriate investments must be made to make the team click. Most often, however, team dysfunctions seriously jeopardize a team’s ability to perform effectively, any state-of-the-art processes or tools notwithstanding. Most software managers lack the ability to detect such deeper sociological smells, and thus are unable to deal with their impact. A superficial response to such problems only makes the task harder to deal with.

In this article, I have analyzed the team dysfunction model proposed by Patrick Lencioni in his wonderful book, written in the form of a fable in a business setting, The Five Dysfunctions of a Team – A Leadership Fable. In his model, called The Model in the book, he identifies five dysfunctions of a team that affect team performance. These five dysfunctions are not really independent; they are interrelated and build on top of one another…

Read the full article here.

What is the Inventory of your Software Development project ?

 

Traditional software development follows the quintessential waterfall model. Among its many limitations, originally discussed by Winston Royce in his 1970 classic paper “Managing the Development of Large Software Systems” and by many more thereafter, it forces a build-up of ‘work in progress’ – an inventory of unfinished work that has not yet been delivered to the customer, and even if it has been presented to the customer, it doesn’t add value until the final working software has been delivered. I define ‘value’ here as originally defined by Womack and Jones in their classic ‘Lean Thinking’: “…The critical starting point for lean thinking is value. Value can only be defined by the ultimate customer. And it’s only meaningful when expressed in terms of a specific product (a good or a service, and often both at once), which meets the customer’s needs at a specific price at a specific time”.

 

Agile software development, as originally envisaged in the Agile Manifesto, aims to address several of the challenges faced in waterfall-style software development by working in shorter delivery cycles on only a subset of the requirements at a time. This subset of requirements (the “Sprint Backlog” in Scrum) is prioritized by the customer or the customer’s representative (the “Product Owner”), so at the end of this short delivery cycle (the “Sprint”) the customer gets ‘potentially shippable software’ with exactly those features that add ‘value’ to her. Frequent delivery of working software also has the additional effect of reducing ‘inventory’ levels in a software project; however, Agile doesn’t specifically explain how it improves on traditional software development in this respect. We can use concepts from Lean manufacturing to understand and explain inventory build-up in software development, and how Agile practices help us manage it better than traditional software development practices do.

 

Little’s Law states that the inventory in a process is the product of throughput and flow time. In traditional manufacturing, there is a strong emphasis on plant capacity utilization as a core driver of cost management. However, high plant capacity utilization requires (or rather leads to) high inventory, to ensure that production doesn’t slow down for want of raw materials. High inventory in turn leads to low inventory turnover, signifying poor sales, and thus has significant economic implications. Inventory is also identified as one of the seven wastes in Lean. I have discussed the concept of inventory in a typical manufacturing process, and how Little’s Law can be used to analyze the mathematical relationship between inventory, manufacturing lead time and cycle time, in my article “Applying Little’s Law to Agile Project Management – part 1”.
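In symbols (the notation here is just shorthand for this post), with a quick illustrative calculation using assumed numbers:

```latex
% Little's Law: average inventory (work in progress) equals
% average throughput times average flow time.
\[
  \text{Inventory} \;=\; \text{Throughput} \times \text{Flow time}
\]
% Illustrative example (assumed numbers): a team completing 5 work items
% per week, with an average flow time of 3 weeks per item, carries on average
\[
  5\ \tfrac{\text{items}}{\text{week}} \times 3\ \text{weeks} \;=\; 15\ \text{items in progress.}
\]
```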

 

In software development, inventory can be thought of as the development costs that continue to be incurred while development activities are underway. Only when a successful delivery has been made and accepted by the customer can the development team realize its development costs and free up the ‘inventory’. Until such time, this ‘inventory’ accumulates and remains locked up.

 

In traditional software development, there is no concept of interim releases to the customer. Some teams might provision for a proof of concept or a prototype, but most often these are only meant to work under ‘test conditions’, meaning they are not fit for deployment in the real world. At best, they give a feel of what is likely to come out of the development lab at the end of a generally long development cycle. In such a long cycle, it is possible that some requirements become obsolete, get clubbed with other requirements, or simply get de-prioritized. In effect, a ‘BRUF’ (big requirements up front) approach often leaves the team with software built to the original specs, but unfortunately for them, the customer’s view of the world has changed while they were busy developing the software in isolation. So the ‘effort’ that doesn’t add ‘value’ also becomes part of an already bloated inventory of development costs. Mary and Tom Poppendieck have discussed the subject of ‘inventory’ in software development in great detail in their seminal work on Lean Software Development. They take a much wider view of inventory, while I have limited my discussion in this article to development costs alone.

 

When we use Agile practices, we work on a prioritized subset of requirements in each increment, and at the end of each increment we deliver working software (not a prototype) to the customer. The customer is in a position to immediately test-drive the delivery and come back with any necessary changes, which can be incorporated in the very next increment. At the end of each successful increment that delivers ‘value’ to the customer, it is possible for the development team to realize its development costs, thereby resetting the inventory level to zero before starting the next increment. I have discussed this scenario with a near-real-life example in my featured paper “Applying Little’s Law to Agile Project Management – part 2”.
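As a minimal, back-of-the-envelope illustration of that effect (with assumed figures, and not the example from the paper):

```python
# Illustrative only: peak locked-up development cost ("inventory") under a
# single big-bang release versus incremental releases accepted every sprint.
# All figures are assumptions.

weekly_burn = 50_000      # development spend per week (assumed)
project_weeks = 24
sprint_weeks = 3          # value is realized (accepted) at the end of each sprint

# Big-bang: cost accumulates until the single release at the end.
peak_inventory_big_bang = weekly_burn * project_weeks        # 1,200,000

# Incremental: at most one sprint's worth of spend is locked up at a time,
# assuming each sprint's output is accepted and its cost 'realized'.
peak_inventory_incremental = weekly_burn * sprint_weeks      # 150,000

print(f"Peak locked-up cost, big-bang release : {peak_inventory_big_bang:,}")
print(f"Peak locked-up cost, incremental model: {peak_inventory_incremental:,}")
```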

 

Conclusions

Agile software development is a radical approach to improving productivity in software development teams while improving the value delivered to the customer, by using shorter delivery cycles (and a lot of engineering practices, though those are not discussed in this article). Using Agile practices, it is possible for the development organization to create ‘value’ for the customer and also manage its own inventory levels. This is true for ISVs and for services companies working for a customer. It is equally applicable to start-ups, which often work for a prolonged gestation period in ‘stealth mode’ with their only performance metric during that phase being the notorious ‘cash burn rate’. In today’s tough economic times, virtually every firm is trying to shorten its cash cycle, and Agile software development provides one more financial incentive by reducing inventory and shortening the cash cycle.

Applying Little’s Law to Agile Project Management – Part 2

The second part of my article exploring how Little’s Law relates to Agile Project Management has just been published in PM World Today. Here is the abstract:

Little’s Law[1] states that inventory in a process is the multiplication of throughput and the flow-time[2]. In first paper[3] of this two-part series, we took an every-day example to discuss Little’s Law at length. We also briefly looked at the implications of Little’s Law for manufacturing and for software development. In traditional manufacturing, there is a strong emphasis on plant capacity utilization as a core driver in cost management[4]. However, a high plant capacity utilization requires (or rather leads to) high inventory to ensure the production doesn’t slow down for want of raw materials. High inventory in turn leads to a low inventory turnover, signifying poor sales[5], thus having high economic implications. Inventory is also identified as one of the seven wastes in Lean[6].

Traditional software development has evolved from the days when requirements were fairly well crystallized, and the customer was willing to wait for the release while development teams went about developing the software and delivering only when everything was done. This often meant waiting for working software for several months while development costs accumulated. Today, thanks to a sleepless world of global competition, there is a business need to deliver software to the customer as early and as often as possible. Among other things, this also helps keep the development costs in check. This ‘development cost’ could be construed as the ‘inventory’ in a software project, for it represents ‘work in progress’ just as raw material represents locked-up capital in a manufacturing plant. However, there is neither a systematic concept nor consistent awareness of ‘inventory’ in software project management, and hence the software development community has not benefited from the learnings of its more robust and more scientific cousin, Lean Manufacturing. Agile software development has tried to solve a similar set of problems from another perspective, at times even borrowing from Lean concepts, and incidentally, there is a significant parallel between the two.

 

This second part of the paper will explore in detail how Little’s Law is not only conceptually akin to the Agile way of software development and project management, but also how its mathematical principles could be used to understand and improve the financial performance of a software project using the Agile philosophy.

 

Read complete paper in English


[3] Applying Little’s Law to Agile Project Management, Part 1, http://www.pmworldtoday.net/tips/2008/nov.htm#6

[4] Pharmaceutical Manufacturing: Cost, Staffing and Utilization Metrics, http://www3.best-in-class.com/bestp/domrep.nsf/Content/013881AE1866865385256DDA0056B517!OpenDocument

[6] Types of Wastes targeted by Lean, http://www.epa.gov/lean/thinking/types.htm

Applying Little’s Law to Agile Project Management

My article by this title got published in PM World Today, Nov 2008 issue. Here is the abstract:

Little’s Law states that the inventory in a process is the product of throughput and flow time. While this seems intuitive, it helps us establish a mathematical relationship between the basic factors that govern the performance of a production process: the arrival rate (or the throughput), the manufacturing lead time (or the flow time, also called the cycle time) and the inventory present in the system at any point of time (or the work in progress). The law has been found to hold good as long as these three parameters represent long-term averages of a stable system and are measured in consistent units.
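As a quick illustration of the relationship (a hypothetical everyday example of my own, not necessarily the one used in the paper):

```python
# Little's Law in an everyday setting: a coffee shop in steady state.
# Hypothetical numbers, purely for illustration.

arrival_rate = 30          # customers arriving per hour (the throughput)
flow_time_hours = 10 / 60  # average time a customer spends inside (10 minutes)

# inventory = throughput x flow time
customers_inside = arrival_rate * flow_time_hours
print(customers_inside)    # -> 5.0 customers in the shop, on average
```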

Little’s Law has its origins in manufacturing, but it is also very relevant to a project manager in a non-manufacturing setup. An in-depth understanding of Little’s Law will help a project manager refine critical ideas in scheduling project activities to minimize the risk of schedule variance, improve the accuracy of tracking and provide more usable status reporting.

In this first paper of the two-part series, we will explain what Little’s Law is using an example, and conclude with what it means for manufacturing and for software development. The second part of this paper will explore in detail not only how Little’s Law is conceptually akin to the Agile way of managing projects, but also how its mathematical principles can be used to improve project management with an Agile philosophy.

Read complete paper in English.

What is your Software Development Religion today? And where does the Customer fit in that?

The Swiss Army manual says: When the map and the terrain disagree, trust the terrain. However, in the software industry, there seems to be an unending effort to make sure the terrain is retrofitted so that it looks much more like the map in hand! So, I have modified the Swiss Army saying to suit the reality of the software development community: When real life and the bookish process definition differ, it clearly shows you have not understood the bookish definition; hence you must change real life until it looks like the bookish definition, and then trust the bookish definition.

Statutory Health Warning: Reading this blog post further could be bad for your health, especially for those who love the process vocabulary more than the process intent, see the means more clearly than the ends, and any day favor the established processes of the day even for newer classes of problems that clearly require fresh thinking, lest they end up shaking the establishment. The author takes no material responsibility for anyone proceeding beyond this point and suffering serious health problems because of the radical views presented in this blog.

The conventional definition, popular understanding and state of practice of Software Quality are seriously flawed. They have created a very lopsided perspective that adherence to certain standards is Software Quality, howsoever hard- or soft-baked those standards may be. At one end of this rather colorful spectrum are the die-hard process zealots who won’t stop short of anything less than ISO, CMMi or the like, and at the other end are the neo-rebels who believe anything Waterfall is bad and unless something has been Agilized, it ain’t good enough. On the innumerable discussion boards that I subscribe to and listen in on, completely petrified to speak up lest I be asked for credentials to back up my anti-establishment views, I find more productive hours being lost on what the exact definition of a ‘product owner’ should be, what an ‘iteration zero’ should better be known as, and whether you pass the Nokia test or not. I find the neo-rebels falling into the same honeytrap that they once so detested and fought tooth and nail – compliance over creativity. I see mail threads getting fatter and longer over linguistic differences that should probably be settled so that the practitioners’ camp can have peace after all, but where is the Customer in all this?

We are probably forgetting that software came first, the generic notion of Quality came much earlier, and everything else is only a recent model that some wise men and women have put together – and is likely to change with time. In fact, if it doesn’t change, there is something seriously wrong. So, there is a shelf life associated with Waterfall, and there is a shelf life associated with Agile as well! Granted, Waterfall has had a rather long tail that simply refuses to die (much to the chagrin of agilists), and Agile is likely to have one too, but the fact remains that if the software community stops innovating at and after Agile, it is not just bad for Agile – it is bad for the entire industry, for the wheels of innovation and continuous improvement would have come to a grinding halt.

I believe quality is not about how much the goods or services that a manufacturer or a service provider produces conform to requirements. It is also not about whether it is achieved using Waterfall or Agile, whether CMM or ISO, whether done in-house or outsourced, or qualified using random testing or automated testing. Quite frankly, I couldn’t care less if it was done by a bunch of social misfits or conforming cousins as long as I get what I want. But in all the noise and the alphabet soup of neo-processes, I think “what the customer wants” isn’t audible much these days.

So, what really is quality? Well, to me, Quality is THAT differentiator in a product or a service that

  • makes me drive a few extra miles just so that I can buy or experience something I really like, even when other, relatively cheaper options are available nearby (= sacrifice time, effort and comfort to get something I value)
  • makes me choose one over the other even when, everything else being rather equal, the one I choose might be costlier though not exorbitantly priced (= availability of other alternatives, and the freedom and ability to choose what I want)
  • makes me patiently wait in a line for my turn to come (= sacrificing my comfort to get something that I believe is worth it)
  • makes me pick up a product blindfolded (= blind trust, but not trusting blindly; reliable every time)
  • I can recommend to my friends and family (= what is good enough for me is good enough for the people I care about)

When I view quality from this perspective, it becomes very easy to substantiate what I said earlier – who cares what process was used, where the software was written, which language was used and what metrics were collected, as long as I got what I wanted? When one is willing to take a customer-centric view of software quality like this, the choice between Waterfall and Agile is only a matter of what class of problem we are trying to solve and what the best-proven technique is to address that situation. There are no permanent ideologies and no permanent religions – you must be flexible enough to choose what suits the problem at hand, rather than view it through a fixed-focus lens and try to ‘retrofit’ the problem to your software development religion.

So, what is your Software Development religion today? And where does the Customer fit in that?

PS: This post happened in response to a question on my favorite Q&A forum – LinkedIn.

Whatever I know about Scrum, I learnt from my sixth-grader son, and Scrum can too!

Scrum offers a fresh approach to software development. However, the philosophy itself is not entirely new. It has been around in various disciplines for quite some time, and it is only now that the software community has woken up to it. I found my son’s full academic year to be a great everyday example for explaining Scrum to someone new to it.

My sixth-grader son (the “Scrum Team”[1]) has a bag full of books (the “Product Backlog”) that the teachers (the “Product Owners”) must complete in the given academic year (the “Project Schedule”). There are various subjects (you may call them sub-systems, components or modules if you like), and each subject has one or more books, each having several chapters (“Sprint Tasks”). To a great extent, the books don’t change in the middle of an academic year, though some of the factual content could (for example, when his session started, Pluto was still the ninth planet of our solar system, but that changed a few months back!). The academic year works on fixed-end-date planning: the holiday schedule is announced for the entire year in advance, and the exact exam dates are announced using rolling-wave planning. To that end, there is a very well laid out plan for the entire year.

However, there are always unexpected changes that happen from time to time. Last month, my son got selected to represent his school at a city-level declamation contest. That meant him skipping some classes and being out of school on the day of the competition. He also had to skip an exam. However, all these “changes” were handled by the teachers with extreme professional finesse and a personal touch: he was given extra time to complete the classwork and homework, and allowed to write the exam he had missed. I personally feel Scrum takes a rather rigid stance on accommodating in-cycle change requests, because in real life there are always such unavoidable things that gatecrash and must be addressed in the same timeframe without de-prioritizing any of the existing commitments. In this particular case, could I have told my son to skip the declamation contest because his monthly unit tests were more important, or asked the teachers that he be allowed to skip the exams? I think Scrum 2.0 might address such real-world issues.

Some other known patterns of requirement changes include the scrum team (i.e., my son) getting overenthusiastic and volunteering for every second task – preparing for the sports day, or the annual day, or a chart on Global Warming – essentially things that offer a lot of distraction from the core project activities and take up a lot of unplanned effort in the sprint without really contributing to the tasks to be accomplished (the “Sprint Goals”). This is real life, and as we all know, when well-intentioned plans go haywire, Scrum or no Scrum, we must sit down and slog through the long evenings and weekends. In self-created situations like this, he is getting used to his favorite quotation, which I told him some time back: “Life is unfair, better get used to it”[2].

Instead of an annual exam being the only and final way to assess his learning, the teachers give him and the other students monthly (“4-Week Sprint”) unit tests (“Sprint Backlog”), and in the final annual exams test the entire knowledge (“Product Regression Testing”). So, there is a set of chapters that becomes part of the sprint backlog in each sprint and gets tested towards the end of that sprint – which is, of course, the monthly unit test. The schedule of the monthly unit tests can’t move (“Fixed-Duration Sprints”) – so, if a given chapter (an item from the sprint backlog) is not complete, the teacher will skip it and put it on top of the next sprint backlog. Some months have more holidays, some have fewer, and some have school annual functions and other events, so the sprint backlog (and consequently the “Velocity”) is adapted based on the net usable time available in a month (“Product Backlog Item Effort”), but the sprint is always four-weekly.
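If you wanted to write down how the school “plans” such a sprint, it might look something like this minimal sketch (the baseline numbers are hypothetical, not my son’s actual syllabus):

```python
# A minimal sketch of adapting the sprint commitment to the net usable time
# in a fixed-length sprint, as the school does. Hypothetical numbers.

BASELINE_VELOCITY = 20     # chapters comfortably covered in a 'full' 4-week sprint
BASELINE_USABLE_DAYS = 20  # school days available in such a full sprint

def adapted_commitment(usable_days):
    """Scale the commitment by the fraction of the sprint actually available."""
    return round(BASELINE_VELOCITY * usable_days / BASELINE_USABLE_DAYS)

# A month with holidays and the annual function leaves only 14 usable days:
print(adapted_commitment(14))  # -> 14 chapters; the sprint still stays four weeks
```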

The child comes home every month (for that matter, every week if not every other day) with new knowledge and skills that he can directly apply in real life (talk about “shippable software”).

At the end of every month, the evaluated test papers are sent to the parents (the “Scrum Masters”) for review. If the performance is not good, they have a discussion with the teacher about their child’s performance (the “Sprint Retrospective Meeting”), and some of the improvement items become input for the next evaluation cycle (“Product Backlog Item”).

Every six months (so there are two in an academic year: one at the mid-point and the second at the year-end), there is a comprehensive assessment of the child’s overall academic, extra-curricular and social performance. The class teacher prepares this for every child, based on evaluated and observed feedback from the other teachers, and sends it to the parents. I am not sure what the scrumology for that would be, but the closest one is the “Sprint Retrospective Meeting”.

The daily schedule involves checking classwork and homework, both in class with the teacher (“Product Owner”) and at home with the parent (“Scrum Master”), and various issues related to performance and progress are discussed (“Impediments”). There is a special class of scrum masters in this model, known as “mothers”, who always seem to have the right burn-down chart in their minds whether the child agrees with it or not. The daily meetings are quite the “standup meetings” by themselves, especially when Sprint Goals are not being achieved, if you know what I mean 🙂

PS: As I was completing this article, my son came over, read the contents, asked what Scrum was, but approved the article anyway. He was, however, not very happy that his contributions were not acknowledged anywhere in it. So, I hereby thank my wonderful son Chanakya Varma, soon to be eleven this October, for his efforts and for the lovely and tough time he gives my wife and me. Son, you are a great example of how a Sprint Team should behave, especially during tough times – one of them definitely being asking your Mom to help you out.


[1] Purists might disagree with the definition of one boy as a scrum team, but those who have been the parent of a sixth-grader will know better!

[2] Quote by Bill Gates