Why are we in this mess?

Prior to the industrial age, the world was essentially an agrarian and trading economy. Production methods were often a craft and a closely guarded secret, fiercely protected within a family and handed down from a master craftsman to his sons. With no machinery for mass production, pretty much every product was handmade and unique, perhaps also customized for its intended user. The Industrial Revolution made mass production and rapid movement of goods possible and, among other things, catapulted Britain to the forefront of global economies. Gutenberg’s printing press was perhaps the first mass production system built by man. Subsequent inventions like the harnessing of steam power made railways possible, while spinning machines and other advances in iron founding and chemicals pushed the envelope further. However, a lot of these advances were limited to Europe, and even more so to the UK, which thrived on them and remained the economic (and imperial) superpower well until the start of the twentieth century.

By the start of the twentieth century, however, America too woke up to industrial advancement and contributed some of the most important innovations that continue to touch our lives. Eli Whitney’s pioneering work on ‘interchangeable parts’ for his now-classic cotton gin introduced the concept of modular design. Frederick Winslow Taylor’s groundbreaking work on scientific management then led to the concepts of standard work and division of labor (even if somewhat questionable and controversial in today’s context), and created the foundation for Henry Ford to envisage a mass production system with a moving assembly line, where finished goods could be assembled from standard parts by semi-trained operators. The most famous example is Ford’s Model T car in black (“Any customer can have a car painted any colour that he wants so long as it is black”).

In essence, a production run, say in an automotive parts plant or even a car assembly line, is the repetition of a process that produces similar (or similar-looking) objects. Once the ‘process’ is ‘designed’, the job entails repeating the process until the desired number of objects has been produced. Clearly, the faster you produce those objects, the sooner you can put them up in the market for sale and start earning money. The more you produce, the more you are able to amortize the capex, and the lower your per-unit cost over the long run. Intuitively, if you have to produce objects that are exactly alike in properties, shape, size, color or any other physical attribute achieved by the production process, your production machinery needs to be ‘programmed’ only once and re-used many times thereafter. So, if you run the paint shop (which seems to be the largest bottleneck in terms of time in a modern production setup) and need to produce purple chassis, then once you set the process and stock the paint to desired levels, you are pretty much all set. Now imagine you have to produce the first 200 cars in purple, the next 30 in wine red, and the next 70 in pearl white. Surely there needs to be some way (manual or otherwise) to alter the production process to suit such a job order. Similarly, instead of producing 300 sedans, if you have to produce a mix of automobiles, say 50 SUVs, 50 compacts, 150 sedans and 50 hybrids, your process will have to be different from one that just produces 300 sedans in a single production run. While the customer desires options (don’t we all?), the manufacturer incurs additional time, money and resources in creating those options. From the manufacturer’s point of view, producing each piece exactly like the last one makes such great economic sense that he can create huge economies of scale using the principles of mass production.
It simplifies the machines (and machine operations) required in plants, it standardizes the components required, there is no downtime to alter the production process, and people don’t have to be retrained every now and then on different types of products. All this makes the entire production process very ‘controllable’ from a throughput and quality perspective, and hence highly predictable. Elaborate statistical charts can be created from prior experience showing how much time a given production run takes, how many people and how much material are needed to meet a given production target, and what levels of quality can be expected.
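The economics above can be sketched with a toy calculation. The figures below (capex, per-unit cost, changeover time) are entirely made-up illustrative numbers, not data from the text:

```python
# Toy model of two ideas from the text, with made-up numbers:
# (1) capex amortization: fixed setup cost spread over a production run;
# (2) changeover overhead: every switch (e.g. a new paint color) adds setup time.

def unit_cost(capex, variable_cost, units):
    """Average cost per unit: fixed cost amortized over the run, plus per-unit cost."""
    return capex / units + variable_cost

def run_time(units, time_per_unit, batches, changeover_time):
    """Total time for a run split into `batches` variants, with a changeover between batches."""
    return units * time_per_unit + (batches - 1) * changeover_time

# Amortization: the longer the identical run, the cheaper each unit.
for n in (100, 1_000, 10_000):
    print(f"{n:>6} units -> ${unit_cost(1_000_000, 500, n):,.2f} per unit")
# 100 units -> $10,500 per unit; 10,000 units -> $600 per unit.

# Changeover: 300 cars in one color vs. three color batches (200 + 30 + 70).
print(run_time(300, 1.0, batches=1, changeover_time=8.0))  # 300.0 hours
print(run_time(300, 1.0, batches=3, changeover_time=8.0))  # 316.0 hours
```

The same arithmetic explains why the manufacturer prefers one long identical run: variety shows up only as extra changeovers and a higher per-unit share of the fixed cost.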

After WWII, an economically broken Japan, trying to rebuild itself, threw its hat in the ring too and set out to Europe and the US to learn the principles of mass production. For reasons that are well researched and well chronicled in books starting with “The Machine that Changed the World” by Womack et al., companies like Toyota created a brand-new approach to mass production that focused on just-in-time production rather than on utilizing the production machinery as the way to achieve economies of scale. But we are getting ahead of ourselves here a bit, so let’s set that aside for a while (because the world would not ‘discover’ these techniques until the late 80s or early 90s).

Around the same time, computer science was born, and the earliest software started to be written. Writing software was an extremely hard job, given the complexity of the huge (and very costly) machines. Software had to be written in so-called low-level or machine languages and required a very high level of cognitive ability. With large endeavors, software creation soon became a techno-managerial problem involving several dozen or even hundreds of people. However, unlike the semi-trained operators in Ford’s factories, these were highly educated people, perhaps the first generation of knowledge workers of the digital economy.

At that time, there were perhaps two best-known methods of organizing men and machines: the military, where men were employed in the thousands, and mass production, where machines were deployed at scale. So, what happened next was only the most logical extension of that thinking: software was treated as a ‘production’ problem, and the techniques of mass production were used to develop a ‘waterfall’ model that allowed for enough (?) time and effort at the start of a project to gather all the requirements and do the ‘design’; the rest of the development was then construed as ‘production’, to which the principles of mass production could be applied. To organize people, the traditional command-and-control model was perceived as the best way to separate the decision-making executives from the heavy-lifting workers.

Today we debate and criticize these as the worst things that could have happened to software development, but I don’t want to be unfair to what was done more than five decades back based on the state-of-the-art knowledge, tools and practices of the time. I am sure people back then also wanted to do the best thing, just as people today claim to :).

This was a short overview of how some important advancements shaped the thinking behind software development (and there are many more that have influenced our world). In the next article, we will discuss how they actually impacted software development, both positively and negatively.

6 thoughts on “Why are we in this mess?”


  2. Anuj Magazine

    Thanks for sharing this. Quite informative! Looking at software development methodologies from a historical standpoint does lead to a different perspective. While reading this, I was comparing it to the evolution of cloud computing, which historically dates back to the time when the electrical grid system was formed. Before that, apparently due to the lack of standardization, electricity was available only to a select few. The evolution of the electrical grid rather revolutionized the usage of electricity.
    In essence, cloud computing was also originally meant to centralize computing and make it accessible to one and all, like electricity. But over the years, while there are certainly a few success stories, the overall concept seems to be drifting in a different direction altogether. Now we hear terms such as “Private cloud” or “Public cloud” and many other variants. I guess the overall notion of the cloud gets defeated if we talk of decentralization of computing using terms such as “Private cloud”.
    I guess there’s another mess brewing up there. But isn’t it well known that software people realize the importance of simplicity only after making things complex to the core!
