Over-engineering

Over-engineering is an easy trap to fall into.  Typically, it occurs when complexity and functionality that are not needed are added to an application.  They are added in anticipation of a particular feature being needed at some point in the future, or simply because clever people want to demonstrate how clever they are.

Deciding on future features is best left to the future; otherwise, you can build features that will never be used.

Waiting until you need a particular feature has several advantages:


  • You save time, because you avoid writing code that you turn out not to need.
  • Your code is better, because you avoid polluting it with 'guesses' that turn out to be more or less wrong but stick around anyway.


It is often argued that by doing more work now (building in these features, and so on) we will be able to do less work overall.

But unless your project is very different from every IT project ever, you already have too much to do right now.  The last thing you want is more work now!

Obviously, there is nothing wrong with being clever, but the economic realities are such that we can’t spend unwarranted time on a feature or function that is outside the boundaries of what we have been tasked to do.

For example, suppose a client has a requirement for an “Animal Manager” application that currently handles only Dog and Cat.  You may reasonably create a generic Animal interface, but creating AbstractCanine and AbstractFeline base classes, or a Biped interface, is clearly over-engineering.
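A minimal sketch of the simple design, using the Dog and Cat classes from the example above (the speak method and its return values are illustrative assumptions, not part of any real specification):

```python
from abc import ABC, abstractmethod

class Animal(ABC):
    """The only abstraction the current requirements justify."""
    @abstractmethod
    def speak(self) -> str:
        ...

class Dog(Animal):
    def speak(self) -> str:
        return "Woof"

class Cat(Animal):
    def speak(self) -> str:
        return "Meow"

# Over-engineered alternatives, deliberately NOT written:
#   class AbstractCanine(Animal): ...  # there is only one canine so far
#   class Biped(ABC): ...              # no requirement mentions legs
```

If a second canine or a Parrot ever turns up in the requirements, the extra abstractions can be introduced then, with real knowledge of what they need to look like.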

From Extreme Programming, an agile methodology, we want to aim for "the simplest thing that could possibly work".  The design that is most easily adapted to new and changing requirements (and is thus the most future-proof and extensible) is the design that is as simple as possible.


A simple way to gauge whether you should develop a feature now or later is to ask whether all of these apply:

  1. Is this critical to my users’ success?
  2. Is the application crippled (unusable) without this feature?
  3. Is the cost to develop it now 50-75% less than it would be in the future?
  4. Is there a business need to support this feature?

The Truth about Estimates for Software Projects

When a software project or an opportunity for some work comes in, typically the developers or the Project Manager are charged with creating an “estimate” for the work.  By estimate, the requestors really mean a “quote”: they want to know how much it will definitely cost.

Unless the project is something that has been done before, and you are merely “repeating” previous work, the reality is that the estimate is not much more than a “guess-timate”: an educated guess with varying degrees of education behind it.

Sometimes, reams of paper are used to make the "estimate" seem legitimate or scientific.  This is just a wasteful disguise.

Anyone under the impression that an estimate for any real software development is anything more than a glorified guess is delusional.  There are various techniques that you can use to make the guess more or less educated – but it still remains a guess.  No developer I have ever met has been able to predict the future.

The fact that the estimate is just a “guess” does not negate the developers’ duty of respect and conscientiousness towards their clients.  Perhaps idealism leads me to think that the “guessing” nature of it should be quite upfront.  What is annoying is the pretence that it isn’t just an elaborate guess – the amount of pretend working-out that doesn’t actually improve the quality of the estimate.



Fixed-price and Fixed-scope Contracts

When commissioning software, organisations often prefer “fixed-price” and “fixed-scope” contracts in an attempt to decrease their financial risk: they want to make sure their outlay is constrained.  Unfortunately, in trying to fix the price of the project in this way, they inevitably end up paying more than they need to.  In reality, fixed-price contracts are anything but: they typically go over the “fixed price”, and typically take longer to deliver than originally thought.

Fixed price contracts put all the “perceived risk” on the supplier.  The supplier, reluctantly assuming this burden, will then attempt to itemise to the letter every item that will be delivered in a “requirements” document so it doesn’t have to do “extra work”.  The client wants this itemised list as well, so that it can be sure of what it is getting.

When pricing the software development, the supplier will add the worst-case scenario for each component into the cost.  This is because, if some of the development takes a lot longer than expected, or is a lot harder, the supplier has to foot the cost.  So, naturally, the supplier “protects” itself by adding these contingencies.

Because the supplier is committing to a fixed price and scope, it needs to nail down exactly what is wanted.  A lot of effort is spent on this itemisation – and the document becomes “law”.  This effort doesn’t add value to the software; it doesn’t make it “better”.  But, at the beginning, it may make the customer feel safe.


Typically, the requirements and associated estimates (being only a “probability”) turn out to be “wrong”, with items taking more effort than expected – even though there was a fair amount of padding in the estimate anyway....  So, initially at least, the supplier is “hurt”.  But there is an opportunity for the supplier to get some more money....


Customers need to see the software in order to be sure of what they actually want.  At the beginning of a project, it is just in their imagination.  They thought they knew what they wanted, so agreed to the requirements document.  But when they see the software that has been developed, they can see that a few things don’t really work well or are not as they envisaged – and they need a couple of other things in order to make the software meet their desired outcome.  Even some of the things they were happy with a month ago don’t look as good now, and need some minor changes.

So, when the customer asks for some changes, the supplier now says:


 “That wasn’t in the requirements document.  It would be a change request”.


The “change request” allows the supplier to recoup its initial losses and issue a new invoice with the cost of the “change”, to make up for the pain beforehand.  And remember the initial requirements document?  By the end of the project it only vaguely resembles the actual software product....  So, all that effort in writing it is down the drain.  Well, someone has to pay for it....  Add that to the “change request” as well.


To deliver successful projects – the client and supplier have to define a different way of working together.  “Nickel and diming” a customer with change requests doesn’t help deliver the best software solution for them – which is what it is about really.


Suppliers and customers need to work together – not against each other.  Both parties need to share the risk of the development – in a collaborative, transparent way.  It requires trust – not suspicion.  This redefined way of working reduces the total cost of developing the software for both parties – and ensures that the software matches the customer’s needs.


Why do software projects go wrong?

Software projects - large or small - do not have the best reputation for being successful in terms of being on-time and on-budget.  There are many key reasons for this:

  • Software systems tend toward complexity. Software is less tangible and less constrained than other building mediums. 
  • Traditionally, software projects have been run using methodologies used in “manufacturing”, even though software projects are more akin to “design and development”.  The methodology has been “predictive” in a changing environment.
  • Software development is “unique creation” not “manufacturing replication”.  Typically, the software project is totally unique – it hasn’t been done before.  Yes, there are known technologies – but putting them all together in the way required by the software project is new.  When dealing with new, there is an element of the unknown – and this unknown adds risk to the project.
  • Reliance on “estimates” in planning, without acknowledging that an estimate is only a “probability” that a particular item of work will be completed in a given time.  Estimates are not treated as estimates but as firm quotes.  These estimates are often reliant on other estimates, so taking them as gospel creates a chain reaction of likely delays.  The unknown nature of developing software means that estimating time to develop software is “difficult”.  In reality, estimates become more accurate as the project progresses and knowledge is acquired by doing.
  • With software, until you can see something, it is hard to truly envisage how it should work.  It's common to show a client a working system to date only to have them realise that what they asked for really isn't what they want after all. 
  • Change will inevitably occur on a software project.  Methodologies that do not accept this hinder the overall success of the project.  Change can be required by any number of reasons:
    • A requirement was missed from the original “scope”.
    • A better understanding of the problem and solution was discovered during the project.
    • Marketplace changes.
    • Legislation changes.


If you try to "freeze" the requirements early in the lifecycle, you guarantee that you won't build what is needed; instead, you'll build only what was initially specified.  Additional resources must then be added to the project cost in order to “incorporate the necessary change”.

The projects that are completed within the parameters of budget, allocated time and scope often fail to really deliver the intended value.  Are the customers – the end-users – really that happy with the end result?  Are there large features missing, or is it cumbersome to use?  Measuring based solely on the cost/time matrix is unlikely to reveal real success.



Software Development Success – a history

Traditionally, software projects have been run using rigid, sequential practices.  These practices were born out of manufacturing or traditional engineering practices that do not truly account for the unique nature of software development.

Some statistics relating to the success of projects run this way:

In 2003, the Standish Group (which has tracked IT project delivery success for twenty years) reported that just 31% of software projects were deemed successful.

The Standish Group also looked at a subset of successful projects run with “traditional” software development teams (Big Design Up Front) which eventually delivered into production and asked the question:

“Of the functionality which was delivered, how much of it was actually used?” 

An astounding 45% of the functionality was never used, and a further 19% was rarely used.  Therefore, over 50% of the resources that go into a software development project are wasted....


Everything we do provides value to the customer.

Our clients trust us to deliver value on their projects.  As such, our actions are targeted at providing this value.  Everything we do must provide real value to the customer and their project.  Documents that will never be read do not provide value.  Processes that slow down work without offering any other benefit do not provide value.


Unnecessary process

Many of us have lived through the nightmare of a project with no practices to guide it. The lack of effective practices leads to unpredictability, repeated error, and wasted effort. Customers are disappointed by slipping schedules, growing budgets, and poor quality. Developers are disheartened by working ever-longer hours to produce ever-poorer software.


Once we have experienced such a fiasco, we become afraid of repeating the experience. Our fears motivate us to create a process that constrains our activities and demands certain outputs and artifacts. We draw these constraints and outputs from past experience, choosing things that appeared to work well in previous projects. Our hope is that they will work again and take away our fears.


But projects are not so simple that a few constraints and artifacts can reliably prevent error. As errors continue to be made, we diagnose those errors and put in place even more constraints and artifacts in order to prevent those errors in the future. After many projects, we may find ourselves overloaded with a huge, cumbersome process that greatly impedes our ability to get projects done.


A big, cumbersome process can create the very problems that it is designed to prevent. It can slow the team to the extent that schedules slip and budgets bloat. It can reduce the responsiveness of the team to the point of always creating the wrong product. Unfortunately, this leads many teams to believe that they don't have enough process. So, in a kind of runaway process inflation, they make their process ever larger.


Runaway process inflation is a good description of the state of affairs in many software companies.  Although many teams still operate without a process, the adoption of very large, heavyweight processes is common.


We value working outcomes over comprehensive documentation.

Documentation is valuable – but only when it is useful, not when it exists for its own sake.  Long documents tend not to be read.  “Specifications” tend to be correct at the time of writing, but totally different from the outcome that is eventually delivered.  Documents that are hard to find, or unknown to people, might as well never have been written.

Pictures and diagrams are often more representative – and concise – than lots of words. 

Heavy use of bullet points can keep the message but lose the extraneous text.

The intranet is a good place to put useful information, making it available to everyone.


For programming, unit, integration and UI tests are typically a much better explanation of what the code is intended to do than a long document – one that would likely never be read, or never be kept up to date, anyway.
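As an illustration of tests acting as documentation (the apply_discount function and its discount codes are entirely hypothetical), a couple of short tests can state intended behaviour more precisely than paragraphs of prose – and, unlike a document, they fail loudly when they go out of date:

```python
def apply_discount(total: float, code: str) -> float:
    """Apply a discount code; unknown codes leave the total unchanged."""
    rates = {"SAVE10": 0.10, "SAVE20": 0.20}
    return round(total * (1 - rates.get(code, 0.0)), 2)

# The tests double as a readable specification of the rules above.
def test_known_code_reduces_total():
    assert apply_discount(100.0, "SAVE10") == 90.0

def test_unknown_code_is_ignored():
    assert apply_discount(100.0, "BOGUS") == 100.0
```

Anyone reading the two test names and assertions knows exactly what the function promises, without opening a specification document.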


Simplicity - the art of maximizing the amount of work not done - is essential.

We want to deliver value to the client – not necessarily blindly produce work that has no value just because we “think” they need it.

This idea can relate to many different work practices.


Deciding as late as possible

Deciding as late as possible allows the decision to be made with as much knowledge as possible; additional knowledge can be acquired in the time leading up to the decision.  In deciding what to do in phase 4, it is much better to have completed phases 1, 2 and 3....


Hewlett Packard saved millions by delaying decisions. 


Different markets around the world use different electric plugs, and plugs were originally attached to printers at the time of manufacture.  Invariably, there would be countries that didn’t have enough printers, and other countries with too many.  Regardless of how much planning and analysis was done prior to manufacture, this same problem would occur.


Even though it was cheaper initially to add the plug to the printer at the manufacturing stage, HP decided to add the plugs post-manufacture in the warehouse when the printer was ordered.  At this stage they had a clearer idea of current market demand.  This way they could direct printers to the countries with the highest demand. 


This change saved HP $3 million a month, even though the overall unit cost increased.


Ask for clarification

Asking the client whether something is necessary, before “guessing” that it is or isn’t, can save a load of work.


At regular intervals, we reflect on how to become more effective, then tune and adjust our behaviour accordingly.

Often cited in any explanation of evolution and natural selection is the idea of “survival of the fittest”.  Charles Darwin, though, did not coin this phrase – Herbert Spencer did.  The principle that Darwin espoused would be more accurately represented by “survival of the most adaptable”: those species best able to adapt to their environment were most likely to survive, while those that couldn’t adapt would inevitably die out.

The lesson for everyday life could be that those willing to adapt their thoughts and actions are best placed to succeed.  To come up with new and better ways to do things, and thus continue to improve.


The most efficient and effective method of conveying information to and within the team is face-to-face conversation.

What is the test of a good communicator?  It is the results that they get from their communication.  Humans have a unique ability for language – but that doesn’t necessarily mean we are all great communicators.

In an office environment, there can be an over-reliance on email, to the detriment of efficiency and getting the outcomes you want.

Once an email gets to a length above a paragraph – its efficiency and effectiveness typically diminishes.

It takes a lot longer to type a long email message than it does to simply say it – particularly if you are talking about a digital project, where it would be ideal to look at a monitor and point at things.

Emails also have an increased capacity to be taken the “wrong way”.  Have you ever been annoyed at someone’s email – not necessarily for what it said, but for what you thought it was saying?  The “underlying” tone?  The email could be perfectly innocent, but people take things the wrong way.  The longer your email, the more likely this is to happen.

Really, face-to-face is better than a video conference, a video conference is better than a phone call, and a phone call is better than an email.

Yes – sometimes information needs to be written down.  But, thinking carefully about when this is useful will make life a lot easier for you and everyone you work with.

