For a system with an existing codebase, it is not unusual for demands for consistency to be raised as an argument against any changes to the system's architecture. It is true that inconsistency represents an overhead when working with a system, but in my opinion these concerns are generally vastly overstated: they are addressable with decent development practices, and they ignore the very real costs of forcing everything to be the same.
Consistency is not an absolute property but a continuum, with systems that have no architecture or common structure at one extreme and systems where everything fits into a single rigid design at the other. Apart from the most trivial examples, very few real-world systems inhabit these extremes. Decisions about consistency are therefore really decisions about where on the continuum it is appropriate for your system to sit. There is no set level for this; it is entirely context dependent, and it can be established by determining whether the cost of carrying an inconsistency is justified by the benefits gained from doing so.
Costs of inconsistency can include:
- Incompatibility between different parts of the system where different approaches are in use. This may be a significant issue for a large monolithic codebase, but such codebases are problematic for other reasons too. In general, the kinds of systems where this is likely to be a major concern already have significant maintenance problems that are probably best addressed by breaking the system apart.
- Less familiarity with parts of the codebase, with a corresponding increase in maintenance cost and risk. At its worst this can become a "here be dragons" area of your system where mortals fear to tread.
- Ambiguity about how to build new features, as there may no longer be a single obvious approach to follow.
- Conflicts between software versions, and difficulty updating dependencies, due to the differing requirements of different parts of the codebase.
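As a sketch of that last cost, imagine two modules of one deployable pinning incompatible versions of the same library (all names here are invented for illustration). A single dependency resolution cannot satisfy both:

```
# reporting/requirements.txt  (older module, conservative pin)
somelib==1.4.2

# billing/requirements.txt  (newer module, needs a later API)
somelib>=2.0
```

Deployed together, these requirements cannot both be met; split into separately deployed parts, each module can move at its own pace.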
One cost I've been given that I reject outright is that developers will be incapable of dealing with inconsistency in the codebase. Generally it's put in the form "sure, you can handle this, but other people on the team can't, so we have to code to the lowest common denominator". I dislike this argument on a number of levels. Firstly, developers already deal with inconsistency in any real system, as well as when searching for information online and reading documentation and other reference works. More importantly, the suggestion that you should dumb the code down to help "lesser" developers is most likely an insult to those developers, most of whom can either handle the differences or learn to do so. In the cases where they genuinely are not competent enough to deal with an inconsistent approach, they are highly unlikely to be competent enough to deal with complex business requirements, and such people should not be employed as developers.
The genuine costs are not to be dismissed, so what benefits do you gain from introducing inconsistency into your system that might outweigh them?
- The ability to deliver improvements quickly. Applying a change to the entirety of a codebase is generally prohibitive in terms of cost and risk, and if you allow no other approach your codebase will stagnate, unable to improve how it delivers value. If you can't adopt modern approaches, the system may lose significant value relative to more agile competitors. Removing the requirement to apply a change to all parts of the system allows you to make targeted improvements that carry significantly lower cost and risk while maximising value.
- The structure of the code can be adjusted to meet the needs of the problem. This may mean simple CRUD for low-value parts of the system, while reserving the more buzzword-compliant architectural patterns for the places where they will deliver the most value.
- Optimisations can be employed where they deliver the most value. Global optimisation is rarely going to be cost effective.
- Incremental improvement can be done in very small steps that are each very low risk. Over time, many small changes add up to significant overall change without the risk and high cost of doing everything at once (e.g. the Boy Scout Rule).
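The Boy Scout Rule in practice can be as small as tidying one function while you are there for another reason. A minimal, hypothetical sketch (the function and field names are invented for illustration); behaviour is unchanged, and the old and new forms could even coexist briefly in an "inconsistent" codebase:

```python
# Before: duplicated, hard-to-scan accumulation logic.
def total_price_before(items):
    total = 0
    for item in items:
        if item["discounted"]:
            total = total + item["price"] * 0.9
        else:
            total = total + item["price"]
    return total


# After: the same behaviour, expressed more clearly. The change is tiny
# and low risk, and leaves the code slightly better than we found it.
def total_price_after(items):
    return sum(
        item["price"] * (0.9 if item["discounted"] else 1.0)
        for item in items
    )
```

Each such touch-up is trivially reviewable on its own, which is precisely what keeps the risk low.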
As professional developers it is therefore incumbent upon us to achieve, as far as is practicable, the benefits of inconsistency without incurring the costs. This is mostly a matter of thoughtfulness, appropriateness and communication. For each change you should be able to demonstrate that the value exceeds the cost, and that paying that cost is appropriate for your system right now: a refactoring that makes your system more maintainable over time is generally great, but perhaps something to avoid fifteen minutes before your next big release (although big releases are themselves something to avoid). You must also communicate with your team to ensure that your changes fit the direction of the system as a whole. If every developer does whatever they want, you don't have a team, you have a merge conflict. Argue for the change you want to see, but be prepared to accept that you won't win every argument, and don't take it personally.
In general what you want is a series of goals, less well defined the further away they are. In the long term you have a general architecture, or set of architectures, you wish to move towards; these may change over time as you learn more and as technology progresses. In the medium term you have components you wish to build or alter to a particular style, with some real concrete deliverables. In the short term you have specific code to which you wish to make an explicit set of improvements. This gives you specific things you can do at a small scale to achieve large-scale results, and inconsistency along the way becomes a considered tool in delivering your ultimate objective.