Sunday, September 9, 2007

Factor/Refactor - get fast and stay there!

Something that I saw in two different posts recently got me thinking. The first post had to do with abbreviation and accidental complexity, the other was on the Use and Reuse of code. The two topics are intertwined.

The commentary on Raganwald points out that just because something is brief does not mean it is better, and the reverse can also be true: something verbose may not be easier to understand overall because of the accidental complexity it carries. Programming is about finding the balance between the two. In other words, the SNR (signal-to-noise ratio) of one's code is an important factor. This leads to the second article, on the Use and Reuse of code.
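To make that concrete, here is a small hypothetical sketch of my own (the items list and names are mine, not taken from either post) showing the same computation written tersely, verbosely, and somewhere in between:

    # Hypothetical data: (name, price, in_stock)
    items = [("widget", 2.00, True), ("gadget", 5.00, False), ("gizmo", 3.50, True)]

    # Terse: brief, but the intent is easy to misread at a glance.
    total = sum(i[1] for i in items if i[2])

    # Verbose: every step spelled out, yet the accidental complexity
    # (temporary lists, indices) buries the actual intent.
    total = 0
    in_stock_items = []
    for item in items:
        if item[2]:
            in_stock_items.append(item)
    for item in in_stock_items:
        total = total + item[1]

    # Balanced: about as short as the terse version, but the names
    # carry the signal instead of adding noise.
    total = sum(price for name, price, in_stock in items if in_stock)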

I especially liked the commentary that Enfranchised Mind's Brian Hurt provided here: that Use vs. Reuse could be considered the second derivative of programming (think back to all that calculus; the second derivative is acceleration). Stated another way, what's important in programming is the ability of coders to add new code faster and more efficiently in a constantly changing environment.

Why is this so important? What makes the ability to add new code faster and more efficiently such an important topic? The reason I have been finding it important lately comes back to another fact of life: systems of things (software or otherwise) tend toward entropy and need attention just to be held at their current state, let alone improved. This plays out in the following fashion:

An existing system of code lends itself to doing certain things in certain ways. This way of doing things often represents how the programmers thought certain bits of the system would be used in the short term. Those same programmers may have been forward-thinking about needed changes or potential other uses, but let's face it, we are not prognosticators and cannot read the future. That being said, at some point the code is going to need to be modified to meet some additional need, and the question becomes how hard it is to meet that new need given what currently exists. If it is hard because the code is not structured in a way that allows its reuse, then the coder is going to have to slow down and possibly implement a bunch of things she did not intend to. This temporary slowing down and refactoring is IMPORTANT: it makes the system more resilient to the 'new' types of changes being requested, and it keeps people chugging along at top speed (or close to it).
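As a hypothetical illustration (none of these names or functions come from the posts above), imagine a report routine written around the one use its author expected, and the small refactor that makes the next request cheap:

    # Before: the summary logic is welded to one assumed use (printing to
    # the console), so a new request ("email the same summary") forces
    # either copy-paste or a slowdown to restructure.
    def print_daily_report(orders):
        total = sum(o["amount"] for o in orders)
        print(f"Orders: {len(orders)}")
        print(f"Total:  {total:.2f}")

    # After: the reusable part (building the summary) is factored out from
    # the assumed use (printing it), so each new need adds only a thin
    # caller instead of reworking the core.
    def summarize_orders(orders):
        total = sum(o["amount"] for o in orders)
        return f"Orders: {len(orders)}\nTotal:  {total:.2f}"

    def print_daily_report(orders):
        print(summarize_orders(orders))

    def email_daily_report(orders, send):
        # 'send' is any callable that delivers a message body.
        send(summarize_orders(orders))

    # Usage sketch:
    orders = [{"amount": 19.99}, {"amount": 5.00}]
    print_daily_report(orders)
    email_daily_report(orders, send=lambda body: print("emailing:\n" + body))

The refactor costs a little time up front, but the next request against this code starts from a reusable piece instead of a dead end.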

Here is the practical takeaway: systems that do nothing but implement new stuff all the time, and pay no attention to the things that prop them up (the pillars, the underlying architecture), will eventually become so decrepit that they actually slow coders down when they attempt to DO anything with the code. Refactoring is a necessity to keep people using and re-using code. It is the re-using that keeps the speed of implementing new things high and allows development to accelerate and succeed. The best case is to have accelerated your development (by re-using code) to the point at which it is lightning fast and then hold it there, devoting some percentage of your energy to not decelerating from that point.
