Wednesday, April 2, 2008

Peeling the onion

A recent post got me thinking about when it is a good idea to revisit, rework, or at the very least rehash an older, "decided-on" piece of code in order to make sure that all sections of an application are getting attention. A couple of thoughts around this struck me.

Re-visitation

One of the things that I like to do as a programmer is go over other code within a system (especially if it was not originally written by me) and work to understand it. This can sometimes be a large and arduous task given the size of some systems and the length of time they have been in production. In the case of really large systems I may take a certain sub-section of the larger whole and attempt to understand just that. My tendency is to do this both on my own and as a result of being asked to work on a given section of the application. Overall I think that this helps me get a better picture of what the application is doing and can help me refactor, when needed, to make the system more resilient.

7 ± 2

The above can sometimes be complicated by the fact that, despite our best efforts, most people can only hold about 7 plus or minus 2 items, ideas, or thoughts in their head at once. It's a well-established memory finding (George Miller's "The Magical Number Seven, Plus or Minus Two"). This point matters most when you think about how you work on a section of a large application. At any point in time you may be making a spot fix (remembering 1 or 2 points of the application) or a sweeping change (attempting to remember how 5-6 or more modules work in concert to get work done). The larger the change you are aiming for, the more information you have to hold in your head. This gets even more complicated the larger the system and/or the larger the change being attempted.

Pulling back the covers

So with each new dive into the code I get to uncover something that I didn't know and something that I potentially have to 'remember' when I am working in the codebase. One of the things that I love to see is that the things I 'assume' I can take for granted, I actually can take for granted. What I mean is: in working with any code base I have to come at it with a few assumptions and let the code prove me wrong. For instance, I usually assume that something like connection pooling to a database works in a certain way (ways in which I have seen it work in the past, or ways in which tooling I have used in the past has taught me to think about connection pooling). When the system shows me that it does something different, I have to increase the number of things to remember when doing work by 1.

Why memory matters

Programming should let framework be framework and business code be business code. When the code differs from common assumptions and works in a way that is not 'worldly intuitive' (intuitive here meaning widely accepted in the world, knowing that the word intuitive isn't), it means I have one more thing to keep in my head when working in the code base. When the code base is really large there are some things that I would rather not have to worry about. Connection pooling would be one of them. I want to know that I can grab a connection from the pool, do work in a standard way, and then return the connection to the pool. I don't want to have to know that it also performs acrobatics to protect me from not closing resources, or monitors what I am doing, or even that it supports the color RED in a new and special way.
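To make that expectation concrete, here is a minimal sketch of the standard borrow-use-return pattern, assuming a JDBC-style pooled DataSource (the OrderDao class, the countOrders method, and the orders table are made up for illustration, not from any particular codebase):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

public class OrderDao {

    private final DataSource pool; // assumed: any standard pooled DataSource

    public OrderDao(DataSource pool) {
        this.pool = pool;
    }

    public int countOrders(long customerId) throws SQLException {
        Connection conn = pool.getConnection(); // borrow a connection from the pool
        try {
            PreparedStatement ps = conn.prepareStatement(
                    "SELECT COUNT(*) FROM orders WHERE customer_id = ?");
            try {
                ps.setLong(1, customerId);
                ResultSet rs = ps.executeQuery();
                rs.next();
                return rs.getInt(1);
            } finally {
                ps.close();
            }
        } finally {
            // For a pooled DataSource, close() returns the connection to the
            // pool rather than tearing it down.
            conn.close();
        }
    }
}
```

That's the whole mental model I want to need: get a connection, use it, close it to give it back. Anything clever the pool does beyond that is one more item competing for one of my 7 ± 2 slots.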

No decision should be forever

So: memory, change, and refactoring are all very important in my mind when thinking about programming. Since I am also a proponent of agile (Scrum in particular), I support the idea of constant refactoring as well. I want to make sure the code I am touching stays resilient to large changes and always moves in response to the business and the way the business wants the system to work, while minimizing the amount of work that needs to be put into it to make it do what the business would like. Constant revision and constant eyes make things better over time. Previous decisions may or may not make sense NOW in comparison to when they were made.

Memory is one reason people may suggest changing something; having worked with a tool in the past may be a reason as well. You may even find that people are suggesting certain things based on what they feel is best for the overall codebase. What you can't do is dismiss the efforts to change with the following:

"...And if you don’t even understand what is in place now, then you’re simply not qualified to be suggesting that we move to something standard."

There are different levels of change and different levels of understanding... what is intuitive to one may not be to another. I am not suggesting at all that you dumb down the code to the lowest common denominator, but I am certainly suggesting that when you get enough people saying that something doesn't make sense, it might be time to remove that one extra item from memory so that they can get on with making changes that matter.

1 comment:

Anonymous said...

Great article, and overall you and I are on the same page as usual. However, I think many people sacrifice performance for simplicity. I am speaking mainly from the .NET world. There are a ton of bad programmers out there writing "simple" code, and I have been in a few situations where the consensus of the group was downright wrong.

For example: I was once confronted by a couple of developers who didn't understand why my enumerations were 1, 2, 4, 8, 16… I explained that it was so I could use bitwise operations to determine whether or not a user had certain access rights. I told them it was a very efficient and common practice. They were not convinced and proceeded to tell me my code was not maintainable and should be modified. I argued my case and showed them how ACLs worked, at which point they begrudgingly let it go.
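For readers who haven't seen the pattern, here is a minimal sketch of what the commenter is describing — written in Java here rather than .NET, with an illustrative Access class and made-up rights names; the bitwise technique is the same in either language:

```java
public final class Access {
    // Powers of two so each right occupies exactly one bit
    public static final int READ   = 1;  // binary 0001
    public static final int WRITE  = 2;  // binary 0010
    public static final int DELETE = 4;  // binary 0100
    public static final int ADMIN  = 8;  // binary 1000

    private Access() {}

    // A user has a right if its bit is set in their rights mask
    public static boolean has(int rights, int flag) {
        return (rights & flag) == flag;
    }
}

class Demo {
    public static void main(String[] args) {
        int userRights = Access.READ | Access.WRITE; // combine rights with OR

        System.out.println(Access.has(userRights, Access.READ));   // true
        System.out.println(Access.has(userRights, Access.DELETE)); // false

        userRights |= Access.DELETE;  // grant a right: set its bit
        userRights &= ~Access.WRITE;  // revoke a right: clear its bit
        System.out.println(Access.has(userRights, Access.DELETE)); // true
    }
}
```

Because each right lives in its own bit, a whole set of permissions fits in a single integer and can be tested, granted, or revoked with one bitwise operation.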

Sometimes, sticking to your guns and pleading your case is the best thing for all involved, even if they don’t know it at the time.