Code shouldn't trust anyone
Written by Daniele Lupo
Halt! Stop! Who are you???
Many times I see projects full of errors because project managers are too optimistic. I can see in their documents and in their meetings that, in their opinion, everything will run without problems. Some common assumptions that I hear from these "experienced" people are:
- The network is faster than shared memory. If I send a request, the reply will arrive instantly;
- Memory is infinite. I've never seen, in their eyes or in their specifications, any concern about memory consumption. We have all the memory we need;
- Hard disks are infinite. The same goes for storage in general. Why should we ever remove dump files, logs and so on?
These are only a few examples of what I see almost every day.
Most developers share the same faith. In one of our last meetings we talked about a simulator and the possibility of saving and loading scenarios. We went through the ICD (Interface Control Document) that describes all the messages, and for saving a scenario there was only one message, sent from the front-end to the back-end.
"And what happens if the scenario is not saved because of some error?"
Everyone looked at me like I was crazy. "That's not possible!! We cannot allow a command to fail. If we say that we save a scenario, we must do it!"
Repeat this procedure for every feature.
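That missing reply is exactly where defensive design starts. Here is a minimal sketch of what such a message pair might look like; the names `SaveScenarioRequest`, `SaveScenarioReply` and the status codes are mine, not from the actual ICD.

```python
from dataclasses import dataclass
from enum import Enum

class SaveStatus(Enum):
    OK = 0
    DISK_FULL = 1
    IO_ERROR = 2

@dataclass
class SaveScenarioRequest:
    scenario_name: str
    payload: bytes

@dataclass
class SaveScenarioReply:
    status: SaveStatus  # the part the original ICD was missing
    detail: str = ""    # human-readable reason when status is not OK

# With a reply message, the front-end can tell the user that the save
# failed instead of silently pretending that it worked.
```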
The problem is that they don't understand that programs are written by people, and people make mistakes. They forget that programs run on hardware, and hardware is not ideal. The result is that when everything is ok the simulators work like a charm, but when something goes wrong, everyone goes into panic mode.
I'm used to thinking that bad things can happen, and that we should prepare for them. We can write perfect code, but if the data we receive is wrong, or the hardware has limitations, we must handle it.
People should keep a few basic principles in mind.
Hardware is real
Don't assume that you can save terabytes of data, don't assume that memory allocation always succeeds, and remember that you may wait minutes before receiving a network reply. Handle these limitations properly: when you save a file, check that there is enough space; when you perform a network request, don't block your program until the reply arrives.
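As a minimal sketch in Python: check the free space before writing a file, and put a bound on how long you wait for the network. The function names and the 5-second timeout are illustrative assumptions, not a prescription.

```python
import os
import shutil
import urllib.error
import urllib.request

def save_dump(path: str, data: bytes) -> None:
    """Check free space before writing instead of assuming infinite disks."""
    directory = os.path.dirname(os.path.abspath(path))
    free = shutil.disk_usage(directory).free
    if free < len(data):
        raise OSError(f"not enough space on {directory}: "
                      f"need {len(data)} bytes, {free} available")
    with open(path, "wb") as f:
        f.write(data)

def fetch_reply(url: str) -> bytes:
    """Never wait forever for the network: fail after a bounded timeout."""
    try:
        with urllib.request.urlopen(url, timeout=5.0) as reply:
            return reply.read()
    except (urllib.error.URLError, TimeoutError) as exc:
        raise RuntimeError(f"no reply from {url}: {exc}") from exc
```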
Don't trust anyone
Maybe you think that you're smarter than others. Fine: then whenever other people send you data in some form, you should expect that data to be wrong. One day a customer opened a bug against me because, in some circumstances, the simulator crashed. After some testing I found the problem. One process was sending a structure with an angle, and I was passing this angle on to another process (as the requirements said). The problem was that the input angle was in the [-180, 180] range, while the receiving process expected an angle between 0 and 360. That process was so smart that, if it received a negative angle, it crashed. No log, no warning, no sense. So don't expect that others will always send you correct data. Handle error cases, and do it in a way that allows your program to keep working. Is that such a strange thing to think? I don't think so.
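A sketch of the defensive check that would have prevented that crash; both function names are hypothetical, but the ranges are the ones from the bug.

```python
def to_0_360(angle_deg: float) -> float:
    """Normalize an angle into the [0, 360) range the receiver expects."""
    # In Python, -90 % 360 == 270, so the modulo alone does the job.
    return angle_deg % 360.0

def forward_angle(angle_deg: float) -> float:
    """Validate the input before trusting it: the sender promised [-180, 180]."""
    if not -180.0 <= angle_deg <= 180.0:
        raise ValueError(
            f"angle {angle_deg} is outside the documented [-180, 180] range")
    return to_0_360(angle_deg)
```

Either side of that interface could have done this check; a one-line normalization is much cheaper than a crash with no log.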
Don't trust yourself
More importantly, don't trust yourself!!! People make mistakes, and you're a person too, so the consequence is natural. If you write a function and then use it, always check for errors. If your function should return a positive number, check whether it's negative (better yet, if something is wrong, raise an exception).
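For example, a function that refuses to return nonsense; `mean_speed` is a hypothetical name, just to show the pattern.

```python
import math

def mean_speed(distance_m: float, elapsed_s: float) -> float:
    """A speed should be a positive, finite number; anything else is a bug."""
    if elapsed_s <= 0:
        raise ValueError(f"elapsed time must be positive, got {elapsed_s}")
    speed = distance_m / elapsed_s
    if speed < 0 or not math.isfinite(speed):
        raise ValueError(f"computed a nonsensical speed: {speed}")
    return speed
```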
If you write functions that raise exceptions, always remember to call them inside a try block. If you write data to a database, remember to check whether the query was executed correctly, and so on.
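A sketch of the database case with Python's standard sqlite3 module; the table and the wrapping exception are assumptions for the example.

```python
import sqlite3

def store_scenario(db_path: str, name: str, payload: bytes) -> None:
    """Write to the database and check the outcome instead of assuming success."""
    try:
        # The with-block commits on success and rolls back on error.
        with sqlite3.connect(db_path) as conn:
            cur = conn.execute(
                "INSERT INTO scenarios(name, payload) VALUES (?, ?)",
                (name, payload),
            )
            if cur.rowcount != 1:  # the query ran, but did it do anything?
                raise RuntimeError(f"expected 1 row inserted, got {cur.rowcount}")
    except sqlite3.Error as exc:
        raise RuntimeError(f"could not store scenario {name!r}: {exc}") from exc
```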
Don't worry about bloating the code a little with all this error checking. If you write the code well, the error checking doesn't hurt readability (try blocks instead of error codes, for example, or encapsulation in a lower layer).
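The "lower layer" idea can be as simple as one wrapper that turns error codes into a single exception type, so the call sites stay clean. Everything below (the `SensorError` class, the fake driver) is a made-up illustration.

```python
class SensorError(RuntimeError):
    """One exception type for the whole sensor layer."""

def read_raw(channel: int) -> int:
    """Stand-in for a low-level driver that returns negative error codes."""
    return 2150 if channel == 3 else -22  # pretend only channel 3 works

def read_temperature(channel: int) -> float:
    """Lower layer: convert error codes into exceptions once, here."""
    raw = read_raw(channel)
    if raw < 0:
        raise SensorError(f"channel {channel} failed with code {raw}")
    return raw / 100.0

# Call sites stay readable: one try block instead of an if after every call.
try:
    print(read_temperature(3))   # 21.5
    print(read_temperature(7))   # raises SensorError
except SensorError as exc:
    print(f"sensor problem: {exc}")
```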
Remember, the only error that can cause you problems is the error that you forgot to handle.