The Devils that are Unsigned Numbers
Here at the University of Michigan it is that time of year again: registration for next semester's classes. One of the classes that piques my inner perfectionist is EECS 381 - Object-Oriented and Advanced Programming. Every word in that name rings true to me and how I program. However, as it should be, the class is difficult. I have heard stories that it is equivalent to a 40-hour-a-week job. The class is an elective and is not required for a Computer Science degree. You might expect that to make it unpopular, but the contrary is true. I have yet to reach my registration date, and the class is already full, with a waitlist as long as the number of seats in the class! Many people want the prestige that comes with taking EECS 381. If you get through it, it is confirmation that you're better than everyone else.
I’ve resigned myself to not getting into the class next semester. No worries, I can still drool over it by looking at its stylish website. One of the links is to a C++ coding standard that the professor has outlined for all students to follow. Being an ever-curious programmer, I took a look. On the first page was this statement:
Avoid declaring or using unsigned integers; they are seriously error prone and have no compensating advantage
I shrugged my shoulders. Yes, I can see the value in that statement, but no good programmer should fall into this trap. It will be obvious when a negative result can occur. I would even venture to say that doubling the range of positive numbers is a compensating advantage.
The embarrassing thing is that right after reading that statement, I fell into the unsigned number trap: I computed a number that can be mathematically negative, but it got represented as an extremely large positive number. In the current EECS 281 project we are dealing with Minimum Spanning Trees and the Traveling Salesman Problem, where distances between vertices (in this case, cakes) are calculated using the Manhattan Distance, which I’ve already had experience with. Here is how I implemented the function.
int ManhattanDistance(const Point& p1, const Point& p2)
{
int dx = abs(p1.x - p2.x);
int dy = abs(p1.y - p2.y);
return dx + dy;
}
After finishing the rest of my code, I ran it against the autograder to see how I was progressing. I received a report back.
Test case AC: Failed (runtime 0.000) Line: 1 Your MST weight is 35 off from the expected weight
Test case AD: Failed (runtime 0.001) Line: 1 Your MST weight is 1404068 off from the expected weight
Test case AE: Failed (runtime 0.000) Line: 1 Your MST weight is 2608029 off from the expected weight
Being off by 1.4 and 2.6 million screams unsigned integer underflow. The bug is in how the struct Point is implemented: coordinates x and y are of type size_t, which is an unsigned type. Since I never thought a bug could exist in such a simple function, it was the last place I looked. But I should be thankful that I learned a valuable lesson at a relatively painless cost. Maybe I am not ready for EECS 381.
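For what it's worth, here is a minimal sketch of one way to avoid the underflow. The Point struct below is my own stand-in for the project's actual definition; the idea is simply to subtract the smaller coordinate from the larger one, so the difference never wraps around in unsigned arithmetic and no call to abs is needed at all.

#include <cstddef>
#include <iostream>

// Hypothetical stand-in for the project's Point; coordinates are unsigned,
// just like the size_t members that caused the bug.
struct Point
{
    std::size_t x;
    std::size_t y;
};

// Subtract the smaller coordinate from the larger one, so the difference
// stays non-negative and never wraps around.
std::size_t ManhattanDistance(const Point& p1, const Point& p2)
{
    std::size_t dx = (p1.x > p2.x) ? (p1.x - p2.x) : (p2.x - p1.x);
    std::size_t dy = (p1.y > p2.y) ? (p1.y - p2.y) : (p2.y - p1.y);
    return dx + dy;
}

int main()
{
    Point a{3, 10};
    Point b{7, 2};
    // |3 - 7| + |10 - 2| = 4 + 8 = 12
    std::cout << ManhattanDistance(a, b) << '\n';
}

Another option would be to cast the coordinates to a signed type that is wide enough before subtracting, but the larger-minus-smaller form sidesteps the question of which signed type can safely hold a size_t.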
Comments
If you'd like to leave a comment, please email [email protected]