I am not going to participate in the debate about whether NULL is good or whether it is “the billion-dollar mistake”. At least not now.
Many scholars consider the introduction of NULL to have been a mistake, but I tend to be skeptical of their position.
The reason I do not necessarily consider NULL to be evil is that each variable can be in one of two states: uninitialized (NULL) or initialized (holding a value).
And we should not collapse these two states into one, not always anyway. Collapsing them makes me uncomfortable because, this way, we lose information.
But this subject is much more involved than what I have presented here, so I will leave it at that for now.
In this blog post, I am going to point to two erroneous implementations of NULL.
SQL Server makes the mistake of treating NULL as a value, so you cannot create a unique index on a column that contains two or more NULLs.
Oracle makes the mistake of equating the NULL string with the empty string.
Let me analyze these two errors, one by one.
Let me start with SQL Server. SQL Server does not allow you to create a unique index on a column that contains two or more NULL values.
Well, you can always create a filtered unique index, by specifying that NULLs are excluded from the index. The only “problem” is that filtered indexes are not always as fast as “normal” indexes.
Conceptually, all NULLs are distinct from one another, so SQL Server should have permitted a unique index on a column with NULL values.
But SQL Server’s mistake is not that major. You see, to a human and to a machine, a bunch of NULL values are all the same thing: uninitialized values.
And I am not sure what the worth of indexing uninitialized values would be.
OK, now let me continue with Oracle. Oracle thinks that a string value of NULL is the same as the empty string.
But a NULL string is not the same as an empty string. A NULL string is an uninitialized string. An empty string is the string “”.
Strings inside the machine are usually represented and stored as null-terminated strings, meaning that they are terminated with the null character ‘\0’.
In ‘\0’, the quotes denote a single character and the backslash denotes an escape sequence: this is the character with the value zero, not the digit character ‘0’.
The null character ‘\0’ has an ASCII value of zero, so its ASCII byte consists of the following 8 bits: 0000 0000.
In comparison, the digit character ‘0’ has an ASCII value of 48, so its ASCII byte consists of the following 8 bits: 0011 0000.
Thus, an empty string is a string that, inside the machine, is represented by just the null termination character ‘\0’, and in source code is written as “”. And this is not the same as an uninitialized string.
Saying that a NULL string is the same as the empty string, is like saying that a NULL number value is the same as the number zero.
So, saying that a NULL string is the same as the empty string is, in effect, denying that the empty string is a legitimate string value, just as equating NULL with zero would deny that zero is a legitimate number.
Thus, conceptually and otherwise, Oracle’s mistake is a major one.