
null vs. undefined and their behaviour in JavaScript

So after a big argument/debate/discussion on the implementation of null and undefined in JavaScript, I'd like somebody to explain the reasoning behind the implementation and why they differ in some circumstances. Some particular points I find troubling:

  • null == undefined evaluates to true
  • null + 1 equals 1 but undefined + 1 equals NaN
  • if(!null) evaluates to true and if(null) evaluates to false but null == false evaluates to false.

I've read the specification and I know how the results are reached; I'm looking for the paradigms and reasons that dictate this being the specification. Some of these points, especially the second one given the first, seem very inconsistent.


The short and sweet version is that JavaScript was designed and implemented very rapidly by the Netscape team, and it had some inconsistencies such as the ones that you've pointed out.

The Internet Exploder team did its best to copy JS exactly, and they did a damn good job of it, to the point that the inconsistencies were copied as well. When Netscape went to get JS standardized as ECMAScript, Microsoft was part of the process and basically insisted that the standard couldn't change those behaviours because it would break old code (existing-systems inertia). So the inconsistencies were standardized, and that was that.

Douglas Crockford has a very good series of talks about some of these issues.


First and foremost, while plenty of languages get away without having two values for such similar purposes, they do serve distinct though somewhat overlapping purposes in JavaScript. "Why have both?" has been asked here before, and I find that this answer explains it fairly well. TL;DR: JavaScript has certain language features that produce absent values, as opposed to non-initialized values:

  • delete'd values
  • non-existent properties in an object
  • missing function parameters
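The three cases above can be sketched in a quick console session (the object and function names here are just illustrative):

```javascript
// Each of these language features produces undefined, not null.

const obj = { a: 1 };
delete obj.a;
console.log(obj.a);        // undefined — the property was delete'd

console.log(obj.missing);  // undefined — the property never existed

function f(x, y) { return y; }
console.log(f(1));         // undefined — the second parameter was never passed
```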

As for the seeming contradictions in your question, they are actually quite easily explained by the spec. (I believe it can even be argued that the explanation is elegant, though there are probably those who would vehemently disagree.)

Addressing each one separately:

  • null == undefined evaluates to true

See this answer for the best explanation of this. In short, the abstract equality comparison specification says that they are (non-strictly) equal.
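A minimal demonstration of that rule, and of how narrow it is:

```javascript
// The Abstract Equality Comparison algorithm has an explicit rule:
// if one operand is null and the other is undefined, return true.
console.log(null == undefined);  // true
console.log(undefined == null);  // true

// No other value is loosely equal to either of them.
console.log(null == 0);          // false
console.log(undefined == NaN);   // false
```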


  • null + 1 equals 1 but undefined + 1 equals NaN

The + operator functions as either the unary + (numeric conversion) operator or the addition operator, but both route their operands through the ToNumber abstract operation, which says:

Argument Type — Result
Undefined — NaN
Null — +0
Boolean — The result is 1 if the argument is true. The result is +0 if the argument is false.
Number — The result equals the input argument (no conversion).

In other words null + 1 becomes +0 + 1 and undefined + 1 becomes NaN + 1, which is always NaN.
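You can observe the same conversions directly; the Number() function invokes that same ToNumber operation:

```javascript
// ToNumber, as exposed via Number():
console.log(Number(null));      // 0
console.log(Number(undefined)); // NaN

// So the additions from the question become:
console.log(null + 1);          // 1   (i.e. 0 + 1)
console.log(undefined + 1);     // NaN (i.e. NaN + 1, and NaN taints everything)
```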


  • if(!null) evaluates to true and if(null) evaluates to false but null == false evaluates to false.

As you know, ! is the logical NOT operator, and it performs a ToBoolean conversion on its operand. It is a type of coercion.

The if statement (if (expr)) performs an implicit boolean comparison on expr. So take a look at the type of expr in both of the above if statements:

  • if (!null): expr is !null which, given the result of the logical NOT operator (!), is a boolean.
  • if (null) : expr is null itself; no operator converted it to a boolean first.

Since the logical NOT operator performs an actual conversion, the same thing happens in other cases as well, and it is not actually the logical contradiction it appears to be:

  • !"" == !undefined evaluates to true
  • "" == undefined evaluates to false, of course.
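Putting those together with the third point from the question:

```javascript
// ! applies ToBoolean first, so both sides of this comparison
// are real booleans by the time == sees them:
console.log(!"" == !undefined); // true  — it's really true == true

// Without !, the raw values are compared, and the == rules
// only pair undefined with null:
console.log("" == undefined);   // false

// null is falsy under ToBoolean…
console.log(Boolean(null));     // false

// …but the == algorithm has no rule pairing null with false:
console.log(null == false);     // false
```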


They are best thought of as completely different objects used for different purposes:

null is used for "has no value." It is used pretty rarely by the language itself, but often used by the host environment to signify "no value." For example, document.getElementById returns null for nonexistent elements. Similarly, the IE-only property onreadystatechange for HTMLScriptElements is set to null, not undefined, to signify that though the property exists, it's currently not set. It is generally good practice to use null in your own code instead of undefined, and to reserve undefined for the following cases:

undefined is used for "isn't even set or doesn't even exist." It is the "default" in many cases, e.g. accessing an undefined property (like onreadystatechange for HTMLScriptElement in non-IE browsers), the default return value from methods without return statements, the default value of function parameters when the function is called with fewer arguments than it declares, and the like.
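A sketch of the typical sources of each value (the script object below is a plain stand-in, not a real HTMLScriptElement):

```javascript
// undefined shows up as the language-level default…
function noReturn() {}                   // no return statement
console.log(noReturn());                 // undefined

const script = {};                       // stand-in object
console.log(script.onreadystatechange);  // undefined — property never set

// …whereas null is an explicit "no value" that code (often
// the host environment) deliberately hands back:
const match = "abc".match(/x/);
console.log(match);                      // null — the regex found nothing
```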

In this way, it's useful to think of null as a "valid value," just one signifying something special. Whereas undefined is more of a language-level thing.

Sure, there are some edge cases where these reasonings don't entirely hold; those are mostly for legacy reasons. But there is a difference, and it's one that makes some deal of sense.


As for your pain points in particular, they mostly arise from the evil of the == operator or type coercion:

  • null == undefined: don't use the == operator, because it essentially is a mess of backward-compatibility rules that seemed intuitive at the time.
  • null + 1 === 1 vs. undefined + 1 === NaN: the + operator does type coercion to Number before evaluating. And null coerces to 0 (+null === 0) whereas undefined coerces to NaN (isNaN(+undefined) === true).
  • if (!null), if (null), null == false: if evaluates the "truthiness" or "falsiness" of its argument, which has nothing to do with the mess of rules for ==. null is falsy, and !null is truthy, but the rules for == don't make null == false come out true.


null == undefined does indeed evaluate to true, but null === undefined evaluates to false.

The difference in those two statements is the equality operator. Double-equals in JavaScript coerces the two operands toward a common type before comparing them; for null == undefined, the coercion rules simply define null and undefined as loosely equal to each other, hence the equality.

We can demonstrate the same effect with strings and integers: "12" == 12 is true, but "12" === 12 is false.

This example gives us an easier way to discuss your next point, about adding one to each of them. In the example above, adding 1 to the integer obviously gives 13, but with the string "12" + 1 gives us a string "121". This makes perfect sense, and you wouldn't want it any other way, but with a double-equal operator, the original two values were reported as equal.
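The same pair of values, side by side under both operators and under +:

```javascript
// Loose vs. strict equality:
console.log("12" == 12);   // true  — "12" is coerced to the number 12
console.log("12" === 12);  // false — different types, no coercion

// Yet + treats them very differently:
console.log(12 + 1);       // 13    — numeric addition
console.log("12" + 1);     // "121" — string concatenation
```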

The lesson here is to always use the triple-equal operator in preference to the double-equal, unless you have a specific need to compare variables of different types.

Your final point demonstrates the fickle nature of null in general. It is a peculiar beast, as anyone who's ever tried to work with a nullable database field will tell you. Null has a very specific definition in computer science, which is implemented in a similar way across multiple languages, so the situation you describe is not a special JavaScript weirdness. Null is weird. Don't expect it to behave like an alternative name for false, because it doesn't work that way. The built-in Infinity value can behave in a similarly bizarre way, and for similar reasons.

JavaScript does have its share of weirdness, though. You might be interested in reading http://wtfjs.com/, which has entries for a whole load of strange things that JavaScript does. Quite a few of them are to do with null and undefined (did you know it's actually possible to redefine the value of the built-in undefined object?!), and most of them come with an explanation as to what's actually happening and why. It might be helpful in showing you why things work the way they do, and will definitely be helpful in showing you things to avoid! And if nothing else, it makes for a good entertaining read, to see some of the abuses people have tried throwing at the poor language.
