Why do different operators have different associativity?
I've got to the section on operators in The Ruby Programming Language, and it's made me think about operator associativity. This isn't a Ruby question by the way - it applies to all languages.
I know that operators have to associate one way or the other, and I can see why in some cases one way would be preferable to the other, but I'm struggling to see the bigger picture. Are there some criteria that language designers use to decide what should be left-to-right and what should be right-to-left? Are there some cases where it "just makes sense" for it to be one way over the other, and other cases where it's just an arbitrary decision? Or is there some grand design behind all of this?
Typically it's so the syntax is "natural":
- Consider `x - y + z`. You want that to be left-to-right, so that you get `(x - y) + z` rather than `x - (y + z)`.
- Consider `a = b = c`. You want that to be right-to-left, so that you get `a = (b = c)` rather than `(a = b) = c`. (Both groupings are checked in the sketch below.)
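Here's a minimal C sketch (values picked arbitrarily) that confirms both groupings:

```c
#include <stdio.h>

int main(void) {
    int x = 10, y = 4, z = 3;

    /* Subtraction and addition are left-associative,
       so x - y + z parses as (x - y) + z. */
    printf("%d\n", x - y + z);     /* 9 */
    printf("%d\n", (x - y) + z);   /* 9  -- the same grouping          */
    printf("%d\n", x - (y + z));   /* 3  -- the grouping we don't get  */

    /* Assignment is right-associative,
       so a = b = c parses as a = (b = c). */
    int a, b, c = 7;
    a = b = c;
    printf("a=%d b=%d\n", a, b);   /* a=7 b=7 */

    return 0;
}
```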
I can't think of an example where the choice appears to have been made "arbitrarily".
Disclaimer: I don't know Ruby, so my examples above are based on C syntax. But I'm sure the same principles apply in Ruby.
Imagine writing everything with brackets for a century or two. You would develop a sense of which operator most often binds its values together first, and which binds last. If you then get to define the associativity of those operators, you want to define it so as to minimize the brackets needed to write formulas in easy-to-read terms: i.e. (*) binding before (+), and (-) being left-associative.
By the way, left/right-associative means the same as left/right-recursive. The word "associative" is the mathematical perspective, "recursive" the algorithmic one (see "end-recursive", and look at where you end up writing the most brackets).
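To make the bracket-counting and the recursion angle concrete, here's a small C sketch (values chosen arbitrarily; the grammar rules in the comments are the usual informal notation, not taken from the answer above) comparing the left- and right-associative readings of a chain of subtractions:

```c
#include <stdio.h>

/* A left-associative operator matches a left-recursive grammar rule:
 *     expr := expr '-' term | term
 * while a right-associative operator matches a right-recursive rule:
 *     expr := term '-' expr | term
 * With '-' left-associative, the flat expression below needs no brackets at all.
 */
int main(void) {
    int a = 100, b = 10, c = 5, d = 1;

    printf("%d\n", a - b - c - d);       /* 84 -- no brackets needed             */
    printf("%d\n", ((a - b) - c) - d);   /* 84 -- the left-associative reading   */
    printf("%d\n", a - (b - (c - d)));   /* 94 -- the right-associative reading  */
    return 0;
}
```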
Most operator associativities in computer science are nicked directly from maths, specifically symbolic logic and algebra.