Overloaded indexer with enum: impossible to use default indexer
Consider the following code:

namespace MyApp
{
    using System;
    using System.Collections.ObjectModel;

    class Program
    {
        static void Main(string[] args)
        {
            var col = new MyCollection();
            col.Add(new MyItem { Enum = MyEnum.Second });
            col.Add(new MyItem { Enum = MyEnum.First });

            var item = col[0];
            Console.WriteLine("1) Null ? {0}", item == null);

            item = col[MyEnum.Second];
            Console.WriteLine("2) Null ? {0}", item == null);

            Console.ReadKey();
        }
    }

    class MyItem { public MyEnum Enum { get; set; } }

    class MyCollection : Collection<MyItem>
    {
        // Custom indexer: looks up an item by its enum value.
        public MyItem this[MyEnum val]
        {
            get
            {
                foreach (var item in this) { if (item.Enum == val) return item; }
                return null;
            }
        }
    }

    enum MyEnum
    {
        Default = 0,
        First,
        Second
    }
}
I was surprised to see the following result:

1) Null ? True
2) Null ? False

My first expectation was that, because I was passing an int, the default indexer would be used and the first call would succeed. Instead, the overload expecting an enum is always called (even when casting 0 to int), and the test fails.
- Can someone explain this behavior to me?
- And suggest a workaround that keeps both indexers: one by index, and one by enum value?
EDIT: A workaround seems to be casting the collection to Collection<MyItem>; see this answer.
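For reference, a sketch of that workaround (reusing the MyCollection, MyItem, and MyEnum types from the question; the demo class name is made up): upcasting to Collection<MyItem> removes the derived enum indexer from overload resolution, so 0 binds to the base int indexer.

```csharp
using System;
using System.Collections.ObjectModel;

class CastWorkaroundDemo
{
    static void Main()
    {
        var col = new MyCollection();
        col.Add(new MyItem { Enum = MyEnum.Second });

        // Bound to the enum indexer: looks for MyEnum.Default, finds nothing.
        var viaEnum = col[0];
        // The upcast forces binding to Collection<T>'s int indexer: first element.
        var viaIndex = ((Collection<MyItem>)col)[0];

        Console.WriteLine("Enum indexer null? {0}", viaEnum == null);  // True
        Console.WriteLine("Int indexer null?  {0}", viaIndex == null); // False
    }
}
```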
So:

- Why does the compiler choose the most "complex" overload instead of the most obvious one (despite the fact that it is inherited)? Is the indexer treated like an ordinary method taking an int? (And why is there no warning that the parent indexer is being hidden?)
Explanation

With this code we are facing two problems:

- The constant 0 is implicitly convertible to any enum type.
- Overload resolution (at compile time, not run time) prefers applicable candidates declared in the most derived class before considering inherited members, so the enum indexer is chosen.

For more precise (and better formulated) answers, see the following links:

- original answer by James Michael Hare
- summary by Eric Lippert
The various answers here have sussed it out. To sum up and provide some links to explanatory material:
First, the literal zero is convertible to any enum type. The reason is that we want you to be able to initialize any "flags" enum to its zero value even if there is no zero enum value available. (If we had to do it all over again we'd probably not implement this feature; rather, we'd say to just use the default(MyEnum) expression if you want to do that.)

In fact, any constant zero, not just the literal zero, is convertible to any enum type. This is for backwards compatibility with a historic compiler bug that is more expensive to fix than to enshrine.
For more details, see
http://blogs.msdn.com/b/ericlippert/archive/2006/03/28/the-root-of-all-evil-part-one.aspx
http://blogs.msdn.com/b/ericlippert/archive/2006/03/29/the-root-of-all-evil-part-two.aspx
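That rule can be seen in isolation with a throwaway enum (the names here are invented for illustration):

```csharp
using System;

enum Flags { None = 0, Read = 1, Write = 2 }

class ZeroToEnumDemo
{
    static void Main()
    {
        Flags a = 0;          // compiles: the literal 0 converts to any enum type
        // Flags b = 1;       // does not compile: only zero gets this treatment
        Flags c = (Flags)1;   // any other value needs an explicit cast

        Console.WriteLine(a); // None
        Console.WriteLine(c); // Read
    }
}
```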
That then establishes that your two indexers -- one which takes an int and one which takes an enum -- are both applicable candidates when passed the literal zero. The question then is which is the better candidate. The rule here is simple: if any candidate is applicable in a derived class then it is automatically better than any candidate in a base class. Therefore your enum indexer wins.
The reason for this somewhat counter-intuitive rule is twofold. First, it seems to make sense that the person who wrote the derived class has more information than the person who wrote the base class. They specialized the base class, after all, so it seems reasonable that you'd want to call the most specialized implementation possible when given a choice, even if it is not an exact match.
The second reason is that this choice mitigates the brittle base class problem. If an indexer added to a base class happened to be a better match than one on a derived class, then code that used to choose the derived class would suddenly start choosing the base class, which would be unexpected for users of the derived class.
See
http://blogs.msdn.com/b/ericlippert/archive/2007/09/04/future-breaking-changes-part-three.aspx
for more discussion of this issue.
As James correctly points out, if you make a new indexer on your class that takes an int then the overload resolution question becomes which is better: conversion from zero to enum, or conversion from zero to int. Since both indexers are on the same type and the latter is exact, it wins.
It seems that because the enum is int-compatible, the compiler prefers the implicit conversion from int to enum and chooses the indexer that takes an enum defined in your class.

(UPDATE: The real cause turned out to be that the implicit conversion from the const int 0 to the enum is preferred over the base-class int indexer: both conversions are applicable, so the former is chosen because it belongs to the more derived type, MyCollection.)

I'm not sure why it does this, when there's clearly a public indexer with an int argument inherited from Collection<T> -- a good question for Eric Lippert, if he's watching, as he'd have a very definitive answer.
I did verify, though, that if you re-define the int indexer in your new class as follows, it will work:
public class MyCollection : Collection<MyItem>
{
    public new MyItem this[int index]
    {
        // make sure we get Collection<T>'s indexer instead
        get { return base[index]; }
    }
}
From the spec, it looks like the literal 0 can always be implicitly converted to an enum:

13.1.3 Implicit enumeration conversions
An implicit enumeration conversion permits the decimal-integer-literal 0 to be converted to any enum-type.
Thus, if you had called it as
int index = 0;
var item = col[index];
It would work because you are forcing it to choose the int indexer, or if you used a non-zero literal:
var item = col[1];
Console.WriteLine("1) Null ? {0}", item == null);
This would work, since 1 cannot be implicitly converted to the enum type.
It's still weird, I grant you, considering the indexer from Collection<T> should be just as visible. But it looks like the compiler sees the enum indexer in your subclass, knows that 0 can be implicitly converted to the enum to satisfy it, and doesn't go up the class-hierarchy chain.
This seems to be supported by section 7.4.2 Overload Resolution in the specification, which states in part:

and methods in a base class are not candidates if any method in a derived class is applicable

This leads me to believe that since the subclass indexer is applicable, the compiler doesn't even consider the base class.
In C#, the constant 0 is always implicitly convertible to any enum type. You have overloaded the indexer, so the compiler chooses the most specific overload. Note that this happens at compile time. So if you write:
int x = 0;
var item = col[x];
Now the compiler doesn't infer that x is always equal to 0 on the second line, so it chooses the original this[int index] overload. (The compiler isn't that smart :-))
In early versions of C#, only a literal 0 would be implicitly cast to an enum type. Since version 3.0, any constant expression that evaluates to 0 can be implicitly cast to an enum type. That's why even (int)0 is cast to an enum.
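A small sketch of the constant-versus-variable distinction (MyEnum as in the question; the class name is made up):

```csharp
using System;

enum MyEnum { Default = 0, First, Second }

class ConstantZeroDemo
{
    static void Main()
    {
        const int zero = 0;
        MyEnum a = zero;    // compiles: a constant expression equal to 0
        MyEnum b = (int)0;  // compiles for the same reason

        int runtimeZero = 0;
        // MyEnum c = runtimeZero;  // does not compile: not a constant expression
        MyEnum c = (MyEnum)runtimeZero; // an explicit cast works

        Console.WriteLine("{0} {1} {2}", a, b, c); // Default Default Default
    }
}
```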
Update: Extra information about the overload resolution
I always thought that overload resolution looked only at method signatures, but it also seems to prefer methods in derived classes. Consider, for example, the following code:
public class Test
{
    public void Print(int number)
    {
        Console.WriteLine("Number: " + number);
    }

    public void Print(Options option)
    {
        Console.WriteLine("Option: " + option);
    }
}

public enum Options
{
    A = 0,
    B = 1
}
This will result in the following behavior:

var t = new Test();
t.Print(0);          // "Number: 0"
t.Print(1);          // "Number: 1"
t.Print(Options.A);  // "Option: A"
t.Print(Options.B);  // "Option: B"
However, if you create a base class and move the Print(int) overload into it, then the Print(Options) overload takes precedence:
public class TestBase
{
    public void Print(int number)
    {
        Console.WriteLine("Number: " + number);
    }
}

public class Test : TestBase
{
    public void Print(Options option)
    {
        Console.WriteLine("Option: " + option);
    }
}
Now the behavior changes:

var t = new Test();
t.Print(0);          // "Option: A"
t.Print(1);          // "Number: 1"
t.Print(Options.A);  // "Option: A"
t.Print(Options.B);  // "Option: B"