Covariance and Contravariance inference in C# 4.0
When we define our interfaces in C# 4.0, we are allowed to mark each generic parameter as in or out. If we try to mark a generic parameter as out in a way that would lead to a problem, the compiler raises an error and does not allow us to do so.
Question:
If the compiler has ways of inferring what the valid uses are for both covariance (out) and contravariance (in), why do we have to mark interfaces as such? Wouldn't it be enough to just let us define the interfaces as we always did and, when we try to use them in our client code, raise an error if we use them in an unsafe way?
Example:
interface MyInterface<out T> {
    T abracadabra();
}
//works OK

interface MyInterface2<in T> {
    T abracadabra();
}
//compiler raises an error.
//This makes me think that the compiler is capable
//of understanding what situations might generate
//run-time problems and then prohibits them.
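For instance, if the in version above were allowed, something like this (a hypothetical sketch; it does not compile for exactly this reason) could blow up at run time:

class Box : MyInterface2<object> {
    public object abracadabra() { return 42; } //returns some object (a boxed int)
}

//contravariance (in T) would let a MyInterface2<object> stand in
//wherever a MyInterface2<string> is expected...
MyInterface2<string> strings = new Box();

//...but then this call would have to produce a string at run time,
//while the underlying object is a boxed int: a type hole.
string s = strings.abracadabra();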
Also, isn't this what Java does in the same situation? From what I recall, you just do something like
IMyInterface<? extends whatever> myInterface; //covariance
IMyInterface<? super whatever> myInterface2; //contravariance
Or am I mixing things up?
Thanks
If the compiler has ways of inferring what are valid uses for both covariance (out) and contravariance(in), why do we have to mark interfaces as such?
I'm not quite sure I understand the question. I think you're asking two things.
1) Can the compiler deduce the variance annotations?
and
2) Why does C# not support call-site variance like Java does?
The answer to the first is:
interface IRezrov<V, W>
{
    IRezrov<V, W> Rezrov(IRezrov<W, V> x);
}
I invite you to attempt to deduce all the possible legal variance annotations on V and W. You might get a surprise.
If you cannot figure out a unique best variance annotation for this method, why do you think the compiler can?
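Working the puzzle through (my own sketch, not part of the original answer): besides leaving V and W invariant, two opposite annotations both satisfy the variance rules, which is exactly why there is no unique answer to infer.

interface IRezrovA<out V, in W>
{
    IRezrovA<V, W> Rezrov(IRezrovA<W, V> x);
}

interface IRezrovB<in V, out W>
{
    IRezrovB<V, W> Rezrov(IRezrovB<W, V> x);
}

//In the return type each parameter is used with its declared variance, and in
//the (contravariant) parameter position the type arguments are swapped, which
//flips the requirement back again. Both annotations are legal but mean
//different things, so the compiler cannot pick one for you.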
More reasons here:
http://blogs.msdn.com/ericlippert/archive/2007/10/29/covariance-and-contravariance-in-c-part-seven-why-do-we-need-a-syntax-at-all.aspx
More generally: your question indicates fallacious reasoning. The ability to cheaply check whether a solution is correct does not logically imply that there is a cheap way of finding a correct solution. For example, a computer can easily verify whether p * q == r is true or false for two thousand-digit prime numbers p and q. That does not imply that it is easy to take r and find p and q such that the equality is satisfied. The compiler can easily check whether a variance annotation is correct or incorrect; that does not mean that it can find a correct variance annotation amongst the potentially billions of possible annotations.
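To make the analogy concrete, a tiny sketch (the primes are chosen small only so the search finishes; the point is the asymmetry between checking and finding):

using System;
using System.Numerics;

class VerifyVersusSearch
{
    static void Main()
    {
        //two primes and their product
        BigInteger p = 999983, q = 1000003, r = p * q;

        //verifying the claim is one multiplication and a comparison: cheap
        Console.WriteLine(p * q == r); //True

        //recovering p and q from r means searching for a factor; even this
        //naive trial division is already slow, and for thousand-digit numbers
        //no known method is remotely practical
        for (BigInteger candidate = 2; candidate * candidate <= r; candidate++)
        {
            if (r % candidate == 0)
            {
                Console.WriteLine(candidate + " x " + (r / candidate));
                break;
            }
        }
    }
}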
The answer to the second is: C# isn't Java.
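To spell out that terse answer a little (my gloss, not Eric's): Java's wildcards put the variance at the use site, while C# 4.0 puts it on the declaration, so a single in/out annotation covers every use. For example, with the BCL's IEnumerable<out T> and IComparer<in T>:

using System.Collections.Generic;

class DeclarationSiteVariance
{
    static void Main()
    {
        //IEnumerable<T> is declared as IEnumerable<out T>, so the covariant
        //conversion is available everywhere, with no wildcard at the use site
        IEnumerable<string> strings = new List<string> { "a", "b" };
        IEnumerable<object> objects = strings;

        //IComparer<T> is declared as IComparer<in T>: the contravariant case
        IComparer<object> general = Comparer<object>.Default;
        IComparer<string> forStrings = general;
    }
}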
OK, here is the answer to what I asked (from Eric's answer): http://blogs.msdn.com/ericlippert/archive/2007/10/29/covariance-and-contravariance-in-c-part-seven-why-do-we-need-a-syntax-at-all.aspx
First, it seems to me that variance ought to be something that you deliberately design into your interface or delegate. Making it just start happening with no control by the user works against that goal, and also can introduce breaking changes. (More on those in a later post!)
Doing so automagically also means that as the development process goes on and methods are added to interfaces, the variance of the interface may change unexpectedly. This could introduce unexpected and far-reaching changes elsewhere in the program.
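A hypothetical before/after (names are mine) of the kind of drift that quote describes:

//version 1 of a hypothetical interface: T appears only in output positions,
//so an inference scheme would quietly deduce "out T", and callers could start
//relying on conversions like IProducer<string> -> IProducer<object>
interface IProducer<T>
{
    T Produce();
}

//version 2, after routine maintenance adds one method: T now also appears in
//an input position, the inferred variance silently becomes invariant, and
//every caller that depended on the covariant conversion breaks, far from the
//line that changed (shown as a second declaration here only for comparison)
interface IProducerV2<T>
{
    T Produce();
    void Consume(T item);
}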
I decided to spell it out explicitly here because, although his link does have the answer to my question, the post itself does not.