Silverlight: parsing XML by hand vs. Reflection
How much slower will a Silverlight browser app be if I use Reflection instead of parsing each XML value into classes by hand?
UPD: Here are examples (the Reflection version will not be identical, but very similar):
Parsing by hand:
//...
IEnumerable<XElement> XMLProfiles = xmlDocument.Element("response").Elements("user");
foreach (XElement userElement in XMLProfiles)
{
this.Id = userElement.Element("UserID").Value;
this.Name = userElement.Element("someName").Value;
this.Phone= userElement.Element("mobile").Value;
// ...
this.p20 = userElement.Element("someName20").Value;
}
//...
Reflection:
//...
[XmlElement("UserID")]
public string Id { get; set; }
[XmlElement("someName")]
public string Name{ get; set; }
[XmlElement("mobile")]
public string Phone{ get; set; }
/* other params */
[XmlElement("someName20")]
public string p20 { get; set; }
//...
public void Load(XDocument data)
{
XElement XML;
Type T = this.GetType();
PropertyInfo[] PI = T.GetProperties();
foreach (PropertyInfo item in PI)
{
var AObj = item.GetCustomAttributes(false);
XML = data.Element("user").Element(((XmlElementAttribute)AObj[0]).ElementName);
object Value = Parse(item, XML.Value);
if (Value != null)
item.SetValue(this, Value, null);
}
}
OK, this is kind of silly... but yes, there are good reasons to use a serialization/deserialization technique like this. The first option is certainly faster than the second, but there is a lot to say here...
First of all, .NET already has two XML serialization mechanisms you can use, and both are likely to be faster than your approach: look into XmlSerializer and DataContractSerializer. Don't reinvent the wheel; harness what is already there.
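As a sketch of the XmlSerializer route: the element names below come from your question, while the `<user>` root element and the `User` class name are assumptions for the example.

```csharp
using System;
using System.IO;
using System.Xml.Serialization;

// Attribute-driven deserialization with the built-in XmlSerializer.
[XmlRoot("user")]
public class User
{
    [XmlElement("UserID")]
    public string Id { get; set; }

    [XmlElement("someName")]
    public string Name { get; set; }

    [XmlElement("mobile")]
    public string Phone { get; set; }
}

class Program
{
    static User DeserializeUser(string xml)
    {
        var serializer = new XmlSerializer(typeof(User));
        using (var reader = new StringReader(xml))
            return (User)serializer.Deserialize(reader);
    }

    static void Main()
    {
        User u = DeserializeUser(
            "<user><UserID>42</UserID><someName>Alice</someName><mobile>555-0100</mobile></user>");
        Console.WriteLine(u.Id + " " + u.Name);   // 42 Alice
    }
}
```

XmlSerializer does the reflection work for you (and caches the generated mapping per type), so you keep the declarative attribute style without writing your own property loop.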
Next, your own code for the second option can be made far faster by caching the metadata-based mappings: do the reflection once, then use a lookup table when deserializing.
Which brings me to my next point: if you are only doing it once, the difference is negligible; you won't notice it at all. If you are doing it thousands of times, the reflection-based technique is guaranteed to be slower... how much? Time it and find out.
Ultimately, though, if you don't make it more efficient, doing the timing is a waste of time, IMO.
BUT, an efficient use of declarative serialization/deserialization is a great approach in my experience. It greatly reduces the complexity of your code, which is worth a lot. I will go back to my first point. Don't re-invent the wheel. Use one of the two existing mechanisms if you can.
Assume that a reflection call could take up to 1ms (it should be faster than that). That means you could potentially make 1000 reflection calls per second. When you do profiling, you'll find that it's more like microseconds per call, which should be plenty fast enough for the 60 times per second that you seem to have specified.
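To put numbers behind that, here is a rough micro-benchmark sketch comparing a reflected property write against a direct assignment. The `Sample` class and iteration count are arbitrary, and the absolute timings will vary by machine and runtime.

```csharp
using System;
using System.Diagnostics;
using System.Reflection;

class Sample
{
    public string Name { get; set; }
}

class Timing
{
    static void Main()
    {
        var obj = new Sample();
        PropertyInfo prop = typeof(Sample).GetProperty("Name");
        const int N = 100000;

        // Reflection-based write.
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < N; i++)
            prop.SetValue(obj, "value", null);
        sw.Stop();
        Console.WriteLine("Reflection: {0:F3} us/call",
            sw.Elapsed.TotalMilliseconds * 1000.0 / N);

        // Direct write, for comparison.
        sw = Stopwatch.StartNew();
        for (int i = 0; i < N; i++)
            obj.Name = "value";
        sw.Stop();
        Console.WriteLine("Direct:     {0:F3} us/call",
            sw.Elapsed.TotalMilliseconds * 1000.0 / N);
    }
}
```

Expect the reflected write to be slower by a large constant factor per call, but still in the microsecond range, which is the point: at tens of loads per second it simply doesn't matter.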
If you rewrite what you have, you can get rid of most of the time spent on reflection:
Dictionary<PropertyInfo, object[]> PI;
public void InitClass()
{
PI = new Dictionary<PropertyInfo, object[]>();
Type T = this.GetType();
PropertyInfo[] propInfos = T.GetProperties();
foreach (PropertyInfo info in propInfos)
{
var AObj = info.GetCustomAttributes(false);
PI.Add(info, AObj);
}
}
public void Load(XDocument data)
{
XElement XML;
PropertyInfo item;
object[] AObj;
foreach (var keyValPair in PI)
{
item = keyValPair.Key;
AObj = keyValPair.Value;
XML = data.Element("user").Element(((XmlElementAttribute)AObj[0]).ElementName);
object Value = Parse(item, XML.Value);
if (Value != null)
item.SetValue(this, Value, null);
}
}
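For reference, here is the cached approach consolidated into one self-contained sketch. The `Profile` class name and the sample XML root are assumptions, and instead of your `Parse` helper it simply assigns the string value; it also skips properties that carry no `XmlElement` attribute, which the original loop would trip over.

```csharp
using System;
using System.Collections.Generic;
using System.Reflection;
using System.Xml.Linq;
using System.Xml.Serialization;

public class Profile
{
    [XmlElement("UserID")]
    public string Id { get; set; }

    [XmlElement("someName")]
    public string Name { get; set; }

    // Built once; maps each property to its XmlElement attribute.
    private Dictionary<PropertyInfo, XmlElementAttribute> map;

    public void InitClass()
    {
        map = new Dictionary<PropertyInfo, XmlElementAttribute>();
        foreach (PropertyInfo info in GetType().GetProperties())
        {
            var attrs = info.GetCustomAttributes(typeof(XmlElementAttribute), false);
            if (attrs.Length > 0)
                map.Add(info, (XmlElementAttribute)attrs[0]);
        }
    }

    public void Load(XDocument data)
    {
        // No reflection metadata lookups here; just the cached map.
        foreach (var pair in map)
        {
            XElement element = data.Element("user").Element(pair.Value.ElementName);
            if (element != null)
                pair.Key.SetValue(this, element.Value, null);
        }
    }
}

class Demo
{
    static void Main()
    {
        var profile = new Profile();
        profile.InitClass();   // reflect once
        profile.Load(XDocument.Parse(
            "<user><UserID>42</UserID><someName>Alice</someName></user>"));
        Console.WriteLine(profile.Id + " " + profile.Name);   // 42 Alice
    }
}
```

After `InitClass`, every subsequent `Load` call only pays for `SetValue`, not for attribute discovery.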