Why does formatting a DateTime as a string truncate and not round the milliseconds?
When a Double is formatted as a string, rounding is used. E.g.
Console.WriteLine(12345.6.ToString("F0"));
outputs
12346
However, when a DateTime is formatted as a string, truncation is used. E.g.
var ci = CultureInfo.InvariantCulture;
var dateTime = DateTime.Parse("2011-09-14T15:18:42.999", ci);
Console.WriteLine(dateTime.ToString("o", ci));
Console.WriteLine(dateTime.ToString("s", ci));
Console.WriteLine(dateTime.ToString("yyyy-MM-hhThh:mm:ss.f", ci));
outputs
2011-09-14T15:18:42.9990000
2011-09-14T15:18:42
2011-09-14T15:18:42.9
What is the reasoning (if any) behind this behavior?
Rounding to the nearest second can be achieved by adding half a second before formatting as a string:
var ci = CultureInfo.InvariantCulture;
var dateTime = DateTime.Parse("2010-12-31T23:59:59.999", ci);
Console.WriteLine(dateTime.ToString("s", ci));
var roundedDateTime = dateTime.AddMilliseconds(500);
Console.WriteLine(roundedDateTime.ToString("s", ci));
outputs
2010-12-31T23:59:59
2011-01-01T00:00:00
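More generally, the same trick extends to any unit. Here is a minimal sketch, assuming a hypothetical RoundToNearest extension method (not part of the framework) that rounds with integer tick arithmetic:
static class DateTimeRounding
{
    // Hypothetical helper: round to the nearest multiple of the given interval.
    public static DateTime RoundToNearest(this DateTime value, TimeSpan interval)
    {
        long half = interval.Ticks / 2;
        long ticks = (value.Ticks + half) / interval.Ticks * interval.Ticks;
        return new DateTime(ticks, value.Kind);
    }
}
With that in place, dateTime.RoundToNearest(TimeSpan.FromSeconds(1)).ToString("s", ci) also prints 2011-01-01T00:00:00.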
This is a bit subjective, but I would say that rounding date and time values, as opposed to truncating them, would result in more unexpected behavior.
For example, rounding new DateTime(2011, 1, 1, 23, 59, 59, 999)
would roll over into a new day entirely. That seems much stranger than simply truncating the value.
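To make this concrete with the half-second trick from the question:
var ci = CultureInfo.InvariantCulture;
var endOfDay = new DateTime(2011, 1, 1, 23, 59, 59, 999);
Console.WriteLine(endOfDay.AddMilliseconds(500).ToString("s", ci));
outputs
2011-01-02T00:00:00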
Old question, but it has been referred to from a newer one, and the existing answers discuss reasons for rounding or not (which are of course valid) but leave the question itself unanswered.
The reason for not rounding is that ToString
just prints the date/time parts you ask for.
Thus, for example, it will not round to the nearest minute either:
Console.WriteLine(dateTime.ToString("yyyy-MM-hhThh:mm", ci));
Output:
2011-09-03T03:18
With no parameter, ToString uses the default date/time format of the current culture.
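To illustrate the same point at several units, a quick sketch (each format simply drops everything below the smallest part requested):
var ci = CultureInfo.InvariantCulture;
var dateTime = DateTime.Parse("2011-09-14T15:18:42.999", ci);
Console.WriteLine(dateTime.ToString("yyyy-MM-ddTHH", ci));
Console.WriteLine(dateTime.ToString("yyyy-MM-ddTHH:mm", ci));
Console.WriteLine(dateTime.ToString("yyyy-MM-ddTHH:mm:ss", ci));
outputs
2011-09-14T15
2011-09-14T15:18
2011-09-14T15:18:42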
At the final unit of measurement, if events occur at roughly that frequency, rounding cuts way down on aliasing.
For example, if jitter makes one frame of once-per-second data arrive 2 ms into a second and the next frame arrive 990 ms into the same second, truncation stamps both as belonging to the same second. A jitter of only a few milliseconds thus produces many scattered non-unique key values.
Rounding would place them cleanly in separate seconds until the jitter became much more severe, say +/- 499 ms.
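A sketch of that scenario, using hypothetical frame times built on the question's example date:
var ci = CultureInfo.InvariantCulture;
var second = new DateTime(2011, 9, 14, 15, 18, 42);
var frameA = second.AddMilliseconds(2);   // nominal tick, arriving 2 ms late
var frameB = second.AddMilliseconds(990); // next nominal tick, arriving 10 ms early
// Truncation: both frames collide on the same one-second key.
Console.WriteLine(frameA.ToString("s", ci));
Console.WriteLine(frameB.ToString("s", ci));
// Rounding (add half the unit first): the frames land in adjacent seconds.
Console.WriteLine(frameA.AddMilliseconds(500).ToString("s", ci));
Console.WriteLine(frameB.AddMilliseconds(500).ToString("s", ci));
outputs
2011-09-14T15:18:42
2011-09-14T15:18:42
2011-09-14T15:18:42
2011-09-14T15:18:43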
The purpose of rounding is to stop the resolution from extending indefinitely. When the uncertainty is well below the resolution, rounding cuts aliasing tremendously.
"Cascading" can only occur at less than the resolution of the boundary. For example, a toggling year number seems shocking, but this can only occur at less than a second (or millisecond, etc) from midnight New Year's. Nothing unexpected or especially inaccurate to that.
To truly prevent aliasing (the same timestamp appearing twice), you need to implement anti-aliasing (as is done in graphics) after rounding.