
What's the point of a width smaller than a precision in printf()?

I came across some code with a line looking like:

fprintf(fd, "%4.8f", ptr->myFlt);

Not working with C++ much these days, I read the docs on printf and its ilk, and learned that in this case 4 is the "width" and 8 is the "precision". Width was defined as the minimum number of characters occupied by the output, padded with leading blanks if need be.

That being the case, I can't understand what the point of a template like "%4.8f" would be, since the 8 decimals after the point (zero-padded if necessary) already ensure that the width of 4 is met and exceeded. So, I wrote a little program, in Visual C++:

// Formatting width test

#include "stdafx.h"

int _tmain(int argc, _TCHAR* argv[])
{
    printf("Need width when decimals are smaller: >%4.1f<\n", 3.4567);
    printf("Seems unnecessary when decimals are greater: >%4.8f<\n", 3.4567);
    printf("Doesn't matter if argument has no decimal places: >%4.8f<\n", (float)3);

    return 0;
}

which gives the following output:

Need width when decimals are smaller: > 3.5<
Seems unnecessary when decimals are greater: >3.45670000<
Doesn't matter if argument has no decimal places: >3.00000000<

In the first case, the precision is less than the width specified, and in fact a leading space is added. When the precision is greater, however, the width seems redundant.

Is there a reason for a format like that?


The width format specifier only affects the output if the total width of the printed number is less than the specified width. That can never happen when the precision is set greater than or equal to the width: the precision alone already guarantees at least that many characters after the decimal point, plus the point itself and at least one digit before it. So, the width specification is useless in this case.

Here's an article from MSDN; the last sentence explains it.

A nonexistent or small field width does not cause the truncation of a field; if the result of a conversion is wider than the field width, the field expands to contain the conversion result.
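
To see that rule in action, here is a minimal sketch (the values and widths are picked arbitrarily): with a precision of 8 the converted result is already at least ten characters wide, so a width of 4 can never pad it, while a sufficiently large width still can.

#include <stdio.h>

int main(void)
{
    /* Precision 8 makes the result at least 10 characters
       ("d.dddddddd"), so a width of 4 never triggers padding... */
    printf(">%4.8f<\n", 3.4567);   /* prints >3.45670000< */

    /* ...but a width wider than the converted result still pads. */
    printf(">%15.8f<\n", 3.4567);  /* prints >     3.45670000< */

    return 0;
}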


Perhaps it's a mistake by the programmer? Perhaps they swapped width and precision and meant %8.4f, or they actually intended %12.8f, or even %012.8f.

See codepad sample:

#include <stdio.h>

int main()
{
    printf("Seems unnecessary when decimals are greater: >%4.8f<\n", 3.4567);
    printf("Seems unnecessary when decimals are greater: >%8.4f<\n", 3.4567);
    printf("Seems unnecessary when decimals are greater: >%12.4f<\n", 3.4567);
    printf("Seems unnecessary when decimals are greater: >%012.4f<\n", 3.4567);

    return 0;
}

Output

Width smaller than precision:  >3.45670000<
Width and precision swapped:   >  3.4567<
Larger width pads with spaces: >      3.4567<
Zero flag pads with zeros:     >0000003.4567<


Probably just a guess, but: the precision caps the number of decimals printed, so it won't be exceeded even if the value has more decimals. Likewise, the width prevents your number from consuming less space than it should. If you think of some kind of table of numbers, you can only achieve uniform columns when each entry in a column has the same width, regardless of the number it contains.

So precision would be needed in a price-like format such as 10.00€, where you always want exactly two decimals.
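
As a minimal sketch of that idea (the prices and the column width of 8 are made up for illustration), "%8.2f" prints every value with exactly two decimals and pads it to eight characters, so the column lines up:

#include <stdio.h>

int main(void)
{
    /* A hypothetical price column: width 8 keeps the entries
       aligned, precision 2 always prints exactly two decimals. */
    double prices[] = { 10.0, 3.5, 1234.567 };
    for (int i = 0; i < 3; i++)
        printf("%8.2f EUR\n", prices[i]);
    return 0;
}

which prints:

   10.00 EUR
    3.50 EUR
 1234.57 EUR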

For your specific line: I feel the same as you about the redundancy of the width specifier in this special case.
