Minimum buffer length to read a float

I'm writing a small command-line program that reads two floats, an int, and a small string (4 chars max) from stdin. I'm trying to figure out the buffer size I should create and pass to fgets. I figured I could calculate it from the number of digits in the maximum values of float and int respectively, like so:

#include <float.h>
#include <limits.h>
#include <math.h>

...

int fmax = log10(FLT_MAX) + 2;       // Digits plus - and .
int imax = log10(INT_MAX) + 1;       // Digits plus -
int buflen = 4 + 2*fmax + imax + 4;  // 4 chars, 2 floats, 1 int, 3 spaces and \n

...

fgets(inbuf, buflen + 1, stdin);

But it's occurred to me that this might not actually be correct. imax ends up being 10 on my system, which seems a bit low, while fmax is 40 (which I'm thinking is a bit high, given that longer values may be represented in e notation).

So my question is: is this the best way to work this out? Is this even necessary? It just feels more elegant than assigning a buffer of 256 and assuming it'll be enough. Call it a matter of pride ;P.


This type of thing is a place where I would actually use fscanf rather than reading into a fixed-size buffer first. If you need to make sure you don't skip a newline or other meaningful whitespace, you can use fgetc to process character-by-character until you get to the beginning of the number, then ungetc before calling fscanf.
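
For instance, a minimal sketch of that approach (my own illustration; the helper name read_float and the choice to stop at a newline are assumptions, not part of the answer):

#include <ctype.h>
#include <stdio.h>

/* Skip spaces and tabs by hand so a newline stays visible, then push the
   first interesting character back and let fscanf do the parsing. */
int read_float(FILE *fp, float *out)
{
    int c;
    while ((c = fgetc(fp)) != EOF && c != '\n' && isspace(c))
        ;                               /* consume leading blanks */
    if (c == EOF || c == '\n')
        return 0;                       /* nothing to parse on this line */
    ungetc(c, fp);
    return fscanf(fp, "%f", out) == 1;  /* 1 on success */
}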

If you want to be lazy though, just pick a big number like 1000...


The maximum base-10 exponent is defined for each floating-point type (#include <float.h>, or the equivalent member of std::numeric_limits<float_type>):

FLT_MAX_10_EXP // for float
DBL_MAX_10_EXP // for double
LDBL_MAX_10_EXP // for long double

As is the maximum precision for decimals in base 10:

FLT_DIG // for float
DBL_DIG // for double
LDBL_DIG  // for long double

Although it really depends on what you define to be a valid floating point number. You could imagine someone expecting:

00000000000000000000000000000000000000000000000000.00000000000000000000

to be read in as zero.
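
Combining the two macros gives a rough worst-case width for plain %f output. A minimal sketch, assuming IEEE-754 single precision and ignoring e notation and pathological leading zeros like the case above:

#include <float.h>
#include <stdio.h>

int main(void)
{
    /* sign + integer digits (FLT_MAX_10_EXP + 1 of them) + '.'
       + fraction digits (FLT_DIG happens to equal %f's default of 6) */
    int fmax = 1 + (FLT_MAX_10_EXP + 1) + 1 + FLT_DIG;
    printf("worst-case plain-notation width for float: %d\n", fmax);
    return 0;
}

On a typical system this prints 47, which happens to agree with the brute-force result in the next answer.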


I'm sure there's a good way to determine the maximum length of a float string algorithmically, but what fun is that? Let's figure it out by brute force!

#include <stdio.h>
#include <string.h>
#include <stdlib.h>

int main(void)
{
   float f;
   unsigned int i = -1;  /* wraps to UINT_MAX: start from the all-ones bit pattern */
   if (sizeof(f) != sizeof(i))
   {
      printf("Oops, wrong size!  Change i to a long or whatnot so the sizes match.\n");
      return 0;
   }
   printf("sizeof(float)=%li\n", sizeof(float));

   char maxBuf[256] = "";
   int maxChars = 0;
   while(i != 0)
   {
      char buf[256];
      memcpy(&f, &i, sizeof(f));
      sprintf(buf, "%f", f);
      if ((i%1000000)==0) printf("Calculating @ %u: buf=[%s] maxChars=%i (maxBuf=[%s])\n", i, buf, maxChars, maxBuf);
      int numChars = strlen(buf);
      if (numChars > maxChars)
      {
         maxChars = numChars;
         strcpy(maxBuf, buf);
      }
      i--;
   }
   printf("Max string length was [%s] at %i chars!\n", maxBuf, maxChars);
}

Looks like the answer might be 47 characters per float (at least on my machine), but I'm not going to let it run to completion, so it's possibly more.


Following on from @MSN's answer: you can't really know your buffer is large enough.

Consider:

#include <cstdlib>
#include <iostream>

int main()
{
    const int size = 4096;
    char buf[size] = "1.";
    buf[size - 1] = '\0';
    for (int i = 2; i != size - 1; ++i)
        buf[i] = '0';
    double val = std::atof(buf);
    std::cout << buf << std::endl;
    std::cout << val << std::endl;
}

Here atof() handles, as it is supposed to, a representation of 1 that is several thousand characters long.

So really, you can do one or more of:

  • Handle the case of not having a large enough buffer (a sketch follows this list)
  • Have better control over the input file
  • Use fscanf directly, to make the buffer size someone else's problem
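
For the first option, a minimal sketch of truncation detection (my own illustration; read_line is an assumed helper name): if fgets fills the buffer without storing a newline, the line didn't fit.

#include <stdio.h>
#include <string.h>

/* Returns 1 for a complete line, 0 if the line was truncated
   (no newline fit in the buffer), -1 on EOF or error. */
int read_line(char *buf, int len, FILE *fp)
{
    if (!fgets(buf, len, fp))
        return -1;
    if (!strchr(buf, '\n') && !feof(fp))
        return 0;
    return 1;
}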