For loop not terminating in C
I am writing some code and I am getting a strange error: my for loop does not seem to exit when the loop condition becomes false. The code is as follows:
static void wstrcpy_from_Py_UNICODE(Py_UNICODE *inBuf, Py_ssize_t strLength, wchar_t **outBuf)
{
    if (strLength == 0) *outBuf = NULL;
    else
    {
        Py_ssize_t i;
        wprintf(L"String Length: %d\n", strLength);
        *outBuf = (wchar_t *)malloc(sizeof(wchar_t) * (strLength + 1));
        for (i = 0; i < strLength; i++)
        {
            wprintf("i:%d, strLength:%d\n", i, strLength);
            (*outBuf)[i] = (wchar_t)(inBuf[i]);
            wprintf(L"i < strLength: %d\n\n", i < strLength);
        }
        /* Make sure new string is zero terminated */
        (*outBuf)[i] = L'\0';
    }
}
When running this code with an example input (the Py_UNICODE * buffer points to the internal buffer of the Python unicode object created with u"example"), I get the following output:
String Length: 7
i:0, strLength: 7
i < strLength: 1
i:1, strLength: 7
i < strLength: 1
i:2, strLength: 7
i < strLength: 1
i:3, strLength: 7
i < strLength: 1
i:4, strLength: 7
i < strLength: 1
i:5, strLength: 7
i < strLength: 1
i:6, strLength: 7
i < strLength: 1
i:7, strLength: 7
i < strLength: 1
i:8, strLength: 7
i < strLength: 1
...
The loop doesn't exit until the Python interpreter the code is running in (I am wrapping a C module for Python) crashes.
The wprintf calls were added for debugging.
I am compiling this on Mac OS X 10.6; here are the commands I am using to compile:
gcc -c source.c -I/usr/include/python2.6 -I/usr/lib/python2.6
ld -bundle -flat_namespace -undefined suppress -o out.so source.o -F./ -framework some_framework -macosx_version_min 10.6 -rpath ./
As you can see, I am linking in the framework that I am writing the Python wrapper for. That is not the problem: I can call the functions that use the linked framework just fine, and the problem only appears when I call the function that uses the helper function shown above.
Am I being stupid here and doing something very basic wrong, or is something wrong with the compiler? Any help would be much appreciated!
I think this mostly has to do with number precision. If Py_ssize_t is a 64-bit type, the value may have the form 0xffffffff00000008 (perhaps because of a previous calculation involving values of the wrong precision, or because signed and unsigned values were mixed). Read as a 32-bit int, its result is 8, but read as a 64-bit value it is either a very small negative number (signed) or a very big positive number (unsigned). Try changing the wprintf format to print a long decimal (%ld) and see what gets printed, or debug your code with gdb to see the number at its real size.
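For illustration only, here is a minimal standalone sketch (not the asker's code; the value 0xffffffff00000008 is just the made-up example from above) of how a 32-bit view of such a 64-bit value can look like 8 while the full value is something else entirely:
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void)
{
    /* Hypothetical 64-bit value: upper 32 bits garbage, lower 32 bits = 8 */
    uint64_t bogus = 0xffffffff00000008ULL;

    /* (int)bogus keeps only the low 32 bits, so this prints 8 --
       the same thing a 32-bit read of the value would see */
    printf("as int   : %d\n", (int)bogus);

    /* Printed at full 64-bit width, the value is nothing like 8 */
    printf("unsigned : %" PRIu64 "\n", bogus);          /* very big positive number */
    printf("signed   : %" PRId64 "\n", (int64_t)bogus); /* very small (negative) number */
    return 0;
}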
Can you try using an int i and an int strLength?
I don't know the Py_ssize_t type, but the implicit cast with the %d in printf might hide the issue.
What's a Py_ssize_t?
printf("sizeof (int) is %d\n", (int)sizeof (int));
printf("sizeof (Py_ssize_t) is %d\n", (int)sizeof (Py_ssize_t));
Other than calling wprintf with a char * (once), you're using the "%d" specifier for values of type Py_ssize_t; if Py_ssize_t is wider than int, that mismatch between format specifier and argument is undefined behavior.
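A minimal sketch of how those two debug lines inside the loop could be written instead, using a wide string literal and casting the Py_ssize_t values to long so they match the %ld specifier (the rest of the function is assumed unchanged):
wprintf(L"i:%ld, strLength:%ld\n", (long)i, (long)strLength); /* wide literal, %ld matches long */
wprintf(L"i < strLength: %d\n\n", i < strLength);             /* a comparison already yields an int */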