
Why does the Win32-API have so many custom types?

I'm new to the Win32 API, and the many new types are starting to confuse me.

Some functions take 1-2 ints and 3 UINTs as arguments.

  • Why can't they just use ints? What are UINTs?

Then, there are those other types:

DWORD LPCWSTR LPBOOL 
  • Again, I think the "primitive" C types would be enough - why introduce 100 new types?

This one was a pain: WCHAR*

I had to iterate through it and push_back every character into a std::string, as there didn't seem to be any other way to convert it to one. Horrible.


The Windows API was first created back in the 1980s, and has had to support several different CPU architectures and compilers over the years. They've gone from single-user, single-process standalone systems to networked, multi-user, multi-core, security-conscious systems. They had to work around issues with 16-bit vs. 32-bit processors, and now 64-bit processors. They had to work around issues with pre-ANSI C compilers. They had to support C++ compilers in the early, unstandardized times. They had to deal with segmented memory. They had to support internationalization before Unicode existed. They had to support some source-level compatibility with MS-DOS, with OS/2, and with Mac OS. They've had to run on several generations of Intel chips, and on PowerPC, MIPS, Alpha, and ARM. The same basic API is used for desktop, server, mobile, and embedded systems.

Back in the 1980s, C was considered a high-level language (yes, really!), and many people considered it good form to use abstract types rather than specifying everything as a primitive int, char, or void *. Back when we didn't have IntelliSense, infotips, code browsers, online documentation, and the like, such usage hints were helpful, and they made it easier to port code between different compilers and different programming languages.

Yes, it looks like a horrible mess now, but that doesn't mean anybody did anything wrong.


Win32 actually has very few primitive types. What you're looking at is decades of built-up #defines and typedefs and Hungarian notation. Because there were so few types and little or no IntelliSense, developers gave themselves "clues" as to what a particular type was actually supposed to represent.

For example, there is no boolean type, but there is an "aliased" representation of an integer that tells you a particular variable is supposed to be treated as a boolean. Take a look at the contents of WinDef.h to see what I mean.

You can take a look here: http://msdn.microsoft.com/en-us/library/aa383751(VS.85).aspx for a peek at the veritable tip of the iceberg. For example, notice how HANDLE is the base typedef for every other object that is a "handle" to a Windows object. Of course, HANDLE itself is defined elsewhere in terms of a primitive type (a void * in WinNT.h).
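For a taste, here is roughly how a few of those types bottom out in C primitives (paraphrased from WinDef.h and WinNT.h; the exact definitions vary across SDK versions and target architectures):

    // Paraphrased from WinDef.h / WinNT.h -- not the literal SDK text.
    typedef int           BOOL;    // an int treated as a boolean by convention
    typedef unsigned int  UINT;    // plain unsigned int
    typedef unsigned long DWORD;   // 32-bit unsigned ("double word")
    typedef wchar_t       WCHAR;   // 16-bit wide (UTF-16) character
    typedef const WCHAR  *LPCWSTR; // "Long Pointer to Const Wide STRing"
    typedef BOOL         *LPBOOL;  // pointer to a BOOL
    typedef void         *HANDLE;  // opaque reference to a kernel object

So a DWORD is just an unsigned long, and LPCWSTR is just a const wchar_t * wearing a Hungarian-notation name tag.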


UINT is an unsigned integer. If a parameter value cannot or should not be negative, it makes sense to specify unsigned. LPCWSTR is a pointer to a const wide-character string, while WCHAR * is the non-const equivalent.

You should probably compile your app for UNICODE when working with wide chars, or use a conversion routine to convert between narrow and wide strings; see the sketch after the links below.
http://msdn.microsoft.com/en-us/library/dd319072%28VS.85%29.aspx

http://msdn.microsoft.com/en-us/library/dd374083%28v=VS.85%29.aspx
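As for the WCHAR * pain mentioned in the question: rather than push_back-ing one character at a time (which silently mangles anything outside ASCII), the usual approach is the two-call WideCharToMultiByte pattern. A minimal sketch (the helper name WideToUtf8 is my own, not part of the API):

    #include <windows.h>
    #include <string>

    // Convert a null-terminated wide string to a UTF-8 std::string.
    // The first call asks the API how big the output buffer must be;
    // the second call performs the actual conversion.
    std::string WideToUtf8(const WCHAR* wide)
    {
        if (wide == nullptr || *wide == L'\0')
            return std::string();

        // A source length of -1 means "process through the terminating
        // null", so the returned size includes room for the terminator.
        int size = WideCharToMultiByte(CP_UTF8, 0, wide, -1,
                                       nullptr, 0, nullptr, nullptr);
        if (size <= 0)
            return std::string();

        std::string result(size, '\0');
        WideCharToMultiByte(CP_UTF8, 0, wide, -1,
                            &result[0], size, nullptr, nullptr);
        result.resize(size - 1); // drop the trailing null the API wrote
        return result;
    }

MultiByteToWideChar works the same way in the opposite direction.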


A coworker of mine would say, "There is no problem that can't be solved (or obfuscated?) by a level of indirection." In Win32 you'll be dealing with WCHAR, UINT, and so on, and you'll get used to it. When you deploy that DLL, you won't have to worry about which basic type a WCHAR or UINT compiles down to: it will "just work".

It is best to read through some of the documentation to get used to it, especially the "wide char" support (WCHAR, etc.). There's a nice definition of WCHAR on MSDN.
