DirectX versus OpenGL Game Development

I have a few questions regarding what graphics platforms are used when making professional games (like WoW, StarCraft II, Diablo III, etc.).

First, would WoW look different if rendered in OpenGL, rather than Direct X? Would performance be significantly different if rendered in OpenGL?

Second, what is the main process when creating a game, specifically in terms of using a model created in Maya or 3ds Max in the video game? Does one create the model, then export it to readable OpenGL/DirectX code, with all of the shader programs exported and all vertices and triangles created in code, and finally use that code in a C/C++ program?

Overall, I'm curious how video game designers/programmers export actual 3ds Max/Maya models into their code.


First, would WoW look different if rendered in OpenGL, rather than Direct X?

It can be rendered in OpenGL (there is a setting for it). And no, it doesn't look different from the D3D rendering. (Note: DirectX is more than just Direct3D.)

Would performance be significantly different if rendered in OpenGL?

Unlikely. There are some things that OpenGL implementations can be faster at, but WoW doesn't use any of them.

Does one create the model, then export the model to readable OpenGL/DirectX code

Neither OpenGL nor Direct3D has a model format. Even .x files are deprecated at this point; those were part of the D3DX library, which was just a Microsoft-written layer on top of D3D.

Second, you don't export them into "code". You export them into files, which you then read in your game. Well, generally, you process those files into more digestible formats for your game, as exporters tend to optimize meshes differently from what you might want in your game.

So it's export to file -> process to final file -> read in game.
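
To make that "read in game" step concrete, here is a minimal C++ sketch, assuming a hypothetical processed binary layout (a vertex count followed by tightly packed x, y, z floats); real engine formats also carry normals, UVs, materials, skinning data, and so on.

#include <cstdint>
#include <cstdio>
#include <vector>

struct Vec3 { float x, y, z; };

// Read a hypothetical processed mesh file: a uint32 vertex count,
// then count * 3 floats. Because the offline processing step already
// arranged the data this way, loading is a single bulk read.
std::vector<Vec3> loadProcessedMesh(const char* path)
{
    std::vector<Vec3> vertices;
    std::FILE* f = std::fopen(path, "rb");
    if (!f) return vertices;

    std::uint32_t count = 0;
    if (std::fread(&count, sizeof(count), 1, f) == 1) {
        vertices.resize(count);
        std::fread(vertices.data(), sizeof(Vec3), count, f);
    }
    std::fclose(f);
    return vertices;   // ready to hand off to the renderer
}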


First, would WoW look different if rendered in OpenGL, rather than Direct X? Would performance be significantly different if rendered in OpenGL?

OpenGL and Direct3D follow similar principles: a stream of vertices goes in and gets transformed (either by fixed function or by a vertex shader); after transformation the vertices are grouped into primitives (points, lines, triangles) and the primitives are rasterized, i.e. for each pixel a primitive covers, some calculation (again either fixed function or a freely programmable fragment shader) is performed that determines the so-called "fragment" color. Finally the fragment is tested for visibility and blended into the picture in progress.

Later versions of OpenGL and DirectX added geometry and tessellation stages, but those are just details.

In all other aspects the ways they work are so similar that there's virtually no difference between rendering results, except for the rounding errors introduced on the vertex positions by the different clip space mappings.
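
For illustration, this is roughly what the two programmable stages described above look like in GLSL, embedded here as C++ string literals; uMvp, aPosition, and the constant orange color are made-up placeholder names, not taken from any particular game.

// Vertex stage: transforms each incoming vertex into clip space.
const char* vertexShaderSrc = R"glsl(
#version 330 core
layout(location = 0) in vec3 aPosition;   // the incoming vertex stream
uniform mat4 uMvp;                        // model-view-projection matrix
void main() {
    gl_Position = uMvp * vec4(aPosition, 1.0);
}
)glsl";

// Fragment stage: runs once per covered pixel and produces the
// "fragment" color that is then tested and blended into the image.
const char* fragmentShaderSrc = R"glsl(
#version 330 core
out vec4 fragColor;
void main() {
    fragColor = vec4(1.0, 0.5, 0.2, 1.0);
}
)glsl";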

Second, what is the main process when creating a game, specifically in terms of using a model created in Maya or 3ds Max in the video game? Does one create the model, then export it to readable OpenGL/DirectX code, with all of the shader programs exported and all vertices and triangles created in code, and finally use that code in a C/C++ program?

No! The model and its auxiliary data are stored separately from the code. While it most certainly is possible to store geometry in code, this is a rather bad idea and should not be done. I explained the basic idea of how a model is loaded in How to Import models from 3D software, like Maya into OpenGL?

Overall, I'm curious how video game designers/programmers export actual 3ds Max/Maya models into their code.

They don't. They export their models into a file format optimized for efficient loading by a 3D engine, and then the engine loads the models and auxiliary data from files. This is not some kind of black voodoo magic; the whole process is not very different from loading an image from a file and displaying it, or loading some music and playing it through the speakers. To animate a 3D model, the model contains additional control information (armatures, bones, etc.) which allows it to be deformed based on a rather small number of parameters (arm rotation, looking direction, etc.); those parameters are then varied over time.
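
As a rough sketch of that last idea, here is a deliberately simplified 2D bone hierarchy in C++, where one angle per bone is the animation parameter being varied; the names and the 2D simplification are mine, not from any particular engine.

#include <cstddef>
#include <vector>

struct Bone {
    int   parent;   // index of the parent bone, -1 for the root
    float angle;    // the animation parameter that gets varied
    float length;   // fixed bone length, part of the model data
};

// Compute each bone's absolute angle by accumulating its ancestors':
// changing one parameter (say, the upper arm's angle) automatically
// moves everything attached below it.
std::vector<float> worldAngles(const std::vector<Bone>& bones)
{
    std::vector<float> result(bones.size());
    for (std::size_t i = 0; i < bones.size(); ++i) {
        float a = bones[i].angle;
        for (int p = bones[i].parent; p != -1; p = bones[p].parent)
            a += bones[p].angle;   // a child rotates with its parent
        result[i] = a;
    }
    return result;
}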


You're asking a large question, but to answer at least the "second" part of it: nontrivial models are created in software packages as you describe, but they're then usually exported into data files that basically represent lists of geometry. Those files are then read in by the game's resource loader and put into vertex arrays, buffer objects, and so on. This way the data is independent of the code.

Hardcoded geometry is usually used for small, fixed things, and for tutorial projects. :)
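
For instance, a minimal sketch of that last loading step in OpenGL might look like this (assuming a context and an extension loader such as GLEW are already set up; the function name is just illustrative):

#include <GL/glew.h>   // or another loader (glad, ...); plain GL headers may lack these entry points
#include <vector>

// Upload positions that were read from a data file into a buffer object.
GLuint uploadVertices(const std::vector<float>& positions)
{
    GLuint vbo = 0;
    glGenBuffers(1, &vbo);               // create the buffer object
    glBindBuffer(GL_ARRAY_BUFFER, vbo);  // make it the current array buffer
    glBufferData(GL_ARRAY_BUFFER,        // copy the file's data to the GPU
                 positions.size() * sizeof(float),
                 positions.data(),
                 GL_STATIC_DRAW);
    return vbo;
}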


Your first question is not really answerable: both graphics libraries give me access to the underlying graphics hardware through an API, and how the game looks depends largely on how that API is used. If I set the backbuffer to red in OpenGL, it will look the same as if I set the backbuffer to red in DirectX. If I do the same lighting calculations in OpenGL that I do in DirectX, I will see the same results, and so on. Any differences that appear are likely the result of the APIs not being identical and the programmers working around those differences.
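
To make the backbuffer example concrete, here is that clear-to-red in both APIs; the visible result is identical. (The Direct3D 9 device is assumed to have been created already, and this is Windows-only code.)

#include <GL/gl.h>
#include <d3d9.h>

// OpenGL: set the clear color, then clear the color buffer.
void clearRedGL()
{
    glClearColor(1.0f, 0.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
}

// Direct3D 9: one call does both.
void clearRedD3D9(IDirect3DDevice9* device)
{
    device->Clear(0, NULL, D3DCLEAR_TARGET,
                  D3DCOLOR_XRGB(255, 0, 0), 1.0f, 0);
}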

Models from a 3D modeling program (such as Maya or 3ds Max) are not 'coded'; they are exported into a format that the game can understand. These can be XML-based formats such as COLLADA, text-based formats such as OBJ, or even proprietary binary formats. What is important is that the data from the modeling program is made available to the game in a format which it can read in and convert into an internal representation of the geometry. Pretty much every game engine handles geometry in a slightly different fashion.

Technically, I could just save the following file:

V1  0.0, 0.5, 0.5
V2 -0.5,-0.5, 0.5
V3  0.5, 0.5, 0.5

M1 V1, V2, V3

As long as the game understood that this file contained three vertices (V1, V2, V3) and one mesh (M1) consisting of those three vertices, and knew how to read it into a format that we could then render, we would be in business and could draw this triangle to the scene as a mesh. Incidentally, this format is not far from the OBJ format.
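
A rough sketch of a C++ reader for this toy format might look like the following; error handling is omitted and the names are just illustrative.

#include <fstream>
#include <map>
#include <sstream>
#include <string>
#include <vector>

struct Vertex { float x, y, z; };

// Parse "V... x, y, z" vertex lines and "M... V.., V.." mesh lines,
// expanding the mesh into a flat list of vertices ready for rendering.
std::vector<Vertex> loadToyMesh(const std::string& path)
{
    std::map<std::string, Vertex> verts;   // "V1" -> (x, y, z)
    std::vector<Vertex> mesh;              // expanded vertex list

    std::ifstream in(path);
    std::string line;
    while (std::getline(in, line)) {
        std::istringstream ls(line);
        std::string tag;
        if (!(ls >> tag)) continue;        // skip blank lines
        if (tag[0] == 'V') {               // vertex definition
            Vertex v;
            char comma;
            ls >> v.x >> comma >> v.y >> comma >> v.z;
            verts[tag] = v;
        } else if (tag[0] == 'M') {        // mesh referencing vertices by name
            std::string name;
            while (ls >> name) {
                if (name.back() == ',') name.pop_back();
                mesh.push_back(verts[name]);
            }
        }
    }
    return mesh;
}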

Most exporters are much more complicated than this, and most engines support a more complicated geometric representation to account for all the extra data that these (powerful) modeling tools can provide for us.


This is a very big question that is hard to answer briefly.

As for DirectX and OpenGL, they are not very different in terms of functionality, and probably not in performance either.

DirectX tends to be one step ahead of OpenGL. For example, it was the first to support hardware tessellation, and only after a while did that feature appear in OpenGL. But this is not a big deal, because when you develop a game you have to support a range of machines, including low-performance ones, so you may not be able to use a new feature until it becomes very common.

Yes, game developers use software like Maya and 3ds Max to create the models for a game, and they normally export each model to a model file. This file is then read in by the game's engine.

I don't suggest starting with OpenGL or DirectX directly; I think it is better to use a game (or graphics) engine instead, for example Ogre3D: http://www.ogre3d.org/

It supports both DirectX and OpenGL, and it also has ready-made 3ds Max and Maya exporters, which make loading models easy.
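
As a sketch of how little code model loading then takes: in Ogre3D, the exporter produces a .mesh file and the engine pulls it in with a couple of calls. (sceneMgr is assumed to be an already-initialized Ogre::SceneManager; "robot.mesh" is one of the sample meshes shipped with Ogre.)

#include <Ogre.h>

void addModel(Ogre::SceneManager* sceneMgr)
{
    // Load the exported mesh file and wrap it in a renderable entity.
    Ogre::Entity* entity = sceneMgr->createEntity("MyModel", "robot.mesh");

    // Attach it to the scene graph so it gets rendered every frame.
    Ogre::SceneNode* node =
        sceneMgr->getRootSceneNode()->createChildSceneNode();
    node->attachObject(entity);
}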
