Passing vectors (and other structures) in OpenGL and my own libraries
This is a code style & design question, perhaps dealing with tradeoffs. It is probably obvious.
Backstory:
When migrating from C++ to Java I encountered a difference I am now not sure how to deal with.
In all OpenGL calls you pass an array with an offset and an additional parameter which tells the function how the passed array is structured. Take glDrawArrays as an example.
So when drawing, it would be best to have all my vertices in one array, a FloatBuffer. However, I also need those vertices for my physics calculations.
The question:
Should I create a separate buffer for physics and copy its results into the FloatBuffer every update, dealing with Vec3f and Point3f classes, since those can not be passed to OpenGL functions because they might be fragmented in memory (or can they?).
Or should I have a separate class for dealing with my structures, which takes an offset along with the array:
public static void addVec3(float[] vec3in_a, int offset_a, float[] vec3in_b, int offset_b, float[] vec3out, int offset_out)
And what should the offsets represent? Should they account for the vec3 size and move appropriately (offset_a *= 3), like an array of Vec3 would behave, or should they just offset into it as a plain float array?
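To make the two offset conventions concrete, here is a minimal sketch of the addVec3 helper from the question, filled in with a body. The class name Vec3Math and the demo values are made up for illustration; in this version the offsets are plain float-array indices, and a caller thinking in "vec3 slots" converts with slot * 3 itself:

```java
// Hypothetical sketch: vec3 math on flat float arrays, as in the question.
// Offsets here are raw float-array indices; a caller that thinks in vec3
// slots passes slot * 3 (shown in main below).
public final class Vec3Math {
    public static void addVec3(float[] a, int offA,
                               float[] b, int offB,
                               float[] out, int offOut) {
        out[offOut]     = a[offA]     + b[offB];
        out[offOut + 1] = a[offA + 1] + b[offB + 1];
        out[offOut + 2] = a[offA + 2] + b[offB + 2];
    }

    public static void main(String[] args) {
        // Two vec3s packed into one flat array: (1,2,3) and (10,20,30).
        float[] verts = {1f, 2f, 3f, 10f, 20f, 30f};
        float[] out = new float[3];
        // Add slot 0 and slot 1; slot index 1 becomes float offset 1 * 3.
        addVec3(verts, 0, verts, 1 * 3, out, 0);
        System.out.println(out[0] + " " + out[1] + " " + out[2]); // 11.0 22.0 33.0
    }
}
```

Keeping the offsets as raw indices matches how OpenGL itself treats the array, while slot-based offsets read more naturally in physics code; either works as long as the convention is documented and used consistently.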
Thank you:)
Can't you do the physics calculations on the GPU? JOCL or shaders would be a possible route. Usually you try to avoid doing all the vertex transformations on the CPU (in Java, C, whatever) and sending them to the GPU every frame.
If you really have to do that in Java (on the CPU), you could adapt your math classes (Vec, Point, etc.) to store their data in a FloatBuffer. But this will certainly be slower than primitive floats, since read/write operations on a FloatBuffer are not without overhead.
Without knowing what you are actually doing, a copy from the FloatBuffer to a math object and back could even be feasible. If it is not fast enough... optimize later :)
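A minimal sketch of that idea: a vec3 "view" class whose storage is a slice of the same FloatBuffer that gets handed to OpenGL, so physics writes land directly in the vertex data. The name Vec3View and the demo values are invented here; note that a real OpenGL binding would also need a direct buffer rather than FloatBuffer.wrap:

```java
import java.nio.FloatBuffer;

// Hypothetical sketch: math objects that are views into the vertex buffer,
// so no per-frame copy is needed. Uses absolute get/put, which do not
// disturb the buffer's position.
public final class Vec3View {
    private final FloatBuffer buf;
    private final int base; // absolute float index of this vec3's x component

    public Vec3View(FloatBuffer buf, int vec3Slot) {
        this.buf = buf;
        this.base = vec3Slot * 3;
    }

    public float x() { return buf.get(base); }
    public float y() { return buf.get(base + 1); }
    public float z() { return buf.get(base + 2); }

    // In-place add: this += other, written straight into the buffer.
    public void add(Vec3View other) {
        buf.put(base,     x() + other.x());
        buf.put(base + 1, y() + other.y());
        buf.put(base + 2, z() + other.z());
    }

    public static void main(String[] args) {
        FloatBuffer fb = FloatBuffer.wrap(new float[]{1f, 2f, 3f, 10f, 20f, 30f});
        Vec3View a = new Vec3View(fb, 0);
        Vec3View b = new Vec3View(fb, 1);
        a.add(b);
        System.out.println(fb.get(0) + " " + fb.get(1) + " " + fb.get(2)); // 11.0 22.0 33.0
    }
}
```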
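The copy approach could look something like this sketch: physics runs on a plain float[] and the results are bulk-copied into the FloatBuffer once per frame. All names here are made up, and a real binding would want a direct buffer (ByteBuffer.allocateDirect(...).asFloatBuffer()) instead of FloatBuffer.allocate:

```java
import java.nio.FloatBuffer;

public final class CopyDemo {
    public static void main(String[] args) {
        // Hypothetical per-frame copy: physics owns a plain float[],
        // OpenGL reads from the FloatBuffer.
        float[] physicsVerts = {1f, 2f, 3f, 4f, 5f, 6f};
        FloatBuffer glBuffer = FloatBuffer.allocate(physicsVerts.length);

        glBuffer.clear();           // position = 0, limit = capacity
        glBuffer.put(physicsVerts); // one bulk copy instead of per-element puts
        glBuffer.flip();            // ready for OpenGL to read from the start

        System.out.println(glBuffer.get(0) + " " + glBuffer.get(5)); // 1.0 6.0
    }
}
```

A single bulk put per frame is much cheaper than per-component FloatBuffer access inside the physics loop, which is why this split is often fast enough in practice.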