
How to implement painting (with layer support) in OpenGL?

situation

I'm implementing a height field editor with two views. The main view displays the height field in 3D and allows trackball navigation. The edit view shows the height field as a 2D image.

On top of this height field, new images can be applied that alter its appearance (cut holes, lower or raise specific areas). These are called patches.

Both the height field and the patches are single-channel grayscale PNG images.

For visualization I'm using the Visualization Library framework (C++) and OpenGL 4.

task

Implement a drawing tool, available in the 2D edit view (orthographic projection), that creates these patches (as separate images) at runtime.

important notes / constraints

  • the image of the height field may be scaled, rotated and translated.
  • the patches need to have the same scale as the height field, so one pixel in the patch covers exactly one pixel in the height field.
  • as a result of the scaling, a framebuffer pixel may be bigger or smaller than a height field/patch image pixel.
  • the scene contains objects (example: a pointing arrow) that should not appear in the patch.

question

What is the right approach to this task? So far I have had the following ideas:

  • Use some kind of Qt canvas to create the patch, then map it to the height field image's proportions and save it as a new patch. This would be done every time the user starts drawing; that way, implementing undo is easy (remove the last patch created).
  • Use a neutral-colored image in combination with texture buffer objects to implement some kind of canvas myself. This way, every time the user stops drawing, the contents of the canvas are mapped to the height field and saved as a patch, and the canvas is reset for the next drawing.
  • There are some examples using a framebuffer object. However, I'm not sure this approach fits my needs: when I use OpenGL to draw a sub-image into the framebuffer, won't the resulting image contain all the data? (See the sketch below.)
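A minimal sketch of the FBO idea (`patchWidth`/`patchHeight` are placeholders for the patch image size): the attached texture only receives what is explicitly drawn while the FBO is bound, so objects like the pointing arrow are kept out simply by not rendering them in that pass.

```cpp
// One-channel patch texture as FBO color attachment; only the brush
// strokes are drawn while the FBO is bound, nothing else ends up in it.
GLuint fbo = 0, patchTex = 0;

glGenTextures(1, &patchTex);
glBindTexture(GL_TEXTURE_2D, patchTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, patchWidth, patchHeight,
             0, GL_RED, GL_UNSIGNED_BYTE, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, patchTex, 0);

if (glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE) {
    glViewport(0, 0, patchWidth, patchHeight);
    // ... draw only the stroke geometry here ...
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);
```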


Here is what I ended up with:

I use the PickIntersector of the Visualization Library to pick against the height field image in the edit view. This yields local coordinates on the image, which are transformed to UV coordinates, which in turn are transformed into pixel coordinates. This happens when the user presses a mouse button and keeps happening as the mouse moves, as long as it is over the image.
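The last step of that chain might look like this (a sketch with placeholder names; the actual extents come from the picked geometry):

```cpp
#include <algorithm>

struct Pixel { int x, y; };

// Map a picked local coordinate on the height field quad to integer
// pixel coordinates of the image. localMin/localMax are the quad's
// extents in its local frame; imgW/imgH the image size in pixels.
Pixel localToPixel(float lx, float ly,
                   float localMinX, float localMinY,
                   float localMaxX, float localMaxY,
                   int imgW, int imgH)
{
    // local -> uv in [0, 1]
    float u = (lx - localMinX) / (localMaxX - localMinX);
    float v = (ly - localMinY) / (localMaxY - localMinY);
    // uv -> pixel, clamped so strokes at the border stay inside the image
    int px = std::min(imgW - 1, std::max(0, static_cast<int>(u * imgW)));
    int py = std::min(imgH - 1, std::max(0, static_cast<int>(v * imgH)));
    return { px, py };
}
```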

I have a PatchCanvas class that collects all these points. On command, it uses the Anti-Grain Geometry (AGG) library to actually rasterize the lines that can be constructed from the points.
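Roughly, that rasterization step can look like this (a simplified sketch, not the actual PatchCanvas code; the buffer is assumed to be 8-bit single-channel):

```cpp
#include "agg_rendering_buffer.h"
#include "agg_pixfmt_gray.h"
#include "agg_renderer_base.h"
#include "agg_renderer_scanline.h"
#include "agg_rasterizer_scanline_aa.h"
#include "agg_scanline_p.h"
#include "agg_path_storage.h"
#include "agg_conv_stroke.h"
#include <utility>
#include <vector>

// Rasterize one stroke (a polyline of pixel coordinates) into a
// gray8 buffer of size w x h. 'buf' must hold w * h bytes.
void rasterizeStroke(unsigned char* buf, int w, int h,
                     const std::vector<std::pair<double, double>>& pts,
                     double thickness)
{
    if (pts.size() < 2) return;

    agg::rendering_buffer rbuf(buf, w, h, w);  // stride = w bytes for gray8
    agg::pixfmt_gray8 pixf(rbuf);
    agg::renderer_base<agg::pixfmt_gray8> rb(pixf);

    // Build the polyline from the collected points.
    agg::path_storage path;
    path.move_to(pts[0].first, pts[0].second);
    for (std::size_t i = 1; i < pts.size(); ++i)
        path.line_to(pts[i].first, pts[i].second);

    // Give the line a width, then rasterize it anti-aliased.
    agg::conv_stroke<agg::path_storage> stroke(path);
    stroke.width(thickness);

    agg::rasterizer_scanline_aa<> ras;
    agg::scanline_p8 sl;
    ras.add_path(stroke);
    agg::render_scanlines_aa_solid(ras, sl, rb, agg::gray8(255));
}
```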

After that is done, the rasterized image is divided up into a grid of fixed-size tiles. Every tile is scanned for a color different from the neutral one. Tiles that only contain the neutral color are dropped; the others are saved following the appropriate naming scheme and can be loaded in the next frame.
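A sketch of that tile scan (assuming a gray8 buffer and a fixed tile size; `neutral` is the value the canvas was cleared with):

```cpp
#include <cstdint>
#include <vector>

struct Tile { int col, row; };

// Return the grid positions of all tiles that contain at least one
// pixel differing from the neutral color; everything else is dropped.
std::vector<Tile> nonEmptyTiles(const std::vector<std::uint8_t>& img,
                                int w, int h, int tileSize,
                                std::uint8_t neutral)
{
    std::vector<Tile> result;
    for (int ty = 0; ty < h; ty += tileSize)
        for (int tx = 0; tx < w; tx += tileSize) {
            bool dirty = false;
            for (int y = ty; y < ty + tileSize && y < h && !dirty; ++y)
                for (int x = tx; x < tx + tileSize && x < w; ++x)
                    if (img[y * w + x] != neutral) { dirty = true; break; }
            if (dirty) result.push_back({ tx / tileSize, ty / tileSize });
        }
    return result;
}
```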

AGG supports lines of different widths. This isn't implemented yet, but the idea is to pick two adjacent points in screen space, get their UV coordinates, convert them to pixels, and use the resulting distance as the line thickness. This should result in broader strokes for zoomed-out views.
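That idea boils down to something like this (`screenToImage` is a placeholder for the same pick → UV → pixel mapping used for the stroke points):

```cpp
#include <algorithm>
#include <cmath>
#include <functional>

struct ImgPt { double x, y; };

// Measure how far apart two screen-space points one framebuffer pixel
// apart land in image pixel space; use that distance as stroke width.
double strokeWidthInImagePixels(
    double sx, double sy,
    const std::function<ImgPt(double, double)>& screenToImage)
{
    ImgPt a = screenToImage(sx, sy);
    ImgPt b = screenToImage(sx + 1.0, sy);   // one framebuffer pixel away
    double dx = b.x - a.x, dy = b.y - a.y;
    // Zoomed-out views map one screen pixel onto several image pixels,
    // which yields a proportionally broader stroke.
    return std::max(1.0, std::sqrt(dx * dx + dy * dy));
}
```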
