Layer-backed OpenGLView redraws only if window is resized

I have a window with a main view of type NSView and a subview that is a subclass of NSOpenGLView named CustomOpenGLView. The subclass is created by placing a Custom View in Interface Builder and setting its class to CustomOpenGLView. This follows the Apple sample code Layer Backed OpenGLView.

The app draws something to the OpenGL context every, let's say, 0.05 seconds. With the Core Animation layer disabled I am able to see the moving object in the view, as a consequence of the continuous redrawing of the view, and everything works flawlessly.

I now want to have a semitransparent view on top of CustomOpenGLView to house control buttons like play/stop/etc.

To do this I have added a subview to CustomOpenGLView and enabled the Core Animation layer on CustomOpenGLView. The control buttons are placed in this new subview.

This way the view with the control buttons correctly appears on top of CustomOpenGLView, but now the OpenGL view no longer redraws. It redraws only if I resize the window containing all these views.

The result is that I do not see any "animation": I only see a still image representing the first frame drawn when the drawing loop starts. If I resize the window, the OpenGL context gets redrawn until I stop resizing. After that I once again see a still image showing the last drawing that occurred during the resize.

In addition, when the drawing loop starts, only the first "frame" appears on screen, and if I resize the window, say, 5 seconds later, I see in the view exactly what should have been drawn 5 seconds after the start of the drawing loop. It seems like I need to call [glView setNeedsDisplay:YES]. I did that, but nothing changed.

Where is the mistake? Why does adding a Core Animation layer break the redraw? Is there something I'm not getting?


When you have a normal NSOpenGLView, you can simply draw something via OpenGL and then call -flushBuffer on the NSOpenGLContext to make the rendering appear on screen. If your context is not double buffered, calling glFlush() is sufficient as well; double buffering is not necessary when rendering to a window, since all windows are already double buffered themselves in Mac OS X (only for real fullscreen OpenGL rendering do you need double buffering to avoid artifacts). OpenGL then renders directly into the pixel storage of your view (which is in fact the backing storage of the window), or in the case of double buffering, it renders to the back buffer and then swaps it with the front buffer; thus the new content is immediately visible on screen (actually not before the next screen refresh, but such a refresh takes place at least 50-60 times a second).
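As a minimal sketch of that per-frame render in a plain, non-layer-backed NSOpenGLView (the class name and the -renderFrame method are placeholders, not something from the question):

    #import <Cocoa/Cocoa.h>
    #import <OpenGL/gl.h>

    @interface PlainGLView : NSOpenGLView
    @end

    @implementation PlainGLView

    // Called by whatever drives the animation (a timer, a display link, ...).
    - (void)renderFrame
    {
        [[self openGLContext] makeCurrentContext];

        glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT);
        // ... draw the scene here ...

        // Double-buffered context: swap back and front buffer.
        [[self openGLContext] flushBuffer];
        // A single-buffered context rendering into a window would call glFlush() instead.
    }

    @end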

Things are a bit different if the NSOpenGLView is layer-backed. When you call -flushBuffer or glFlush(), the rendering does actually take place just as it did before, and again the image is rendered directly to the pixel storage of the view; however, this pixel storage is no longer the backing storage of the window, it is the "backing layer" of the view. So your OpenGL image is updated, you just don't see it happening, since "drawing into a layer" and "displaying a layer on screen" are two completely different things. To make the new layer content visible, you have to call setNeedsDisplay:YES on your layer-backed NSOpenGLView.

Why didn't it work for you when you called setNeedsDisplay:YES? First of all, make sure you perform this call on the main thread. You can perform it on any thread you like and it will still mark the view as dirty, but only when performed on the main thread does it also schedule a redraw for the view (otherwise the view is marked dirty but won't be redrawn until another parent/child view of it is redrawn). Another problem could be the drawRect: method. When you mark the view as dirty and it is redrawn, this method is called, and whatever it "draws" overwrites whatever content is currently in the layer. As long as your view wasn't layer-backed, it didn't matter where you rendered your OpenGL content, but for a layer-backed view, drawRect: is actually the method where you should perform all your drawing.

Try the following: Create an NSTimer on your main thread that fires every 20 ms and calls a method that calls setNeedsDisplay:YES on your layer-backed NSOpenGLView. Move all your OpenGL render code into the drawRect: method of your layer-backed NSOpenGLView. That should work pretty well. If you need something more reliable than an NSTimer, try a CVDisplayLink (CV = Core Video). A CVDisplayLink is like a timer, but it fires every time the screen has just been redrawn.
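A minimal sketch of that setup; the animationTimer and glView properties and the controller object are assumptions, not names from the question:

    // In a controller object that owns the layer-backed CustomOpenGLView:
    - (void)startAnimationTimer
    {
        // Fires every 20 ms on the main run loop.
        self.animationTimer = [NSTimer scheduledTimerWithTimeInterval:0.02
                                                               target:self
                                                             selector:@selector(tick:)
                                                             userInfo:nil
                                                              repeats:YES];
    }

    - (void)tick:(NSTimer *)timer
    {
        // Marks the view dirty on the main thread; AppKit schedules -drawRect:.
        [self.glView setNeedsDisplay:YES];
    }

    // In CustomOpenGLView itself, all OpenGL drawing moves into -drawRect::
    - (void)drawRect:(NSRect)dirtyRect
    {
        [[self openGLContext] makeCurrentContext];
        // ... render the current frame here ...
        [[self openGLContext] flushBuffer];
    }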

Update

Layer-backed NSOpenGLViews are somewhat outdated; starting with 10.6 they are not really needed any longer. Internally, an NSOpenGLView creates an NSOpenGLLayer when you make it layer-backed, so you can just as well use such a layer directly yourself and "build" your own NSOpenGLView:

  1. Create your own subclass of NSOpenGLLayer, let's call it MyOpenGLLayer
  2. Create your own subclass of NSView, let's call it MyGLView
  3. Override - (CALayer *)makeBackingLayer to return an autoreleased instance of MyOpenGLLayer
  4. Set wantsLayer:YES for MyGLView

You now have your own layer-backed view, and it is backed by your NSOpenGLLayer subclass. Since it is layer-backed, it is absolutely okay to add subviews to it (e.g. buttons, text fields, etc.).
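A minimal sketch of steps 1-4 (MyOpenGLLayer and MyGLView are the hypothetical class names used above; the layer's drawing overrides are shown under the options below):

    #import <Cocoa/Cocoa.h>
    #import <QuartzCore/QuartzCore.h>

    @interface MyOpenGLLayer : NSOpenGLLayer
    // drawing overrides go here, see the two options below
    @end

    @interface MyGLView : NSView
    @end

    @implementation MyGLView

    - (instancetype)initWithFrame:(NSRect)frame
    {
        self = [super initWithFrame:frame];
        if (self) {
            // Ask AppKit to back this view with the layer from -makeBackingLayer.
            self.wantsLayer = YES;
        }
        return self;
    }

    - (CALayer *)makeBackingLayer
    {
        // +layer returns an autoreleased instance of the receiver class.
        return [MyOpenGLLayer layer];
    }

    @end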

For your backing layer, you have basically two options.

Option 1
The correct and officially supported way is to keep your rendering on the main thread. Therefore you must do the following:

  • Override canDrawInContext:... to return YES/NO, depending on whether you can/want to draw the next frame or not.
  • Override drawInContext:... to perform your actual OpenGL rendering.
  • Make the layer asynchronous (setAsynchronous:YES)
  • Be sure the layer is "updated" whenever it's resized (setNeedsDisplayOnBoundsChange:YES); otherwise the OpenGL backing surface is not resized when the layer is resized, and the rendered OpenGL content will be stretched/shrunk each time the layer redraws.

Apple will create a CVDisplayLink for you that calls canDrawInContext:... on the main thread each time it fires, and if this method returns YES, it then calls drawInContext:.... This is the way you are supposed to do it.
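A hedged sketch of such a layer follows. The abbreviated canDrawInContext:/drawInContext: names above correspond to the full NSOpenGLLayer methods canDrawInOpenGLContext:pixelFormat:forLayerTime:displayTime: and drawInOpenGLContext:pixelFormat:forLayerTime:displayTime:; the init-time configuration is one possible place to set the two flags:

    #import <Cocoa/Cocoa.h>
    #import <QuartzCore/QuartzCore.h>
    #import <OpenGL/gl.h>

    @interface MyOpenGLLayer : NSOpenGLLayer
    @end

    @implementation MyOpenGLLayer

    - (instancetype)init
    {
        self = [super init];
        if (self) {
            // Apple drives an asynchronous layer with its own CVDisplayLink.
            self.asynchronous = YES;
            // Keep the OpenGL backing surface in sync with the layer's size.
            self.needsDisplayOnBoundsChange = YES;
        }
        return self;
    }

    - (BOOL)canDrawInOpenGLContext:(NSOpenGLContext *)context
                       pixelFormat:(NSOpenGLPixelFormat *)pixelFormat
                      forLayerTime:(CFTimeInterval)t
                       displayTime:(const CVTimeStamp *)ts
    {
        return YES;  // return NO when there is no new frame to show
    }

    - (void)drawInOpenGLContext:(NSOpenGLContext *)context
                    pixelFormat:(NSOpenGLPixelFormat *)pixelFormat
                   forLayerTime:(CFTimeInterval)t
                    displayTime:(const CVTimeStamp *)ts
    {
        // The context is already current and the viewport covers the layer.
        glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT);
        // ... render the frame here ...

        // The superclass implementation flushes the context.
        [super drawInOpenGLContext:context pixelFormat:pixelFormat
                      forLayerTime:t displayTime:ts];
    }

    @end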

If your rendering is too expensive to happen on the main thread, you can use the following trick: Override openGLContextForPixelFormat:... to create a context (Context B) that is shared with another context you created earlier (Context A). Create a framebuffer in Context A (you can do that before or after creating Context B, it doesn't really matter); attach depth and/or stencil renderbuffers if required (with a bit depth of your choice), but instead of a color renderbuffer, attach a texture (Texture X) as the color attachment (glFramebufferTexture()). Now all color render output is written to that texture when rendering to that framebuffer.

Perform all rendering to this framebuffer using Context A on any thread of your choice! Once the rendering is done, make canDrawInContext:... return YES, and in drawInContext:... just draw a simple quad that fills the whole active framebuffer (Apple has already set the framebuffer for you, and the viewport to fill it completely) and that is textured with Texture X. This is possible because shared contexts also share all objects (textures, framebuffers, etc.). So your drawInContext:... method will never do more than draw a single, simple textured quad; that's all. All other (possibly expensive) rendering happens into this texture on a background thread, without ever blocking your main thread.
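A rough sketch of that trick, as excerpts from the MyOpenGLLayer implementation: contextA (Context A), textureX (Texture X) and the framebuffer object are hypothetical properties/objects created elsewhere, error checking and the synchronization between the worker thread and the layer (a lock, a fence, or double-buffered textures) are left out, and the legacy EXT framebuffer entry points stand in for the glFramebufferTexture() call mentioned above:

    // In the NSOpenGLLayer subclass: hand out Context B, shared with the worker
    // context (Context A, stored here in a hypothetical contextA property).
    - (NSOpenGLContext *)openGLContextForPixelFormat:(NSOpenGLPixelFormat *)pixelFormat
    {
        return [[NSOpenGLContext alloc] initWithFormat:pixelFormat
                                          shareContext:self.contextA];
    }

    // On the background thread, with Context A current: render into a framebuffer
    // whose color attachment is Texture X.
    static void renderSceneToTexture(GLuint fbo, GLuint textureX, GLsizei w, GLsizei h)
    {
        glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
        glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                                  GL_TEXTURE_2D, textureX, 0);
        glViewport(0, 0, w, h);
        // ... the expensive scene rendering goes here ...
        glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
    }

    // In drawInOpenGLContext:... (Context B, main thread): draw one textured quad
    // that fills the framebuffer Apple has already bound for you.
    - (void)drawInOpenGLContext:(NSOpenGLContext *)context
                    pixelFormat:(NSOpenGLPixelFormat *)pixelFormat
                   forLayerTime:(CFTimeInterval)t
                    displayTime:(const CVTimeStamp *)ts
    {
        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, self.textureX);  // shared object, filled by the worker

        glBegin(GL_QUADS);  // legacy fixed-function quad covering the whole layer
        glTexCoord2f(0, 0); glVertex2f(-1, -1);
        glTexCoord2f(1, 0); glVertex2f( 1, -1);
        glTexCoord2f(1, 1); glVertex2f( 1,  1);
        glTexCoord2f(0, 1); glVertex2f(-1,  1);
        glEnd();

        [super drawInOpenGLContext:context pixelFormat:pixelFormat
                      forLayerTime:t displayTime:ts];
    }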

Option 2
The other option is not officially supported by Apple and may or may not work for you:

  • Don't override canDrawInContext:...; the default implementation always returns YES, which is what you want.
  • Override drawInContext:... to perform your actual OpenGL rendering, all of it.
  • Don't make the layer asynchronous.
  • Don't set needsDisplayOnBoundsChange.

Whenever you want to redraw this layer, call display directly (NOT setNeedsDisplay! It's true that Apple says you shouldn't call it, but "shouldn't" is not "mustn't"), and after calling display, call [CATransaction flush]. This works even when called from a background thread! Your drawInContext:... method is called from the same thread that calls display, which can be any thread. Calling display directly makes sure your OpenGL render code executes, yet the newly rendered content is still only visible in the backing storage of the layer; to bring it to the screen you must force the system to perform layer compositing, and [CATransaction flush] does exactly that. The CATransaction class, which has only class methods (you will never create an instance of it), is implicitly thread-safe and may always be used from any thread at any time (it performs locking on its own whenever and wherever required).
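A hedged sketch of such a background render loop; RenderController, glLayer and keepRendering are illustrative names, and a CVDisplayLink callback could replace the fixed sleep:

    #import <Cocoa/Cocoa.h>
    #import <QuartzCore/QuartzCore.h>

    @interface RenderController : NSObject
    @property (strong) NSOpenGLLayer *glLayer;   // the layer backing MyGLView
    @property BOOL keepRendering;
    @end

    @implementation RenderController

    // Started e.g. via [NSThread detachNewThreadSelector:@selector(renderLoop)
    //                                            toTarget:self withObject:nil];
    - (void)renderLoop
    {
        while (self.keepRendering) {
            // -display invokes drawInOpenGLContext:... synchronously on this thread.
            [self.glLayer display];

            // Force Core Animation to composite the freshly drawn layer to the screen.
            [CATransaction flush];

            // Crude ~50 fps pacing; a CVDisplayLink would be the nicer pacemaker.
            [NSThread sleepForTimeInterval:0.02];
        }
    }

    @end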

While this method is not recommended, since it may cause redraw issues for other views (those may also end up being redrawn on threads other than the main thread, and not all views support that), it is not forbidden either: it uses no private API, and it has been suggested on the Apple mailing lists without anyone at Apple opposing it.
