Bug 827170 - Clamp intermediate surface's framebuffer dimensions to what is supported by the GL - r=BenWa

Without this, we get assertion failures as texture creation fails and we are subsequently
left with incomplete framebuffers. The present patch is a short-term compromise: to avoid
asserting, we just clamp texture sizes, which can result in fuzzy rendering. Ideally
(with some suitable tiling) we wouldn't have to do that.
Benoit Jacob 2013-01-25 13:40:38 -05:00
parent 7dee6cb50e
commit 7f759366c7

@@ -193,8 +193,19 @@ ContainerRender(Container* aContainer,
   const gfx3DMatrix& transform = aContainer->GetEffectiveTransform();
   bool needsFramebuffer = aContainer->UseIntermediateSurface();
   if (needsFramebuffer) {
-    LayerManagerOGL::InitMode mode = LayerManagerOGL::InitModeClear;
     nsIntRect framebufferRect = visibleRect;
+    // we're about to create a framebuffer backed by textures to use as an intermediate
+    // surface. What to do if its size (as given by framebufferRect) would exceed the
+    // maximum texture size supported by the GL? The present code chooses the compromise
+    // of just clamping the framebuffer's size to the max supported size.
+    // This gives us a lower resolution rendering of the intermediate surface (children layers).
+    // See bug 827170 for a discussion.
+    GLint maxTexSize;
+    aContainer->gl()->fGetIntegerv(LOCAL_GL_MAX_TEXTURE_SIZE, &maxTexSize);
+    framebufferRect.width = std::min(framebufferRect.width, maxTexSize);
+    framebufferRect.height = std::min(framebufferRect.height, maxTexSize);
+    LayerManagerOGL::InitMode mode = LayerManagerOGL::InitModeClear;
     if (aContainer->GetEffectiveVisibleRegion().GetNumRects() == 1 &&
         (aContainer->GetContentFlags() & Layer::CONTENT_OPAQUE))
     {