linux/drivers/gpu/drm
Daniel Vetter a20bc62902 drm/i915: Update write_domains on active list after flush.
commit 99fcb766a3 upstream.

Before changing the status of a buffer with a pending write, we wait upon a
new flush for that buffer. We can therefore take advantage of any flushes
posted whilst the buffer is active and pending processing by the GPU, by
clearing its write_domain and updating its last_rendering_seqno -- thus
saving a potential flush in deep queues and improving flushing behaviour
upon eviction for both GTT space and fences.

To reduce the time spent searching the active list for matching
write_domains, we move those buffers to a separate list whose elements are
the buffers on the active/flushing list that have pending writes.
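
The mechanism is easiest to see in miniature. Below is a user-space sketch of
the idea (not the kernel patch itself; the types, fields, and functions are
illustrative stand-ins for the real i915 GEM structures): buffers with a
pending GPU write sit on their own list, and a flush only has to walk that
short list, clearing write_domain and stamping last_rendering_seqno on each
matching buffer.

/*
 * Illustrative user-space model of tracking pending GPU writes on a
 * dedicated list.  All names are made up for this sketch.
 */
#include <stdint.h>
#include <stdio.h>

#define FAKE_DOMAIN_RENDER 0x2      /* stand-in for a GPU write-domain bit */

struct fake_bo {
    const char *name;
    uint32_t write_domain;          /* non-zero: a GPU write is pending */
    uint32_t last_rendering_seqno;  /* request that must retire first */
    struct fake_bo *next;           /* link on the fake gpu_write_list */
};

static struct fake_bo *gpu_write_list;

/* Record that the GPU has been asked to write this buffer. */
static void mark_gpu_write(struct fake_bo *bo, uint32_t domain, uint32_t seqno)
{
    bo->write_domain = domain;
    bo->last_rendering_seqno = seqno;
    bo->next = gpu_write_list;
    gpu_write_list = bo;
}

/*
 * A flush covering flush_domains retires the pending writes: clear the
 * write domain, stamp the buffer with the seqno of the flushing request,
 * and unlink it.  Only this short list is walked, not the whole active list.
 */
static void process_flushing_list(uint32_t flush_domains, uint32_t flush_seqno)
{
    struct fake_bo **p = &gpu_write_list;

    while (*p) {
        struct fake_bo *bo = *p;

        if (bo->write_domain & flush_domains) {
            bo->write_domain = 0;
            bo->last_rendering_seqno = flush_seqno;
            *p = bo->next;
            printf("%s: write retired by seqno %u\n", bo->name, flush_seqno);
        } else {
            p = &bo->next;
        }
    }
}

int main(void)
{
    struct fake_bo a = { .name = "batch" }, b = { .name = "scanout" };

    mark_gpu_write(&a, FAKE_DOMAIN_RENDER, 1);
    mark_gpu_write(&b, FAKE_DOMAIN_RENDER, 2);
    process_flushing_list(FAKE_DOMAIN_RENDER, 3);  /* one flush retires both */
    return 0;
}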

Original patch by Chris Wilson <chris@chris-wilson.co.uk>, forward-ported
by me.

In addition to better performance, this also fixes a real bug. Before
this change, i915_gem_evict_everything didn't work as advertised: when
the GPU was actually busy processing requests, the flush and subsequent
wait would not move active, dirty buffers to the inactive list, but only
to the flushing list, which triggered the BUG_ON at the end of the
function. With the tighter dirty-buffer tracking, all currently busy and
dirty buffers get moved to the inactive list by one i915_gem_flush
operation.

I've left in the BUG_ON I used to prove this.
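
To make the before/after behaviour concrete, here is a self-contained
user-space sketch with plain counters standing in for the GEM lists and
assert() standing in for BUG_ON(); all names are illustrative, not the
real GEM code.

/*
 * With loose write tracking, flush + wait only moves dirty active buffers
 * to the flushing list, so the final check trips; with the tighter
 * per-buffer tracking, one flush moves them all the way to inactive.
 */
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

struct lists {
    int active_dirty;  /* busy buffers with a pending GPU write */
    int flushing;      /* flushed but not yet retired */
    int inactive;      /* idle, ready for eviction */
};

static void flush_and_wait(struct lists *l, bool tight_tracking)
{
    if (tight_tracking)
        l->inactive += l->active_dirty;   /* retired straight to inactive */
    else
        l->flushing += l->active_dirty;   /* old behaviour: stuck here */
    l->active_dirty = 0;
}

static void evict_everything(struct lists *l, bool tight_tracking)
{
    flush_and_wait(l, tight_tracking);
    /* Stands in for the BUG_ON at the end of i915_gem_evict_everything. */
    assert(l->active_dirty == 0 && l->flushing == 0);
    printf("evicting %d buffers\n", l->inactive);
}

int main(void)
{
    struct lists l = { .active_dirty = 2 };

    evict_everything(&l, true);   /* passes with tight tracking */
    /* evict_everything(&l, false) would trip the assertion above. */
    return 0;
}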

References:
  Bug 25911 - 2.10.0 causes kernel oops and system hangs
  http://bugs.freedesktop.org/show_bug.cgi?id=25911

  Bug 26101 - [i915] xf86-video-intel 2.10.0 (and git) triggers kernel oops
              within seconds after login
  http://bugs.freedesktop.org/show_bug.cgi?id=26101

Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Tested-by: Adam Lantos <hege@playma.org>
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>

************************************************************
* For the very latest on DRI development, please see:      *
*     http://dri.freedesktop.org/                          *
************************************************************

The Direct Rendering Manager (drm) is a device-independent kernel-level
device driver that provides support for the XFree86 Direct Rendering
Infrastructure (DRI).

The DRM supports the Direct Rendering Infrastructure (DRI) in four major
ways:

    1. The DRM provides synchronized access to the graphics hardware via
       the use of an optimized two-tiered lock (a user-space sketch of
       items 1 and 2 follows this list).

    2. The DRM enforces the DRI security policy for access to the graphics
       hardware by only allowing authenticated X11 clients access to
       restricted regions of memory.

    3. The DRM provides a generic DMA engine, complete with multiple
       queues and the ability to detect the need for an OpenGL context
       switch.

    4. The DRM is extensible via the use of small device-specific modules
       that rely extensively on the API exported by the DRM module.
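
As a rough illustration of items 1 and 2, here is a user-space sketch using
the legacy libdrm entry points for authentication and the hardware lock.
The driver name, flag values, and whether these legacy paths are still
usable depend on the driver and kernel era, so treat it purely as a sketch
(build with the libdrm development files, e.g.
gcc sketch.c $(pkg-config --cflags --libs libdrm)).

#include <stdio.h>
#include <xf86drm.h>

int main(void)
{
    drm_magic_t magic;
    drm_context_t ctx;
    int fd = drmOpen("i915", NULL);   /* driver name is illustrative */

    if (fd < 0) {
        perror("drmOpen");
        return 1;
    }

    /* Authentication: the client fetches a magic cookie and hands it to
     * the X server, which blesses it with drmAuthMagic() on its master fd. */
    if (drmGetMagic(fd, &magic) == 0)
        printf("magic cookie %u (would be passed to the X server)\n", magic);

    /* The hardware lock: a context takes the lock around direct hardware
     * access and drops it when done (creating a context may require
     * DRM master privileges). */
    if (drmCreateContext(fd, &ctx) == 0) {
        drmGetLock(fd, ctx, 0);       /* 0: no special lock flags */
        /* ... touch the hardware / queue DMA here ... */
        drmUnlock(fd, ctx);
        drmDestroyContext(fd, ctx);
    }

    drmClose(fd);
    return 0;
}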


Documentation on the DRI is available from:
    http://dri.freedesktop.org/wiki/Documentation
    http://sourceforge.net/project/showfiles.php?group_id=387
    http://dri.sourceforge.net/doc/

For specific information about kernel-level support, see:

    The Direct Rendering Manager, Kernel Support for the Direct Rendering
    Infrastructure
    http://dri.sourceforge.net/doc/drm_low_level.html

    Hardware Locking for the Direct Rendering Infrastructure
    http://dri.sourceforge.net/doc/hardware_locking_low_level.html

    A Security Analysis of the Direct Rendering Infrastructure
    http://dri.sourceforge.net/doc/security_low_level.html