Some drivers (most notably the open source ones) cannot satisfy all
possible D3D formats. That doesn't mean we should jump to the
emergency fallback immediately. Instead, try to loosen the
requirements step by step.
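A minimal sketch of that fallback idea, with made-up helper names
(find_matching_format, emergency_format) rather than the actual
wined3d code:

    #include <stdbool.h>

    /* Hypothetical requirement set; the real wined3d structures differ. */
    struct format_req
    {
        int depth_bits;
        int stencil_bits;
    };

    /* Stand-ins for the driver's format search; prototypes only. */
    int find_matching_format(const struct format_req *req, bool exact);
    int emergency_format(void);

    int select_format(struct format_req req)
    {
        int fmt;

        /* Step 1: look for an exact match. */
        if ((fmt = find_matching_format(&req, true)) != -1) return fmt;
        /* Step 2: accept more bits than requested (e.g. 24 bit depth for 16). */
        if ((fmt = find_matching_format(&req, false)) != -1) return fmt;
        /* Step 3: give up on the stencil requirement. */
        req.stencil_bits = 0;
        if ((fmt = find_matching_format(&req, false)) != -1) return fmt;
        /* Step 4: only now take the emergency fallback. */
        return emergency_format();
    }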
It isn't related to the shader backend any longer. The nvts_enable in
the ffp code isn't quite right either; it should be moved away once
there is a dedicated NVTS fragment pipeline replacement.
The idea of this patchset is to split the monolithic state set into 3
parts: vertex processing, fragment processing and other states (depth,
stencil, scissor, ...). The states are provided in templates which
can be (mostly) independently combined, and are merged into a single
state table at device creation time. This way we retain the advantages
of a single state table while gaining separate pipeline
implementations that can be combined without any manually written glue
code.
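Roughly like this, with illustrative names rather than the real
wined3d types:

    typedef void (*state_apply_func)(void *device, int state_id);

    struct state_template_entry
    {
        int state_id;              /* which state this entry handles */
        state_apply_func apply;    /* NULL apply terminates a template */
    };

    /* Copy every entry of one template into the device's state table. */
    static void merge_template(state_apply_func *table,
            const struct state_template_entry *tmpl)
    {
        unsigned int i;

        for (i = 0; tmpl[i].apply; ++i)
            table[tmpl[i].state_id] = tmpl[i].apply;
    }

    /* Called once at device creation: combine the misc, vertex and
     * fragment templates into the single table used for applying states. */
    void compile_state_table(state_apply_func *table,
            const struct state_template_entry *misc,
            const struct state_template_entry *vertex,
            const struct state_template_entry *fragment)
    {
        merge_template(table, misc);
        merge_template(table, vertex);
        merge_template(table, fragment);
    }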
This gets rid of depth_copy_state in the device, and instead tracks
the most up-to-date location per surface. This makes things a lot
easier to follow, and allows us to make a copy when switching depth
stencils in SetDepthStencilSurface().
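A sketch of what the per-surface tracking could look like; the names
are illustrative, not the actual wined3d structures:

    /* Where the surface's depth data currently lives. */
    enum ds_location
    {
        DS_LOCATION_UNINITIALIZED,
        DS_LOCATION_ONSCREEN,   /* data lives in the GL depth buffer */
        DS_LOCATION_OFFSCREEN   /* data lives in a texture / copy */
    };

    struct surface
    {
        enum ds_location ds_current_location;
    };

    struct device
    {
        struct surface *depth_stencil;
    };

    void copy_depth_to_offscreen(struct device *dev, struct surface *s); /* hypothetical */

    /* When switching depth stencils, save the old surface's data first
     * so the shared GL depth buffer can be reused safely. */
    void set_depth_stencil_surface(struct device *dev, struct surface *new_ds)
    {
        struct surface *old_ds = dev->depth_stencil;

        if (old_ds && old_ds != new_ds
                && old_ds->ds_current_location == DS_LOCATION_ONSCREEN)
        {
            copy_depth_to_offscreen(dev, old_ds);
            old_ds->ds_current_location = DS_LOCATION_OFFSCREEN;
        }
        dev->depth_stencil = new_ds;
    }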
SetupForBlit sets up the GL viewport and projection matrix for
screen-coordinate access to the framebuffer. These settings were not
updated if the other GL states were already set up for blitting. Guild
Wars reads back an offscreen rendered texture from the framebuffer,
which currently sets up CTXUSAGE_BLIT, then changes the render target,
and draws to the texture, which has to be reloaded from system memory
before it can be rendered to (since GW loaded some data into it). If
the two render targets had different sizes, this failed.
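A sketch of the setup that now has to be repeated when the render
target changes; the exact projection here is an assumption for
illustration:

    #include <GL/gl.h>

    /* Re-apply the screen-coordinate setup whenever the render target
     * changes, even if the context is already in blit mode; a previous
     * target of a different size leaves a stale viewport/projection. */
    static void setup_blit_transform(unsigned int rt_width, unsigned int rt_height)
    {
        glViewport(0, 0, rt_width, rt_height);

        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        /* Map x/y 1:1 to pixels, origin in the top-left corner as in D3D. */
        glOrtho(0.0, rt_width, rt_height, 0.0, -1.0, 1.0);

        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
    }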
There is no reason to do that, now that the SetGLTextureDesc bug is
fixed. This avoids an infinite recursion because PreLoad calls
ActivateContext at some point.
The previous logic assumed that if NVTS or ATIFS are available they
will be used. This happens to be true for NVTS, but ATIFS is only used
if neither ARBFP nor GLSL is supported. This broke fixed function
fragment processing on ATI r300 and newer cards.
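The corrected priority could be sketched like this (the capability
struct and pipeline names are made up for illustration):

    #include <stdbool.h>

    struct gl_caps
    {
        bool nvts, arbfp, glsl, atifs;
    };

    enum fragment_pipe
    {
        PIPE_NVTS,           /* GL_NV_register_combiners / texture_shader */
        PIPE_SHADER_BACKEND, /* ARBFP or GLSL based replacement */
        PIPE_ATIFS,          /* GL_ATI_fragment_shader */
        PIPE_FFP             /* plain GL fixed function */
    };

    enum fragment_pipe select_fragment_pipe(const struct gl_caps *caps)
    {
        if (caps->nvts) return PIPE_NVTS;                  /* used whenever available */
        if (caps->arbfp || caps->glsl) return PIPE_SHADER_BACKEND;
        if (caps->atifs) return PIPE_ATIFS;                /* last resort before FFP */
        return PIPE_FFP;
    }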
Mesa has a bug that causes a crash due to a NULL pointer dereference
in the R200 driver when making a context current that has
GL_FRAGMENT_SHADER_ATI enabled. This patch works around the bug by
making sure that GL_FRAGMENT_SHADER_ATI is disabled before a context
is deactivated, and re-enables it afterwards. The context manager
keeps GL_FRAGMENT_SHADER_ATI generally enabled, except when the
context is in 2D blit mode.
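A sketch of the workaround, with hypothetical context management
helpers; GL_FRAGMENT_SHADER_ATI is the real enable cap from the
extension:

    #include <stdbool.h>
    #define GL_GLEXT_PROTOTYPES
    #include <GL/gl.h>
    #include <GL/glext.h>

    struct wine_context
    {
        bool atifs_supported;
        bool blit_mode;
    };

    void make_current(struct wine_context *ctx); /* wgl/glX wrapper, hypothetical */

    /* Disable GL_FRAGMENT_SHADER_ATI before the context loses currency;
     * Mesa's R200 driver dereferences a NULL pointer otherwise. */
    void deactivate_context(struct wine_context *ctx)
    {
        if (ctx->atifs_supported)
            glDisable(GL_FRAGMENT_SHADER_ATI);
    }

    /* Restore the generally-enabled state after reactivation, unless the
     * context is set up for 2D blitting. */
    void activate_context(struct wine_context *ctx)
    {
        make_current(ctx);
        if (ctx->atifs_supported && !ctx->blit_mode)
            glEnable(GL_FRAGMENT_SHADER_ATI);
    }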
For each pixel format we store a flag in the format table indicating
whether it supports post-pixelshader blending. Before applying
blending, or during a context switch, we verify that blending is
turned off for formats that do not support it. In the case of R32F
this gave a 5-6x performance boost (without filtering and software
conversion).
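A sketch of the check, with a made-up flag name and format table
entry:

    #include <stdbool.h>
    #include <GL/gl.h>

    #define FORMAT_FLAG_POSTPIXELSHADER_BLENDING 0x1 /* hypothetical flag */

    struct format_info
    {
        unsigned int d3d_format; /* e.g. the R32F format id */
        unsigned int flags;
    };

    /* Turn blending off if the render target's format can't blend after
     * the pixel shader; otherwise the driver falls into a slow path
     * (software conversion, no filtering). */
    void apply_blend_state(const struct format_info *rt_format, bool blend_requested)
    {
        if (blend_requested
                && (rt_format->flags & FORMAT_FLAG_POSTPIXELSHADER_BLENDING))
            glEnable(GL_BLEND);
        else
            glDisable(GL_BLEND);
    }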
This adds code for handling fixed function fragment processing with the
GL_ATI_fragment_shader extension. This is a sort-of programmable
interface for fragment processing at the level of D3D shader model
1.4. The code is useful on r200, r250 and r280 cards (Radeon 8500 to
9200), which do not support GL_ARB_fragment_program but support pixel
shader 1.4 on Windows. It is somewhat of a counterpart to the existing
fragment processing code using GL_NV_register_combiners and
GL_NV_texture_shader.
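For illustration, this is roughly what a single D3D MODULATE(texture,
diffuse) stage looks like in terms of the extension's entry points
(extension loading and error handling omitted for brevity):

    /* In real code the ATI entry points are obtained through the GL
     * extension mechanism; GL_GLEXT_PROTOTYPES is used here for brevity. */
    #define GL_GLEXT_PROTOTYPES
    #include <GL/gl.h>
    #include <GL/glext.h>

    /* Build a shader equivalent to the D3D stage state
     * COLOROP = MODULATE, COLORARG1 = TEXTURE, COLORARG2 = DIFFUSE. */
    GLuint build_modulate_shader(void)
    {
        GLuint shader = glGenFragmentShadersATI(1);

        glBindFragmentShaderATI(shader);
        glBeginFragmentShaderATI();
        /* Sample texture unit 0 into temp register 0. */
        glSampleMapATI(GL_REG_0_ATI, GL_TEXTURE0_ARB, GL_SWIZZLE_STR_ATI);
        /* reg0 = reg0 * diffuse (primary) color. */
        glColorFragmentOp2ATI(GL_MUL_ATI, GL_REG_0_ATI, GL_NONE, GL_NONE,
                GL_REG_0_ATI, GL_NONE, GL_NONE,
                GL_PRIMARY_COLOR, GL_NONE, GL_NONE);
        glEndFragmentShaderATI();
        return shader;
    }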
Add a new property of the shader backend which indicates whether the
backend is able to dirtify single constants, rather than dirtifying
the vshader and pshader constants as a whole. Depending on this
property, a different Set*ConstantF implementation is used which marks
the affected constants dirty. The ARB shader backend uses this and
marks constants clean after uploading them.
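A sketch of the idea with a simple per-constant dirty bitmap; the
names and the constant limit are illustrative:

    #include <string.h>

    #define MAX_CONSTS 256 /* illustrative limit */

    struct constant_state
    {
        float consts[MAX_CONSTS][4];
        unsigned char dirty[MAX_CONSTS]; /* one flag per constant */
    };

    /* Variant of Set*ConstantF used when the backend supports single
     * constant dirtification: only the touched range is marked dirty. */
    void set_constants_f(struct constant_state *s, unsigned int start,
            const float *data, unsigned int count)
    {
        unsigned int i;

        memcpy(s->consts[start], data, count * 4 * sizeof(float));
        for (i = start; i < start + count; ++i)
            s->dirty[i] = 1;
    }

    /* The ARB backend would walk the dirty flags on draw, upload with
     * e.g. glProgramEnvParameter4fvARB() and mark the constants clean. */
    void upload_dirty_constants(struct constant_state *s)
    {
        unsigned int i;

        for (i = 0; i < MAX_CONSTS; ++i)
        {
            if (!s->dirty[i]) continue;
            /* upload s->consts[i] here via the ARB entry point */
            s->dirty[i] = 0;
        }
    }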
Before, this was done in findContext, before the new context was
selected, which is bad (it doesn't always work). The new code works,
and this change also fixes some draw buffer regressions introduced
during the surface rewrite of the last couple of days.