The GLFW context version hint specifies a minimum version, not a maximum
version, so requesting 4.4 and then falling back to lower versions doesn't
make sense.
This change sets the minimum version to 3.2 and attempts to standardize this
across all example apps.
Also print the maximum supported GL version along with the context version
at startup.
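For example, the standardized hint setup looks roughly like this (a
minimal sketch assuming GLFW 3; the surrounding code varies per example):

    // Request at least a 3.2 core profile context. Since the hint is a
    // minimum, the driver is free to return any version >= 3.2.
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
    glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);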
All examples, regression tests and tutorials used to look directly into
the opensubdiv source directory to grab the header files. This is somewhat
convenient during development, but they could mistakenly access private
header files.
With this change, when OPENSUBDIV_INCLUDE_DIR is given to cmake, it will
be used as the include search path for building the examples etc.
Otherwise the behavior is the same as before.
Also replaces include references to files in the regression dir with
relative paths, and cleans up some copy-paste patterns.
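Typical usage (the include path here is hypothetical):

    cmake -D OPENSUBDIV_INCLUDE_DIR=/path/to/opensubdiv/include <source-dir>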
Add EvalStencils and EvalPatches APIs to most of the CPU and GPU
evaluators. With this change, the Eval API in the osd layer consists of
the following parts (a usage sketch follows the notes below):
- Evaluators (Cpu, Omp, Tbb, Cuda, CL, GLXFB, GLCompute, D3D11Compute)
  implement EvalStencils and EvalPatches(*). Both support derivatives
  (not fully implemented yet)
- Interop vertex buffer classes (optional, same as before)
  Note that these classes are not necessary to use the Evaluators.
  All evaluators have EvalStencils/Patches overloads which take
  device-specific buffer objects; for example, GLXFBEvaluator can take a
  GLuint directly for both stencil tables and input primvars. That said,
  using these interop classes makes it easy to integrate osd into
  relatively simple applications.
- device-dependent StencilTable and PatchTable (optional)
  These can be used simply as substitutes for Far::StencilTable and
  Far::PatchTable with the osd evaluators.
- PatchArray, PatchCoord, PatchParam
  These are tiny structs used for GPU-based patch evaluation.
(*) TODO and known issues:
- CLEvaluator and D3D11Evaluator's EvalPatches() have not been implemented.
- GPU Gregory patch evaluation has not been implemented in EvalPatches().
- CudaEvaluator::EvalPatches() is very unstable.
- None of the patch evaluation kernels have been well optimized yet.
- The GLXFB kernel doesn't currently support derivative evaluation;
  there's a technical difficulty with the multi-stream output.
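As a rough illustration of the CPU path (a sketch only; exact signatures
may differ, and the buffer classes are the optional interop ones):

    // Sketch: apply a Far::StencilTable to coarse vertex data to get
    // the refined points. CpuEvaluator is a set of static functions,
    // so no evaluator instance is needed on the CPU backend.
    void refine(OpenSubdiv::Far::StencilTable const *stencilTable,
                OpenSubdiv::Osd::CpuVertexBuffer *srcVertexBuffer,
                OpenSubdiv::Osd::CpuVertexBuffer *dstVertexBuffer) {
        using namespace OpenSubdiv;
        // 3-float position data, tightly packed
        Osd::VertexBufferDescriptor srcDesc(/*offset*/ 0, /*length*/ 3,
                                            /*stride*/ 3);
        Osd::VertexBufferDescriptor dstDesc(/*offset*/ 0, /*length*/ 3,
                                            /*stride*/ 3);
        Osd::CpuEvaluator::EvalStencils(srcVertexBuffer, srcDesc,
                                        dstVertexBuffer, dstDesc,
                                        stencilTable);
    }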
In the osd layer, we now use GLPatchTable (D3D11PatchTable) instead of
DrawContext as the device-specific representation of FarPatchTables.
GLPatchTable may be used not only for drawing but also for the GPU eval
APIs (not yet supported though; we may add CudaPatchTable etc. as
needed). A usage sketch follows below.
The legacy gregory patch drawing buffers are carved out into a separate
class, named GLLegacyGregoryPatchTable.
Also, face-varying data is moved to the client side for now, until we add
a new and more robust face-varying drawing structure (scheduled for the
3.1 release).
Tentatively replicates the PatchArray structure in GLPatchTable; it will
be revised in an upcoming change.
Also shifts the hard-coded SRV locations of the legacy gregory buffers in
the HLSL shaders.
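The intended drawing usage is roughly (a sketch; the accessor name is an
assumption, and farPatchTable is assumed to be in scope):

    // Build a device-specific patch table from the far patch tables
    // and bind its index buffer for drawing.
    Osd::GLPatchTable *patchTable =
        Osd::GLPatchTable::Create(farPatchTable);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER,
                 patchTable->GetPatchIndexBuffer());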
Remove DrawRegistry from the osd layer and put a simple shader caching
utility into examples/common. The osd layer now only provides the patch
shader snippets and lets the client configure and compile the code.
Clients also manage the lifetime of the shader objects, which is
preferable for real application integration.
Update all examples to use the new scheme.
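Client-side shader assembly then looks roughly like this (a sketch; the
snippet entry point and enum value are assumptions based on this scheme):

    // The osd layer supplies only the patch shader snippet; the client
    // prepends its own configuration, then compiles, caches, and owns
    // the resulting shader object.
    std::string source = clientDefines;   // client configuration
    source += Osd::GLSLPatchShaderSource::GetVertexShaderSource(
        Far::PatchDescriptor::REGULAR);   // osd-provided snippet
    GLuint shader = glCreateShader(GL_VERTEX_SHADER);
    char const *src = source.c_str();
    glShaderSource(shader, 1, &src, NULL);
    glCompileShader(shader);              // lifetime managed by client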
Since the unified shading work already removed the subPatch info from
Osd::PatchDescriptor, the difference between Far::PatchDescriptor and
Osd::PatchDescriptor is just maxValence and numElements, which are used
for legacy gregory patch drawing.
Both maxValence and numElements are actually constant within a topology
(drawContext). This change moves maxValence to DrawContext and lets the
client manage numElements, so we can eliminate Osd::PatchDescriptor and
simply use Far::PatchDescriptor instead.
This is still an intermediate step toward further DrawRegistry
refactoring. For the time being, an EffectDesc struct is added to hold
maxValence and numElements, to be maintained by the clients. They will be
cleaned up later.
A side benefit of this change is that we no longer need to recompile the
regular b-spline shaders for different max valences.
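The client-maintained struct looks roughly like this (a hypothetical
sketch; the field set follows the description above):

    // Hypothetical client-side effect descriptor: the far patch
    // descriptor plus the two values dropped from Osd::PatchDescriptor.
    struct EffectDesc {
        Far::PatchDescriptor desc;   // patch type etc.
        int maxValence;              // constant within a topology
        int numElements;             // managed by the client
    };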
In OpenSubdiv 2.x, we encapsulated the subdivision tables into a compute
context in the osd layer, since those tables are order-dependent and have
to be applied in a certain manner. In 3.0, we adopted stencil-table based
refinement, which is simpler and makes such an encapsulation unnecessary.
The 2.x API also had several ownership issues around GPU kernel caching,
and forced unnecessary instantiation of controllers even though cpu
kernels, unlike GPU ones, typically don't need instances.
This change completely revisits the osd client-facing APIs. All contexts
and controllers are replaced with device-specific tables and evaluators.
We still get a consistent API across the various device backends, but
unnecessary complexity has been removed. For example, the cpu evaluator
is just a set of static functions, and there's no need to replicate
FarStencilTables into a ComputeContext.
Also, the new API delegates ownership of the compiled GPU kernels to
clients, for better resource management, especially in multi-GPU
environments (see the sketch after the TODO list below).
In addition to integrating ComputeController and EvalStencilController
into the single function Evaluator::EvalStencils(), an EvalLimit API is
also added to the Evaluator. This works but is still in progress, and
we'll make a follow-up change for the complete implementation.
- some naming convention changes:
  GLSLTransformFeedback to GLXFBEvaluator
  GLSLCompute to GLComputeEvaluator
- move the LimitLocation struct into examples/glEvalLimit
We're still discussing the patch evaluation interface; basically we'd
like to tease all ptex-specific parametrization out of the far/osd
layers.
TODO:
- implement EvalPatches() in the right way
- the derivative evaluation API is still interim
- VertexBufferDescriptor needs a better API to advance its location
- the synchronization mechanism is not ideal (too global)
- the OsdMesh class is hacky and needs to be fixed
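A rough sketch of the new pattern for a GPU backend (names follow the
text above; exact signatures are illustrative, and farStencilTable,
srcBuffer, dstBuffer, and the descriptors are assumed to be in scope):

    // The client creates and owns the compiled GPU kernel (evaluator)
    // and passes it in explicitly. The device-specific stencil table
    // replaces the old ComputeContext.
    Osd::GLStencilTableSSBO *deviceStencils =
        Osd::GLStencilTableSSBO::Create(farStencilTable);
    Osd::GLComputeEvaluator *evaluator =
        Osd::GLComputeEvaluator::Create(srcDesc, dstDesc);

    Osd::GLComputeEvaluator::EvalStencils(srcBuffer, srcDesc,
                                          dstBuffer, dstDesc,
                                          deviceStencils,
                                          evaluator);   // client-owned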
Refactor the CL/CUDA-specific initialization code into
examples/common/clDeviceContext and cudaDeviceContext, and update the
examples to use those structs (see the sketch after the notes below).
Also:
- remove the CL/CUDA tests from osd_regression; the tests for those
  kernels will be covered by glImaging.
- update the cuda initialization to use the GL-interoperable device if
  available.
- remove the CL specialization from glShareTopology, following the same
  pattern as the previous OsdGLMesh refactoring (something is still
  strange with the XFB kernels though).
- fix file permissions.
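The examples then share an initialization pattern along these lines (a
sketch; the method names on CLDeviceContext are assumptions):

    // One device context per example, initialized once and handed to
    // the CL kernels and vertex buffers.
    static CLDeviceContext clDeviceContext;
    if (clDeviceContext.IsInitialized() == false) {
        if (clDeviceContext.Initialize() == false) {
            printf("Error in initializing OpenCL\n");
            exit(1);
        }
    }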
Removed the OpenCL/D3D11 specializations and added DEVICE_CONTEXT as a
template parameter. Kernels which don't need a context object (e.g. CPU,
OpenGL, cuda) just ignore the context, while kernels which do use one
(e.g. OpenCL, DirectX) take a context, or a user-defined class which
encapsulates the device contexts. Note that OpenCL requires two objects,
a cl_context and a cl_command_queue: the user-defined class must provide
GetContext() and GetCommandQueue() for strongly typed binding to the osd
VertexBuffers and ComputeContexts.
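For example, a user-defined class satisfying that contract could look
like this (the class name is hypothetical):

    // Encapsulates the two objects OpenCL needs. osd VertexBuffers and
    // ComputeContexts bind to it through these strongly typed accessors.
    class MyCLDeviceContext {
    public:
        cl_context GetContext() const { return _clContext; }
        cl_command_queue GetCommandQueue() const { return _clQueue; }
    private:
        cl_context _clContext;
        cl_command_queue _clQueue;
    };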
Osd::Mesh and MeshInterface have been a handy harness for hosting
multiple GPU kernels and graphics APIs. However, they had CL/DirectX
specializations and duplicated a large amount of plumbing code. With this
change, glMesh.h and d3d11Mesh.h become just typedefs, and all the logic
is put into mesh.h without specializations.
Also cleaned up unused header files and code formatting.
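glMesh.h then reduces to typedefs along these lines (the template
arguments shown are illustrative):

    // Hypothetical instantiation: one generic Osd::Mesh template hosts
    // any backend through its template parameters, with the device
    // context passed as the last argument instead of a specialization.
    typedef Osd::Mesh<Osd::CLGLVertexBuffer,
                      Osd::CLComputeController,
                      Osd::GLDrawContext,
                      CLDeviceContext> CLMeshType;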
This change moves all gregory patch generation out of
Far::PatchTablesFactory, so that we can construct patch tables without
stencil tables, and so that clients can choose among the end patch
strategies (we have 3 options for now: legacy 2.x style gregory patches,
gregory basis patches, and an experimental regular patch approximation).
Also, Far::EndCapGregoryBasisPatchFactory provides an index mapping from
patch index to vtr face index, which can be used for single gregory patch
evaluation on top of the refined points, without involving the heavier
stencil table generation.
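Selecting a strategy is then a factory option, roughly (a sketch; the
option setter and enum spellings are assumptions, and refiner is assumed
to be in scope):

    // Build patch tables with gregory basis end caps; no stencil
    // tables are required for this.
    Far::PatchTablesFactory::Options options;
    options.SetEndCapType(
        Far::PatchTablesFactory::Options::ENDCAP_GREGORY_BASIS);
    Far::PatchTables const *patchTables =
        Far::PatchTablesFactory::Create(*refiner, options);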