Merge pull request #686 from davidgyu/doctest

Added some minor grammar and spelling fixes
George ElKoura 2015-06-22 22:01:24 -07:00
commit b7e7334e43
2 changed files with 37 additions and 37 deletions

View File

@@ -18,7 +18,7 @@
Unless required by applicable law or agreed to in writing, software
distributed under the Apache License with the above modification is
distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied. See the Apache License for the specific
+KIND, either express or implied. See the Apache License for the specific
language governing permissions and limitations under the Apache License.
Porting Guide: 2.x to 3.0
@@ -87,7 +87,7 @@ recommended for usage where this conversion process is critical.
Details on how to construct a TopologyRefiner can be found in the
`Far overview <far_overview.html#far-topologyrefinerfactory>`__ documentation.
Additionally, documentation for Far::TopologyRefinerFactory<MESH> outlines the
-requirements and a Far tutorial (tutorials/far/tutorial_1) provides an example
+requirements, and a Far tutorial (tutorials/far/tutorial_1) provides an example
of a factory for directly converting HbrMeshes to TopologyRefiners.
It's worth a reminder here that Far::TopologyRefiner contains only topological
@@ -183,7 +183,7 @@ still grouped for the same reasons. There are two issues here though:
* the ordering of components within these groups is not guaranteed to have
been preserved
-Vertices in a refined level are grouped according the type of component in
+Vertices in a refined level are grouped according to the type of component in
the parent level from which they originated, i.e. some vertices originate
from the center of a face (face-vertices), some from an edge (edge-vertices)
and some from a vertex (vertex-vertices). (Note that there is a conflict in
@@ -204,10 +204,10 @@ Version and option Vertex group ordering
3.0 orderVerticesFromFacesFirst = true face-vertices, edge-vertices, vertex-vertices
============================================ =============================================
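To make the orderings concrete: a parent level with F faces, E edges and V vertices refines to exactly F + E + V child vertices, one per parent component. The helper below is a sketch of ours, not an OpenSubdiv function, and it assumes the default ordering places vertex-vertices first, which is what makes the cage mapping described in the text trivial:

```cpp
#include <cassert>

// Index of the child of cage vertex i in level 1, for a cage with F faces
// and E edges. With vertex-vertices first (assumed default), cage vertex i
// maps trivially to child vertex i at every level; with faces first it is
// pushed past the F face-vertices and E edge-vertices.
inline int childOfCageVertex(int F, int E, int i, bool facesFirst) {
    return facesFirst ? (F + E + i) : i;
}
```

For a cube cage (F = 6, E = 12, V = 8), cage vertex 3 maps to child vertex 3 in the default ordering, and to 6 + 12 + 3 = 21 with orderVerticesFromFacesFirst = true.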
-The decision to change the default ordering was based on common feedback,
-and the rationale being that it allows a trivial mapping from vertices in
-the cage to their descendants at all refinement levels. While the grouping
-is fundamental to the refinement process, the ordering of the groups is
+The decision to change the default ordering was based on common feedback;
+the rationale was to allow a trivial mapping from vertices in the cage to
+their descendants at all refinement levels. While the grouping is
+fundamental to the refinement process, the ordering of the groups is
internally flexible, and the full set of possible orderings can be made
publicly available in the future if there is demand for such flexibility.
@@ -215,7 +215,7 @@ The ordering of vertices within these groups was never clearly defined given
the way that HbrMesh applied its refinement. For example, for the
face-vertices in a level, it was never clear which face-vertices would be
first as it depended on the order in which HbrMesh traversed the parent faces
-and generated them, and given one face, HbrMesh would often visit neighboring
+and generated them. Given one face, HbrMesh would often visit neighboring
faces first before moving to the next intended face.
The ordering with Far::TopologyRefiner is much clearer and more predictable. Using
@@ -234,10 +234,10 @@ and also for other components in refined levels, i.e. the child faces and
edges.
For child faces and edges, more than one will originate from the same parent
-face or edge. So in addition to the overall ordering based on the order of
-the parent faces or edges, an additional ordering is imposed on multiple
-children originating from the same face or edge. They will be ordered based
-on the corner- or end-vertex with which they are associated.
+face or edge. In addition to the overall ordering based on the parent faces
+or edges, another ordering is imposed on multiple children originating from
+the same face or edge. They will be ordered based on the corner or
+end-vertex with which they are associated.
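This two-level ordering of child faces reduces to simple index arithmetic in the common case. The function below is purely illustrative (not an OpenSubdiv call) and assumes a level containing only quads, where each parent face yields exactly four children, one per corner:

```cpp
#include <cassert>

// Children are grouped by parent face (following the parent face ordering)
// and, within a parent, ordered by the corner vertex with which they are
// associated. For an all-quad level with corners numbered 0..3:
inline int childFaceIndex(int parentFace, int corner) {
    return 4 * parentFace + corner;
}
```

So the child of face 2 associated with its corner vertex 3 has index 4 * 2 + 3 = 11.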
In the case of refined faces, another way to view the ordering is to consider
the way that faces are originally defined -- by specifying the set of vertices
@@ -295,8 +295,8 @@ OsdVertexBufferDescriptor Osd::BufferDescriptor
ComputeContext, DrawContext
+++++++++++++++++++++++++++
-Essentially replaced with API-specific StencilTable and PatchTable objects, for
-example Osd::GLStencilTableSSBO.
+ComputeContext and DrawContext have been replaced with API-specific StencilTable
+and PatchTable objects, for example Osd::GLStencilTableSSBO.
======================================= ========================================
OpenSubdiv 2.x OpenSubdiv 3.0
@@ -346,8 +346,8 @@ Feature Adaptive Shader Changes
===============================
In 3.0, the feature adaptive screen-space tessellation shaders have been
-dramatically simplified and the client-facing API has changed dramatically as
-well. The primary shift is to reduce the total number of shader combinations and
+dramatically simplified, and the client-facing API has changed dramatically as
+well. The primary shift is to reduce the total number of shader combinations, and
as a result, some of the complexity management mechanisms are no longer
necessary.

View File

@@ -50,16 +50,16 @@ scheme and its associated options. The latter two provide the basis for a
more comprehensive implementation of subdivision, which requires considerably
more understanding and effort.
-Overall the approach taken was to extract the functionality at as low a
-level as possible. In some cases they are not far from being simple global
-functions. The intent was to start at a low level and build any higher
+Overall, the approach was to extract the functionality at the lowest level
+possible. In some cases, the implementation is not far from being simple
+global functions. The intent was to start at a low level and build any higher
level functionality as needed. What exists now is functional for ongoing
development and anticipated needs within OpenSubdiv for the near future.
-Its also worth noting that the intent of Sdc is to provide the building
-blocks for OpenSubdiv and its clients to efficiently process the specific
-set of subdivision schemes that are supported. It is not intended to be
-a general framework for defining customized subdivision schemes.
+The intent of Sdc is to provide the building blocks for OpenSubdiv and its
+clients to efficiently process the specific set of supported subdivision
+schemes. It is not intended to be a general framework for
+defining customized subdivision schemes.
Types, Traits and Options
@@ -90,7 +90,7 @@ around to other Sdc classes and/or methods and are expected to be used at a high
level both within OpenSubdiv and externally. By aggregating the options and
passing them around as a group, it allows us to extend the set easily in future
without the need to rewire a lot of interfaces to accommodate the new choice.
-Clients can enables new choices at the highest level and be assured that they will
+Clients can enable new choices at the highest level and be assured that they will
propagate to the lowest level where they are relevant.
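The aggregation pattern being described can be sketched as a small struct of bitfields; the member and enumeration names below are illustrative stand-ins, not the actual Sdc::Options interface:

```cpp
#include <cassert>

// A cheap-to-copy bundle of choices: packing each option into a few bits
// keeps the struct trivial to pass down through every interface that takes
// it, and adding a new option later does not change any signatures.
class Options {
public:
    enum BoundaryInterpolation { BOUNDARY_NONE = 0, BOUNDARY_EDGE_ONLY = 1 };
    enum CreasingMethod        { CREASE_UNIFORM = 0, CREASE_CHAIKIN = 1 };

    Options() : _boundary(BOUNDARY_NONE), _creasing(CREASE_UNIFORM) {}

    void SetBoundaryInterpolation(BoundaryInterpolation b) {
        _boundary = (unsigned int)b;
    }
    BoundaryInterpolation GetBoundaryInterpolation() const {
        return (BoundaryInterpolation)_boundary;
    }
    void SetCreasingMethod(CreasingMethod c) { _creasing = (unsigned int)c; }
    CreasingMethod GetCreasingMethod() const { return (CreasingMethod)_creasing; }

private:
    unsigned int _boundary : 2;  // two bits each leaves room to grow
    unsigned int _creasing : 2;
};
```

A client sets its choices once at the top level and hands the whole struct down, rather than threading individual flags through every call.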
Unlike other "options" structs used elsewhere to specify variations of a
@@ -109,16 +109,16 @@ independent of the subdivision scheme, the goal in Sdc was to encapsulate all
related creasing functionality in a similarly independent manner. Computations
involving sharpness values are also much less dependent on topology -- there
are vertices and edges with sharpness values, but knowledge of faces or boundary
-edges is not required -- so the complexity of topological neighborhoods required
+edges is not required, so the complexity of topological neighborhoods required
for more scheme-specific functionality is arguably not necessary here.
Creasing computations have been provided as methods defined on a Crease class
that is constructed with a set of Options. Its methods typically take sharpness
-values as inputs and compute one or a corresponding set of new sharpness values
+values as inputs and compute a corresponding set of sharpness values
as a result. For the "Uniform" creasing method (previously known as *"Normal"*),
the computations may be so trivial as to question whether such an interface is
-worth it, but for "Chaikin" or other schemes in future that are non-trivial, the
-benefits should be clear. Functionality is divided between both uniform and
+worth it, but for "Chaikin" or other schemes in the future that are non-trivial,
+the benefits should be clear. Functionality is divided between both uniform and
non-uniform, so clients have some control over avoiding unnecessary overhead,
e.g. non-uniform computations typically require neighboring sharpness values
around a vertex, while uniform does not.
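The two creasing methods can be sketched as below. These formulas are our reading of the rules, not a copy of Sdc::Crease; in particular, treat the exact Chaikin weights as an assumption:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Uniform ("Normal") rule: a child edge's sharpness is simply the parent's
// sharpness decremented by one, never going below smooth (0). No neighboring
// sharpness values are needed.
inline float subdivideUniform(float parent) {
    return std::max(parent - 1.0f, 0.0f);
}

// Chaikin rule (assumed weights): the child blends 3/4 of its parent edge's
// sharpness with 1/4 of the average sharpness of the other edges incident
// to the shared vertex, then decays by one -- hence the need for neighboring
// sharpness values noted in the text.
inline float subdivideChaikin(float parent, std::vector<float> const& neighbors) {
    float avg = 0.0f;
    for (size_t i = 0; i < neighbors.size(); ++i) avg += neighbors[i];
    if (!neighbors.empty()) avg /= (float)neighbors.size();
    return std::max(0.75f * parent + 0.25f * avg - 1.0f, 0.0f);
}
```

The uniform rule touches only one value, while the Chaikin rule must gather sharpness around the vertex, which is exactly the overhead the uniform/non-uniform split lets clients avoid.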
@@ -153,9 +153,9 @@ Scheme-specific support
While the SchemeTypeTraits class provides traits for each subdivision scheme
supported by OpenSubdiv (i.e. *Bilinear*, *Catmark* and *Loop*), the Scheme class
-provides these more directly, along with methods for computing the various sets
-of weights used to compute new
-vertices resulting from subdivision. The collection of weights used to compute
+provides these more directly. Additionally, the Scheme class provides methods
+for computing the various sets of weights used to compute new vertices resulting
+from subdivision. The collection of weights used to compute
a single vertex at a new subdivision level is typically referred to as a
*"mask"*. The primary purpose of the Scheme class is to provide such masks in a
manner both general and efficient.
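For example, the widely published smooth interior Catmull-Clark vertex-vertex rule gives the parent vertex a weight of (n-2)/n at valence n, and each incident edge and incident face a weight of 1/n&#178;. A sketch of such a mask follows; the types and function are ours, not the Scheme interface:

```cpp
#include <cassert>
#include <vector>

// One weight for the parent vertex, one per incident edge, and one per
// incident face; the weights of a valid mask sum to 1.
struct VertexMask {
    float vertexWeight;
    std::vector<float> edgeWeights;
    std::vector<float> faceWeights;
};

// Smooth interior Catmull-Clark vertex-vertex mask at the given valence:
//   vertex: (n-2)/n,  each edge: 1/n^2,  each face: 1/n^2
inline VertexMask catmarkSmoothVertexVertexMask(int valence) {
    VertexMask m;
    float n = (float)valence;
    m.vertexWeight = (n - 2.0f) / n;
    m.edgeWeights.assign(valence, 1.0f / (n * n));
    m.faceWeights.assign(valence, 1.0f / (n * n));
    return m;
}
```

At valence 4 this yields 1/2 for the vertex and 1/16 for each of the four edges and four faces, which sums to 1 as expected.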
@@ -242,7 +242,7 @@ The <FACE>, <EDGE> and <VERTEX> interfaces
Mask queries require an interface to a topological neighborhood, currently
labeled **FACE**, **EDGE** and **VERTEX**. This naming potentially implies more
-generality than intended as such classes are only expected to provide the
+generality than intended, as such classes are only expected to provide the
methods required of the mask queries to compute its associated weights. While
all methods must be defined, some may rarely be invoked, and the client has
considerable flexibility in the implementation of these: they can defer some
@@ -287,8 +287,8 @@ The information requested of these classes in the three mask queries is as follows:
The latter should not be surprising given the dependencies noted above. There
are also a few more to consider for future use, e.g. whether the **EDGE** or
-**VERTEX** is manifold or not. In most cases additional information can be
-provided to the mask queries (i.e. pre-determined Rules) and most of the child
+**VERTEX** is manifold or not. In most cases, additional information can be
+provided to the mask queries (i.e. pre-determined Rules), and most of the child
sharpness values are not necessary. The most demanding situation is a
fractional crease that decays to zero -- in which case all parent and child
sharpness values in the neighborhood are required to determine the proper
@@ -320,9 +320,9 @@ So the mask queries require the following capabilities:
through a set of methods required of all *MASK* classes. Since the maximum
number of weights is typically known based on the topology, usage within Vtr,
-*Far* or *Hbr* is expected to simply define buffers on the stack or in
-pre-allocated tables to be partitioned into the three sets of weights on
-construction of a *MASK* and then populated by the mask queries.
+*Far* or *Hbr* is expected to simply define buffers on the stack. Another
+option is to utilize pre-allocated tables, partitioned into the three sets
+of weights on construction of a *MASK*, and populated by the mask queries.
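That buffer-partitioning idea can be sketched as follows, with names of our own invention rather than anything from Vtr or Far; templating on the weight type also illustrates the single- vs double-precision choice noted in the text:

```cpp
#include <cassert>

// A minimal MASK-style wrapper: it owns no memory, and simply partitions one
// client-owned buffer (on the stack or in a pre-allocated table) into the
// three sets of weights, laid out as [vertex | edge | face].
template <typename REAL>
class StackMask {
public:
    StackMask(REAL* buffer, int numVertexWeights, int numEdgeWeights)
        : _vtx(buffer),
          _edge(buffer + numVertexWeights),
          _face(buffer + numVertexWeights + numEdgeWeights) {}

    REAL& VertexWeight(int i) { return _vtx[i]; }
    REAL& EdgeWeight(int i)   { return _edge[i]; }
    REAL& FaceWeight(int i)   { return _face[i]; }

private:
    REAL *_vtx, *_edge, *_face;
};
```

A mask query then writes through the accessors while the weights land contiguously in the caller's buffer, with no allocation per mask.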
A potentially useful side-effect of this is that the client can define their
weights to be stored in either single or double-precision. With that