Re: Not a HOWTO but a guide of some sort...
Johnny Taporg wrote:
>Which brings us to the next level: SDL, ClanLib, and
>PTC. PenguinPlay would also go at this level, but
>I'm yet to be aware of anything that they've declared
>ready for use (plus my poor gcc 2.7.x chokes on some
>of the C++ from the new standard).
See below. It works, but I wouldn't recommend it for "general use" yet.
Yep. Much of our code requires at least egcs 1.1.
And yes, I'm convinced this is ok (really).
>PenguinPlay, ... well? Christian, you want to say
>something here? As far as the rest of us know, we're
>still waiting for a working release of some kind.
Ok, a working snapshot of Penguin2D was announced on Freshmeat about 2
weeks ago. At this stage it's mainly a C++ wrapper around the most
important LibGGI features.
I'm currently doing some cleanups on the code. Things are going a bit
slowly right now because Adrian (the Penguin2D author) is somewhere in a
remote area of the world and thus has problems accessing the code, and I
have only just started hacking on it. But I think this cleanup phase
won't take much more than 2-3 weeks.
After that we'll implement the remaining LibGGI functionality (relatively
easy; mainly call forwarding, eventually some infrastructure extensions).
And when this is done, there will be some more code cleanup and most
likely an alpha release before we start working on stuff like
transparency.
So, if you have LibGGI and a reasonably recent compiler (i.e. egcs 1.1 or
greater) installed, you can get the snapshot and play around with it. But
it's not ready for prime time yet.
>I'll also point out that the GGI team is working on
>something called libggi3d. This library (only in the
>formative stages, as far as I can tell) is intended
>to let you use the hardware accels on any 3D card you
>have. I'm not sure if this library will be OpenGL, or
>run below OpenGL, but when/if it happens count on it
You're lucky - the libggi3d proposal just appeared on the ggi-devel list:
-----------------------------------------------------------------------
LibGGI3D is a system for building componentized 3D rendering pipelines.
It is not a 3D graphics API like OpenGL, PHIGS or Direct3D. Rather, it is
a "toolkit" which can be used to build the rendering "guts" of a complete
3D graphics system. An API such as OpenGL would be built on top of
LibGGI3D.
It is similar in philosophy to the University of Utah "OSKit":
http://www.cs.utah.edu/projects/flux/oskit
LibGGI3D is an attempt to solve an old problem in graphics: how to enable
graphics-using applications to transparently take advantage of any type of
hardware acceleration that may be present, while also taking into account
the rest of the hardware on the system as well as the software.
The combinatorial explosion of possible rendering paths makes such
optimization very difficult.
"Capability Masks" such as are used by Direct3D are a lousy solution
because they arbitrarily restrict the types of acceleration that the
hardware may provide to an API. This is one of the reasons why Direct3D is
such a joke - because the capability masks are built around existing
hardware features, it becomes difficult (or in some cases impossible) to
cleanly support new hardware features in the API as they become
available. The history of D3D development is one of incompatible API
after incompatible API, an inevitable consequence of its design.
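To make the complaint concrete, here is a minimal sketch of a
capability-mask style query (flag names invented for illustration, not
the actual Direct3D API). The mask is a fixed set of predefined bits, so
a genuinely new kind of acceleration has no bit through which to announce
itself:

#include <stdio.h>

#define CAP_BILINEAR_FILTER  (1u << 0)
#define CAP_ZBUFFER          (1u << 1)
#define CAP_ALPHA_BLEND      (1u << 2)
/* ...every new feature needs a new predefined bit here... */

static unsigned query_driver_caps(void)
{
    /* a driver can only report what the mask already has bits for */
    return CAP_BILINEAR_FILTER | CAP_ZBUFFER;
}

int main(void)
{
    unsigned caps = query_driver_caps();

    if (caps & CAP_ALPHA_BLEND)
        printf("using hardware alpha blending\n");
    else
        printf("falling back to a software blend path\n");
    return 0;
}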
OpenGL is better in that it explicitly allows for the API to be extended
and also in that it tries to be as generic and flexible as possible and
allow for many "cut-ins" (my terminology) - places in the API where the
user can hook into the rendering pipeline and take control over some
aspect of the rendering process. However, in the final analysis the user
is still limited to the flexibility inherent in the API. Extensions must
be queried for presence before use. Abstractions such as surfaces,
texturing, frame buffers and even the notion of rasterization itself are
embedded into the API and the internal structure of the code paths. And
the cut-ins, while well designed, are still coarse-grained and constrict
the space of potential code-path optimizations.
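For instance, the query-before-use burden looks roughly like this. The
extension string is hard-coded so the sketch stays self-contained; in a
real program it would come from glGetString(GL_EXTENSIONS) on a current
context:

#include <stdio.h>
#include <string.h>

/* check for a whole-word match so e.g. "GL_EXT_texture" does not also
 * match "GL_EXT_texture3D" */
static int has_extension(const char *ext_list, const char *name)
{
    size_t len = strlen(name);
    const char *p = ext_list;

    while ((p = strstr(p, name)) != NULL) {
        if ((p == ext_list || p[-1] == ' ') &&
            (p[len] == ' ' || p[len] == '\0'))
            return 1;
        p += len;
    }
    return 0;
}

int main(void)
{
    /* real program: (const char *)glGetString(GL_EXTENSIONS) */
    const char *exts = "GL_EXT_texture3D GL_ARB_multitexture";

    if (has_extension(exts, "GL_ARB_multitexture"))
        printf("multitexture code path available\n");
    else
        printf("single-texture fallback path\n");
    return 0;
}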
Both OpenGL and D3D commit the cardinal sin of not cleanly
separating interface and implementation of the API. That these should
remain clean and separate is a fundamental tenet of modern programming.
OpenGL also does not facilitate the clean re-use of code, because
optimizations are either suboptimal due to API restrictions or optimal but
customized and non-modular. No matter how much time the OpenGL designers
put into the cut-in design and extensions, someone somewhere is going to
find themselves limited in what they can do. Perhaps some type of new
hardware is released which requires a new cut-in to properly optimize its
use. Or someone comes up with a novel type of rendering (such as portal
code) which by its design cannot properly be optimized for in an
OpenGL-allowed rendering path.
It is a testament to the genius of the OpenGL designers at SGI
that OpenGL works as well as it does. But that genius is beginning to be
stretched pretty thin. Each successive version of the OpenGL API adds
layer after layer of bloat. The API now stretches to hundreds of calls.
Apps must be able to parse and handle an increasing number of specialized
extensions until (and if) those extensions are approved by the OpenGL ARB
(Architectural Review Board) and incorporated into the next revision of
the API. The ARB is necessary because OpenGL must take into account so
many differing factors for design purposes:
* Past, present and future hardware designs
* Commercial interests of the ARB members
* Backwards-compatibility requirements for older revisions of the API
...that a firm hand on the wheel (so to speak) is absolutely necessary to
keep OpenGL useable. It should be obvious that a piece of code which
requires this much fussing-over to maintain must have design problems
somewhere. OpenGL is, in the main, a very good standard 3D graphics API.
But it needs to have its interface and implementation disentangled from
each other or it will inevitably choke itself to death as it grows - a
process which some would claim is well underway.
What is needed is a good way to allow for (potentially) infinitely
fine-grained control over the rendering codepath, *without* having to
account for it in the structure of the API itself. APIs would be designed
as a thin layer on top of this system, whose role is to manage the
on-the-fly stringing-together of the available rendering components in
such a way as to optimize the 'expression' of that API on the given
hardware and software. LibGGI3D provides exactly this.
LibGGI3D allows one to build these rendering pipeline code paths out of
components. As more generic components are replaced over time by
specialized components tuned to the particulars of the API, other
components in the pipeline, and the hardware and software present, the
optimization level of the system as a whole will mature.
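As a rough sketch of that idea (the structure and names below are
invented, not the actual LibGGI3D interface), each pipeline stage can sit
behind a small table of function pointers, so a generic stage can later be
replaced by a specialized one without touching its neighbours:

#include <stdio.h>

struct g3d_stage {
    const char *name;
    void (*render)(struct g3d_stage *self, const float *verts, int n);
    struct g3d_stage *downstream;   /* next component in the path */
};

/* generic, software-only transform stage */
static void generic_transform(struct g3d_stage *self, const float *v, int n)
{
    printf("[%s] transforming %d vertices in software\n", self->name, n);
    if (self->downstream)
        self->downstream->render(self->downstream, v, n);
}

/* stage specialized to a particular chipset's triangle engine */
static void hw_rasterize(struct g3d_stage *self, const float *v, int n)
{
    (void)v;
    printf("[%s] handing %d vertices to the chipset\n", self->name, n);
}

int main(void)
{
    struct g3d_stage raster = { "hw-raster", hw_rasterize, NULL };
    struct g3d_stage xform  = { "xform", generic_transform, &raster };
    float tri[9] = { 0.0f };

    /* run the assembled path; replacing 'xform' with a CPU-specific
     * component would not require touching 'raster' at all */
    xform.render(&xform, tri, 3);
    return 0;
}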
Mature LibGGI3D rendering paths are distinguished from immature ones by
what I will refer to as their 'degree of adaptation', or DOA, to the
hardware (video chipset, CPU, bus, etc) and software (API) environment
they are meant to run in/on. I will refer to this combined
hardware+software environment as the "niche" of the rendering path. The
DOA of a path is maximized when the path cannot be optimized to its niche
any further - that is, the execution of a particular API call cannot be
recoded to make it any faster.
One of the main drawbacks of OpenGL's coarse-grained cut-in and extension
architecture is its inability to optimize DOA past a certain point.
LibGGI3D rendering paths do not suffer from this limitation, because they
are not tied to any particular API. The only limit to the DOA potential
of a LibGGI3D rendering path is the presence of already-written path
components tuned to the path's niche. This sort of top-to-bottom
customized RP is what is generally accepted as the form a mature,
DOA-maximized OpenGL implementation takes. Writing one of these well is
a great deal of work, because OpenGL doesn't allow for many intermediate
optimization steps.
In practice, it is common to see a 'first cut' OpenGL driver for
new hardware which cuts in a hardware-specific rasterization layer and
leaves the geometry and tessellation steps of the RP unaccelerated. This
sort of product tends to be considerably more optimized than a completely
unaccelerated, software-only OpenGL implementation, but still far from
completely optimized. To create a fully-optimized OpenGL implementation,
every aspect of the RP over the whole OpenGL API must be hand-tuned to the
hardware, with no intermediate optimization steps being possible.
The limiting case of DOA maximization in most cases is likely to
be one monolithic, customized LibGGI3D rendering component. In practice,
though, hyper-optimization of this nature is unlikely to be needed or
useful over the entire RP. In addition, the loss of flexibility and
modularity that such hyper-optimization brings with it would reduce or
eliminate the ability to make use of *new* RP components, such as those
driving new types of hardware acceleration. The
"penny-wise but pound-foolish" nature of hyper-optimized RPs can be seen
by examining older 3D MS-DOS games. Doom's rendering algorithms, for
example, are so highly tuned to VGA/x86/ISA that it is very difficult to
cut into the Doom RP to enable it to take advantage of video cards that
can do 3D acceleration.
LibGGI3D pipeline components are LibGGI extension libraries. Each
component implements downstream and upstream APIs, which define the set
of other LibGGI3D components that can render downstream to it and the set
of components it can itself render downstream to. The
set of all possible downstream rendering paths in a LibGGI3D pipeline
forms a tree. Every leaf represents a correct rendering path (if the
components are correctly constructed, of course).
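A hypothetical sketch of that tree (again with invented names, not the
real interface): each node is a component, each child is a component it
can render downstream to, and a depth-first walk to every leaf enumerates
the candidate rendering paths:

#include <stdio.h>

#define MAX_CHILDREN 4

struct comp {
    const char *name;
    struct comp *child[MAX_CHILDREN]; /* possible downstream components */
    int nchild;
};

/* every root-to-leaf path is one candidate rendering path */
static void walk(const struct comp *c, const char *prefix)
{
    char path[256];
    int i;

    snprintf(path, sizeof path, "%s%s%s",
             prefix, *prefix ? " -> " : "", c->name);
    if (c->nchild == 0) {             /* leaf: a complete rendering path */
        printf("%s\n", path);
        return;
    }
    for (i = 0; i < c->nchild; i++)
        walk(c->child[i], path);
}

int main(void)
{
    struct comp hw_rast = { "hw-raster", { 0 }, 0 };
    struct comp sw_rast = { "sw-raster", { 0 }, 0 };
    struct comp tess    = { "tesselate", { &hw_rast, &sw_rast }, 2 };
    struct comp geom    = { "geometry",  { &tess }, 1 };

    walk(&geom, "");
    return 0;
}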
Obviously, the user is going to want LibGGI3D to render down the
"shortest" (fastest-executing) path from a given root node to a leaf node.
Determining which components need to be bound together in what order to
follow this shortest path is a tricky problem, because it depends on the
API which is using the LibGGI3D componentry, the niche that the component
pipeline will be executing in, and of course the components that are
available on the user's system (not everyone will have the same
components).
Obviously, one could always take the easy way out and reduce the
complexity described above to a few major factors (API, CPU, video
hardware, bus, etc) and standardize the component design to minimize the
combinatorial complexity explosion. It would then be possible to ship
LibGGI3D with a simple, fixed ruleset describing the best component
layout for a given system. However, such an approach would greatly limit
the DOA potential of the system.
The best way to optimize such a system would be to exhaustively benchmark
every possible path in the component tree. The optimal rendering path to
a leaf for each API call would then be encoded into that API's "extension
path set", or EPS. When an API call is made, LibGGI3D would spawn a
thread which would supervise the flow of execution down through the
appropriate set of components to the leaf node. Such exhaustive
optimization should prove to be overkill in most real-world cases, and if
some part of the niche should change it would only be necessary to
re-benchmark the parts of the pipeline tree which are affected by the
change. A new API would require the whole tree to be exhaustively
re-benchmarked, of course.
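A sketch of the benchmarking idea, assuming each candidate path can be
reduced to a function that renders one test frame; the path functions and
the EPS bookkeeping below are stand-ins, not part of any real API:

#include <stdio.h>
#include <time.h>

typedef void (*render_path_fn)(void);

/* busy loops merely stand in for rendering work of different cost */
static void path_software_only(void)
{
    volatile long i;
    for (i = 0; i < 4000000L; i++)
        ;
}

static void path_hw_raster(void)
{
    volatile long i;
    for (i = 0; i < 1000000L; i++)
        ;
}

int main(void)
{
    render_path_fn paths[] = { path_software_only, path_hw_raster };
    const char *names[]    = { "software-only", "hw-raster" };
    double t, best_t = 1e30;
    clock_t t0;
    int i, best = 0;

    for (i = 0; i < 2; i++) {
        t0 = clock();
        paths[i]();                 /* benchmark this candidate path */
        t = (double)(clock() - t0) / CLOCKS_PER_SEC;
        printf("%-14s %.4f s\n", names[i], t);
        if (t < best_t) {
            best_t = t;
            best = i;
        }
    }
    /* this result would be recorded in the API's extension path set */
    printf("EPS entry for this call: %s\n", names[best]);
    return 0;
}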
The componentized nature of LibGGI3D pipelines does not cause speed loss
because the component bindings are implemented via ELF dynamic libraries.
There is no additional indirection needed when jumping from component to
component as the code path travels down the pipeline. The interfaces
between components can take any form at all, and thus can be customized
and optimized on an as-needed basis. All that is needed for the whole
system to work smoothly is for LibGGI3D's rendering pipeline manager to
have available a precalculated API-to-pipeline-path map.
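For illustration, binding a component entry point out of an ELF shared
object could look like the following (library and symbol names are made
up; the mechanism, dlopen()/dlsym(), is the standard ELF one). Once the
pointer is resolved, calling into the component is an ordinary indirect
call:

#include <stdio.h>
#include <dlfcn.h>

/* hypothetical signature of a component's rendering entry point */
typedef void (*render_fn)(const float *verts, int n);

int main(void)
{
    void *handle = dlopen("./libggi3d_hwraster.so", RTLD_NOW);
    render_fn render;

    if (!handle) {
        fprintf(stderr, "dlopen: %s\n", dlerror());
        return 1;
    }
    render = (render_fn)dlsym(handle, "ggi3d_component_render");
    if (!render) {
        fprintf(stderr, "dlsym: %s\n", dlerror());
        dlclose(handle);
        return 1;
    }
    render(NULL, 0);    /* an ordinary indirect call into the component */
    dlclose(handle);
    return 0;
}

(Link with -ldl; in a real setup the .so would be one of the installed
path components.)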
LibGGI3D used to have a minimal, standardized API based on the concept of
shaded triangle drawing. This has been discarded in this revision of the
LibGGI3D spec. Even such a minimalistic API imposes unacceptable
constraints on the flexibility of the LibGGI3D components. That
shaded-triangle API could of course be re-implemented on top of LibGGI3D,
but so could OpenGL or any other API. That is the beauty of LibGGI3D - it
is not tied to any particular API.
------------------------------------------------------------------
>If anyone can comment on SciTech MGL (last I heard
>no linux port, but one was planned), or XWinAllegro,
Stephane Peter (the PPlay founder) is working for SciTech on the MGL
Linux port. AFAIK they had it running on X some months ago (internally),
but I don't know about the current state.
Great Summary!
Cu
Christian
--
if (1==1.003) printf ("Pentium detected!\n");