Hello, Marciano!
First of all, I'd like to say that Horde is rather interesting in several respects. The attempt at resource management like in the real big engines, the XML-based data description, and the decision to skip development of old-hardware-compatible techniques to reduce code size and complexity are the right way to go.
Unfortunately I haven't run your project in a development environment yet, but after a first quick look through your project's source directory, it seems there are some minor inaccuracies in the code; maybe you'll find some of these notes helpful.
Here are some of them:
GroupNode::setParam() must return “false” by default.
SceneManager::removeNode() will remove only half of the requested node's children.
This is because sn->_parent->_children.erase(…) shifts the actual indices of the following sibling nodes down in the parent container. Since this shift is ignored in the loop over the parent node's children ("for( unsigned int i = 0; i < sn->_children.size(); ++i )…"), you end up with unreferenced nodes in the "_nodes" container. Iterating the loop in the reverse direction solves the problem.
Rounding the TBN basis precision to 7 bits + sign (i.e. a char) during serialization (see GeometryResource::load(…)) will cause severe image artifacts, a.k.a. banding. To see it, just load a high-resolution human face mesh or a highly tessellated parametric model like a sphere or cylinder with hundreds (better, thousands) of slices/stacks (per-vertex lighting is enough). Since the TBN value stays the same across the span of several polygons and then "steps" (within a short polygon) to the nearest representable value, hardware interpolation of color (or of the TBN basis) becomes useless. On good CRT monitors you will see distinct color bands up to 12-15 bits of precision (depending on material properties), I'm sure of it. Such image quality degradation is definitely not what you are looking for in a "next-gen" engine.
For tiny malformed files, the geometry and animation load(…) functions will read (not write) memory outside the buffer during header checking.
In the Frustum destructor, the delete operator must be the array form ("delete[] _planes").
In SceneGraphResource::parseNode(…), the shadow context attribute parsing must be "node->shadowContext = xmlNode->Attribute( "shadowContext" )".
Memory leak in the pipeline destructor due to incorrect use of void pointers. In general, void pointers are a really bad thing, and the illusory code simplicity in this case results in a subtle error. The pipeline destructor calls delete on "_pipelineCommands[i].valParams[j]", but through a void pointer, so the global operator delete is used rather than the class-aware one. The memory is freed, but the class destructor is never called. For non-trivial types such as std::string, the additionally allocated memory for data storage (normally released in the destructor) is lost. The behavior depends on the container implementation. Some "smart" containers, like the std::string in MSVC 2005, use a shared buffer for small data and the pointer to additionally allocated memory; for performance reasons, small data (up to 15 characters) is stored in the local buffer. To get the leak, simply increase the length of a context name in "pipeline.xml" in the Chicago sample. With other (more naive) containers, like the MSVC 6.0 std::string or MFC's CString, you will get a leak for any non-zero data length.
Some possible solutions: type-dependent casting of the pointers (bad), or using classes to represent your types (bool/float/string and so on) with a common parent class, and correspondingly an array of typed pointers (much better).
In TextureCubeResource::~TextureCubeResource() the second argument of unloadTexture(…) must be “true”.
After node removal (e.g. removing a single model from a group), the parent's bounding box will not be updated (see SceneNode::update()).
A zero-sized bounding box will get "true" from BoundingBox::makeUnion(…) even when the incoming box is also zero-sized (hence possible performance degradation). To avoid this, check your own size first.
By the way, in some related projects a zero-sized box is interpreted as valid (for a degenerate element such as a particle, for example), so additional flags like "finite", "infinite", and "empty" are frequently used. Also, loop unrolling, as in BoundingBox::transform(…), will not noticeably improve performance on modern short-pipeline CPUs.
TextureCubeResource::load(…) can still load a map of incorrect size (like 12x17) because of the "_height != (_width * 3) / 4" check: the integer division truncates. Use "_height * 4 != _width * 3" instead. By the way, it's time to add support for 4096-sized shadow maps to your project (see EngineConfig::setOption(…)).
In RendererBase::createRenderBuffer(…), "format ==" is missing before "RenderBufferFormats::RGBA32F". As it stands, if floating-point textures are not supported, the user can't create a RenderBuffer at all, even for an RGBA8 call.
To manage cube textures, ResourceTypes::Count must be 8.
In the SGRReferenceNode destructor, the removal of the referenced resource does not happen.
In SceneGraphResource::parseNode(…), LightNode::shadowMapEnabled can only be switched on, never off. So if one day you want it to be on by default in SGRLightNode::SGRLightNode(), that state cannot be changed from the XML data at all.
SGRReferenceNode::sceneGraphRes is not initialized in SGRReferenceNode::SGRReferenceNode(). So if you accidentally omit the "sceneGraph" attribute in the XML data, you will have an erroneous reference after SceneGraphResource::parseNode(…) returns.
At last. Since Horde is declared a 3D engine (though I think "current-gen" rather than "next-gen"), the main goal is, as in other similar projects, image quality and performance. But to be called "current-gen", Horde must AT LEAST support omni and directional lights with shadow mapping, static meshes with separate compact streams for position / TBN basis / texture coordinates / attributes, 3D textures, DDS and HDR texture loading (bye-bye, Corona), pipeline queue sorting, and particles, and to a lesser extent geometry shaders. Handling of advanced data formats (like RGBE), and the ability to load sliced and mipmapped images, would also be helpful.
With best wishes.