Horde3D http://horde3d.org/forums/

Performance problems - multiple cameras/agents http://horde3d.org/forums/viewtopic.php?f=2&t=1660
Author: | Roland [ 03.04.2012, 11:28 ] |
Post subject: | Performance problems - multiple cameras/agents |
Hello everybody. I'm currently investigating the possibility of a vision-based steering approach for characters using Horde3D. Basically I'm using a paper from SIGGRAPH 2010 (Youtube-Video) as an inspiration for what will hopefully turn into my master's thesis. For now I'm stuck because I'm experiencing heavy performance problems. My current application is based on the Chicago sample, with the following "changes":
This is what it looks like using 64 characters: [screenshot] And the view of a character in fullscreen (the colors represent different values like distance from the character, velocity, etc.): [screenshot]

The performance problem: as one can see from the first screenshot, the app is almost 100% CPU bound. Using 200 characters (= 201 cameras), the CPU time spent on one frame is about 150 ms (Core2Quad @ 2.83 GHz / GeForce GTS 250) or even 250 ms (Core i7 @ 2.8 GHz / AMD Radeon HD 7xxx), measured with Visual Studio 2010 in Release mode. Having so many cameras leads to about 10,000 batches per frame, so my initial thought was that reducing these would increase my framerate. Therefore I removed the cylinders (each character had one attached to him, all cylinders using the same material) and instead put one mesh, made up of 200 cylinders, into the scene, the idea being that I could later write a "Dynamic Batching Extension" if this reduced the CPU workload. The new scene: [screenshot]

But this didn't result in the improvement I had hoped for. Now I have one static mesh (200 cylinders) in the scene which each camera renders while it walks around in the scene. This leads to 1 batch per "walker cam" (and a lot more triangles) but only a slight computation-time improvement (120 ms instead of 150 ms on the NVIDIA PC).

Here are the profiling results for the Core2Quad @ 2.83 GHz / GeForce GTS 250 machine: [profile: individual cylinders] [profile: 1 static cylinder mesh] As expected, the speed improvement originates from faster drawGeometry calculations. If you're interested, here are the Core i7 @ 2.8 GHz / AMD Radeon HD profiles: [profile: individual cylinders] [profile: 1 static cylinder mesh] No idea what to make of these...

Anyway, what I'm looking for are suggestions on how to improve my framerate. As a comparison: in the mentioned paper they achieve 25 fps with 200 walkers, including the data transfer from the GPU back to the CPU (which is described as the major bottleneck), on a laptop with an Intel T7800 @ 2.6 GHz CPU and a Quadro FX 3600M graphics card. I have no idea what rendering engine they used.

How can I improve all the render calculations when hundreds of cameras all use the same pipeline? I'd appreciate any comments and suggestions you guys have. I'll gladly share more implementation details, but I don't want this first post to be longer than it has to be.
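In Horde3D terms, "one camera per agent" like this boils down to one h3dRender call per agent camera every frame; a minimal sketch (placeholder names such as Agent, camNode and mainCam, not the actual application code):

Code:
// Minimal sketch of a per-agent render loop with the Horde3D C API.
// Each h3dRender call culls the scene and submits its own batches,
// which is how 200 agent cameras add up to roughly 10,000 batches per frame.
#include <Horde3D.h>
#include <vector>

struct Agent
{
    H3DNode camNode;   // camera attached to this agent (synthetic-view pipeline)
};

void renderFrame( const std::vector<Agent> &agents, H3DNode mainCam )
{
    for( size_t i = 0; i < agents.size(); ++i )
        h3dRender( agents[i].camNode );   // one render pass per agent view

    h3dRender( mainCam );                 // plus the observer camera (=> 201 cameras)
    h3dFinalizeFrame();                   // mark the end of the frame
}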
Author: | zoombapup [ 04.04.2012, 16:27 ] |
Post subject: | Re: Performance problems - multiple cameras/agents |
One thing you might want to try is to add a render command to the pipeline, then batch up all your drawing for cameras into a single object node (make a new node type) and draw the cylinders as billboarded quads. Basically, look at the particle system for an example of how to add new node types and render them. Those 1000 batches are probably quite high, although I've heard conflicting things on the overhead of batching in OpenGL. But it seems likely you're not using a special shader for the cylinders or anything. I doubt you're geometry bound, so it's more likely that having to draw each object separately is what's causing the slowdown. But as with all these kinds of things, you will have to optimize for your own use case. So get a profiler and check how things are affected in your real application, not just some specific test case. I use a profiler lib called Shiny.
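For reference, "billboarded quads" here just means expanding each cylinder position into a camera-facing quad from the camera's right and up axes; a generic sketch, not Horde3D-specific:

Code:
// Generic billboard expansion: build a camera-facing quad around a position
// using the camera's world-space right and up axes (e.g. the first two
// columns of the inverse view matrix). Plain math, no engine API involved.
struct Vec3 { float x, y, z; };

static Vec3 madd( Vec3 p, Vec3 v, float s )          // returns p + v * s
{
    Vec3 r = { p.x + v.x * s, p.y + v.y * s, p.z + v.z * s };
    return r;
}

void buildBillboard( Vec3 pos, Vec3 camRight, Vec3 camUp,
                     float width, float height, Vec3 corners[4] )
{
    float hw = 0.5f * width, hh = 0.5f * height;

    corners[0] = madd( madd( pos, camRight, -hw ), camUp, -hh );  // bottom-left
    corners[1] = madd( madd( pos, camRight,  hw ), camUp, -hh );  // bottom-right
    corners[2] = madd( madd( pos, camRight,  hw ), camUp,  hh );  // top-right
    corners[3] = madd( madd( pos, camRight, -hw ), camUp,  hh );  // top-left
}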
Author: | Roland [ 04.04.2012, 17:44 ] |
Post subject: | Re: Performance problems - multiple cameras/agents |
I didn't put it in my first post, but I guess it would make sense to explain my current pipeline: every agent camera uses the "syntheticViewDeferred" pipeline. Deferred, because with a forward pipeline the depth buffer does not work when rendering to texture. The pipeline for each camera looks like this:
---------------

@zoombapup: I'm not quite sure I fully understand what you are suggesting, but you just gave me an idea.

zoombapup wrote: "Then batch up all your drawing for cameras into a single object node (make a new node type) and then draw the cylinders as billboarded quads."

Let's assume I have 200 agents in my scene, each of them rendering its own viewport every frame. I can safely assume that each agent (=> cylinder) will be in the viewport of at least one other agent (given the crowd scenario). So how about, instead of having 200 render(agent.camNode) calls like I have now, I do the following:
Not sure how to construct the pipeline so that it still allows me to do all the stuff that I want to, though, or whether it would result in big performance improvements. Is that sort of what you meant? I'm away until next week, so I can't try it out right now.
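As a rough illustration of packing many agent views into one shared target (a generic sketch, not necessarily the approach meant above), each agent camera could be given its own tile of a single render target via Horde3D's camera viewport parameters:

Code:
// Sketch: assign every agent camera a tile of one shared render target using
// the camera viewport parameters. This alone does not reduce the number of
// h3dRender calls or batches; it only packs all agent views into one target.
#include <Horde3D.h>

void assignTileViewports( H3DNode *agentCams, int numAgents,
                          int tileSize, int tilesPerRow )
{
    for( int i = 0; i < numAgents; ++i )
    {
        int x = ( i % tilesPerRow ) * tileSize;
        int y = ( i / tilesPerRow ) * tileSize;

        h3dSetNodeParamI( agentCams[i], H3DCamera::ViewportXI, x );
        h3dSetNodeParamI( agentCams[i], H3DCamera::ViewportYI, y );
        h3dSetNodeParamI( agentCams[i], H3DCamera::ViewportWidthI,  tileSize );
        h3dSetNodeParamI( agentCams[i], H3DCamera::ViewportHeightI, tileSize );
    }
}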
Author: | Roland [ 16.04.2012, 11:42 ] |
Post subject: | Re: Performance problems - multiple cameras/agents |
Small update: I have a new version running which looks like this:
Code:
...
...
<Stage id="VertexDepth42">
    <SwitchTarget target="BUFFER"/>
    <ClearTarget depthBuf="true" colBuf0="true"/>
    <DrawGeometry context="DEPTH" class="synthetic"/>
</Stage>
<Stage id="compResults42">
    <SwitchTarget target="AGENTRESULTS"/>
    <BindBuffer sampler="view" sourceRT="BUFFER" bufIndex="0"/>
    <DrawQuad context="ONE_CAM_COMPUTATION" material="materials/syntheticComputation.material.xml"/>
</Stage>
...
...

Using 200 agents this results in a 2,000-line pipeline.

Stage VertexDepth42 renders the view of agent 42 to the BUFFER render target. Stage compResults42 is supposed to analyze this image and write its results to the AGENTRESULTS target, at a specific pixel position reserved for this agent. The next stage in the pipeline is VertexDepth43, which overwrites the BUFFER target with the view of agent no. 43; compResults43 then writes its results to the AGENTRESULTS target next to the ones from the previous agent. AGENTRESULTS is downloaded to the application at the end of the frame using h3dGetRenderTargetData.

Using this setup I doubled my framerate (200 agents = 20 fps / 30 fps on the AMD/NVIDIA machines). It could be a lot higher, but I still have to use DrawGeometry for each agent, meaning I am still uploading my synthetic scene from the CPU to the GPU for each agent. With the available pipeline commands I see no way to reuse the uploaded geometry for all my characters. Does anyone have any suggestions, or can anyone tell me if there is a way to reuse the uploaded geometry on the GPU, without any culling or clipping, multiple times? Could this be made possible using a geometry shader?
_________________________________________________________________
Regarding batching/billboarding: I have not yet implemented any batching of the cylinders, or billboarded quads like zoombapup suggested. I'm sure this would improve my framerate further, but the main bottleneck, DrawGeometry and DrawQuad and all the OpenGL calls that result from them, will remain. Another optimization opportunity could be the 10% redundant OpenGL state changes I am currently getting according to gDEBugger, which supposedly can "reduce render performance" and "are not cheap". I'll have to look into this.

Any input would be appreciated. Cheers.
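For reference, the end-of-frame readback of AGENTRESULTS via h3dGetRenderTargetData could look roughly like this; the target name comes from the pipeline above, while the float format and buffer size are placeholder assumptions:

Code:
// Sketch: download the AGENTRESULTS render target to the CPU once per frame.
// 'pipelineRes' is the synthetic-view pipeline resource; the assumed float
// format and maximum size are placeholders to match to the real target setup.
#include <Horde3D.h>
#include <vector>

bool readAgentResults( H3DRes pipelineRes, std::vector<float> &results )
{
    int width = 0, height = 0, compCount = 0;

    const int maxFloats = 256 * 256 * 4;          // assumed upper bound on target size
    results.resize( maxFloats );

    bool ok = h3dGetRenderTargetData( pipelineRes, "AGENTRESULTS", 0,
                                      &width, &height, &compCount,
                                      &results[0],
                                      maxFloats * (int)sizeof( float ) );
    if( ok )
        results.resize( (size_t)width * height * compCount );
    return ok;
}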