Idea 1
Ok, so a naive version of multiple cameras is moderately simple to implement:
a) Add a 'calcOrthoProjectionMatrix' function, so a camera can use an orthographic projection.
b) Add a <Camera> tag to the scene definition files, allowing named cameras to be attached to any node.
c) Add a <SwitchCamera> tag to the pipeline definition files, which sets the current camera by name.
d) Allow Render Buffers to be referenced by name as textures in material scripts.
This allows multiple viewports, 3D GUIs, and pretty much everything else a casual graphics program needs.
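To make that concrete, here is a rough sketch of what the scene, pipeline, and material entries could look like - every tag attribute and name below is purely illustrative, not a final format:

    <!-- Scene file: a named orthographic camera attached to a node -->
    <Group name="GuiRoot">
        <Camera name="GuiCam" projection="ortho"
                left="0" right="1" bottom="0" top="1" near="-1" far="1" />
    </Group>

    <!-- Pipeline file: make the named camera current before a stage runs -->
    <SwitchCamera name="GuiCam" />

    <!-- Material script: bind a render buffer by name as a texture -->
    <Texture unit="0" renderBuffer="GuiBuffer" />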
Unfortunately, while it is nice and simple, it doesn't really help the general case of an application that wants to control multiple cameras in software. Let's look at a program which renders impostors in the distance: it only needs to update a few impostors each frame, and the number is not fixed, so it needs to be able to dynamically set up cameras, render from them to textures, and bind the resulting textures to materials.
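For illustration, the application-side code might look roughly like this. None of these engine calls exist yet - every function and type here is hypothetical, and the sketch only shows the flow we would need to support:

    // Hypothetical sketch: update one impostor by rendering its object
    // from a temporary camera into a texture.
    void updateImpostor( Engine &engine, Impostor &imp )
    {
        // Create a temporary camera looking at the impostored object
        CameraHandle cam = engine.createCamera( "ImpostorCam" );
        engine.setCameraLookAt( cam, imp.viewPos, imp.objectPos );

        // Render the object through that camera into a small texture
        TextureHandle tex = engine.renderToTexture( cam, imp.object, 128, 128 );

        // Bind the resulting texture to the impostor's billboard material
        engine.setMaterialTexture( imp.material, "impostorTex", tex );

        engine.destroyCamera( cam );
    }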
Idea 2
To solve this problem, we need a little more complexity, and a system much like the Material Contexts. So let's dump the system I just outlined and assume a 'Camera Context' - hereafter called a 'Pass' - instead.
Passes are defined in the pipeline configuration and contain one or more stages. Each camera is associated with a named Pass and uses that Pass as its pipeline when rendering (much like lights and materials).
Then we can define a default pass for the main camera, which can do post-processing and so forth; an impostor pass for the impostor rendering code (most likely with no post-processing); a GUI pass for the ortho camera; and so on...
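As a sketch, the pipeline configuration could then read something like this (again, purely illustrative syntax):

    <Pipeline>
        <!-- Used by the main camera: full scene plus post-processing -->
        <Pass name="Default">
            <Stage id="Geometry"> ... </Stage>
            <Stage id="PostProcess"> ... </Stage>
        </Pass>

        <!-- Used by dynamically created impostor cameras -->
        <Pass name="Impostor">
            <Stage id="Geometry"> ... </Stage>
        </Pass>

        <!-- Used by the orthographic GUI camera -->
        <Pass name="GUI">
            <Stage id="Overlay"> ... </Stage>
        </Pass>
    </Pipeline>

The impostor code from the example above would then simply create its temporary cameras with the "Impostor" pass assigned by name.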
I think this solves all the problems associated with multiple cameras, but please give some thought to feasibility, better ways to do this, etc. - all comments welcome