Splitting RGB layers from composite

Started by milfordman, January 03, 2006, 08:20:27 AM


milfordman

This is kind of unrelated to gaming, but you guys seem to know your stuff on video signals. I'm looking into the possibility of live video anaglyphs (3D pictures), and I'm wondering: if you had two cameras, most likely sending composite output, could you "decompose" the signals and somehow blend the red layer of one video signal with the green and blue layers of the other? I had tried this with VGA signals (splicing the red and red-ground pins from one signal into the other), but I think I failed due to some sort of synchronization pin. Are there filters out there that you know of that could help me figure this out? Thanks!
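(Editor's note: the channel mixing described above, taking red from one camera's image and green/blue from the other, is easy to sketch in software once you have two synchronized frames as arrays. A minimal NumPy sketch; the function name and the solid-color test frames are illustrative, not from any particular library:)

```python
import numpy as np

def anaglyph(left, right):
    """Merge two HxWx3 RGB frames into a red/cyan anaglyph.

    The red channel comes from the left-eye frame; green and blue
    (together, cyan) come from the right-eye frame. Frames are
    assumed to be the same shape and already synchronized.
    """
    out = right.copy()
    out[..., 0] = left[..., 0]  # replace red with the left eye's red
    return out

# Illustrative frames: left eye is pure red, right eye is cyan.
left = np.zeros((2, 2, 3), dtype=np.uint8)
left[..., 0] = 200                 # red channel only
right = np.zeros((2, 2, 3), dtype=np.uint8)
right[..., 1:] = 100               # green and blue channels

merged = anaglyph(left, right)     # red from left, cyan from right
```

The hardware version discussed in this thread does the same mixing with analog signals instead of arrays, which is why camera synchronization matters: the two "frames" have to line up scanline by scanline.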



NFG

There are cameras that have external sync connections, so they can all be synchronized, and there are chips that will deconstruct composite into RGB.

You're better off, IMO, using monochrome cameras with external sync capability and wiring one to red and one to blue (or whatever). Mono cameras typically output RS-170 video, which is exactly what one R, G or B channel normally is.

milfordman

If, for example, I had two cameras with a basic composite video output on an RCA jack, do you know of anything I could use to hook them up in this fashion? Essentially I would need one to output to the red side, while the other outputs to cyan (blue and green). My very limited understanding of video formats makes this kind of confusing... as I understand it, in component video the "G" isn't green but luminance, and the green layer is computed by subtracting R and B from luminance. As you can probably see, I don't know what I'm talking about, I'm sorry!

But is there some sort of box, and I know this is probably oversimplification, where you can have an R and a G and a B input, and then a composite or S-Video output?  And could you run separate cameras into the RGB inputs without running into other issues (like synchronization, which I know nothing of)?

viletim!

milfordman,
I think you'd get a much better result by doing the filtering at the point where the light enters the camera, like a colour filter, cellophane, etc. stuck to the lens. If you use monochrome cameras, you should be able to just feed the outputs to an off-the-shelf RGB-to-video encoder. You may need to modify it slightly (remove a termination resistor) if you plan to connect one camera output to two video inputs (blue and green).

Like Lawrence said, synchronising the cameras is important.