feature idea for wine : stereo 3D

rofthorax
Newbie
Posts: 4
Joined: Thu Jan 29, 2009 10:07 pm

feature idea for wine : stereo 3D

Post by rofthorax »

NVidia is pushing a 200-dollar product that is nothing more than shutter glasses tied into DirectX 10, encouraging people to move to Vista, which I personally believe is yet another nail in the coffin that is closing the PC platform.

I think a better idea would be to add a feature to Wine that permits stereo imaging in any program that uses Wine's 3D features, and in doing so establish a standard for providing this service. It would prove to consumers that you don't need special drivers, monitors or hardware to get stereo 3D from any application. It would be one more thing Wine provides that Microsoft doesn't offer to those who haven't bought Vista, and one more reason to use Wine.

And it would, I believe, be easy to add.

I purchased some cheap stereo shutter glasses on eBay for 9 dollars; they use what looks like a stereo headphone connector. These weren't the crappy Elsa glasses put out six years ago, they were new and high quality, and I will do a demonstration of them on YouTube. I connected them to the microphone port of a sound adapter and the lenses darkened, so it's likely all that is needed is some way to hack them to work through the computer's sound adapter. I bought them from "ultimate3dheaven.com" via eBay. I foresee a whole 3D visual industry that has not evolved because, like Linux and Wine, it is seen as "for hobbyists".

The advantages of stereo vision:
- detection of photo manipulation
- more tangible representation of goods (eBay)
- ability to recognize more in a stereo photo than would be possible in 2D
- ability to stop time and observe otherwise imperceptible things, like water suspended in air
- ability to see mountains that look flat to the naked eye in 3D, as if looking at a clay model
- perception of reflections: glass, water, smoke, anisotropic effects like glitter, brushed metal, suede
- a perceived increase in resolution from the combined imagery of both eyes

Wine could bring this to anyone regardless of income level, with something as simple as red/green, red/cyan or blue/yellow anaglyphs or cross-eyed/parallel layering, or as advanced as lenticular monitors, polarized projection or LCD shutter frame synchronization.

The only thing I believe has kept this from the PC platform is the misperception that it is a "gimmick"; you have two eyes, not one, so this should have been on computers from day one. The use of 3D graphics libraries should make it more accessible. But it seems that companies like NVidia only pull out such technologies when they need to make a quick buck, then years down the road they drop them from their drivers.

If anyone wants to discuss this with me, go to youtube and check out the user "rofthorax":

http://www.youtube.com/user/rofthorax

I pushed the development of Blender (I was one of the few volunteers doing tech support, with help from Ton Roosendal, before there was a [1.5] manual for Blender). My interest in Blender was the need for a 3D tool that artists could own, one that wouldn't be taken away from them by license servers and software-managed EULAs, techniques for shifting the boundary of ownership from those who pay for the software to those who dictate its use regardless of need.

This is a revolution happening in steps, giving people access to things that are so obvious they should have been there yesterday but have not happened, probably because "there is no market for it". That's where open source is different: open source development doesn't need a market, it just needs a chance to exist, to be seen and heard, to be perceived and worked with. It can start with an idea, grow into a test, become a library, be folded into a utility, created as an application, adopted as a standard, and develop into a new industry. This is how new jobs are created: someone reveals something that wasn't there before, others perceive applications for it, and an industry develops.

Stereo vision is not just for games or movies. It could be used to sell items over the Internet, to observe the interior of a new car with accuracy, to observe the texture of a Van Gogh oil painting and see the sand embedded from the beach where Van Gogh sat, to see a prototype with both eyes before it is made, to see at scales beyond what is humanly possible (such as perceiving the soil wasting of mining operations in a way no 2D image can present), to work at scales one normally can't (a robot with stereo vision and arms, controlled by someone with a keyboard and stereo glasses), or to see past the glare on a jewelry cabinet window to the jewels inside. When people see something with both eyes it is that much more convincing, and is likely to motivate a sale like no single image can.

All that is required is first steps. A really easy way to do this would be to add it to the DirectX implementation in Wine, so that any 3D scene can be augmented with stereo vision (at least color-separated anaglyph) without modifying the API. Offer what Microsoft will not offer itself, in its interest of leveraging people onto an OS that will likely lead to the closing of the PC architecture and of the hardware accessible to Linux adopters.

One way to change the future: invent it.


PS - I have a sister-in-law who only has one eye, so I can understand the pain of not being able to see stereo imagery, but understand that if you pass by an object in a 2D movie, your brain has the capacity to perceive it in 3D even with one eye. Also, movies that pan (moving the camera without rotation) can be converted into 3D imagery. Hanging your video camera out a car window while passing a row of foreclosed houses is an easy way to generate 3D footage: take two video players side by side, displace the time on one, overlay the two, and you have instant 3D.
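Just to make that recipe concrete, here is a minimal sketch of the pairing step; the frame list is a placeholder and the offset is something you would tune by eye for the panning speed, nothing here comes from a real video tool:

Code:
# Sketch only: pair each frame of a horizontally panning video with a frame a
# few steps earlier and treat the pair as left/right views.
def stereo_pairs_from_pan(frames, offset=3):
    """Yield (left, right) pairs from a panning video by displacing time."""
    for i in range(offset, len(frames)):
        yield frames[i - offset], frames[i]   # swap the pair if depth looks inverted

# Placeholder frame labels instead of real images:
frames = [f"frame{i:02d}" for i in range(8)]
for left, right in stereo_pairs_from_pan(frames):
    print(left, "+", right)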

For those concerned that they can't see 3D stereograms or overlay images and may need to see an optometrist, I found this link on a website about 3D vision:

http://www.vision3d.com/frame.html

One exercise anyone can perform to improve their ability to layer images with their eyes is to hold their index fingers one atop the other and, like doing the splits, widen the gap between the fingers while trying to keep the two layered so they appear as one finger.
With time you will be able to layer two images of great scale, even with some lateral displacement. My eyes can diverge a couple of degrees past parallel, meaning I can layer two 22" displays at around 6-8 feet. It's a nice trick and a valuable tool for comparing photographs and identifying things such as evidence at the scene of a crime that might have been removed.
Last edited by rofthorax on Thu Jan 29, 2009 11:18 pm, edited 1 time in total.
Andrew Fenn

feature idea for wine : stereo 3D

Post by Andrew Fenn »

On Fri, Jan 30, 2009 at 11:00 AM, rofthorax <[email protected]> wrote:
Wine could bring this to anyone regardless of income level, with something as simple as red/green, red/cyan or blue/yellow anaglyphs or cross-eyed/parallel layering, or as advanced as lenticular monitors, polarized projection or LCD shutter frame synchronization.
If I am correct, what you're looking for is Nvidia's stereoscopic drivers, which are available here ( http://www.nvidia.com/object/3d_stereo.html ).

Since you want to use this on Linux, you shouldn't need anything apart from the nvidia driver you already have installed on your system. Here is a guide on how to turn it on ( http://forums.nvidia.com/index.php?showtopic=40768 )
rofthorax
Newbie
Posts: 4
Joined: Thu Jan 29, 2009 10:07 pm

Re: feature idea for wine : stereo 3D

Post by rofthorax »

Andrew Fenn wrote:On Fri, Jan 30, 2009 at 11:00 AM, rofthorax <[email protected]> wrote:
Wine could bring this to anyone regardless of income level, with something as simple as red/green, red/cyan or blue/yellow anaglyphs or cross-eyed/parallel layering, or as advanced as lenticular monitors, polarized projection or LCD shutter frame synchronization.
If I am correct, what you're looking for is Nvidia's stereoscopic drivers, which are available here ( http://www.nvidia.com/object/3d_stereo.html ).

Since you want to use this on Linux, you shouldn't need anything apart from the nvidia driver you already have installed on your system. Here is a guide on how to turn it on ( http://forums.nvidia.com/index.php?showtopic=40768 )
If I'm right, that only supports NVidia cards up to the 7 series; we're on the 9 series now, and those drivers are no longer updated.

Anyhow, that's NVidia. I'm talking beyond drivers, beyond hardware.
I'm talking about giving anyone, regardless of graphics hardware, access to stereo presentation. This should happen in the libraries that provide 3D services (DirectX and OpenGL), not in the hardware driver implementation, because it is not tied to the hardware. You won't get this with DirectX 9 because Microsoft is using DirectX 10 to leverage consumers onto Vista regardless of their needs, which doesn't justify the purchase of a new OS. NVidia's recent stereo support is only for DirectX 10, and NVidia's implementation is separate from ATI's, which has a similar scheme going but uses a polarized screen with polarized glasses; you still have to splurge on a 400-dollar monitor from a particular brand. The only drawback of shutter glasses is that you need an industry that makes them (people are complaining about not being able to create jobs; are we stupid or what?), and the shutter glasses make the screen look darker, so the screen must be brighter. No problem: turn off the lights, your eyes will adjust to the dark, and the screen will seem brighter.


(There is, however, one problem with shutter glasses and LCD screens, the obvious thing I overlooked: both are polarized, and one LCD screen appears dark when viewed through another, but that can be fixed by turning one of them sideways. If the glasses' lenses could be rotated you wouldn't have this problem. That's what I determined with the glasses I was using: turn your head and the display appears clear, turn it another 90 degrees and it is dark.)

Anyhow, I've got a little vendetta against NVidia. They have been good with the graphics cards, I can't complain there, but their marketing department is really starting to piss me off.

Keep in mind that manufacturers are known to place evangelists and scoffers in discussion groups; I'm not easily thrown off by shallow attempts to veer my focus. It's more than conspiracy theory: everyone who has ever been involved with a standards committee knows there are always industry representatives there to screw up the standard, because anything good for the consumer is bad for business. Open source is better in that it lays plain for all to see what manufacturers and the software business fear (intent as they are on profiting from people's ignorance). Keep people ignorant and you have a business model.

Anyone who has taken an OpenGL programming course, as I have, knows this. It's just a matter of rendering two images where there was one, offsetting the camera by a small delta of distance **. DirectX is more than an offshoot of OpenGL, but it must use the same matrix multiplication math that OpenGL uses, with the same or similar 3D pipeline.

** Note: if the second camera is not aligned parallel to the first, does not have the same settings, or doesn't share the same subject material, you will not get a 3D effect. If you converge the cameras on one point you only get a 3D effect at that spot (depending on how good your brain is at discerning information you might get a 3D effect elsewhere; I've made stereograms where bricks oriented 30 degrees apart appear to merge into one warped brick, yes the brain can do that). If the cameras don't have the same settings, the image will appear to blink at you because your brain can't determine which view is correct. And if the views share no subject material you see nothing in 3D at all, so the separation must be small enough that someone can focus on small things like toys on a floor. The distance between the eyes scales with the size of the subject: if you are looking at mountains you need a wider separation, otherwise the mountains will appear flat. In a game like 2142 or BF2 (I'm a hot-shot commander at those), where you zoom in and out from the troops to see the whole map, this could become a 3D effect: zoom in on individual squads by narrowing the eye separation, and widen it as you zoom out (imagine changing the size of your head, small when looking at squads, big when looking at the entire battle). Never converge the cameras on the assumed point of interest; you will only get a 3D effect at the one spot you want people to look at. Your brain and eyes can adjust to focus on multiple points in a 3D image even though the cameras are aligned parallel (directed past the subject material into the distance).
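To make that "small delta of distance" concrete, here is a minimal sketch of building two parallel view matrices from a single look-at camera. It is not tied to OpenGL or DirectX, and the names (stereo_views, eye_separation) are just mine for illustration:

Code:
# Minimal sketch, assuming a simple right-handed look-at camera.
import numpy as np

def look_at(eye, target, up):
    """Build a view matrix from camera position, target and up vector."""
    f = target - eye
    f = f / np.linalg.norm(f)                          # forward
    r = np.cross(f, up); r = r / np.linalg.norm(r)     # right
    u = np.cross(r, f)                                 # true up
    m = np.identity(4)
    m[0, :3], m[1, :3], m[2, :3] = r, u, -f
    m[:3, 3] = -m[:3, :3] @ eye
    return m

def stereo_views(eye, target, up, eye_separation):
    """Return (left, right) view matrices offset along the camera's right vector."""
    f = target - eye; f = f / np.linalg.norm(f)
    right = np.cross(f, up); right = right / np.linalg.norm(right)
    offset = right * (eye_separation / 2.0)
    # Shift the target by the same offset as the eye so the two cameras stay
    # parallel instead of converging on a single point (see the note above).
    left_view  = look_at(eye - offset, target - offset, up)
    right_view = look_at(eye + offset, target + offset, up)
    return left_view, right_view

left, right = stereo_views(np.array([0.0, 1.6, 5.0]),
                           np.array([0.0, 1.6, 0.0]),
                           np.array([0.0, 1.0, 0.0]),
                           eye_separation=0.065)   # roughly human eye spacing, in scene units

Both cameras point in the same direction; only the position shifts, which is the parallel setup the note above argues for.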

I'm saying this should not be left to the hardware manufacturers to implement; it should be a feature of the 3D library that interfaces with the hardware, because getting hardware developers to agree on standards such as stereo vision is like herding cats, with each wanting to leverage the technology in their own favor. The reality is that stereo vision is implementable at a higher level; it only requires manipulating the calls before they get passed on to the 3D hardware.

Example:
Shutter glasses: synchronize the left and right eyes with odd and even frames, odd for left, even for right, using a normal double-buffering scheme.
1. show left view, left lcd frame clear, right lcd frame opaque
2. show right view, right lcd frame clear, left lcd frame opaque
3. show left view, left lcd frame clear, right lcd frame opaque

Note this is the easier case because all you have to do is alternate the placement of the camera each frame; you don't need to render two views at once. Remember we are double buffering here.
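As a toy illustration of that alternation, here is a sketch that just prints the schedule; it is not hooked up to any real graphics or shutter-glasses API:

Code:
# Sketch of frame-sequential stereo: even frames carry the left view, odd
# frames the right view, and the glasses open the matching lens each frame.
def shutter_schedule(num_frames):
    """Yield (frame, eye, left_lens, right_lens) for a frame-sequential display."""
    for frame in range(num_frames):
        if frame % 2 == 0:
            yield frame, "left", "clear", "opaque"
        else:
            yield frame, "right", "opaque", "clear"

for frame, eye, left_lens, right_lens in shutter_schedule(6):
    # A real renderer would draw the scene from this eye's camera here and
    # swap buffers; the glasses stay in sync with the vertical refresh.
    print(f"frame {frame}: render {eye} view, left lens {left_lens}, right lens {right_lens}")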

For anaglyph:
Render both the left and right views (this slows the graphics card down because you have to do both for each displayed frame), multiply each by a frame of its filter color, and combine the two into the hidden buffer. You could also reduce the frame rate and hold each composited frame for two display frames.
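A minimal sketch of that multiply-and-combine step, assuming the two views are already rendered as floating-point RGB arrays in [0, 1]; the red/cyan filter values are only an example, and you would swap in whatever matches your glasses:

Code:
# Sketch of anaglyph compositing: tint each eye's image with its filter color,
# then combine the two into one frame (added here; averaging halves brightness).
import numpy as np

def anaglyph(left, right,
             left_filter=(1.0, 0.0, 0.0),    # red reaches the left eye
             right_filter=(0.0, 1.0, 1.0)):  # cyan reaches the right eye
    left_f = left * np.array(left_filter)
    right_f = right * np.array(right_filter)
    return np.clip(left_f + right_f, 0.0, 1.0)

# Toy 2x2 "renders" standing in for the left and right views in the hidden buffer.
left_img = np.full((2, 2, 3), 0.8)
right_img = np.full((2, 2, 3), 0.6)
print(anaglyph(left_img, right_img))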

For a lenticular display (note that lenticular displays can present more than one stereo view), this involves moving the camera multiple times, once per interlaced lenticular view. I think some lenticular displays can present 5 views, so all you need to do is render 5 views, creating one image five times as wide as before; displayed on such a monitor it appears almost holographic (and requires a much faster card).
How lenticular displays work: if you've ever used a glass thermometer you understand the concept; the surface is covered with thousands of small ridged lenses running vertically down the display. Each lens presents one of the 5 slivers of imagery: your left eye sees one sliver magnified, your right eye sees another, and shifting your position shifts which slivers behind the curved lens are magnified. The pitch of the lenses also determines whether a 3D image can be presented; children may not perceive the depth as strongly as adults because of their smaller eye separation, so not everyone will see the effect (I could have that switched around). Turning a lenticular display on its side removes the stereo effect because both eyes see the same image, but tilting it produces a sort of animation (as found in the cheap ridged plastic lenticular stickers in Cracker Jack boxes).
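Here is a rough sketch of the packing step for that five-views-wide image; real lenticular panels need per-lens calibration, so treat this strictly as the idea, not a working driver:

Code:
# Sketch of lenticular packing: for each lens (one source column), lay the same
# column from all N views side by side, producing an image N times as wide.
import numpy as np

def interleave_views(views):
    """views: list of N equal-size (H, W, 3) images; returns an (H, W*N, 3) image."""
    n = len(views)
    h, w, c = views[0].shape
    out = np.empty((h, w * n, c), dtype=views[0].dtype)
    for lens in range(w):
        for i, view in enumerate(views):
            out[:, lens * n + i, :] = view[:, lens, :]
    return out

views = [np.full((2, 4, 3), i / 4.0) for i in range(5)]   # 5 toy 2x4 views
packed = interleave_views(views)
print(packed.shape)       # (2, 20, 3): five sub-columns packed under each lens
print(packed[0, :5, 0])   # the five views under the first lens: 0.0 ... 1.0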

There is also another method of creating 3D imagery from side-to-side motion that takes advantage of the fact that your brain processes imagery more slowly with less light; that's where the glasses with a darker lens on one eye and a clear lens on the other came from. The effect works with any video that is constantly panning, say from left to right, if the right eye is darkened: the right eye's perception is delayed, the left eye sees the current (left) perspective, and by the time the image has moved on the left eye is focused on the next frame while the right eye is just starting to perceive the earlier one. At least that is my explanation of the effect.

In every stereo vision effect, one eye has to see something different; that is how the effect is perceived. There is also a 3D effect someone came up with where prism refraction is used to present some colors as being in front of others. Then there are effects like Single Image Random Dot Stereograms (those noisy images that were popular in the 90s).

I've thought about writing a book on this stuff, but I think it would be a rather thin book. The technology is not at all hard to understand; what is harder to understand is why it's been left to hardware developers to implement, and why it isn't provided between the graphics driver and the 3D library. Wine has a unique position that no OS has: it can implement stereo vision in the translation of instructions to the backend libraries (OpenGL), or in libraries that don't support it at all, such as stereo support in the DirectX 9 API implementation, without Microsoft's consent. Just imagine if pieces of Wine code could replace DirectX on the Windows platform and add features to XP that only exist in DirectX 10. Ever thought about that? This could be one of them. But it needs to start on a platform that is uniformly accessible, and I doubt ATI and NVidia are going to agree on one. This is a job for open source.


I've been told "if you are so smart, why don't you do it?" Because I need help, and that starts with making others aware of the idea and the need.

PPS - What might the hardware manufacturers have to fear from the adoption of stereo vision? More innovation will be needed. For example, bump mapping shades a surface with respect to a light, not with respect to the perspective of the viewer, which means the bumps on the surface cannot self-shadow. Because of this, two stereo views of a bump-mapped surface look like an artist painted the surface to look bumpy, like those fake bullet-hole stickers people put on their trucks. The only way to fix this is to compute the surface according to the camera view and to use self-occluding techniques like displacement mapping. There could be other giveaways, like being able to tell that a gas or smoke effect is just a transparent texture mapped onto a simple polygonal model. But another way to look at it: more demand leads to a need for better graphics cards, and more innovation. The executives would really prefer just to scale the existing design and not innovate, even though innovation may not cost much.

I can't tell whether America's job loss is due to loss of demand, a frugal consumer, or a lack of innovation and responsiveness to the consumer on the part of business. But this is an industry ready to take off, and I think it's just a matter of making people more aware of the uses. I want it to be so widely used that people see the need for it without being fascinated by it at first; that is when it is accepted and becomes part of daily life, as all technologies achieve or attempt to achieve. Of course people's use will be limited by their imagination, and if people remain ignorant of ideas they will be less inspired to use their imaginations. I see the key to many of the problems we are having in this economy as a lack of imagination. Sure, you could say the housing problem was a mistaken imagination, a bad vision, but it could also be argued that it was a lack of imagination on the part of those watching the problem occur. Who knows, stereo vision could have helped someone realize it by revealing patterns in 3D data that are not obvious in 2D. To stay on top of everything we need all our senses in the game, and if we only use some of them we are at a disadvantage.

PPPS - Let us not forget that blues and jazz wouldn't have been as successful a style of music if manufacturers hadn't produced musical instruments cheaply enough that a repressed black man could afford one and, through that tool, communicate a response to his troubles in ways no speech can. You make things available and affordable and then comes the art. But none of that starts without demand, and I believe there is a demand here; it just hasn't surfaced yet.
rofthorax
Newbie
Posts: 4
Joined: Thu Jan 29, 2009 10:07 pm

Working on a blender blend file to demonstrate the above

Post by rofthorax »

I'm working on a Blender (3D) .blend file that exhibits the effects I describe. The blend file is done; it uses scene linking and the sequence editor to layer the left and right views. I use a color generator for each eye and a multiply to simulate the filter, then add the two over each other. I'm also going to make a tutorial showing how to use the scene and put it up on YouTube. It contains absolutely no Python code whatsoever; it is all just basic Blender configuration. This technique will also work with older versions of Blender; the older versions had no "track to" constraint, but the effect doesn't really need it. The file contains no packed textures or anything; all imagery is created using a feature unique to Blender, compositing images without pre-rendered material by using a scene as an image source. I'm also using the shared-link capability so that everything can be controlled from a single view. To change the resolution of the left and right views, though, you have to go to the scene that renders each eye's view and change the resolution there; the resolution of the composite determines the resolution that is finally displayed.


Anyhow, I'll have this stuff up soon..

I'm using some glasses that are being distributed by DreamWorks for the Super Bowl half-time show (Monsters vs. Aliens). Get those glasses from your grocery store (in America). Everyone else can use glasses with yellow-tan / purplish-blue filters (left and right). The composite uses two color generators; to get it to work with other glasses, like red/green or magenta/cyan, change the color filters so they match the colors used on the glasses. The filter color of the opposite eye should appear black through the original, and vice versa: purple content seen through the yellow filter appears darker than purple, and yellow content seen through the purple filter appears darker than yellow. The choice of colors on anaglyph glasses depends on the subject material. Red subject material doesn't do well because yellow and purple both pass red, but green, grey and blue come out better. A strict filter pair (red/green or red/blue) tends to work only with grey subject matter, because red, green and blue are primaries and always appear black when filtered by another primary.
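A tiny sketch of that matching rule, where a filter should pass its own eye's color and turn the opposite eye's color nearly black; the amber and blue values below are my own rough guesses for this kind of glasses, not measured from them:

Code:
# Sketch of checking anaglyph filter pairs: multiply a filter by the opposite
# eye's filter color and see how much light leaks through (0.0 means black).
import numpy as np

def leakage(filter_rgb, other_eye_rgb):
    """Peak brightness of the other eye's filter color seen through this filter."""
    return float(np.max(np.array(filter_rgb) * np.array(other_eye_rgb)))

amber = (1.0, 0.9, 0.0)   # left lens, yellow-tan (assumed value)
blue  = (0.0, 0.0, 1.0)   # right lens, purplish-blue idealized as pure blue (assumed)
print("amber lens vs blue content:", leakage(amber, blue))   # ~0.0, appears black
print("blue lens vs amber content:", leakage(blue, amber))   # ~0.0, appears black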

Anyhow, stay tuned at
http://www.youtube.com/user/rofthorax

I will put the tutorial and blend file there..
DaVince
Level 8
Posts: 1099
Joined: Wed Oct 29, 2008 4:53 pm

Post by DaVince »

Wow, man, that's a lot of text. You might wanna keep a blog for your ideas (and related files). :)

The whole stereoscopic-3D-with-a-standard idea sounds interesting, but it'll take both time and sufficient interest from the right developers and parties to get there. I'm not putting much hope in this unless NVidia's new glasses suddenly become insanely popular (which I doubt, because they require their newer video cards, twice the processing power, and a 120 Hz monitor).