Watters Poker Tournament
a 5 min interactive panoramic video
2048x1024, flv 83MB
download and play locally:
4096x2048, mp4 189MB:
3072x1536, mp4 96MB:
2048x1024, mp4 45MB:
1440x720, mp4 23MB:
1024x512, mp4 15MB:
use VRPlayer to watch on a Rift.
For Oculus Rift:
Media -> Format -> Mono
Media -> Projection -> Sphere
Making of the WPT
720x540, Xvid AVI
17MB, 6 min
A time-lapse movie showing the setup and the filming of
the first two characters. No sound, but with a few text comments.
This is the first interactive panoramic video I have published. I shot it in April 2006
but did not finish it until December 2008. It is a poker game with the camera positioned over the center of the table. I did an earlier test
run in February 2006 with a circular fisheye lens and captured only from the horizon down. The second time I used my full-frame fisheye on my Nikon D70s.
I shot my VR video by pointing my camera in a different direction for each sequence. I tethered the camera to my computer, capturing a JPEG image every couple of seconds
(JPEG transfers faster than raw). The software had no option to trigger the
camera from the computer while saving only to the camera card and skipping the transfer.
I wanted to display as many frames per second as possible, so to compensate for
the slow capture rate I acted everything out in slow motion as well. I played
the audio at slow speed and timed my acting to it. It only helped marginally: instead of a frame every 2 seconds, I got a frame every 1.5 seconds on playback.
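As a rough sanity check on that timing: the playback interval works out to the capture interval scaled by the acting speed. A minimal sketch; the 0.75x acting speed is inferred from the 2 s and 1.5 s figures above, not a number I recorded at the time.

```python
# Effective playback frame interval when the action is performed in slow
# motion and then played back at normal speed.
# The 2 s capture interval and 1.5 s result come from the text;
# the 0.75x acting speed is inferred from those two numbers.

def effective_interval(capture_interval_s: float, acting_speed: float) -> float:
    """Scene time covered per captured frame after speeding back to 1x."""
    return capture_interval_s * acting_speed

print(effective_interval(2.0, 0.75))  # 1.5 s of scene time per frame
```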
I used video editing software to blend each sequence to the timing
of the audio. In 2008 I was only able to output at 2048x1024. Although it was possible
to input images as large as 6000x3000, I had to downsample them all to 4096x2048 to
prevent the software from crashing when rendering.
In January 2014 I rendered a new version at 4096x2048. I am still struggling to incorporate the multiple audio tracks to create directional sound.
I was able to batch transform each sequence of images, but I did the blending using
masks in the video editing software. The timing of the camera triggering was not
constant from sequence to sequence. Using the video editing software, I was able
to position each transition precisely in time to match the dialog and audio.
In one direction I might have 4 frames to stretch out over 20 seconds; in another
direction I might have 20 frames.
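The stretching above can be sketched as evenly spaced hold times for each frame; in practice I positioned the transitions by hand to match the audio, so even spacing is a simplification.

```python
# Spread N captured frames evenly over a sequence's audio duration.
# In the real edit, transitions were placed by hand to match dialog;
# this even spacing is a simplified stand-in.

def frame_times(num_frames: int, duration_s: float) -> list[float]:
    """Start time (seconds) at which each held frame begins."""
    hold = duration_s / num_frames
    return [round(i * hold, 3) for i in range(num_frames)]

# 4 frames stretched over 20 s -> each frame is held 5 s
print(frame_times(4, 20.0))        # [0.0, 5.0, 10.0, 15.0]
# 20 frames over the same 20 s -> one frame per second
print(frame_times(20, 20.0)[:3])   # [0.0, 1.0, 2.0]
```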
I did take advantage of the fact that a large percentage of the scene does not
change from start to end, and used a single uncompressed image to cover it. Only
the pieces that actually get updated are revealed with a mask and blended into
the frame. I hoped this would go a long way toward shrinking the size of the
finished compressed video.
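The static-background trick amounts to a masked composite: keep one base image and overwrite only the pixels that actually change. A minimal sketch with tiny synthetic NumPy arrays standing in for the real equirectangular frames:

```python
import numpy as np

# Composite a changing region onto a static base frame using a mask.
# The tiny arrays here are synthetic stand-ins for 4096x2048 frames.
base   = np.zeros((4, 8, 3), dtype=np.uint8)      # static background
update = np.full((4, 8, 3), 200, dtype=np.uint8)  # newly captured frame
mask   = np.zeros((4, 8), dtype=bool)
mask[:, 2:5] = True                               # region that changes

frame = base.copy()
frame[mask] = update[mask]   # only the masked pixels are replaced

print(frame[0, 0, 0], frame[0, 3, 0])  # 0 200
```

Everything outside the mask stays byte-identical across frames, which is exactly what lets the video compressor shrink the unchanged areas.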
This really is an experiment that I hope to improve on next time with
a better script and better actors. ;-) (I play all 5 characters in this one.)
I also set up the audio as 5.1 surround sound so the direction could change as the video
is panned, but none of the viewing software has this feature yet.
So for this render the movie has stereo sound that does not change with rotation.
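For what rotation-dependent sound could look like once players support it, here is a minimal equal-power panning sketch. The function and the angle convention are my own illustration, not part of the actual render:

```python
import math

def stereo_gains(yaw_deg: float) -> tuple[float, float]:
    """Equal-power left/right gains for a sound source at the given yaw.
    0 degrees = straight ahead (centered), +90 = fully to the right.
    Clamped to the frontal arc to keep the sketch simple."""
    yaw = max(-90.0, min(90.0, yaw_deg))
    angle = (yaw + 90.0) / 180.0 * (math.pi / 2)   # map to 0..pi/2
    return (math.cos(angle), math.sin(angle))

left, right = stereo_gains(0.0)
print(round(left, 3), round(right, 3))   # centered: about 0.707 each
print(stereo_gains(90.0)[1])             # fully right: gain 1.0
```

As the viewer pans, re-evaluating the gains against each player's fixed direction around the table would move their voice across the stereo field.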