Stony Brook University, Nvidia and Adobe are presenting a paper at Siggraph 2018 on infinite walking using dynamic saccadic redirection. This is a really neat take on the age-old problem of redirected walking. The last “great” solution I encountered was presented at VRDC 17 by Mahdi Azmandian of the Mixed Reality Lab at the USC Institute for Creative Technologies. Regretfully, that approach still required a 30’x30′ area to handle unrestricted walking. That happens to be the room-scale size offered by the new SteamVR Lighthouse tracking v2.0; however, I have not had an opportunity to play with that tech yet.
The researchers at Stony Brook use an eye tracker embedded within the HMD to detect saccadic eye movements. During these movements the environment can evidently be rotated incrementally, keeping the participant within a confined space without them noticing the effect or becoming nauseated. I have no further details on this project; the folks at Stony Brook are being hush-hush until Siggraph. I suppose we’ll just have to wait until August to understand what this new technology entails.
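In the meantime, to make the idea concrete, here is a minimal, hypothetical sketch of the general technique as I understand it (this is not the authors' actual method): detect a saccade by thresholding gaze velocity from the eye tracker, and only then inject a tiny yaw rotation of the world that steers the participant back toward the center of the play space. All names and thresholds below are my own assumptions.

#include <cmath>

// Hypothetical per-frame update for saccade-gated world rotation.
// gazeDegPerSec   : angular gaze velocity reported by the HMD's eye tracker
// dt              : frame time in seconds
// headingErrorDeg : signed angle between the walking direction and the
//                   direction to the center of the tracked volume
// Returns the extra world yaw (degrees) to apply this frame.
float SaccadicRedirectionYaw(float gazeDegPerSec, float dt, float headingErrorDeg)
{
    const float kSaccadeThreshold = 180.0f; // deg/s; saccades are typically much faster
    const float kMaxRotationRate  = 12.0f;  // deg/s of world rotation while "blind"

    // Only rotate while the eye is mid-saccade, when the visual system
    // suppresses perception (saccadic masking).
    if (std::fabs(gazeDegPerSec) < kSaccadeThreshold)
        return 0.0f;

    // Nudge the world a small amount in the direction that keeps the
    // participant inside the physical play space.
    float step = kMaxRotationRate * dt;
    return (headingErrorDeg >= 0.0f) ? step : -step;
}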
The folks over at Disney Research have just developed a haptic jacket to produce physical sensations within an immersive experience. On a side note, this is kind of strange since Disney recently closed their Carnegie Mellon research lab, yet the video claims to originate from that lab. I hope the research lives on past the CMU lab. The Void, which is also now a Disney property, has already built and been using a haptic rumble jacket that participants put on when going through its Star Wars or Ghostbusters experiences.
Different from the Void jacket, the Disney jacket is laced with an array of “force units”. Each unit is a small pouch that can either vibrate or expand with air. The intensity of the vibration and the air expansion is controllable from within the experience. These are all of the details I have for now. This is a cool interface device, but I am curious about the air compression needed for each force unit. I understand this is just a prototype. However, the complications of moving air will almost certainly limit this to being an exclusively location-based device. That is fine for location-based experiences, but outside of VR porn enthusiasts, I don’t see how this could ever be targeted for home usage.
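Purely as a thought experiment (none of this is Disney's published interface), driving such a jacket from an experience could be as simple as streaming per-unit intensity commands synchronized with on-screen events. The structure, field names and ranges below are my own assumptions.

#include <cstdint>
#include <vector>

// Hypothetical command packet for one "force unit" in a haptic jacket.
struct ForceUnitCommand {
    uint8_t  unitId;         // which pouch in the array (e.g. 0-25)
    uint8_t  vibrationLevel; // 0 = off, 255 = maximum vibration intensity
    uint8_t  inflationLevel; // 0 = deflated, 255 = fully inflated with air
    uint16_t durationMs;     // how long to hold the effect
};

// An experience would queue commands like these each frame and send them
// to the jacket over USB or Bluetooth.
std::vector<ForceUnitCommand> MakePunchToChest() {
    return {
        {4, 200, 0, 80},   // sharp vibration at the sternum
        {5, 150, 180, 120} // neighboring pouch inflates for a "push" feel
    };
}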
A year and a half ago I had the privilege of attending SteamVR training. Steam tracking is an amazing piece of engineering. The solution has only improved with version 2.0, which has just now started releasing.
I was all hopped up and ready to start creating my own Steam VR controllers and tracked objects. That was until Triad Semiconductor, the manufacturer of the new tracking chip, released the price of its development kit.
Wow! Almost $600 for one kit. My dreams of creating my own controllers for Steam VR were dashed. I did not have those kinds of resources. Until I had an actual product to work with, this tool kit would have to remain out of my grasp.
Today I learned about another player in the Steam VR line, Virtual Builds, that has just released a new kit for $200.
If purchased a-la-carte, the board and sensors would probably still cost you about the same. Still, $200 is a lot cheaper than $600.
When I can get back into the game of creating my own controllers, this will definitely be one of the first places I check out!
It’s no big secret that I am a big HTC Vive fan. I love Steam VR tracking. I purchased an HTC Vive as soon as they became available. I love the overall product.
I also have not been blogging much lately. In the world of immersive experiences there has not been much development happening anywhere outside of ILMxLAB/The Void and Dreamscape Immersive, and those companies keep very much to themselves, so there has not been much news to report on.
Last month HTC released their Vive Pro model. I have not purchased one of these new models yet because, as far as immersive experience development is concerned, the new product does not have any must-have features. Tracking 2.0 will not be available until who knows when. The wireless adapter would be nice but not essential for exploration. The new, improved display and headset would also be nice but, once again, not essential for the exploration I am doing.
Just recently, HTC released a new SDK for the Pro enabling it to generate AR. This is really cool since, at first glance, anything you can do with Magic Leap, you can do with the Vive. I have not had an opportunity to play with the Magic Leap SDK, which was also released about a month ago. However, if given the opportunity to work with just one hardware set, I would choose the Vive. I know very little about Magic Leap, and that is the problem: outside of Magic Leap itself, information is rare. Once Tracking 2.0 becomes available I will probably get myself a new Vive Pro. If that means I won’t need to purchase a Magic Leap dev kit, then I will be all over that.
I have included some test footage from Ghost Project Studios, one of the first adopters of this new technology. Really exciting stuff. Why is this exciting? You’ll just have to stay tuned for a new concept I am working on concerning the three contributing levels of immersive experiences. Until then, enjoy these videos:
There are two bits of information out there giving some direction as to where HTC and Steam will be taking the Vive.
In fact, and I know this may be too early to guess, maybe the next generation of HMD coming from this collaboration won’t even be called the ‘Vive‘ any longer. What are your thoughts?
First off, the engineers at SteamVR reminded all future developers to start ordering their new SteamVR 2.0 base stations. The new base stations will not be compatible with the old HMDs; they will only work with the new TS4231 sensors. Good for backwards compatibility, these new sensors will still respond to the old Lighthouse base stations. The new base stations will be cheaper, have no moving parts and will not have sync issues. Steam is asking manufacturers to start placing orders now. Manufacturers must buy them in lots of 45 at $60 apiece, supplied with no packaging and no mounting equipment. The retail price of the new base stations will probably be higher than $60, but we’ll just have to wait for the MSRP in 2018.
What is really exciting about these new bases is that they will soon support up to four base stations working in conjunction with one another, covering play areas of up to 10m x 10m. That is really big! In fact, it is so big that it should be sufficient space to implement redirected walking seamlessly, without resets. Of course, there would be caveats in the environment design to compensate for the limited space. However, with a 10m x 10m space you should only have to worry about a reset every 13m or so, which is still quite a large distance! This is super exciting, and I will share more information as things continue to develop.
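A quick back-of-the-envelope check on that reset distance, under the simplifying assumption that the worst case is a straight walk across the diagonal of the square play area:

d_max = sqrt(10^2 + 10^2) m = sqrt(200) m ≈ 14.1 m

which lands in the same ballpark as the roughly 13m quoted above; the slightly smaller figure presumably leaves a safety margin near the walls.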
The next bit of information invites even more room for conjecture. HTC has just filed a trademark application in New Zealand for a new HMD called the HTC Eclipse. The HTC Focus was thought to be the new wireless, mobile headset compatible with the new Windows 10 VR suite. The new HTC Eclipse filing carries these particular tags: “head mounted display for computer simulated reality, motion tracking sensors, handheld computer simulated reality controllers.” Is this an indication of the next generation of VR? Only time will tell. However, the simultaneous release of the next generation of tracking and this new HMD may be more than coincidence.
It’s been a while since I have made a post, but this one really makes me excited. Jasper Brekelmans, a Netherlands-based 3D tech artist, has recently released a motion capture tool offering an easy way to record OpenVR tracking data from headsets, motion controllers and Vive Trackers for both Vive and Rift setups. Called OpenVR Recorder, the tool collects data that can be used for 3D animation and visual effects production, with many other potential applications in tracking and research.
This tool really has me excited and is one I have been waiting on for a long time. The brilliance of this tool is that the entire experience can be recorded as a simple stream of tracking data. That data can then be fed back into the game engine and re-rendered with a dynamic camera, a la machinima. Or, if a really fine output is desired, a more cinematic representation can be created by piping this stream into a commercial DCC and re-rendering frame by frame with a package such as RenderMan. The beautiful thing about this approach is that it is just the tracking data; the CG reality can be built around it. New types of cinematic recording can now be forged and then tailored to the desired end result. Fantastic tool. I can’t wait to get my hands on it!
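I have not used OpenVR Recorder yet, so purely to illustrate the underlying idea (and not how Brekelmans' tool actually works), here is a minimal sketch of polling OpenVR device poses and streaming them to a CSV file. The file name, sample count and loop structure are my own assumptions; a real recorder would pace itself against vsync and store full rotations as well.

#include <openvr.h>
#include <cstdint>
#include <cstdio>

int main()
{
    vr::EVRInitError err = vr::VRInitError_None;
    // Background mode: read tracking data without rendering to the HMD.
    vr::IVRSystem* sys = vr::VR_Init(&err, vr::VRApplication_Background);
    if (err != vr::VRInitError_None) return 1;

    std::FILE* out = std::fopen("tracking_capture.csv", "w");
    std::fprintf(out, "sample,device,x,y,z\n");

    for (int sample = 0; sample < 9000; ++sample) // ~100 seconds if paced at 90 Hz
    {
        vr::TrackedDevicePose_t poses[vr::k_unMaxTrackedDeviceCount];
        sys->GetDeviceToAbsoluteTrackingPose(
            vr::TrackingUniverseStanding, 0.0f, poses, vr::k_unMaxTrackedDeviceCount);

        for (uint32_t d = 0; d < vr::k_unMaxTrackedDeviceCount; ++d)
        {
            if (!poses[d].bPoseIsValid) continue;
            // The last column of the 3x4 matrix holds the device position.
            const auto& m = poses[d].mDeviceToAbsoluteTracking.m;
            std::fprintf(out, "%d,%u,%f,%f,%f\n", sample, d, m[0][3], m[1][3], m[2][3]);
        }
        // A production recorder would wait on vsync here instead of free-running.
    }

    std::fclose(out);
    vr::VR_Shutdown();
    return 0;
}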
Sensics, a long-time manufacturer of high-end head-mounted display devices, has recently released its new professional-grade HMD geared towards VR arcades and amusement parks.
This new headset comes in two versions: $2,160.00 and $2,590.00. Sanitation and resolution are its big selling points. Hygienically, the new units include a machine-washable, hypoallergenic face mask that physically separates from the display. This detachable face mask has two advantages. First, it can easily be set aside and sanitized for later use, apart from the display, the expensive part of the HMD. Secondly, participants can strap in and adjust the headset for optimal fit before clipping into the display. Both of these contribute to greater customer throughput. No longer does the attraction need to stop between sessions so that new users can exchange sets with the old ones. The new users prep themselves before the start of the experience, receive the display portion from the prior users and immediately start the experience with minimal downtime. Experience operators then sanitize the used face masks and help the next participants prepare for their own experiences.
The resolution of the more expensive unit is 1440x1600@90Hz LCD, which is 70% larger than the Vive/Oculus display at 1080x1200. Whether the rendering engine can handle that much more throughput is an entirely different issue and will need to be explored. The cheaper unit’s resolution is 2160x1200@90Hz OLED. Here are some of the image comparisons:
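As for the throughput question above, a rough back-of-the-envelope, assuming both figures are per-eye panels running at 90Hz:

1440 x 1600 x 2 eyes x 90 Hz ≈ 415 million pixels per second
1080 x 1200 x 2 eyes x 90 Hz ≈ 233 million pixels per second

So the rendering engine has to fill roughly 1.8 times the pixels within the same ~11ms frame budget.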
Currently the headsets are equipped with an integrated 9-axis orientation tracker, very similar to what you currently have in your cell phone. For room-scale and larger experiences, this headset lends itself naturally to an OptiTrack or Vicon tracking solution. Regretfully, it does not sound immediately compatible with Steam VR tracking. However, a third party could very well create an attached controller which could track the headset as an added component to an existing Vive setup.
While this system sounds very interesting, at this moment in time the cost of these units is prohibitively expensive for R&D and prototyping. Past those stages, this tool would be very useful for commercial usage.
As promised, here is the second test of the iMyth MOCAP suit. As a full disclaimer, the system is still very primitive and has a long way to go yet. But forward progress is being made.
The system is made with five Steam VR controllers mounted on the interactor: one on the waist, two on the wrists and two on the feet. iMyth member Chris Brown devised this first iteration. At this moment in time, only positional offsets are represented; there is no orientation information yet. That will be the next step. Similarly, no adjustments have been made for the difference in scale between the interactor and the avatar. Once those are calibrated, along with proper pole vector handling, the animation will appear much smoother and more accurate. There is a certain amount of latency present in the system; we will need to look into that further. Quite possibly translating the blueprints into actual C++ classes will speed things up. However, for these early experimental stages, blueprints will work just fine. The system is implemented using Steam VR tracking and the UE4 game engine. More really good stuff to come!
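For anyone curious what the missing calibration step might look like, here is a rough, stand-alone sketch (not Chris Brown's actual blueprint logic) of how a tracked controller's position could be offset and scaled before being handed to an avatar's IK effector. All names and parameters here are assumptions.

struct Vec3 {
    float x, y, z;
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator*(float s)       const { return {x * s, y * s, z * s}; }
};

// Maps a raw tracker position into the avatar's space.
// calibrationOrigin : the waist tracker position captured during a T-pose
// avatarOverActor   : avatar height divided by interactor height (uniform scale)
// boneOffset        : where the effector sits relative to the physical tracker
Vec3 TrackerToEffector(const Vec3& trackerPos,
                       const Vec3& calibrationOrigin,
                       float avatarOverActor,
                       const Vec3& boneOffset)
{
    // 1. Express the tracker relative to the calibrated origin.
    Vec3 local = trackerPos - calibrationOrigin;
    // 2. Scale for the height difference between interactor and avatar.
    Vec3 scaled = local * avatarOverActor;
    // 3. Re-apply the origin and the fixed bone offset (e.g. wrist vs. palm).
    return scaled + calibrationOrigin + boneOffset;
}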
Yesterday was a very special day as it marked the first successful baby test of the new iMyth MOCAP suit. It was a very simple test, but it worked well. Jon Albertson was the brave volunteer who donned the new MOCAP suit, which consisted simply of a Vive HMD, a MOCAP belt, two MOCAP hand trackers and two MOCAP foot trackers. The belt and the trackers all worked very successfully, driving simple objects within the virtual framework.
Although the trackers drove very simple objects in VR, the test was very promising as it set up the next phase: driving an articulated character. Hopefully we will have updates in the very near future demonstrating this exciting new phase!
It has been a long time coming in the quest to create real-time digital humans. Although not quite there, a very talented group has just put together the best attempt yet:
Mike Seymour, co-founder and interviewer for FXGuide, teamed up with companies such as Epic Games, Cubic Motion and 3Lateral to create this impressive showpiece. The demonstration was created for the Siggraph 2017 conference, where Mike Seymour’s avatar interviewed, live on stage, leading industry figures from Pixar, Weta Digital, Magnopus, Disney Research Zurich, Epic Games, USC-ICT, Disney Research and Fox Studios.
Mike Seymour was scanned as part of the Wikihuman project at USC-ICT, with additional eye scanning done at Disney Research Zurich. The graphics engine and real-time graphics are a custom build of Unreal Engine. The face tracking and solving is provided by Cubic Motion in Manchester. The state-of-the-art facial rig was made by 3Lateral in Serbia. The complex new skin shaders were developed in partnership with Tencent in China. The technology uses several AI/deep-learning engines for tracking, solving, reconstructing and recreating the host and his guests. The research into the acceptance of the technology is being done by Sydney University, Indiana University and Iowa State University. The guests’ avatars are made from single still images by Loom.ai in San Francisco.
While this experience does play at 30fps and 90fps (with a wider camera angle), it comes at a cost. It was created with nine PCs, each with 32GB of RAM and an Nvidia 1080 Ti card. Here are the other technical facts:
MEETMIKE has about 440,000 triangles being rendered in real time, which means rendering VR stereo about every 9 milliseconds; of those triangles, 75% are used for the hair.
Mike’s face rig uses about 80 joints, mostly for the movement of the hair and facial hair.
For the face mesh, there are only about 10 joints used; these are for the jaw, eyes and tongue, in order to add more of an arc to their motion.
These are used in combination with around 750 blendshapes in the final version of the head mesh (a minimal evaluation sketch follows this list).
The system uses complex traditional software design and three deep learning AI engines.
MIKE’s face is captured with a state-of-the-art Technoprops stereo head rig with IR computer vision cameras.
The university research studies into acceptance aim to be published at future ACM conferences. The first publication can be found at ACM Conference.
FACS real-time facial motion capture and solving [Ekman and Rosenberg 1997]
Models built with the Light Stage scanning at USC-ICT
Advanced real time VR rendering
Advanced eye scanning and reconstruction
New eye contact interaction / VR simulation
Interaction of multiple avatars
Character interaction in VR at suitably high frame rates
Shared VR environments
Lip Sync and unscripted conversational dialogue
Facial modeling and AI assisted expression analysis.
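As referenced above, here is a minimal sketch of what evaluating a blendshape-based face rig boils down to. This is the textbook linear formulation, not Cubic Motion's or 3Lateral's actual pipeline, and the types and names are placeholders.

#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

// neutral : the rest-pose face mesh vertices
// deltas  : per-shape vertex offsets (one entry per blendshape, e.g. ~750 for MEETMIKE)
// weights : per-frame activation of each shape, typically driven by a FACS solver
std::vector<Vec3> EvaluateBlendshapes(const std::vector<Vec3>& neutral,
                                      const std::vector<std::vector<Vec3>>& deltas,
                                      const std::vector<float>& weights)
{
    std::vector<Vec3> result = neutral;
    for (std::size_t s = 0; s < deltas.size(); ++s)
    {
        const float w = weights[s];
        if (w == 0.0f) continue; // most shapes are inactive on any given frame
        for (std::size_t v = 0; v < result.size(); ++v)
        {
            result[v].x += w * deltas[s][v].x;
            result[v].y += w * deltas[s][v].y;
            result[v].z += w * deltas[s][v].z;
        }
    }
    return result; // joint-driven jaw/eye motion would be layered on top of this
}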
This is all very exciting for the evolution of immersive experiences and the evolution of digital interactors. As immersive experiences become more life-like, the need to create photo-real interactors becomes more pressing. So far this is the best example of real-time facial MOCAP I have seen yet.
In the video above, the animation of the mouth seems to be the weakest component. The eyes, nose, hair and wrinkles really seem to shine. The other videos I have seen show off the eyes more than the mouth. I will have to study their approach in order to understand why the mouth animation does not look as good. It is a real head-scratcher, especially since the system employs 750 blend targets. The animation is not quite photo-real. However, animation such as this may be good enough for stylized characters. Hopefully there is time for the animation technology to catch up to the rendering performance.
Regardless of the mouth, the system was still put together with off-the-shelf software and components. They were working with a special build of the UE4 editor; how unique that build was has yet to be seen. This is all very inspiring since anyone could start getting similar results now. Companies such as iMyth need to be employing this technology today in order to keep up with the developmental curve. Once tutorials employing these concepts become commonplace, everyone and their uncle will be using them.
For a company such as iMyth, what needs to be developed is an animation system in which the models animate consistently regardless of the face that drives them. Any unique character may have multiple interactors driving it at different times. MEETMIKE was created based on the real Mike Seymour. Immersive experience companies will not be able to create personalized versions of each character to correspond with each possible interactor. One character model will be built and rigged while multiple interactors need to be able to drive it. This adds an entirely new level of complexity to the implementation.
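One plausible way to attack this (purely my own speculation, not anything iMyth or the MEETMIKE team has built) is to calibrate each interactor's expression range once, then remap their raw solver output into the shared character rig's normalized weight space, so the same character file works no matter who is wearing the head rig. The names below are hypothetical.

#include <algorithm>
#include <cstddef>
#include <vector>

// Per-interactor calibration for one expression channel (e.g. "jawOpen"):
// the raw solver values this particular performer produces at rest and at maximum.
struct ChannelCalibration { float restValue; float maxValue; };

// Remap performer-specific solver output into the character rig's 0..1 weights,
// so any interactor can drive the same rigged character consistently.
std::vector<float> RetargetExpression(const std::vector<float>& rawSolverValues,
                                      const std::vector<ChannelCalibration>& calib)
{
    std::vector<float> weights(rawSolverValues.size(), 0.0f);
    for (std::size_t i = 0; i < rawSolverValues.size(); ++i)
    {
        const float range = calib[i].maxValue - calib[i].restValue;
        if (range <= 0.0f) continue; // unusable channel for this performer
        const float w = (rawSolverValues[i] - calib[i].restValue) / range;
        weights[i] = std::clamp(w, 0.0f, 1.0f);
    }
    return weights;
}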
Along with finger tracking, facial MOCAP has been one of the big hurdles for immersive experiences to overcome. Maybe with MEETMIKE this barrier has now been cleared.