Author: Chris

  • Update on Steam VR 2.0

    There are two bits of information out there giving some direction on where HTC and Steam will be taking the Vive.

    In fact, it may be too early to guess; maybe the next generation of HMD coming from this collaboration won’t even be called the ‘Vive’ any longer. What are your thoughts?

    First, the engineers at SteamVR reminded all future developers to start ordering their new SteamVR 2.0 base stations. The new base stations will not be compatible with the old HMDs; they will only work with the new TS4231 sensors. Good for backwards compatibility, the new sensors will still respond to the old Lighthouse base stations. The new base stations will be cheaper, have no moving parts and should not have sync issues. Steam is asking manufacturers to place orders now. Manufacturers must buy them in batches of 45 at $60 apiece, supplied with no packaging and no mounting equipment. The retail price of the new base stations will probably be higher than $60, but we’ll just have to wait for the MSRP in 2018.

    What is really exciting about these new bases is that they will soon support up to four base stations working in conjunction with each other, covering areas of up to 10 m × 10 m. That is really big! In fact, it is so big that it should be enough space to implement redirected walking nearly seamlessly. Of course, there would be caveats in the environment to compensate for the still-limited space. However, with a 10 m × 10 m space you should only have to worry about a reset every 13 m or so, which is still quite a large distance! This is super exciting, and more information will follow as things continue to develop.
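
    As a quick back-of-the-envelope check on that reset distance (my own arithmetic, not from the announcement; the 0.5 m wall margin is an assumption), the longest straight-line walk in a square room is its diagonal:

    ```python
    import math

    # Play-area size quoted in the post: a 10 m x 10 m tracked space.
    width, depth = 10.0, 10.0

    # The longest uninterrupted straight-line walk is the room diagonal.
    diagonal = math.hypot(width, depth)                           # ~14.1 m

    # Keeping an assumed 0.5 m buffer from the chaperone bounds on every side
    # shrinks the usable diagonal to roughly the 13 m quoted above.
    margin = 0.5
    usable = math.hypot(width - 2 * margin, depth - 2 * margin)   # ~12.7 m

    print(f"diagonal {diagonal:.1f} m, usable {usable:.1f} m")
    ```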

    The next bit of information invites even more room for conjecture. HTC has just applied for a New Zealand patent for a new HMD called the HTC Eclipse. The HTC Focus was thought to be the new wireless, mobile headset compatible with the new Windows 10 VR suite. The HTC Eclipse filing carries these particular tags: “head mounted display for computer simulated reality, motion tracking sensors, handheld computer simulated reality controllers.” Is this an indication of the next generation of VR? Only time will tell. However, the simultaneous arrival of the next generation of tracking and this new HMD may be more than coincidence.

  • OpenVR Recorder is Here!

    It’s been a while since I have made a post, but this one really has me excited. Jasper Brekelmans, a Netherlands-based 3D tech artist, has recently released a motion capture tool offering an easy way to record OpenVR tracking data from headsets, motion controllers and Vive Trackers for both Vive and Rift setups. Called OpenVR Recorder, the data collected by the program can be used for 3D animation and visual effects production, with many other potential applications in tracking and research.

    This is a tool I have been waiting on for a long time. The brilliance of it is that an entire experience can be recorded as a simple stream of tracking data. That data can then be fed back into the game engine and re-rendered with a dynamic camera, à la machinima. Or, if a finer output is desired, a more cinematic representation can be created by piping the stream into a commercial DCC and re-rendering it frame by frame with a package such as RenderMan. The beautiful thing about this approach is that it is just the tracking data, and the CG reality can be built around it. New types of cinematic recording can now be forged and tailored to the desired end result. Fantastic tool. I can’t wait to get my hands on it!
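
    To give a feel for what “just a stream of tracking data” looks like, here is a minimal sketch of polling and logging OpenVR poses from Python with the pyopenvr bindings. This is not OpenVR Recorder’s own code, only an illustration, and the exact call signatures vary a little between pyopenvr versions, so treat the details as assumptions.

    ```python
    import csv
    import time

    import openvr  # pyopenvr bindings; exact signatures differ between versions

    vr_system = openvr.init(openvr.VRApplication_Other)

    with open("tracking_log.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["time", "device", "x", "y", "z"])  # translation only, for brevity
        try:
            while True:
                poses = vr_system.getDeviceToAbsoluteTrackingPose(
                    openvr.TrackingUniverseStanding, 0, openvr.k_unMaxTrackedDeviceCount)
                now = time.time()
                for index, pose in enumerate(poses):
                    if not pose.bPoseIsValid:
                        continue
                    m = pose.mDeviceToAbsoluteTracking  # 3x4 row-major device transform
                    writer.writerow([now, index, m[0][3], m[1][3], m[2][3]])
                time.sleep(1.0 / 90.0)  # roughly one sample per display refresh
        except KeyboardInterrupt:
            pass

    openvr.shutdown()
    ```

    A real recorder would of course also store each device’s rotation (the 3×3 part of the matrix) and its role (HMD, controller, Tracker) so the stream can be mapped back onto the CG scene.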

  • Sensics’ New Professional Grade HMD

    Sensics, a longtime manufacturer of high-end head mounted display devices, has recently released its new professional-grade HMD geared towards VR arcades and amusement parks.

    The new headset comes in two versions, at $2,160.00 and $2,590.00. Sanitation and resolution are its big selling points. Hygienically, the new units include a machine-washable, hypoallergenic face mask that physically separates from the display. This detachable face mask has two advantages. First, it is easily set aside and sanitized for later use apart from the display, the expensive part of the HMD. Secondly, participants can strap in and adjust the headset for optimal fit before clipping into the display. Both of these contribute to greater customer throughput. No longer does the attraction need to stop between sessions so that new users can exchange sets with the old. The new users prep themselves before the start of the experience, receive the display portion from the prior users, then immediately start the experience with minimal downtime. Experience operators then sanitize the used face masks and help the next participants prepare for their own experiences.

    The resolution of the more expensive unit is 1440×1600 @ 90 Hz LCD, roughly 70% more than the Vive/Oculus display at 1080×1200. Whether the rendering engine can handle that much more throughput is an entirely different issue and will need to be explored. The cheaper unit’s resolution is 2160×1200 @ 90 Hz OLED.
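
    For reference, here is a quick pixel-count comparison (my own arithmetic, assuming both figures are per-eye panel resolutions):

    ```python
    sensics_pixels = 1440 * 1600   # higher-end Sensics panel
    vive_pixels    = 1080 * 1200   # original Vive/Oculus panel

    increase = sensics_pixels / vive_pixels - 1
    print(f"{sensics_pixels:,} vs {vive_pixels:,} pixels -> about {increase:.0%} more")
    # about 78% more pixels, in the same ballpark as the figure quoted above
    ```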

    Currently the headsets will be equipped with integrated 9-axis orientation trackers, very similar to what you currently have in your cell phone. For room-scale and larger experiences, this headset lends itself naturally to an OptiTrack or Vicon tracking solution. Regrettably, it does not sound immediately compatible with SteamVR tracking. However, a third party could very well create an attached controller to track the headset as an added component to an existing Vive setup.

    While this system sounds very interesting, at this moment the cost of these units is prohibitive. Past the R&D and prototyping stages, though, this tool would be very useful for commercial use.

     

  • A New Potential in Full Body MOCAP

    I just found out today about a new company offering its solution for a full-body MOCAP suit: the Enflux full-body MOCAP suit. This suit is interesting because, unlike the Perception Neuron with nodes you attach to your body, the Enflux suit has nodes embedded within the fabric of the suit itself. How the suit deals with offsets in armature scale is unknown. Hopefully they have solved that small issue.

    The suit is driven by 10 IMU sensors: five located in the pants and five in the shirt. Evidently the electronics are easily removed to facilitate washing of the suit. Each node is rated to plus or minus 2 degrees in roll, pitch and yaw. They are currently making developers’ suits available to the public for $500, and there is also a $100 headband that can be used for head tracking. Currently the technology is available for Blender and Unity; there is no documentation discussing availability for UE4. Overall this looks like a cost-effective alternative to the Perception Neuron suit that can be easily put on and taken off, and is hygienic and easy to wash.

    Similar to the Perception Neuron, this could be used as a poor man’s MOCAP solution. At roughly $500 less than the Perception Neuron, it may seem like the more cost-effective option.

    Regrettably, from an iMyth perspective, I am going to take a back seat on this technology until something a little more second-generation arrives. The first and foremost reason is that this is an IMU-driven sensing solution. IMUs are great at measuring relative accelerations and displacements, but without a world-space anchor they have a bad habit of “drifting” away. The drift is caused by an inherent flaw in the electronics’ calculations: as each node iterates over its solution, the amount of drift increases, somewhat randomly, over time. We found a workaround by using an HTC Vive headset as the anchor point for all character calculations. While not perfect or optimal, it did provide a suitable way to keep the character in the same relative space. A better solution would be to use a SteamVR tracker at the waist, at the wrists and ankles, and on the head. If you are going to that extent, all the suit really offers is an economical solution for the elbows and knees, and 2 degrees of error in all of the calculations seems like a heavy price to pay. That will come across as a lot of float :(.
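
    To illustrate the kind of anchoring described above, here is a minimal sketch of the idea: pull the IMU-estimated character root toward the position implied by the SteamVR-tracked HMD a little every frame, so accumulated drift can never carry the body away. The function and variable names are hypothetical, not code from our UE4 blueprints.

    ```python
    import numpy as np

    def anchor_to_hmd(imu_root_pos, imu_head_offset, hmd_pos, blend=0.1):
        """Pull the IMU-estimated character root toward the position implied by
        the externally tracked HMD, removing accumulated drift a little each frame.

        imu_root_pos    -- root (hips) position estimated by integrating the IMUs
        imu_head_offset -- vector from the root to the head in the IMU skeleton
        hmd_pos         -- world-space HMD position from SteamVR (drift-free anchor)
        blend           -- 0..1, how aggressively to correct per frame
        """
        # Where the root *should* be if the IMU head landed exactly on the HMD.
        anchored_root = hmd_pos - imu_head_offset
        # Blend toward it so the correction is gradual rather than a visible pop.
        return (1.0 - blend) * imu_root_pos + blend * anchored_root

    # Example: after some drift the IMU root sits 0.3 m off to the side.
    root = np.array([0.3, 0.0, 0.0])
    head_offset = np.array([0.0, 0.6, 0.0])   # hips-to-head in the IMU skeleton
    hmd = np.array([0.0, 1.6, 0.0])           # SteamVR-tracked head position
    for _ in range(30):                       # roughly a third of a second at 90 Hz
        root = anchor_to_hmd(root, head_offset, hmd, blend=0.1)
    print(root)                               # converges toward [0.0, 1.0, 0.0]
    ```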

    The headband may be great for a non-real-time capture performance; however, I’m not quite sure how it would work with an HMD over the user’s head. The Enflux suit also does not provide support for articulated fingers, something the Perception Neuron does. Understandably, this was a design choice to keep the cost of the suit down. Will there be integration with an articulated glove in the future? We’ll just have to wait and see.

    Enflux has a very reasonable entry into the full-body MOCAP market. Being cheaper than the Perception Neuron may give them the competitive edge they need in order to stay alive. However, depending on a pure IMU solution leaves the door open to the much better tracking technologies coming with the second generation.

  • iMyth MOCAP Suit Test #2

    As promised, here is the second test of the iMyth MOCAP suit. As a full disclaimer, the system is still very primitive and has far to go yet, but forward progress is being made.

    The system is made with five SteamVR controllers mounted on the interactor: one on the waist, two on the wrists and two on the feet. iMyth member Chris Brown devised this first iteration. At this moment, only positional offsets are represented; there is no orientation information yet. That will be the next step. Similarly, there are no adjustments made for the difference in scale between the interactor and the avatar. Once those are calibrated, with proper pole-vector simulation, the animation will appear much smoother and more accurate. There is a certain amount of latency present in the system, which we will need to look into further. Quite possibly translating the blueprints into actual C++ classes will speed things up; however, for these early experimental stages, blueprints will work just fine. The system is implemented using SteamVR tracking and the UE4 game engine. More really good stuff to come!
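
    For readers wondering what “only positional offsets” means in practice, here is a hypothetical sketch of the idea (not Chris Brown’s actual blueprint logic): each tracker stores its position at calibration time, and its displacement since then is simply added to the matching avatar effector, with no orientation and no interactor-to-avatar scaling applied yet.

    ```python
    # Hypothetical position-only retargeting; names and numbers are illustrative.

    # Effector rest positions on the avatar (waist, wrists, feet), in avatar space.
    AVATAR_REST = {
        "waist":   (0.0, 1.0, 0.0),
        "wrist_l": (-0.6, 1.0, 0.0),
        "wrist_r": (0.6, 1.0, 0.0),
        "foot_l":  (-0.2, 0.0, 0.0),
        "foot_r":  (0.2, 0.0, 0.0),
    }

    def calibrate(tracker_positions):
        """Record each tracker's position while the interactor holds the rest pose."""
        return dict(tracker_positions)

    def drive_effectors(tracker_positions, calibration):
        """Add each tracker's displacement since calibration to the matching
        avatar effector. No orientation and no scale compensation yet."""
        targets = {}
        for name, (rx, ry, rz) in AVATAR_REST.items():
            tx, ty, tz = tracker_positions[name]
            cx, cy, cz = calibration[name]
            targets[name] = (rx + (tx - cx), ry + (ty - cy), rz + (tz - cz))
        return targets
    ```

    Because the interactor’s proportions differ from the avatar’s, these raw offsets will overshoot or undershoot; that is exactly the scale calibration mentioned above as the next step.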

  • iMyth MOCAP Suit

    Yesterday was a very special day, as it marked the first successful baby test of the new iMyth MOCAP suit. It was a very simple test, but it worked well. Jon Albertson was the brave volunteer who donned the new MOCAP suit, which consisted simply of a Vive HMD, a MOCAP belt, two MOCAP hand trackers and two MOCAP foot trackers. The belt and the trackers all worked very successfully, driving simple objects within the virtual framework.

    Although the trackers drove very simple objects in VR, the test was very promising as it set up the next phase: driving an articulated character. Hopefully we will have updates in the very near future demonstrating this exciting new phase!

  • Star Wars/The Void Coming to Orlando

    I suppose it was only a matter of time. I got wind that The Void had become part of the Disney Accelerator a couple of weeks ago, and last week I learned that The Void would be opening an installation here in Orlando. I was not quite sure, but I should have put two and two together earlier. It’s official: ILMX and The Void will be opening a Star Wars experience, “Secrets of the Empire,” in Orlando some time around the holidays. I don’t have any details other than what I have mentioned above.

    From the cover art, it looks like it is going to be a very similar experience to the Ghostbusters one, except that it will take place in the Star Wars universe. The one major difference will be the inclusion of a digital interactor, K-2SO. There is some test footage of the autonomous robot posted in the Forbes internet article, ILMX Autonomous Interactors. Mind that this interactor is autonomous and not driven by a human being.

    I’m very excited to see the results of this. I would think that ILMX has already created the majority of the experience and will spend the next couple of months shoehorning it into The Void’s system. I will have a full review once the attraction is available.

  • The New Standard In Digital Interactors

    It has been a long time coming in the effort to create real-time digital humans. Although not quite there, a very talented group has just put together the best attempt yet:

    Mike Seymour, co-founder of and interviewer for FXGuide, teamed up with companies such as Epic Games, Cubic Motion and 3Lateral to create this impressive showpiece. The demonstration was created for the SIGGRAPH 2017 conference, where Mike Seymour’s avatar interviewed, live on stage, leading industry figures from Pixar, Weta Digital, Magnopus, Disney Research Zurich, Epic Games, USC-ICT, Disney Research and Fox Studios.

    Mike Seymour was scanned as part of the Wikihuman project at USC-ICT, with additional eye scanning done at Disney Research Zurich. The graphics engine and real-time graphics are a custom build of Unreal Engine. The face tracking and solving is provided by Cubic Motion in Manchester. The state-of-the-art facial rig was made by 3Lateral in Serbia. The complex new skin shaders were developed in partnership with Tencent in China. The technology uses several deep-learning AI engines for tracking, solving, reconstructing and recreating the host and his guests. The research into acceptance of the technology is being done by Sydney University, Indiana University and Iowa State University. The guests’ avatars are made from single still images by Loom.ai in San Francisco.

    While this experience does play at 30 fps and 90 fps (with a wider camera angle), it does come at a cost: it was created with nine PCs, each with 32 GB of RAM and an Nvidia 1080 Ti card. Here are the other technical facts:

    • MEETMIKE has about 440,000 triangles being rendered in real time, which means rendering VR stereo about every 9 milliseconds; of those triangles, 75% are used for the hair (see the quick frame-budget check after this list).
    • Mike’s face rig uses about 80 joints, mostly for the movement of the hair and facial hair.
    • For the face mesh, only about 10 joints are used; these are for the jaw, eyes and tongue, in order to add more arc motion.
    • These are in combination with around 750 blendshapes in the final version of the head mesh.
    • The system uses complex traditional software design and three deep learning AI engines.
    • MIKE’s face is captured with a state-of-the-art Technoprops stereo head rig with IR computer vision cameras.
    • The University research studies into acceptance aim to be published at future ACM conferences. The first publication can be found at ACM Conference
    • FACS real time facial motion capture and solving [Ekman and Rosenberg 1997]
    • Models built with the Light Stage scanning at USC-ICT
    • Advanced real time VR rendering
    • Advanced eye scanning and reconstruction
    • New eye contact interaction / VR simulation
    • Interaction of multiple avatars
    • Character interaction in VR at suitably high frame rates
    • Shared VR environments
    • Lip Sync and unscripted conversational dialogue
    • Facial modeling and AI assisted expression analysis.
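
    A quick frame-budget check on the first bullet above (my own arithmetic, not from the article):

    ```python
    refresh_hz = 90
    frame_budget_ms = 1000 / refresh_hz              # ~11.1 ms available per frame at 90 Hz
    render_time_ms = 9                               # stereo render time quoted above
    headroom_ms = frame_budget_ms - render_time_ms   # ~2.1 ms of slack per frame

    triangles = 440_000
    hair_triangles = int(triangles * 0.75)           # ~330,000 triangles just for the hair

    print(f"budget {frame_budget_ms:.1f} ms, headroom {headroom_ms:.1f} ms, "
          f"hair triangles ~{hair_triangles:,}")
    ```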

    This is all very exciting for the evolution of immersive experiences and of digital interactors. As immersive experiences become more life-like, the need to create photo-real interactors becomes more demanding. So far this is the best example of real-time facial MOCAP I have seen yet.

    In the video above, the animation of the mouth seems to be the weakest component. The eyes, nose, hair and wrinkles really seem to shine. The other videos I have seen show off the eyes more than the mouth. I will have to study their approach in order to understand why the mouth animation does not look as good. It is a real head-scratcher, especially since the system employs 750 blend targets. The animation is not quite photo-real; however, animation such as this may be good enough for stylized characters. Hopefully there is time for the animation technology to catch up to the rendering performance.

    Regardless of the mouth, the system was still put together with off-the-shelf software and components. They were working with a special build of the UE4 editor; how unique that build is has yet to be seen. This is all very inspiring, since anyone could start getting similar results now. Companies such as iMyth need to be employing this technology today in order to keep up with the developmental curve. Once tutorials employing these concepts become commonplace, everyone and their uncle will be using them.

    For a company such as iMyth, what needs to be developed is an animation system in which the models animate similarly regardless of the face that drives them. Any unique character may have multiple interactors driving it at different times. MEETMIKE was created based on the real Mike Seymour, but immersive experience companies will not be able to create personalized versions of each character to correspond with each possible interactor. One character model will be built and rigged, while multiple interactors need to be able to drive it. This adds an entirely new level of complexity to the implementation.
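
    One plausible way to make a single face rig respond consistently to different interactors (a hypothetical sketch, not something MEETMIKE or iMyth has published): calibrate each performer’s neutral pose and full range per expression channel, then normalize the live capture into the rig’s 0–1 blendshape space. Channel names below are illustrative.

    ```python
    # Hypothetical per-performer calibration so one rig behaves the same
    # regardless of whose face drives it.

    def calibrate_performer(neutral, extremes):
        """Store each performer's neutral and full-range value per channel
        (captured while they hold a relaxed face, then exaggerate each expression)."""
        return {ch: (neutral[ch], extremes[ch]) for ch in neutral}

    def normalize(live_values, calibration):
        """Map a performer's raw capture values into the rig's 0-1 blendshape weights."""
        weights = {}
        for ch, raw in live_values.items():
            lo, hi = calibration[ch]
            span = (hi - lo) or 1e-6                      # avoid divide-by-zero
            weights[ch] = min(max((raw - lo) / span, 0.0), 1.0)
        return weights

    # Two performers with different ranges produce comparable rig weights.
    cal_a = calibrate_performer({"jaw_open": 0.05}, {"jaw_open": 0.80})
    cal_b = calibrate_performer({"jaw_open": 0.10}, {"jaw_open": 0.50})
    print(normalize({"jaw_open": 0.425}, cal_a))   # ~0.5
    print(normalize({"jaw_open": 0.300}, cal_b))   # ~0.5
    ```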

    Along with finger tracking, facial MOCAP has been one of the big hurdles for immersive experiences to overcome. Maybe with MEETMIKE this barrier has now been cleared.

     

  • Variety’s Location Based State of the Industry

    Here is a quick synopsis of the Variety article, Location Based VR.

    • The folks at Variety are very excited about The Void’s new immersive experience called “Curse of the Serpent’s Eye,” an immersive take on the Indiana Jones theme. It will premiere next month and be the second installment after the Ghostbusters experience. Interestingly enough, The Void’s co-founder, James Jensen, identifies that the best immersive experiences are the ones with real, physical props. CEO Cliff Plummer is very bullish on immersive experiences drawing people back to malls and movie theaters. He says, “The studios are looking for new revenue streams. We (The Void) have one, and it’s easy for them to relate to.” The Void has also been admitted to Disney’s Accelerator for start-ups.
    • 20th Century Fox President of Innovation Salil Mehta agrees: “We believe that location-based VR will be the way that many people experience virtual reality for the first time. It’s an incredible opportunity for us to create industry-defining immersive experiences that can’t be replicated in your living room.”
    • FoxNext is developing an “Alien” immersive experience and has invested in one of The Void’s competitors, Dreamscape Immersive.
    • Lionsgate Interactive Ventures and Games president Peter Levin endorsed location-based VR wholeheartedly at the recent VRTL industry conference: “We are extremely bullish on it.”
    • Paramount unveiled an immersive experience supporting “Transformers: The Last Knight”.
    • Doug Griffin, from Nomadic, says, “We’ve heard over and over from film studios that location-based is becoming part of their strategy moving forward.”
    • It seems everyone is disappointed at the rate at which VR has been adopted by the public. They see location-based installations, similar to those in China, as avenues through which the average person can try out the newest VR experiences without having to plop down the money to get started in VR at home.
    • The IMAX VR Experience Center is taking a slightly different approach, focusing on individual pods for participants rather than complicated setups such as The Void’s. They have locations in Los Angeles and New York, and will be opening soon in Toronto, Manchester and Shanghai. IMAX is using these installations as a soft launch before embarking on a rollout across thousands of movie theaters.
    • Problems identified are throughput and the inevitability that home VR systems will get better. Similarly, there is the issue of price: many of the experiences run from $15 to $30 for a 15-minute session. Nomadic’s Griffin thinks lower prices are key to taking location-based VR mainstream. “We want to bring this medium of entertainment to neighborhoods everywhere,” he says. “We don’t charge a price that is out of reach for those smaller neighborhoods and communities.”
    • Griffin also believes that by creating modular set pieces, each location will need very little downtime when shifting between experiences.
    • Wisely, companies such as The Void realize that content is king and are creating pipelines for producing new experiences every three to six months. We’ll have to see how well that pans out 🙂 Smartly, they are also investigating the concept of creating persistent avatars and monetizing product tie-ins with those avatars.
  • ILMX Autonomous Interactors

    We don’t necessarily hear much from ILMX. Much like their cousin company, Imagineering, they tend to wait for a groovy time to spring a really pleasant surprise on an unsuspecting audience. I’m not at all surprised by this, as ILMX shows off how its autonomous interactors can collaborate with participants to create dynamic, interactive stories. Check out this video from Fortune Magazine.

    https://www.facebook.com/FortuneMagazine/videos/10155203389867949/

    I love how the robot goes through its protocol but still responds to the participants’ inputs. That is what a true interactor should be doing. There are, of course, glitches such as character inter-penetration and a certain amount of latency, but that does not matter. All the participants know is that they are dealing with another “being” in the experience and that they are in the driver’s seat for creating their own immersive story experience.