We don’t necessarily hear much from ILMX. Much like their cousin company, Imagineering, they tend to wait for just the right moment to spring a really pleasant surprise on an unsuspecting audience. So I’m not at all surprised by this video, in which ILMX shows off how its autonomous interactors can collaborate with participants to create dynamic, interactive stories. Check out this video from Fortune Magazine.
I love how the robot goes through its protocol but still responds to the participants’ inputs. That is what a true interactor should be doing. There are, of course, glitches such as character interpenetration and a certain amount of latency, but that does not matter. All the participant knows is that they are dealing with another “being” in the experience and that they are in the driver’s seat for creating their own immersive story experience.
I received some bittersweet news today from the folks at Valve Software.
After much anticipation, the engineers at SteamVR have found a way to create a cheaper, more versatile Lighthouse solution. The old sensor chip, the TS3633, produced a single “envelope” pulse per laser or sync blinker hit. That hit allows the Watchman module in the device to time the difference between the sync and the laser hit and compute an angle from that difference. There is a new sensor chip from Triad Semiconductor, the TS4231, which is actually simpler and cheaper to produce. Most importantly, it provides a burst of data per laser or sync hit. That allows information to be transmitted on the laser itself, which can be used to identify the source of that laser.
This new capability to encode information in the laser is significant for two reasons:
It allows support for more than two base stations, and thus larger tracking volumes.
It allows a base station to function without including a sync blinker, which is the source of most of the interference between base stations (and is also a significant driver of base station cost). This technology is called sync-on-beam; a rough sketch of what it enables on the tracking side follows below.
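I have no inside knowledge of how Valve will actually expose this, but here is a rough sketch of what per-station identification buys a tracked device: once every laser hit can be attributed to a specific base station, the receiver can simply keep a table of sweep angles keyed by station ID. The names and structure below are entirely hypothetical.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class SweepAngles:
    horizontal: Optional[float] = None  # degrees, from the latest horizontal sweep
    vertical: Optional[float] = None    # degrees, from the latest vertical sweep

@dataclass
class SensorState:
    # station_id -> latest pair of sweep angles seen by this photodiode.
    # With the old shared sync blinker, more than two stations interfere;
    # once every hit carries an ID, this table can grow to any number of stations.
    stations: Dict[int, SweepAngles] = field(default_factory=dict)

    def record_hit(self, station_id: int, axis: str, angle_deg: float) -> None:
        entry = self.stations.setdefault(station_id, SweepAngles())
        if axis == "horizontal":
            entry.horizontal = angle_deg
        else:
            entry.vertical = angle_deg
```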
Pending test results, the new sensors will start rolling out in late June. The new Lighthouses won’t start rolling out until November.
Here is a diagram of the compatibility of the two systems:
Regretfully, the old sensors will not work with the new Lighthouse configuration: the new sensors will work with the old base stations, but not the other way around. And I just invested in 7 new Tracker pucks. They can still be used with the old system but not the new.
So this news is bittersweet. iMyth will have warehouse-scale VR in 2018, but we will need to retrofit with all new sensors. I suppose it was a good thing I had not had a chance to turn the printed iMyth props into controllers, since they would have needed to be redone anyway. At close to $500 per prop, the damage could have been much worse. I have recently finished long conversations with the folks at Optitrack and Motion Reality, and it seems Steam VR can now join them in the house-scale/warehouse-scale VR race. Because Steam VR tracking is not bound by cameras, there may also be the possibility of vertical structures. Very cool!
I guess I’m just being a geek here. I would not normally create a posting like this but I think it is just so cool.
It’s very hard to understand just what the Steam VR Lighthouses are doing and how they track an object’s position in real time. In my opinion it is a minor engineering miracle. The Lighthouse basically works like this:
The LEDs flash, which tells the controller’s sensors to get ready for a scan
The horizontal (lower) wheel spins and emits a laser beam sweeping from right to left across 120 degrees
The sensor measures the time between the LED flash and the laser pulse
The LEDs flash again
The vertical (right) wheel spins and emits a vertical beam sweeping across 120 degrees
The sensor records the second time difference
The computer then uses the two time differences to compute the sensor’s location and orientation
https://gfycat.com/BleakAcclaimedJellyfish
The Lighthouse’s fundamental operation is cool, but what steams my noodle is that it runs this cycle 60 times a second!
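For my own sanity, here is a minimal sketch of the timing-to-angle math described above, with made-up numbers: the rotor spins at 60 Hz, so the delay between the sync flash and the laser hit maps linearly onto rotation angle.

```python
# A minimal sketch of the timing-to-angle math, with made-up numbers.
# The rotor spins at 60 Hz, so one full revolution takes 1/60 of a second;
# the delay between the sync flash and the laser hit maps linearly to angle.

ROTATION_HZ = 60.0
ROTATION_PERIOD = 1.0 / ROTATION_HZ  # seconds per full revolution

def sweep_angle_degrees(t_sync: float, t_hit: float) -> float:
    """Angle of the sensor relative to the base station for one sweep axis."""
    dt = t_hit - t_sync              # time between LED flash and laser pulse
    return (dt / ROTATION_PERIOD) * 360.0

# Example: a laser hit 2.5 ms after the sync flash is roughly 54 degrees
# into the sweep. Do this for both axes and you have a ray per base station.
print(sweep_angle_degrees(0.0, 0.0025))
```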
Yesterday, HTC Vive Senior Engineer James Xiong posted a Vive IK Demo on GitHub, which includes reference source code and a Unity demo. Too bad it was not for UE4, but methinks this can be easily transferred. The demo employs three Steam VR trackers: one mounted on each ankle and one on the participant’s waist. From these, the lower-body animation is more or less “guessed”. This has to be an approximation, since without understanding the pole vector constraints imposed by the knee it’s really impossible to know which direction the leg is pointing. However, certain basic cases can be considered and predicted with fairly decent accuracy.
iMyth was planning on inventing its own solution for this problem. However, having code to start from will definitely help!
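I have not dug into James Xiong’s code yet, so the sketch below is not his method, just the general shape of the two-bone leg solve involved: given a waist (hip) position, an ankle tracker position, the two bone lengths and a guessed pole vector for the knee bend, the knee can be placed analytically. The function name, the bone lengths and the pole-vector guess are all my own assumptions.

```python
import numpy as np

def solve_knee(hip, ankle, thigh_len, shin_len, pole_dir):
    """Place the knee for a two-bone leg given hip and ankle positions.

    pole_dir is the guessed bend direction for the knee (e.g. the body's
    forward vector) -- the "pole vector" the post talks about, which a
    three-tracker setup has to assume rather than measure.
    """
    hip = np.asarray(hip, dtype=float)
    ankle = np.asarray(ankle, dtype=float)
    pole_dir = np.asarray(pole_dir, dtype=float)

    to_ankle = ankle - hip
    d = np.linalg.norm(to_ankle)
    axis = to_ankle / d
    # Clamp so the triangle inequality holds (the leg can't over-stretch).
    d = np.clip(d, abs(thigh_len - shin_len) + 1e-6, thigh_len + shin_len - 1e-6)

    # Law of cosines: distance from the hip to the knee's projection on the
    # hip->ankle axis, and the knee's height off that axis.
    proj = (thigh_len**2 + d**2 - shin_len**2) / (2.0 * d)
    height = np.sqrt(max(thigh_len**2 - proj**2, 0.0))

    # Bend direction: the pole vector with its component along the axis removed.
    bend = pole_dir - np.dot(pole_dir, axis) * axis
    n = np.linalg.norm(bend)
    bend = bend / n if n > 1e-6 else np.array([1.0, 0.0, 0.0])

    return hip + axis * proj + bend * height

# Example with made-up bone lengths (metres) and a forward-facing pole vector:
knee = solve_knee(hip=[0, 0, 1.0], ankle=[0, 0.1, 0.1],
                  thigh_len=0.45, shin_len=0.43, pole_dir=[0, 1, 0])
```

The whole trick, as noted above, is that the pole vector has to be guessed; feed it a bad guess and the knee points the wrong way.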
A couple of months ago, iMyth created a rough prototype of its immersive experience. One of iMyth’s key components is physical props and sets, so we integrated very inexpensive props and sets into our experience. While not the most sturdy, these set pieces did an outstanding job of demonstrating the physically immersive concept.
Bandai Namco has taken this concept one step further and created an experience based on the Doraemon Anywhere Door theme world. Using the HTC Vive, Leap Motion, and a few simple props tracked with attached Vive controllers, the team was able to create two very interesting interactive props.
https://gfycat.com/SpiffyDisgustingHowlermonkey
The first of these props is the door: a very simple prop door placed cleverly inside the Vive play space in order to avoid losing tracking. They use Leap Motion to track the participant’s hands, which of course frees up the Vive controllers. One of the controllers is mounted on the edge of the door and is used to track the door opening and closing. It is a simple concept, but the physical component is extraordinarily impactful. What I really want to know is where they got that great door prop. Notice that the prop doesn’t have a footprint larger than what it would have in real life. How did they anchor it? It looks solid.
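I obviously have no idea how the Bandai Namco team wired theirs up, but the math for a tracked door is simple enough to sketch: with a controller fixed to the door edge and a known hinge axis, the open angle is just the signed angle of the controller around the hinge, measured in the horizontal plane. Everything below (names, the Z-up convention) is my own hypothetical framing.

```python
import numpy as np

def door_angle_degrees(hinge_pos, controller_pos, closed_dir, up=(0.0, 0.0, 1.0)):
    """Signed opening angle of the door around the hinge, in degrees.

    hinge_pos:       any point on the hinge line
    controller_pos:  current tracked position of the controller on the door edge
    closed_dir:      horizontal hinge->controller direction when the door is shut
    up:              hinge axis; Z-up assumed here
    """
    up = np.array(up, dtype=float)
    v = np.array(controller_pos, dtype=float) - np.array(hinge_pos, dtype=float)
    v -= np.dot(v, up) * up            # project into the horizontal plane
    v /= np.linalg.norm(v)
    c = np.array(closed_dir, dtype=float)
    c -= np.dot(c, up) * up
    c /= np.linalg.norm(c)
    # Signed angle from the closed direction to the current direction, about the hinge.
    return float(np.degrees(np.arctan2(np.dot(np.cross(c, v), up), np.dot(c, v))))

# Example: controller swung a quarter turn around a hinge at the origin (~90 degrees).
print(door_angle_degrees([0, 0, 0], [0, 1, 1.2], closed_dir=[1, 0, 0]))
```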
https://gfycat.com/ImpishCautiousGermanspitz
The second object is a simple desk. Once again, all the creators did was attach a second controller to the drawer of the desk; the desk itself is stationary and never moves. Once again, this is a very effective use of a simple concept.
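The drawer is even simpler: project the tracked controller’s offset from its closed position onto the slide direction and clamp to the drawer’s travel. Again, this is a hypothetical sketch of mine, not anything from the actual demo.

```python
import numpy as np

def drawer_extension(closed_pos, slide_dir, controller_pos, max_travel):
    """How far the drawer is pulled out (metres), clamped to its travel range."""
    slide_dir = np.array(slide_dir, dtype=float)
    slide_dir /= np.linalg.norm(slide_dir)
    offset = np.array(controller_pos, dtype=float) - np.array(closed_pos, dtype=float)
    return float(np.clip(np.dot(offset, slide_dir), 0.0, max_travel))

# Example: drawer slides along +Y; controller has moved 12 cm out of a 30 cm run.
print(drawer_extension([0, 0, 0.7], [0, 1, 0], [0.01, 0.12, 0.71], 0.30))
```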
I just placed my order with HTC for 4 tracking “pucks”. We should get them by the end of the week. The pucks are going to be used to help track the interactor. However, I had forgotten that for every Vive there are also two controllers. That means iMyth will now have 8 tracked objects to deal with. Where can we go from here?
IMR is at the forefront of wireless Virtual Reality data streaming. Our proprietary algorithms and hardware architecture produce unparalleled results.
At IMR we have a mission to create the world’s first compression standard for VR content. We are a technology company founded by leading aerospace, computer vision and robotics experts. We have developed an algorithm and hardware that enable wireless transmission and streaming of VR video over the 802.11ac Wi-Fi and 802.11ad WiGig standards, transforming the VR industry and enabling a completely immersive, untethered experience for multiple players.
Immersive Robotics (IMR) has developed a new compression standard for VR content. The proprietary algorithm and electronics hardware enable wireless transmission and streaming of Virtual Reality (VR) video over the 802.11ac Wi-Fi (5GHz) and the latest 802.11ad WiGig (60GHz) standards. The following is their description of what it can do:
Rapid Data Transmission
With a 95% compression rate, IMR’s technology allows for compression and decompression with a record-breaking introduced latency of less than 1 ms. This translates to zero perceived latency for the player, increasing user comfort and eliminating motion sickness caused by latency within VR play.
Image Quality
The quality of the decompressed image is indiscernible from the original with no motion blur or introduced artifacts.
Eye Tracking
IMR’s algorithm leverages a suite of highly “VR-optimized” techniques to reduce required bandwidth and operate at an extremely low latency. One optional feature is an input for eye-tracking data, which allows for further dynamic control and greater compression efficiency.
Versatile
IMR’s technology utilizes both the 802.11ac and 802.11ad wireless standards. This enables current generation HMDs to be supported via the AC standard, and future proofs the technology by enabling it to handle up to 2x 4K VR video transmission over the AD standard.
Our technology is designed to operate across all VR and telepresence robotics applications, and each has its own wireless requirements. Our technology provides the necessary compression/decompression at ultra-low latencies for all of these applications, and we are working with, and looking to partner with, different wireless manufacturers and communication link suppliers to push this technology into each area.
KwikVR’s unique advantage over other wireless competitors is hard to state, because we have not been able to test our competitors’ solutions. They are all claiming an impossible one or two milliseconds of latency overhead, so I would say our main advantage is that we are honest. Also, our solution does not put 60GHz Wi-Fi on top of the user’s head, which might be better for health reasons. Using 5GHz Wi-Fi is also less prone to obstruction issues when it comes to the Wi-Fi signal. We believe that our latency overhead is close to optimal, but only the customers will be the judges.
I think you can classify wireless VR solutions by what type of radio they use and what type of compression. Of course, all systems have to deliver under a frame of round-trip latency.
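(A quick aside from me: here is what “under a frame” works out to in milliseconds at the usual HMD refresh rates.)

```python
# "Under a frame" in milliseconds at common HMD refresh rates.
for hz in (60, 90, 120):
    print(f"{hz} Hz -> {1000.0 / hz:.1f} ms round-trip budget")
# 90 Hz (Vive / Rift) leaves roughly 11.1 ms for capture, compression,
# the radio link, decompression and display.
```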
Various Radio Types:
WiFi 802.11ac (5GHz & 2.4GHz)
WiFi 802.11ad 60GHz
5G LTE cellular for cloud VR (various frequencies)
Proprietary radio in unlicensed frequency (e.g. 5GHz)
Our solution uses WiFi 802.11ac and LTE, which has the benefit of not needing line-of-sight transmission. 60GHz transmission suffers from large attenuation when propagating through physical barriers, including humans. 802.11ac can travel a much longer distance than 60GHz and provide multi-room coverage; it is also much cheaper and requires much smaller wireless antennas. Transmitter placement is not as important with 802.11ac as it is with 60GHz, and 802.11ac is also lower power, giving the HMD longer battery life.
Various Compression Types:
JPEG (Intra frame) with 3:1 compression
JPEG 2000 (Intra frame) with 6:1 compression
MPEG H.264 (Intra and Inter frame) 100:1 compression
MPEG H.265 (Intra and Inter frame) 200:1 compression
Proprietary Compression
Our solution uses MPEG H.265/HEVC compression, which provides 200:1 compression. For example, a 1080p60 source requires roughly 3,000 Mbps to transmit uncompressed; we compress this to 15 Mbps, a compression ratio of 200:1. This leaves headroom for error correction, higher resolutions and frame rates, and data rates that can be delivered from the cloud over 5G LTE and fibre networks. Standards-based systems also allow off-the-shelf mobile chipsets to be built into mobile HMDs. We will adopt future H.265 profiles, which can provide even better compression using tools like multi-view and screen content coding.
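Their 1080p60 figures check out if you assume 24 bits per pixel and no chroma subsampling; here is the arithmetic:

```python
# Raw bit rate for 1080p60 at 24 bits per pixel, and what 200:1 leaves over the air.
width, height, fps, bits_per_pixel = 1920, 1080, 60, 24
raw_mbps = width * height * fps * bits_per_pixel / 1e6
print(f"uncompressed: {raw_mbps:.0f} Mbps")         # ~2986 Mbps, i.e. roughly 3,000
print(f"after 200:1 : {raw_mbps / 200:.0f} Mbps")   # ~15 Mbps
```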
While other vendors are focused on bringing wireless accessories to today’s HMDs, Nitero is the only company developing an integratable solution that will support the aggressive requirements of future VR HMDs.
The solution’s novel micro-second latency compression engine provides royalty-free, visually lossless encoding, adding end-to-end latency of one millisecond. At power below one Watt, it can be integrated into future headsets without the need for expensive heat sinks or vents. In fact, adding Nitero’s wireless solution will be significantly less expensive than cables, resulting in an overall cost reduction, which is critical for VR adoption going forward.
While interoperable with WiGig, Nitero has customized its solution for the unique challenges of VR/AR use cases, with advanced beamforming that supports NLOS operation at room scale. Additionally, back-channels for computer vision, eye tracking, 3D audio and other forthcoming technologies can be supported simultaneously with the VR display, without needing another chipset.
Some of the industry leaders that have supported Nitero via investment and collaboration include Valve Software, Super Ventures, and the Colopl VR Fund, along with others not publicly announced.
We use a combination of video compression and proprietary streaming protocol that allows us to stream high resolutions to multiple headsets. Our solution is designed primarily for Theme Parks and Arcades that want to put two or more people in the same tracked space.
Our thesis is that in the future you will always need some amount of compression, either when resolutions get higher (4K and above; we need 16K for retina resolution) or if you try to put the server outside the local network. Ideally, you could put a GPU farm in the cloud and have all the content available immediately, even eliminating the need for a PC at home! I think that in five years the only computer you would need at home would be a small mobile chip, probably built into the headset itself.
Of course, any sort of compression introduces latency. However, there has been a lot of development in the past two years to work around that. We will be releasing a network-aware technology similar to the Spacewarp used by Oculus. And companies like Microsoft have done a lot of research on reducing latency by doing predictive (also known as speculative) rendering. Project Irides, for example, is able to compensate for 120 ms of network latency in their demo. We have been talking to one of the lead researchers of Irides for a while, and we will release similar technology in 2017. So I would say that the future of wireless VR is very bright!
HTC/Intel Alliance
WiGig (Intel’s chosen solution) is, as the name suggests, a wireless multi-gigabit networking standard which dramatically increases over-the-air bandwidth compared to standard WiFi over short distances (the same room). In fact, the name ‘WiGig’ is a shortening of the organisation (the Wireless Gigabit Alliance) which helped define the IEEE 802.11ad 60GHz standard. WiGig is aimed at very high-bandwidth uses, such as broadcasting multi-gigabit uncompressed video and audio streams. Although its uses are more limited (short range, doesn’t work well through walls), it is ultimately a very high-speed general-purpose network standard in the same way as other WiFi standards. Bottom line: if you buy an 802.11ad-compatible router, it will not only be backwards compatible with your older devices, you will also be able to use that extra bandwidth for any sort of data transfer, not just video and audio. WiGig data rates max out at 7 gigabits per second per channel.
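That 7 Gbps ceiling is roughly why 60GHz is attractive for uncompressed HMD video. Assuming the Vive’s 2160x1200 combined panel at 90 Hz and 24 bits per pixel (my own back-of-the-envelope assumptions, ignoring protocol overhead), the raw feed just about fits:

```python
# Does an uncompressed Vive feed fit in one WiGig channel?
# Assumes a 2160x1200 combined panel at 90 Hz and 24 bits per pixel,
# and ignores protocol overhead.
width, height, fps, bpp = 2160, 1200, 90, 24
gbps = width * height * fps * bpp / 1e9
print(f"uncompressed Vive video: {gbps:.1f} Gbps vs ~7 Gbps per WiGig channel")
```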
TPCast
60GHz wireless technology is being investigated by a number of companies for use in wireless VR applications, including one that Valve invested in separately from HTC. While the frequency provides lots of bandwidth, it isn’t great at penetrating surfaces, meaning that it’s most effective when the transmitter has a direct line of sight to the receiver. The TPCAST wireless Vive kit has the transmitter mounted on the user’s head to give it a direct view to the receiver, but there are certainly times during room-scale VR play when the user may be turned away from the receiver with their head tilted at an angle that would break line of sight; the player’s hands could also get in the way. It isn’t clear yet how these situations might impact the TPCAST device’s performance, mostly because we don’t know the recommended setup for the system, which could possibly use multiple receivers or a special placement to prevent transmission issues.
This is starting to get a bit comical. It seems everywhere you look there is a new immersive experience company sprouting up. Not more than two days ago I blogged about Knott’s Berry Farm entering the immersive experience game. Now I am reporting the emergence of yet another company called Nomadic. This time, instead of the location-based experience itself, the Nomadic folks are focusing on the physical environment. This is a very cool concept. I will have to reach out to them to explore opportunities for partnership.
Tech-wise, it looks like they are employing the typical Optitrack setup as a turnkey VR experience station. It will be very interesting to see how they pull off objects such as doors, walls and windows, which may occlude the line of sight of the optical cameras. Also worth exploring is the possibility of partnering on piecemeal items instead of an entire “turnkey” system, similar to the one offered by Zero Latency.
Nomadic isn’t the first company to add physical cues to virtual reality experiences. But the company does have a novel concept for getting these kinds of experiences out into the marketplace. Instead of building and operating its own VR locations, Nomadic wants to partner with bigger players that already have a lot of real estate at their disposal and are now looking for the next big thing to retain and monetize audiences. Think mall operators, theater chains and the like.
The Nomadic website gives a good indication of the makeup of the company. I recognize many of the names from my days at Electronic Arts. It seems like they have a very solid team in the works. I hope some kind of partnership can be reached.
Last month, in December, iMyth demonstrated full-body tracking for its interactor using the Perception Neuron tracking suit. It was an impressive bit of engineering to get the system up, working and coordinated with the Vive and UE4. For the prototype, this tech strategy worked well: the participant understood the interactor, and there were never any strong concerns about problems with the tech.
Regretfully, there were many problems.
Sensitivity. The Perception Neuron was “sensitive”: some days it wanted to come out and play, and some days we could not get it to talk no matter what we did.
IMU drift. Because the Neuron suit relied only on IMU trackers, it never really knew its world-space location, and the interactor kept drifting off. (See the sketch just after this list.)
Spazzing out. Occasionally the character would just sort of go into a seizure, freeze up or go a little crazy. For the prototype this was fine, but we need a more consistent solution.
Finger tracking. Did the Perception Neuron actually track the interactor’s fingers during the actual experience? Regretfully, we never tested this, and I have a concern it did not work.
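To make the IMU drift point concrete, here is a toy calculation with purely illustrative numbers: even a tiny uncorrected accelerometer bias, integrated twice to get position, produces error that grows with the square of time.

```python
# Toy numbers only: a small uncorrected accelerometer bias, integrated twice
# to get position, produces drift that grows with the square of time.
bias = 0.01  # m/s^2 of constant accelerometer bias
for t in (1, 10, 60):
    drift = 0.5 * bias * t ** 2
    print(f"after {t:3d} s of dead reckoning: ~{drift:.3f} m of position error")
# Roughly 0.005 m after 1 s, 0.5 m after 10 s, and 18 m after a minute,
# which is why an IMU-only suit needs some absolute reference to stay anchored.
```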
The Perception Neuron suit was actually owned by one of our Technical Producers, Andrew. We are very grateful Andrew allowed iMyth to use the suit. Regretfully, the suit left when Andrew graduated.
In mid-December, I also journeyed to Seattle for Vive Lighthouse training. Lighthouse is the tracking technology Valve uses to track all of the objects in the environment. With it we could track an almost unlimited number of objects and build our own Vive controllers. I immediately started thinking about how we could use this technology to easily track the interactor and participant.
It seems like a couple of other companies were thinking about the same thing. Steve Bowler at Cloud Gate Studio experimented with strapping Vive controllers to his feet. He was able to get some pretty good results.
Steve’s solution was a great first step. However, it was a bit problematic, since it required an entirely new Vive setup just to get the controllers. He did not have any waist tracking (important for tracking overall body movement). Also, controllers strapped to your feet are a bit awkward.
Now it seems like the folks at IKinema have taken things one step further and created their own Vive tracker controllers, which are mounted on the feet and waist.
This demonstration looks really sharp, and they claim to use no post-processing; I believe them. However, it is still not quite the solution iMyth needs. Project Orion may still take some time to become commercially available, and their setup required two controllers, one in each hand. For iMyth I am seeking a hands-free solution, so we would require a tracking device on each wrist and eventually some IMU finger tracking. IKinema is also a motion-tracking company, so I would not be surprised if, were we to adopt their technology, we would also need to purchase a suite of retargeting tools, which would be overkill for what we want. I don’t know. I’d like to get in contact with the IKinema guys and investigate their plans for release.
So this leads us back to iMyth creating its own tracking solution. I just received an email from Reid at Triad Semiconductor saying they have received the go-ahead to produce the boards needed to create custom controllers; those will be available in 8 weeks. This is exciting, since iMyth will be tracking not only the interactor and participant but also multiple other participants and multiple set pieces and props. Very exciting times!
OK – everyone loves to pick on the Japanese for their quirky fascinations and the lengths they will go to in satisfying these interests. In this case, a Japanese inventor may be on to something that is really useful for the iMyth Experience. A VR company called “Vaqso” has come up with a method for delivering small amounts of pre-configured fragrances to the participant from a small module mounted beneath the HMD.
Here is the article posted from Anime News:
Vaqso VR is a device about the size of a candy bar that attaches to the bottom of VR goggles with magnets. It will react to different games and provide smells depending on the in-game situation. In a demonstrative game, the player could sniff gunpowder during target shooting and the aroma of peach when one was shot. A girl appeared at the end of the game, along with the smell of shampoo. But developers are interested in a much wider array of scents — fried chicken, curry, cypress, even nasty smells like rotting flesh and sewage for games like Resident Evil. The smells will also strengthen or weaken depending on the player’s distance from their source.
The VR device will operate by cartridges, each of them containing a different smell; the prototype has space for three, but the developers hope to fit more than five in the final product. Shutters open at the front of the device at a signal from the game, and fans blow the smell toward the player. The device is battery-operated, and each cartridge lasts about a month, or dozens of hours of playtime.
Vaqso’s CEO, Kentarō Kawaguchi, is also the CEO of ZaaZ, a company that has been developing smell-generating devices to use in sales promotion — to create the right atmosphere for a bookstore, say, or to make a vending machine or poster exude the right scent. His special advisor is Fumio Kurokawa, an opinion writer active in the video game and entertainment industries (for instance, he was once vice president of Bushiroad, the producer of Cardfight!! Vanguard). The current Vaqso VR is only a prototype, and the final product is scheduled for release at the end of the year.
This sounds just like a device iMyth could use for its experiences. For each experience, the participant could load a pre-configured cartridge and, voilà, instant Smell-O-Vision. Now, how this all ties in with the interface is an entirely different issue. However, it does prove that folks are working on these problems.
Body presence in VR has been one of those topics that folks knew could be achieved but had not really gotten around to working on. Steve Bowler at Cloud Gate Studio seems to be making strides in this direction. He recently posted on Twitter a video of himself with hand and foot controllers, as demonstrated in this video.
What’s cool about Steve’s implementation is that he is using two Vives on two computers to track the participant. Based on the amount of control he had, I would say he is holding the controllers in his hands, as shown in the video, but he also has pretty good control of his ankles, which means he has mounted controllers on each of his feet. This means he is approximating waist position and probably using simple IK for knee and elbow position. He admits in the video that he doesn’t have good control over the waist, and by the looks of it his wrists are detached from his arms. He is probably approximating the pole vectors for the elbows and knees as well.
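Until everyone has a waist puck, waist position has to be guessed the way Steve is guessing it. Here is a deliberately naive sketch of one such guess, blending “straight down from the head” with the midpoint of the feet; the torso fraction and the Z-up convention are placeholder assumptions of mine, not anyone’s shipping code.

```python
import numpy as np

def estimate_waist(head_pos, left_foot_pos, right_foot_pos, torso_fraction=0.55):
    """Crude waist guess when there is no waist tracker (Z-up assumed).

    Height is a fixed fraction of the head's height above the floor (the 0.55
    default is a placeholder, not a measured value); the horizontal position is
    blended between the head and the midpoint of the feet so that leaning
    forward drags the hips along with it.
    """
    head = np.asarray(head_pos, dtype=float)
    feet_mid = 0.5 * (np.asarray(left_foot_pos, dtype=float) +
                      np.asarray(right_foot_pos, dtype=float))
    waist = np.empty(3)
    waist[2] = head[2] * torso_fraction                  # fraction of head height
    waist[:2] = 0.5 * (head[:2] + feet_mid[:2])          # halfway between head and feet
    return waist

# Example: upright stance, head at 1.7 m, feet on the floor.
print(estimate_waist([0, 0, 1.7], [-0.15, 0, 0], [0.15, 0, 0]))
```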
The iMyth experience used two Vives in tandem to track the participant as well as the interactor. It’s great seeing someone hook up two Vives on one participant to see what happens. In the future, iMyth will need to use multiple trackers on one character. If combined with an IMU hand-tracking device such as the Manus glove, I think all that would be necessary would be trackers at each wrist, plus trackers for each foot and the waist. I think that in order to get the leg rotation correct, trackers would also be necessary at the knees. I’m not sure about the elbows at first; somewhere down the line they will be a necessity, but since elbows are not a particularly popular point on the body to touch, I think their priority can be dropped.
It will be exciting to see how much the Vive tracking pucks cost. Either iMyth will need to incorporate those pucks or create its own home-brewed version.