Rokoko Vision is a great entry point into the world of motion capture, as well as a handy tool for pre-visualisation. Even though its ease of use, free price and data quality are very appealing, there are still many situations where robust mocap tools like the Smartsuit Pro II, Smartgloves and Face Capture are needed, for example:

- Data quality: especially for more complex motions, inertial mocap tools like the Smartsuit Pro II provide higher-fidelity capture. A.I. motion capture relies on a complete view of the performer to estimate the skeleton's position. This means that occluded limbs translate into poorer tracking, either because the performer is outside the video frame or because the position of the performer's body makes estimation more difficult for the A.I. Though Rokoko Vision's dual-camera mode largely mitigates this, with sensor-based mocap this is not an issue at all (and is actually one of the main reasons even high-end productions, like the Dulux commercial, turn to inertial mocap).
- Real-time vs post-processed data: with A.I. motion capture, heavy post-processing is needed to generate the animation file, meaning it is very hard (unless you cut some big corners) to generate the animation in real time. This is not an issue with Rokoko's inertial mocap tools: real-time integrations are supported for all major 3D software.
- Face and finger capture: even though video-based face capture solutions, like Rokoko Face Capture, are possible, this is not yet the case for finger tracking: a solution like the Smartgloves is still needed.
- Tracking space: background and lighting are important for capturing a clear video (and thus a good animation), as is distance to the camera. Inertial tracking offers more flexibility in this regard as well: the tracking area is as big as the WiFi range of your router, and lighting and other environmental factors are irrelevant.
- Multiple performer recordings: today there are no convincing A.I. motion capture tools that can capture more than one performer in the same recording. With Rokoko's inertial mocap tools, however, up to five performers' motions can be captured at once.

I'm not sure exactly what you're asking for, but there are a few things that come to mind that you might be looking for. If you're trying to get two camera perspectives to render in the same HMD (one in each eye? overlaid by depth?), you're probably not going to end up with something very desirable. This might work for a debug mode, but it will cause simsickness and eye strain in your users.

A much better approach would be to use Unity's Render Texture functionality to render one camera to a texture. One camera records the VR scene and renders it to a render texture that is on a virtual TV in the game. The VR user can look at the texture and see what the first camera is looking at (albeit in 2D). Some people use this for security cameras in video games.

And finally, it might be that what you're looking for is a "VR spectator camera" (not to be confused with the Mixed Reality Companion Kit's SpectatorView or MRTK's SpectatorView, which are intended to overlay AR holograms on video footage from other devices). Here's a tutorial on how to do this in Unity:

If you're sure this is what you want, can you explain more about what you're using it for?
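The render-to-texture setup mentioned in the Unity answer above can also be done entirely in the editor (create a Render Texture asset, assign it to the camera's Target Texture, and put it on the TV's material), but here is a minimal script sketch of the same idea. The class and field names are illustrative, not from any official sample:

```csharp
using UnityEngine;

// Minimal sketch: render a second camera into a RenderTexture
// and display that texture on a quad (the in-game "virtual TV").
public class SecurityCameraFeed : MonoBehaviour
{
    public Camera sourceCamera; // the camera whose view should be shown
    public Renderer screen;     // the quad/TV mesh that displays the feed

    void Start()
    {
        // Create a render texture (width, height, depth buffer bits)
        // and point the source camera at it instead of the HMD.
        var feed = new RenderTexture(1024, 576, 16);
        sourceCamera.targetTexture = feed;

        // Show the camera's output on the screen's material.
        screen.material.mainTexture = feed;
    }
}
```

Because the source camera no longer renders to the display, the VR user only ever sees its view through the texture, which avoids the simsickness problem described above.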