by hahaxdxd123 on 6/6/23, 5:34 PM with 0 comments
It supports pre-rendered 3D content, such as memories or recordings captured through special devices - however, neither the presentation nor the press release shows examples of creating that content on the fly or interacting with 3D content.
Given what we know from the presentation:
- that there's a separate real-time subsystem running on the R1 chip which actually renders the environment (and which developers presumably will not be given access to)
- that there is an M2 chip but no dedicated GPU
- and that, in general, it is incredibly expensive to dynamically render anything at 2x 5K resolution
apps that are static overlays on the environment seem feasible to me, but not necessarily full mixed-reality apps where (as a simple example) you play an FPS inside your home. Or even something less demanding, where you practice surgery in 3D and slice open a realistic body.
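To put rough numbers on the "2x 5K" claim above, here is a back-of-envelope sketch of the pixel throughput involved. The per-eye resolution, refresh rate, and the 4K baseline are all assumptions for illustration, not confirmed specs:

```python
# Back-of-envelope pixel throughput for stereo "5K-class" rendering.
# All figures below are illustrative assumptions, not confirmed specs.

PER_EYE_PX = 5120 * 2880  # assumed "5K-class" panel per eye
EYES = 2
REFRESH_HZ = 90           # typical headset refresh rate (assumption)

headset_px_per_sec = PER_EYE_PX * EYES * REFRESH_HZ

# Baseline for comparison: a single 4K monitor at 60 Hz
desktop_px_per_sec = 3840 * 2160 * 60

print(f"Headset: {headset_px_per_sec / 1e9:.2f} Gpx/s")  # 2.65 Gpx/s
print(f"4K@60:   {desktop_px_per_sec / 1e9:.2f} Gpx/s")  # 0.50 Gpx/s
print(f"Ratio:   {headset_px_per_sec / desktop_px_per_sec:.1f}x")  # 5.3x
```

Under these assumptions, the headset has to shade roughly 5x the pixels of a 4K-at-60Hz game, on a chip with no dedicated GPU - which is the core of the feasibility question.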
I'm not very familiar with VR - is this a solvable problem with the current generation of hardware?