- cross-posted to:
- linux_gaming@lemmit.online
He’s right, a product like that would have failed dramatically. At this point I just want them to release a dumb-AF, streaming-only, inside-out-tracking VR headset that connects to PCs. Forget trying to cram an expensive Qualcomm or AMD chip in there; it will never give you the ideal VR experience. Make something that’s $200, connects to any PC running SteamVR, and just does extremely well with streaming and low latency. Both Air Link and Virtual Desktop have already shown that it’s possible to get extremely close to a cabled experience. All that’s left is some polish.
100%. I’ve attempted to shop for this exact thing. I won’t give zuck any money so. ¯\_(ツ)_/¯
I just want to be able to buy something like an Oculus CV1 without Oculus software/proprietary hardware and a nicer screen. I’m still rocking the same unit I got several years ago and it’s still plenty fine for most things.
All of the fancy things like wireless and no-tower tracking are nice, but I imagine a lot of players are going to be seated and just want the immersion. Why not have a $300-400 offering that does this?
I could be wrong on this since I have no source but I always assumed that Oculus headsets were cheap because they’re a Meta product and you’re actually paying for it with your data/telemetry.
Like, the ungodly amount they spent on VR R&D is absolutely not being made up for by the few hundred dollar price tag on their headsets — I bet that barely covers the cost of materials. That must be for a reason.
the consumer version 1 was before meta bought them
I mean, isn’t this why people bought the HP Reverb?
It’s partially a self-inflicted problem if you need Valve to do it.
> Why not have a $300-400 offering that does this?
I think having base stations not only increases price but also makes it unapproachable for a vast majority of people. Personally, I didn’t even consider the CV1 or the Index because I just didn’t have a room that could properly accommodate them. For the sitting use case, no-tower tracking is actually very suitable and probably works better.
Even under ideal conditions there’s about 1–2 ms of latency (streaming a 1080p game), while hitting 90 Hz requires an 11 ms frame time. So you’re asking the game to run at 111 fps or above just to function under those ideal conditions. I think if a manufacturer could put together a chipset that does the frame-gen tech on the headset side, so the game only needs to run at 60 fps, it would be a better option (like PSVR). Frame gen does require some extra buffers to generate the in-between frames, so that’s more info to stream over the bandwidth.
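Plugging in the numbers from that comment with a simple budget model (the function name and constants here are illustrative, not from any real runtime):

```python
# Frame-time budget for streamed PC VR: a 90 Hz refresh leaves ~11.1 ms per
# displayed frame; if streaming eats ~2 ms of that, the game must render in
# the remainder, which works out to roughly 110 fps.

def required_fps(refresh_hz: float, stream_latency_ms: float) -> float:
    """FPS the game must sustain so render + stream fits one refresh interval."""
    frame_budget_ms = 1000.0 / refresh_hz            # total time per frame
    render_budget_ms = frame_budget_ms - stream_latency_ms
    return 1000.0 / render_budget_ms

print(round(required_fps(90, 2.0)))  # -> 110
```

If the headset synthesized every other frame on its own silicon, the game would only need to hit roughly half that rate, which is the appeal of doing frame gen headset-side.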
That’s basically what I got. Xreal Air (formerly Nreal until a C&D from Epic). 1080p per eye and something like 49PPD with a 45° FOV. Tracking is 6DOF and requires software on the host (only complaint) and connectivity is via a USB-C cable (uses DP alt mode).
It’s nearly as “dumb” as an HMD can get. From the teardowns that I’ve seen, it’s really just got an MCU, a GPIO expander, a 6DOF chip, and the displays + drivers. And I love that about it. No batteries or anything to worry about.
45° FOV sounds really narrow; aren’t headsets pushing like 100 degrees or so?
It is pretty narrow but also what makes it work, IMO. I don’t have them for immersion but for display replacement. The narrow field of view lets the 1080p display have nearly 0 screen door effect. Plus, the birdbath optics are really cheap compared to waveguides or fancy lenses in VR headsets.
Ah, ok. My reason to get a headset is for immersion with seated gameplay. Games like ETS2 and Elite:Dangerous.
Makes sense. Yeah. Any birdbath setup will be wrong for that. They typically get great PPD but, narrow FOV.
This has been on my radar for a while to complement my Steam Deck. But I don’t believe it does head tracking with the Steam Deck, or does it? I just want a floating screen in front of me that stays still when I move my head around; otherwise I’m gonna hurl!
I think it’s great for my Deck but, that will indeed be a problem. The headset contains only the sensors and display systems but, none of the logic circuitry to “pin” displays. Including that would increase the price a good deal.
Understood thanks for the feedback. After posting you reignited my interest and I found out that they also have their product called beam which would do the trick to make a spatial display… if you’re willing to cough up another 120 for it!
> If you’re willing to cough up another 120 for it!
Yeah… I’m not :P But, I am plotting a DIY solution. A solution that will probably cost more than $120 on components but, I think it will still be worth it.
So, I’ve got one for my steam deck and it’s less an issue than you might think, in my opinion.
When you’re focused on the screen, it doesn’t create too much incongruity when the background shifts, and it’s easy to just let your brain parse the screen as something that floats in front of you.
It’s not immersive enough to get the inner ear involved and confused. It’s a lot closer to holding a phone sideways about six inches from your face and moving your head around. The only time it felt weird was when I was using it in a well-lit room and shifted my focus to something not on the screen that was closer than the apparent distance to the floating display. It was weird feeling my vision try to reconcile that the nearer thing was moving behind the far thing.
Can it be used wirelessly with the Deck?
No, it does have to be attached by a cable. You can get an adapter that lets you charge while using it.
The glasses are basically a monitor.
… they’re in the walls!
Quality is irrelevant, reduce retail price.
Headsets in the thousand-dollar range are plenty good and still not selling. Take the hint. Push costs down. Cut out everything that is not strictly necessary. Less Switch, more Game Boy.
6DOF inside-out tracking is required, but you can get that from one camera and an orientation sensor. Is it easy? Nope. Is it tractable for any of the companies already making headsets? Yes, obviously. People want pick-up-and-go immersion. Lighthouses were infrastructure and Cardboard was not immersive. Proper tracking in 3D space has to Just Work.
Latency is intolerable. Visual quality, scene detail, shader complexity - these are nice back-of-the-box boasts. Instant response time is do-or-die. Some monocular 640x480 toy with rock-solid 1ms latency would feel more real than any ultrawide 4K pancake monstrosity that’s struggling to maintain 10ms.
Two innovations could make this painless.
One, complex lenses are a hack around flat lighting. Get rid of the LCD backlight and use one LED. This simplifies the ray diagram to be nearly trivial. Only the point light source needs to be far from the eye. The panel and its single lens can be right in your face. Or - each lens can be segmented. The pyramid shape of a distant point source gets smaller, and everything gets thinner. At some point the collection of tiny projectors looks like a lightfield, which is what we should pursue anyway.
Two, intermediate representation can guarantee high performance, even if the computer chokes. It is obviously trivial to throw a million colored dots at a screen. Dice up a finished frame into floating paint squares, and an absolute potato can still rotate, scale, and reproject that point-cloud, hundreds of times per second. But flat frames are meant for flat screens. Any movement at all reveals gaps behind everything. So: send point-cloud data, directly. Do “depth peeling.” Don’t do backface culling. Toss the headset a version of the scene that looks okay from anywhere inside a one-meter cube. If that takes longer for the computer to render and transmit… so what? The headset’s dinky chipset can show it more often than your godlike PC, because it’s just doing PS2-era rendering with microsecond-old head-tracking. The game could crash and you’d still be wandering through a frozen moment at 100, 200, 500 Hz.
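A minimal sketch of the headset’s side of that idea, assuming the PC ships a colored point cloud once and the headset re-splats it with fresh head tracking; the `reproject` helper and the pinhole camera numbers are made up for illustration, not any real runtime’s API:

```python
# The headset receives world-space points + colors, and re-renders them for
# whatever the head pose is *right now*. This is the "PS2-era rendering with
# microsecond-old head-tracking" step: just transform, project, depth-test.
import numpy as np

def reproject(points, colors, pose, fx=500.0, fy=500.0, cx=320.0, cy=240.0,
              width=640, height=480):
    """Splat world-space points into an image for the given 4x4 world->camera pose."""
    pts_h = np.hstack([points, np.ones((len(points), 1))])
    cam = (pose @ pts_h.T).T[:, :3]          # points in the camera frame
    in_front = cam[:, 2] > 0.01              # drop points behind the eye
    cam, cols = cam[in_front], colors[in_front]
    # Pinhole projection to pixel coordinates.
    u = (fx * cam[:, 0] / cam[:, 2] + cx).astype(int)
    v = (fy * cam[:, 1] / cam[:, 2] + cy).astype(int)
    ok = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    image = np.zeros((height, width, 3), dtype=np.uint8)
    depth = np.full((height, width), np.inf)
    # Per-pixel depth test so nearer points win (plain loop, kept simple not fast).
    for x, y, z, c in zip(u[ok], v[ok], cam[ok][:, 2], cols[ok]):
        if z < depth[y, x]:
            depth[y, x] = z
            image[y, x] = c
    return image

# A single red point 2 m straight ahead lands at the image center.
img = reproject(np.array([[0.0, 0.0, 2.0]]), np.array([[255, 0, 0]]), np.eye(4))
print(img[240, 320])  # -> [255   0   0]
```

The point is that this loop never touches game logic or shaders: the headset can rerun it at whatever rate its display supports, even if the PC stalls.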
I think the slightly more viable version of the rendering side is to send a combination of: a low-res point cloud; low-res, large-FOV frames for each eye with a detailed (!) depth map; a more detailed image streamed where the eye is focused (with movement prediction); and annotations on which features are static, which have changed, and where the light sources are. That would let the headset render the scene with low latency and continuously update the received frames based on movement, with minimal noticeable loss in detail, tracking things like shadows and handling parallax flawlessly even if the angle and position of the frame were a few degrees off.
Undoubtedly point-clouds can be beaten, and adding a single wide-FOV render is an efficient way to fill space “offscreen.” I’m just cautious about explaining this because it invites the most baffling rejections. At one point I tried explaining the separation of figuring out where stuff is, versus showing that location to you, using beads floating in a fluid simulation. Tracking the liquid and how things move within it is obviously full of computer-melting complexity. Rendering a dot, isn’t. And this brain case acted like I’d described simulating the entire ocean for free. As if the goal was plucking all future positions out of thin air, and not, y’know, remembering where it is, now.
The lowest-bullshit way is probably frustum slicing. Picture the camera surrounded by transparent spheres. Anything between two layers gets rendered onto the further one. This is more-or-less how “deep view video” works. (Worked?) Depth information can be used per-layer to create lumpen meshes or do parallax mapping. Whichever is cheaper at obscene framerates. Rendering with alpha is dirt cheap because it’s all sorted.
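A toy version of that slicing step, assuming the camera sits at the origin and the shell radii are picked arbitrarily; `shell_index` is a hypothetical helper, not part of the actual deep view pipeline:

```python
# Bucket scene points into concentric spherical shells around the camera, so
# each shell can later be rendered as its own sorted alpha layer.
import math

def shell_index(point, radii):
    """Return which shell a point falls into, given sorted outer radii."""
    r = math.dist((0.0, 0.0, 0.0), point)   # distance from the camera at origin
    for i, outer in enumerate(radii):
        if r <= outer:
            return i
    return len(radii)                        # beyond the last shell: background

radii = [1.0, 2.0, 4.0, 8.0]                 # e.g. four shells out to 8 m
print(shell_index((0.5, 0.0, 0.5), radii))   # -> 0 (close object)
print(shell_index((0.0, 3.0, 0.0), radii))   # -> 2 (mid-distance)
```

Because the shells are already depth-sorted by construction, back-to-front alpha compositing falls out for free, which is why blending stays cheap at high framerates.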
Point clouds (or even straight-up original geometry) might be better at nose-length distances. Separating moving parts is almost mandatory for anything attached to your hands. Using a wide-angle point render instead of doing a cube map is one of several hacks available since Fisheye Quake, and a great approach if you expect to replace things before the user can turn around.
But I do have to push back on active fake focus. Lightfields are better. Especially if we’re distilling the scene to be renderable in a hot millisecond, there’s no reason to motorize the optics and try guessing where your pupils are headed. Passive systems can provide genuine focal depth.
That last paper is from ten years ago.
My suggestions are mostly about maintaining quality while limiting bandwidth requirements to the headset, wouldn’t a lightfield require a fair bit of bandwidth to keep updated?
(Another idea is to annotate moving objects with predicted trajectories.)
Less than you might think, considering the small range perspectives involved. Rendering to a stack of layers or a grid of offsets technically counts. It is more information than simply transmitting a flat frame… but update rate isn’t do-or-die, if the headset itself handles perspective.
Optimizing for bandwidth would probably look more like depth-peeled layers with very approximate depth values. Maybe rendering objects independently to lumpy reliefs. The illusion only has to work for a fraction of a second, from about where you’re standing.
How does it handle stuff like fog effects, by the way? Can it be made to work (efficiently) with reflections?
The “deep view” link has video - and interactive online demos.
Alpha-blending is easy because, again, it is a set of sorted layers. The only real geometry is some crinkly concentric spheres. I wouldn’t necessarily hand-wave Silent Hill 2 levels of subtlety, with one static moment, but even uniform fog would be sliced-up along with everything else.
Reflections are handled as cutouts with stuff behind them. That part is a natural consequence of their focus on lightfield photography, but it could be faked somewhat directly by rendering. Or you could transmit environment maps and blend between those. Just remember the idea is to be orders of magnitude more efficient than rendering everything normally.
I thought the Windows MR lineup filled that gap pretty well. It was much cheaper than most of the alternatives back then, but it never really took off and MS has quietly dropped it.
Still $300 or $400 for a wonky platform. That’s priced better than I thought they were, but the minimum viable product is far below that, and we might need a minimal product, to improve adoption rates. The strictly necessary components could total tens of dollars… off the shelf.
Imho apple has it right for high-end vr: no wires because wearable computer.
Only if latency is really really low. VR is one of those things that’s really sensitive to even the slightest bit of lag.
I don’t mind cables if it’s a single USB-C cable.
I mean a wireless wearable computer would have network lag but the important lag – between the rendering GPU and the screen – would be nil.
I mean streaming the video data to the headset Miracast-style would be dumb, I agree, but afaik all the rendering hardware in the new Apple headset is inside the headset.
The new Steam Deck and its copycats have proven that a compact gaming-grade PC is doable. If it can’t be fully contained in the headset, then put it in a little fanny pack with a cable up to the VR headset, so you can walk around untethered otherwise.
Ah right, I see what you mean. I was thinking of gaming that requires a separate computer, but like you said, there can be other ways to do that.