by kogir on 2/2/24, 6:49 PM with 32 comments
by adanto6840 on 2/2/24, 8:03 PM
https://cavern.sbence.hu/cavern/
https://github.com/VoidXH/Cavern
The visualizer, which is what I was _most_ interested in (along with the software decoding), is written in C# and the rendering is done in Unity -- both things I valued & thought were cool. In theory, you could build a DIY multi-channel "receiver" with this type of software if given enough audio outputs (and/or put something like Dante to use).
I explored it a bit further, but it's relatively cost prohibitive; especially if you want to do something like accept HDMI input, it gets messy. AFAICT, at least when I went down this research path a few months back, even finding & getting dev kits/boards with HDMI input (of a semi-recent generation) was non-trivial & pretty pricey.
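The DIY "receiver" idea above boils down to taking the decoder's speaker feeds and mapping them onto whatever physical outputs you have. A minimal sketch of that routing step (the channel names and mapping are just an assumption for illustration, not Cavern's actual API):

```python
import numpy as np

def route_channels(decoded, mapping):
    """Map decoded speaker-feed channels to physical output channels.

    decoded: (frames, n_channels) float array from a software decoder
    mapping: dict {decoder_channel_index: hardware_output_index}
    Returns a (frames, n_outputs) array for a multi-output interface.
    """
    n_outputs = max(mapping.values()) + 1
    out = np.zeros((decoded.shape[0], n_outputs), dtype=decoded.dtype)
    for src, dst in mapping.items():
        out[:, dst] = decoded[:, src]
    return out

# Example: route a 5.1 decode (L, R, C, LFE, Ls, Rs) straight through
decoded = np.random.randn(1024, 6).astype(np.float32)
routed = route_channels(decoded, {i: i for i in range(6)})
```

With a Dante setup you'd hand `routed` to the audio interface's driver instead; the mapping dict is where you'd account for however your outputs are actually patched.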
by omnicognate on 2/3/24, 8:27 AM
However, I like to own music and that is simply impossible at the moment for most Atmos recordings. I would love to build a library of such recordings, preferably in physical form, and would happily spend quite a lot of money doing so. But Apple Music is basically the only way I can listen to anything.
I can't help but suspect this is entirely deliberate, an attempt to use this innovation to hasten the passing of the concept of owning music into the past.
Sadly, I also worry the move to streaming means an awful lot of music is eventually going to be lost forever.
by recursive on 2/2/24, 8:13 PM
by aidenn0 on 2/2/24, 7:33 PM
by theandrewbailey on 2/3/24, 2:55 AM
by NoPedantsThanks on 2/2/24, 8:13 PM
The use of Atmos in music is just plain bad. How many pop recordings are actually mixed for Atmos? I can't imagine that it's as many as Apple is presenting "in Atmos" on Apple Music. So is there some post-processing BS going on, a la "Q-Sound" and other fake surround over the past few decades?
Here's an example of Atmos messing up music. It's too bad it happens, too, because the Atmos versions of songs seem to be less dynamically compressed: https://www.youtube.com/watch?v=xUgfp6mFG2E
by atoav on 2/3/24, 9:52 AM
If you have long cable runs, I'd use an optical signal or a balanced line signal (this is why professional audio gear has balanced outputs and inputs with 6.3mm TRS or XLR-3 connectors).
There are simple adapters that allow you to send 4 balanced audio signals over existing ethernet cabling. With CAT6 you can easily push balanced signals over a kilometer (far beyond the 100m threshold of actual CAT6 ethernet) without any noticeable degradation.
If you have unbalanced signals from weak sources (vinyl needle?) you should keep the cable runs short, but even if the driver is good it can help to add a balun (passive or active) to run the thing balanced when the cable run is longer than 10 meters or is in a harsh environment (e.g. power cords with bursty loads emitting EMI).
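The noise-rejection trick behind balanced lines can be sketched numerically: the signal is sent twice with opposite polarity, interference couples into both conductors roughly equally, and the differential receiver subtracts one leg from the other, cancelling the common-mode noise. A toy numpy illustration (the noise model here is made up for demonstration; real coupling is never perfectly equal, which is why CMRR is finite):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 48000, endpoint=False)
signal = np.sin(2 * np.pi * 440 * t)          # the audio we want to send

# Interference (e.g. mains hum + broadband EMI) couples into BOTH
# conductors of the twisted pair almost identically (common-mode)
noise = 0.5 * np.sin(2 * np.pi * 50 * t) + 0.05 * rng.standard_normal(t.size)

hot = signal + noise        # conductor carrying the signal
cold = -signal + noise      # conductor carrying the inverted signal

# Differential receiver: subtract the legs; signal doubles, noise cancels
received = (hot - cold) / 2
```

An unbalanced run has no `cold` leg to subtract, so whatever couples into the cable lands directly on the signal -- hence the advice to keep those runs short.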
by qozoq on 2/2/24, 8:17 PM
Otherwise it's gripes over finding the ideal combination of TV picture settings AND OS display settings. The TV is an OS of its own, of course. How does one go about tweaking two sets of settings that overlap?
by duped on 2/2/24, 7:42 PM
In theory, the recent(ish)ly standardized SMPTE 2098-2 bitstream protocol will allow for 3rd party encoders/decoders of object-based "immersive audio." In practice, 2098-2 is the bastard child of Atmos and DTS:X and I kind of doubt we'll ever see a FOSS decoder.
But anything's possible.
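For anyone unfamiliar with what "object-based" means here: instead of fixed speaker feeds, the bitstream carries audio objects with position metadata, and the decoder renders each object to per-speaker gains for whatever layout is actually present. A toy sketch of that rendering idea (the speaker layout and the inverse-square panning law are invented for illustration; Atmos, DTS:X, and ST 2098-2 renderers use far more sophisticated panning):

```python
# Hypothetical speaker layout: name -> (x, y, z) position in the room
SPEAKERS = {
    "FL":  (-1.0,  1.0, 0.0),
    "FR":  ( 1.0,  1.0, 0.0),
    "RL":  (-1.0, -1.0, 0.0),
    "RR":  ( 1.0, -1.0, 0.0),
    "TOP": ( 0.0,  0.0, 1.0),
}

def render_object(position, power=1.0):
    """Toy object renderer: weight each speaker by inverse squared
    distance to the object, normalized so total gain sums to `power`."""
    weights = {}
    for name, (sx, sy, sz) in SPEAKERS.items():
        d2 = ((position[0] - sx) ** 2 +
              (position[1] - sy) ** 2 +
              (position[2] - sz) ** 2)
        weights[name] = 1.0 / (d2 + 1e-9)   # epsilon avoids divide-by-zero
    total = sum(weights.values())
    return {name: power * w / total for name, w in weights.items()}

# An object placed directly overhead lands almost entirely on the top speaker
gains = render_object((0.0, 0.0, 1.0))
```

The point of standardizing the bitstream is exactly so a third-party decoder could read the object positions and apply its own rendering stage like this, independent of the Dolby or DTS implementations.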