Ask Lemmy
A Fediverse community for open-ended, thought-provoking questions
I thought it would be simple: just make the mono/stereo/etc mixes easier to understand, and leave the advanced stuff to people with a million speakers.
I guess that's too simple?
I would bet there is one mix created in surround sound (7.1 or Dolby Atmos or whatever), and then the end-user hardware does the down-mixing part, i.e. from Atmos with ~20 speakers to a pair of AirPods.
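Roughly what that fold-down step looks like for a plain 5.1-to-stereo case (a sketch using the common -3 dB coefficients; real decoders and Atmos renderers are more sophisticated, and the channel names are just my labels):

```python
import numpy as np

def downmix_5_1_to_stereo(fl, fr, c, lfe, sl, sr):
    """Fold a 5.1 mix down to stereo.

    Each argument is a 1-D numpy array holding one channel's samples.
    The ~0.707 (-3 dB) weights on the centre and surrounds are the
    commonly used fold-down coefficients; actual devices can differ.
    """
    g = 0.7071  # -3 dB
    left = fl + g * c + g * sl
    right = fr + g * c + g * sr
    # The LFE channel is often dropped (or heavily attenuated) on small speakers.
    stereo = np.stack([left, right], axis=-1)
    peak = np.max(np.abs(stereo))
    return stereo / peak if peak > 1.0 else stereo  # avoid clipping after summing
```

Object-based formats like Atmos go a step further and render the mix to whatever speaker layout is actually present, rather than using fixed coefficients.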
In the music world, we usually make stereo mixes. Even though the software that I use has a button to downmix the stereo output to mono, I only print stereo files.
It's definitely good practice to listen to the mix in mono, both for technical reasons and because you never know who's going to be listening on what device; the ultimate goal is to make it sound as good as possible in as many listening environments as possible. Ironically, switching the output to mono is a great way to check the balance between instruments (including the vocals) in a stereo mix.
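If you want to do that mono check outside of a DAW, it really is just averaging the two channels. A minimal sketch (`soundfile` is one of several libraries that can read a WAV, and the file names are made up):

```python
import soundfile as sf  # third-party WAV reader/writer

# Load a stereo mix and export the mono sum, i.e. what the
# DAW's mono button plays back.
audio, rate = sf.read("my_mix.wav")   # shape: (samples, 2)
mono = audio.mean(axis=1)             # (L + R) / 2
sf.write("my_mix_mono_check.wav", mono, rate)
```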
At any rate, I think the problem of dynamics control (and, for that matter, equalization) for fine-tuning the listening experience at home is going to vary wildly from place to place and setup to setup. Therefore the hypothetical regulations should help consumers help themselves by requiring compression and EQ controls on consumer devices!
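By "compression control" I mean letting the viewer apply something as basic as a downward compressor on the output. A toy sketch of the idea (not how any particular TV implements it):

```python
import numpy as np

def compress(x, threshold_db=-24.0, ratio=4.0, makeup_db=6.0):
    """Very basic downward compressor with no attack/release smoothing.

    Levels above the threshold are reduced by `ratio`, then makeup gain
    lifts the (now quieter) loud parts and the dialogue back up. Real
    devices smooth the gain over time instead of acting per sample.
    """
    eps = 1e-12
    level_db = 20.0 * np.log10(np.abs(x) + eps)
    over_db = np.maximum(level_db - threshold_db, 0.0)
    gain_db = -over_db * (1.0 - 1.0 / ratio) + makeup_db
    return x * 10.0 ** (gain_db / 20.0)
```

The point is that threshold and ratio are exactly the kind of two-knob control a "night mode" setting could expose.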
Side tip: if your TV or home theater box has an equalizer, try cutting around 200-250 Hz and bringing the overall volume up a tad to reduce the muddiness of vocals/dialogue. You could also try boosting around 2 kHz, but as a sound engineer primarily dealing with live performances, I tend to cut more often than I boost.
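If you'd rather hear what that kind of cut does to a file than fight a TV menu, the standard peaking-EQ recipe (the RBJ "audio EQ cookbook" biquad) is easy to try; the settings below are just example values:

```python
import numpy as np
from scipy.signal import lfilter

def peaking_eq(fs, f0, gain_db, q=1.0):
    """Biquad peaking-EQ coefficients per the RBJ audio EQ cookbook."""
    a_lin = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2.0 * q)
    cw = np.cos(w0)
    b = np.array([1 + alpha * a_lin, -2 * cw, 1 - alpha * a_lin])
    a = np.array([1 + alpha / a_lin, -2 * cw, 1 - alpha / a_lin])
    return b / a[0], a / a[0]

# Example: a gentle 4 dB cut around 250 Hz on one second of noise at 48 kHz.
fs = 48000
x = 0.1 * np.random.randn(fs)
b, a = peaking_eq(fs, 250.0, gain_db=-4.0, q=1.0)
y = lfilter(b, a, x)
```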
My TV is insulting like that. It technically has an EQ, but it makes no perceivable difference no matter what I do in it.
But assuming it worked, wouldn't doing that strictly with sound frequencies cause issues? Like, okay, most voices are louder because I boosted their frequency, but now that one dude with a super low voice is quieter, plus any music in the show is distorted. Or something like that.
I wish they just provided separate tracks that you could control. One track for dialogue, one track for music, one track for sound effects, and maybe one track for less important voices. Then let us adjust the volume of each. That would help so much. And they basically HAVE to do it at some point in the process anyway if they want multilingual dubbing to work.
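Conceptually the feature is nothing fancier than this (a sketch; the stem names and default gains are made up, and today's releases generally don't ship these stems to the player):

```python
import numpy as np

def mix_stems(dialogue, music, effects, extras,
              dialogue_gain=1.0, music_gain=0.5,
              effects_gain=0.7, extras_gain=0.8):
    """Sum separately delivered stems with user-controlled volumes.

    Each stem is a numpy array of the same shape; the gain arguments
    are the sliders the viewer would get to move.
    """
    mix = (dialogue_gain * dialogue +
           music_gain * music +
           effects_gain * effects +
           extras_gain * extras)
    peak = np.max(np.abs(mix))
    return mix / peak if peak > 1.0 else mix  # keep the sum from clipping
```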
Speaking of dubbing: recently I've taken to watching more content dubbed in French strictly because it's almost always intelligible, contrary to the aRtIsT aCcUrAtE volumes of the original. Pretty sad that I have to do that though.
What the hell!
Not necessarily. Regardless of vocal range, roughly 400 Hz-2 kHz makes up the body of what you hear in human speech, or the notes of instruments carrying a melody. Below that, say 160-315 Hz, is the "warmth" and "fullness" of the sound, while 2.5-8 kHz is the enunciation and clarity (think ch-sounds, esses, tees, etc.).
Sure, if you start really going hard on an EQ, you could absolutely throw everything out of balance: if you cut 12 dB at 250 Hz, all the warmth will be gone and everything will sound thin (for scale, a 12 dB cut multiplies that band by 10^(-12/20), leaving roughly a quarter of the amplitude). If you scoop out a bunch of 400 Hz-1.6 kHz, it will sound like a walkie-talkie, and if you make a large boost around 3-8 kHz, everything will probably sound harsh and scratchy.
This is where the listening environment becomes important to consider. Do you live near a busy highway, or do you have a loud air conditioner? You don't need to answer these questions in public, but those kinds of ambient sounds can compete with the enunciation frequencies, or add to the buildup of "mud" in the lower part of the spectrum.
The size, shape, material properties etc. of your room and furniture also play a role here. For example, a bunch of bare walls and hard surfaces will cause a lot of the high frequencies to bounce around, potentially causing a buildup of harshness. This is why recording studios and your high school band hall probably have those oddly-shaped, cloth-covered wall "decorations" that serve to neutralize the cavernous sound you'd get in a large, bare room.
Overall, compensating for the environment is where you should probably aim your EQ. That is, even if source material varies wildly, it's probably best to EQ to the room you're in rather than to each individual program.
The way to do it is to find a song you know by heart, one you know exactly how it should sound at its best (there are a few that, to me, sound great in my car and on my favorite pair of headphones, so I use those), and play it through your TV. Then fiddle with the EQ until it's as close to the ideal sound in your head as you can get.
My TV is the LG CX. It's cool in some ways, but overall I'm not too impressed. Some days I think maybe I should've splurged and gotten a Sony.
Hmm, then the issue I could see with going by EQ is when there are several voices at the same time (say, background characters talking indistinctly behind a conversation): depending on how crap the mix is, trying to enhance the voices might enhance the background ones as well.
That's an edge case, but a more common one is music playing over a scene with content in the same frequency range as the voices, competing with them. Playing with the EQ might then distort the music in a way that makes it sound wrong while it still drowns out the dialogue.
That's why I really wish we had several channels whose volumes can be individually changed like in video games. That would be the ultimate tool to adjust things. Even if you don't know anything about what the hell "hertz" means and equalizers confuse you, you could do a lot without distorting anything. And if you do understand how equalizers work, you could combine both to get a really fine-tuned experience.
The music tip isn't bad, but on my TV the answer is "you can't really do that" lol. There are various ways to distort a piece with sound profiles, but none that I know of to keep it accurate.
What I usually do is always use subtitles, and switch between "OLED Surround Pro", "Standard" and "Game" to see which sounds the best. Then if a movie/show stands out as having incredibly bad sound (ahem Christopher Nolan ahem) I either bust out the French dub or "enjoy" the tinny sounds of "Clear Voice IV".