The fact that I have to turn on closed captioning to understand anything tells me these producers have no idea what we want and shouldn’t be telling us what settings to use.
One problem is that the people mixing the audio already know what is being said:
Top-down processing
(or more specifically, top-down auditory perception)
This refers to perception being driven by prior knowledge, expectations, and context rather than purely by sensory input. When you already know the dialog, your brain projects that knowledge onto the sound and experiences it as “clear.”
TV shows seem to have changed completely in the streaming age.
These days they really are just super long movies with glacial pacing to keep users subscribed.
You know when something doesn't annoy you until someone points it out?
It's so obvious in hindsight. With shows like The Big Bang Theory, House and Scrubs, I very rarely caught two episodes consecutively (and when I did, they were on a weekly release schedule, so you'd forgotten half the plot by the next one). But they're all practically self-contained, with only the thread of a longer-term narrative woven between them.
It's doubtful that with any of these Netflix series you could catch one random episode and feel confident you understand what's going on. Perhaps worse is the recent trend for miniseries, which are almost exactly what you describe: just a film without half of it left on the cutting-room floor.
That was the principle many years ago: you had to leave the world exactly in the state you found it in.
If John dumped Jane at the beginning of the episode, they had to get back together at the end, otherwise the viewer who had to go to her son's wedding that week wouldn't know what was going on. There was no streaming, recaps were few and far between, and not everybody had access to timeshifting, so you couldn't just rely on everybody watching the episode later and catching up.
Sometimes you'd get a two-episode sequence; Jane cheated on John in episode 1 but they got back together in episode 2. Sometimes the season finale would permanently change some detail (making John and Jane go from being engaged to being married). Nevertheless, episodes were still mostly independent.
AFAIK, this changed with timeshifting, DVRs, online catch-up services and then streaming. If viewers have the ability to catch up on a show even when they can't watch it during the first broadcast, you can tell a long, complex, book-sized story instead of many independent short stories that just happen to share a universe.
Personally, I much prefer the newer format, just as I prefer books to short stories.
> That was the principle many years ago: you had to leave the world exactly in the state you found it in.
This is not true as a generality. Soap operas, for example, had long-running stories long before DVRs. Many prime-time dramas and comedies had major event episodes that changed things dramatically (character deaths, weddings, break-ups, etc.), e.g. the whole "Who shot J.R.?" event on *Dallas*. Almost all shows I watched as a kid in the 80s had gradual shifts in character relationships over time (e.g. the on-again/off-again relationship between Sam and Diane on Cheers). Child actors on long-running shows would grow up, and the situations on the show changed to account for that as they moved from grade school to high school to college or jobs.
From what I understood, the parent comment was specifically talking about sitcoms.
Sitcoms are - and I know this is a little condescending to point out - comedies contrived to exist in a particular situation: situation comedy → sitcom.
In the old days, the "situation" needed to be relatively relatable and static to allow for drop-in viewers channel surfing, or for the casual viewer the parent described.
Soap operas and other long-running drama series are built differently: they're meant to have long story arcs that keep people engaged over many weeks, months or years. There are throwbacks to old storylines, there are twists and turns to keep you watching, and if you miss an episode you get lost. So you don't ever miss an episode (or the soap adverts within them, the reason for being for which they're named), in case you end up behind on everything.
You'll find sports networks try to build a story arc around games too, creating a sense of "missing out" if you don't watch the big game live.
I think the general point is that in the streaming-subscription era, everything has taken on this "don't miss out" form, doubling down on the need to see everything from the beginning and become a completist.
You can't easily have a comedy show like Cheers or Married... With Children, in 2026, because there's nothing to keep you in the "next episode" loop in the structure, so you end up with comedies with long-running arcs like Schitt's Creek.
The last set of sitcoms immune to this were probably of the Brooklyn Nine-Nine, Cougar Town and Modern Family era: there were in-jokes for the devotees, but you could easily pick up an episode mid-series, dive in, and not be totally lost.
Interesting exception: Tim Allen has managed to get recommissioned with an old-style format a couple of times, but he's had to make sure he's skewing to an older audience (read: it's a story of Republican guys who love hunting) for any of it to make sense to TV execs.
Soap operas use an entirely different tactic: every piece of information is repeated again and again and again. They're meant to be half-watched by people who work while watching them, so you need to be able to miss half the episode and still catch up comfortably.
That's also why the changes are slow and gradual.
Neither format could afford the kind of serious, nuanced, multi-episode arcs that current series can have.
The Polish "paradocumentary" format is like this, but taken to an extreme. Such shows are mostly dialog interleaved with a narrator describing exactly what just happened. There's also a detailed recap of everything that happened in the episode so far after every ad break, of which there are many.
It's basically daytime TV, to be watched at work, often as background, and without looking at the actual screen very often.
Many, many years ago... it was already changing in the 90s and 2000s to slow changes per episode, with a callout for a little while afterwards for anyone who missed the episode where the change occurred.
I think the slow changes in the 2000s and early 2010s were the sweet spot - a lot of room for episodic world and character building that would build to interspersed major episodes for the big changes.
> That was the principle many years ago: you had to leave the world exactly in the state you found it in.
This doesn't make sense; no show I know from that time followed that principle, and for good reason: they'd get boring the moment the viewer realizes that nothing ever happens on them, because everything gets immediately undone or rendered meaningless. Major structural changes get restored at the end (with exceptions), but characters and the world are gradually changing.
> If John dumped Jane at the beginning of the episode, they had to get back together at the end, otherwise the viewer who had to go to her son's wedding that week wouldn't know what was going on.
This got solved with "Last time on ${series name}" recaps at the beginning of the episode.
I remember when the slightest hint of a multi-episode story was revolutionary and everybody was talking about it as a great thing. By today's standards, nothing was happening.
> Major structural changes get restored at the end
This is the point. The persistent changes in these shows tended to be very minor. Nothing big ever happened that wasn't fully resolved by the time the credits rolled, unless it was a two-part episode, and then it was reset by the end of the second episode.
How old are you? Because I promise you, that description was pretty much spot-on for most shows through most of the history of TV prior to the late 1990s. My memory is that the main exception was daytime soap operas, which did expect viewers to watch pretty much daily. (I recall a conversation explaining Babylon 5's ongoing plot arc to my parents, and one of them said, "You mean, sort of like a soap opera?") Those "Previously on ___" intro segments were quite rare (and usually a sign that you were in the middle of some Very Special 2-part story, as described in the previous comment).
Go back and watch any two episodes (maybe not the season finale) from the same season of Star Trek TOS or TNG, or Cheers, or MASH, or Friends, or any other prime time show at all prior to 1990. You won't be able to tell which came first, certainly not in any obvious way. (Networks didn't really even have the concept of specific episode orders in that era. Again looking back to Babylon 5 which was a pioneer in the "ongoing plot arc" space, the network deliberately shuffled around the order of a number of first-season episodes because they wanted to put stronger stories earlier to hook viewers, even though doing so left some character development a bit nonsensical. You can find websites today where fans debate whether it's best to watch the show in release order or production order or something else.)
By and large, we all just understood that "nothing ever happens" with long-term impact on a show, except maybe from season to season. (I think I even remember the standard "end of episode reset" being referenced in a comedy show as a breaking-the-fourth-wall joke.) Yes, you'd get character development in a particular episode, but it was more about the audience understanding the character better than about immediate, noticeable changes to their life and behavior. At best, the character beats from one season would add up to a meaningful change in the next season. At least that's my memory of how it tended to go. Maybe there were exceptions! But this really was the norm.
> Again looking back to Babylon 5 which was a pioneer...
Heh I was going to reply "B5 is better than TNG", but thought "better check all the replies first". Wherever there's discussion of extended plots there's one of us nerds. (If anyone hasn't seen it... yes half the first season is rough, but you get a season's worth of "The Inner Light"-quality episodes by the end and for all the major characters; TNG, while lovely, has just a few because there's so little character development besides Picard)
Babylon 5 was mostly in order; if you want to see something really screwed up, check out the spinoff Crusade. On top of what the network did, it was written more serially than Babylon 5 was.
Most shows were like that. Yes, there was some minor character growth and minor plot development over seasons, but most shows basically reset every episode. You almost have to when you're targeting syndication, because reruns don't always happen in order and they often run so frequently that viewers can't catch them all anyway.
Arguably there are lots of films which could have done with being 4-5 hours long, and were compressed to match conventions and hardware limits for 'movies'.
Lots of novel adaptations fall into this category. Most decently dense, serious novels can't be done justice in 2 hours. The new TV formats have enabled substantial stories to be told well.
The Godfather Parts I and II are just one story cut in half at a convenient place. Why not cut it into four 50-minute episodes and an 80-minute finale? (Edit: this substantially underestimates the running time of the first two Godfather movies!)
People are going to pause your thing to go to the toilet anyway. You might as well indicate to them when's a good time to do so.
Obviously there are also quite a few movies where 90 minutes is plenty. Both formats seem needed.
A recent example is the Wicked movie musical. It’s not a film and its sequel. It’s two parts of the stage musical produced as a film and cut in half, released a year apart.
The alternative is the 1980s version of Dune, which tried to fit a massive novel into a single mass-market film runtime. It was fantastic, but people who hadn't read the novel were left very short on story. The newer movies, I've heard, are much better in this regard, which is understandable because the combined runtime of the films is longer. The Dune 2000 miniseries (AKA Sci Fi Presents Frank Herbert's Dune) was even better than the original film in some ways, largely for the same reasons.
Ender's Game deserved to be at least two parts, because even the main character got no real character development. You barely learn Val exists, there's really no Peter, and you barely meet Bean or Petra. There's no Alai, Achilles, Fly, or Crazy Tom. There are no zero-G battles at Battle School. The computer game is never even mentioned, yet it's integral to the book. I don't think the film even mentions that Ender is a third child and why that's important. It could have been a much better film in two or three parts.
This is something that always irked me about those old shows, even the kids' ones when I was still a child. Absolutely zero story progression; nothing that happens matters.
This used to irk me too. And I liked the epic stories that really became mainstream in the 2010s. But the problem is, nowadays the progression in each episode has become minuscule. It’s not an epic told in 15 stories, it’s just one story drawn out in 15 chapters. It’s often just a bridge from one cliffhanger to the next.
For example, in most of the new Star Trek stuff, none of the episodes stand by themselves. They don't have their own stories.
I agree, but when rewatching older Trek shows it is also a bit infuriating how nothing really has an impact.
Last season of TNG they introduced the fact that warp was damaging subspace. That fact was forgotten just a few episodes later.
I think Strange New Worlds manages that balancing act particularly well, though.
A lot of episodes are their own adventure but you do have character development and an overarching story happening.
> when rewatching older Trek shows it is also a bit infuriating how nothing really has an impact
TNG: You get, e.g., changes in political relationships between major powers in the Alpha/Beta quadrants, several recurring threads (e.g. the Ferengi, Q, the Borg), and continuous character development. However, this show does a much better job of exploring the Star Trek universe breadth-first rather than over time.
DS9: Had one of the most epic story arcs in all of sci-fi television, one that spanned multiple seasons. In a way, this is IMO the gold standard for how to do it: most episodes were still relatively independent of each other, but the long story arcs were also visible and kept being pushed forward.
VOY: Different to DS9, with one overarching plot (coming home) that got pushed forward in most episodes, despite individual episodes being mostly watchable in random order. They figured out a way for things to have accumulating impact without strong serialization.
> Last season of TNG they introduced the fact that warp was damaging subspace. That fact was forgotten just a few episodes later.
True, plenty of dropped arcs in TNG in particular. But often for the better, as with the "damaging subspace" one: that was easy to explain away (fixed warp engines) and was a bad ecological metaphor anyway; conceptually interesting, but it would hinder subsequent stories more than help.
> VOY: Different to DS9, with one overarching plot (coming home) that got pushed forward in most episodes, despite individual episodes being mostly watchable in random order. They figured out a way for things to have accumulating impact without strong serialization.
I wouldn't say they had any noticeable accumulating impact.
Kim was always an ensign, system damage never accumulated beyond the possibility of repair, they fired 123 of their non-replaceable supply of 38 photon torpedoes, the limited power reserves were quickly forgotten, …
Unless you mean they had a few call-back episodes, pretty much the only long-term changes were the doctor's portable holo-emitter, the Delta Flier, Seven replacing Kes, and Janeway's various haircuts.
> True, plenty of dropped arcs in TNG in particular. But often for the better, as with the "damaging subspace" one: that was easy to explain away (fixed warp engines) and was a bad ecological metaphor anyway; conceptually interesting, but it would hinder subsequent stories more than help.
That, and in beta canon, this engine fix is why Voyager's warp engines move.
The Doylist reason is of course "moving bits look cool".
The wildest dropped arc was the absolutely horrifying mind-control parasites. But like the warp speed limit, I see why: you'd have to change the whole tone of the show if you wanted to keep them as a consistent threat.
To be fair, there were a couple of times when they mentioned being allowed to exceed the warp speed limit in an emergency. Otherwise, they were usually traveling under Warp 6.
Agreed about Strange New Worlds. It's what makes it the best Trek in 20 years (besides Lower Decks, of course). It feels like Star Trek again, because the episodic storytelling allows it to explore, well, strange new worlds.
It's a different medium, and it's intentional. And not even new either. The Singing Detective, Karaoke and Cold Lazarus did the same thing decades ago. Apparently they were successful enough that everybody does it now.
Google currently has an advertising campaign for Gemini (in conjunction with Netflix!) which is all about how you can use AI to tell you what the key episodes are so that you don’t need to watch the whole thing. If that isn’t an admission that most of it is filler I don’t know what is…
I think this is less “Netflix vs old TV” and more episodic vs serialised, and the serialised form definitely isn’t new.
Buffy is a great example: plenty of monster of the week episodes, but also season long arcs and character progression that rewarded continuity. The X-Files deliberately ran two tracks in parallel: standalone cases plus the mythology episodes. Lost was essentially built around long arcs and cliffhangers, it just had to make that work on a weekly broadcast cadence.
What’s changed is the delivery mechanism, not the existence of serialisation. When your audience gets one episode a week, with mid-season breaks, schedule slips, and multi-year gaps between seasons, writers have to fight a constant battle to re-establish context and keep casual viewers from falling off. That’s why even heavily serialised shows from that era often kept an episodic spine. It’s a retention strategy as much as a creative choice.
Streaming and especially season drops flip that constraint. When episodes are on demand and many viewers watch them close together, the time between chapters shrinks from weeks to minutes. That makes it much easier to sustain dense long-form narrative, assume recent recall, and let the story behave more like a novel than a syndicated procedural.
So the pattern isn’t new. On demand distribution just finally makes the serialised approach work as reliably at scale as it always wanted to.
> When your audience gets one episode a week, with mid-season breaks, schedule slips, and multi-year gaps between seasons
Multi-year gaps between seasons is a modern thing, not from the era you're talking about. Back then there would reliably be a new season every year, often with only a couple of months between the end of one and the beginning of the next.
> Streaming and especially season drops flip that constraint.
How does completely dropping a season flip that? Some shows with complicated licensing and rights have caused entire seasons to be dropped from a given streaming service and it’s very confusing when you finish season N and go right into season N+2.
Except that, for some reason, the recent trend is to release an episode per week even though they have all of them filmed and could just drop the whole season.
As a binge watcher, this irks me to no end; I usually end up delaying watching episode 1 until everything is released, and in the process forget about the show for half a year or something, at which point there's hardly any conversation happening about it anymore.
Yes. Arguably the new Netflix miniseries and extended-episode formats are better for decent shows. To be fair, they're much worse for garbage shows. But 20 × 25-minute episodes is still an option, so what's the problem?
One Battle After Another: skip everything in the earlier timeline at the beginning of the movie and nothing is lost. It might even be better, because what exactly is happening stays a bit of a mystery, but you still get all the info you need by the end.
As opposed to the House model where every episode is exactly the same with some superficial differences?
I like the long-movie format; lots of good shows to watch. Movies feel too short to properly tell a story; it's just a few highlights hastily shown, and then it's over.
A lot of this is personal preference, but I still feel like the most memorable shows tend to be the ones that have a bit of both. Season-long stories, but also episodes that can stand on their own.
In a show like Stranger Things, almost none of the episodes are individually memorable or watchable on their own. They depend too much on the surrounding episodes.
Compare to e.g. Strange New Worlds, which tells large stories over the course of a season, but each episode is also a self-contained story. Which in turn allows for more variety and an overall richer experience, since you can have individual episodes experiment with wacky deviations from the norm of the show. Not all of those experiments will land for everybody (musical episodes tend to be quite divisive, for example), but there is a density to the experience that a lot of modern TV lacks.
The original Law & Order did a masterful job of this. Each episode (with very few exceptions) is self-contained, but deeper themes and character development run through them in long (often multi-season) arcs to reward the long-term viewer. But there was rarely more than one episode per season that was solely for the long-term viewer.
Sure, it's completely different from procedural comedic shows like House, and there are some great shows to watch!
Still, sometimes it feels like the writers weren't granted enough time to write a shorter script. Brevity isn't exactly incentivized by the business model.
I feel like there are plenty of examples of movies that tell a good story. I think the reason people like long-form television over movies is that a movie requires the emotional commitment that it will end. But there's always another episode of television.
I'm fine with this. I always wished regular movies were much longer. I wish lord of the rings movies included all the songs and poems and characters from the book and lasted like 7 hours each.
I seem to remember that it was The X-Files that pioneered the "every episode is a mini-movie" approach, and it showed in the production at the time compared to other stuff.
Could be misremembering though, when I think about early anthologies like The Twilight Zone or Freddy's Nightmares.
Rod Serling derogatorily coined the term "Soap Opera" because those also pioneered the ad break, typically for products aimed at housewives, e.g. soap.
Honestly, what I don't get is how this even happened: it's been, I think, 10 years with no progress on getting volumes to even out, even with all the fancy software we have. I would've thought 5.1 should be relatively easy to normalize, since the center speech channel is a big obvious "the audience _really_ needs to hear this" channel that should be easy to amplify in any downmix... instead, watching anything is still just riding the damn volume button.
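For what it's worth, the fold-down arithmetic itself is trivial. A toy numpy sketch of the idea (the function name, the ITU-style 0.707 weights, and the `center_gain` knob are all illustrative, not any particular device's implementation):

```python
import numpy as np

def downmix_51_to_stereo(fl, fr, fc, lfe, sl, sr, center_gain=2.0):
    """Fold a 5.1 track down to stereo with the dialog (center) channel boosted.

    Each argument is a 1-D float array of samples for one channel.
    The 1/sqrt(2) (~ -3 dB) weights follow the common ITU-style fold-down;
    center_gain > 1 is the "make dialog louder" knob.
    """
    w = 1.0 / np.sqrt(2.0)  # ~0.707
    left = fl + w * center_gain * fc + w * sl
    right = fr + w * center_gain * fc + w * sr
    # The LFE channel is typically dropped in stereo fold-downs.
    stereo = np.stack([left, right], axis=-1)
    peak = np.max(np.abs(stereo))
    return stereo / peak if peak > 1.0 else stereo  # renormalize to avoid clipping
```

The standard fold-down gives the center a fixed -3 dB weight rather than favoring it, so dialog gets no special treatment; whether your TV or player chip does anything smarter is entirely up to the vendor.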
I toyed with the idea of making some kind of app for this but while it may work on desktop it seems less viable for smart tvs which is what I primarily use.
Though I have switched to mostly using Plex, so maybe I could look into doing something there.
Never really tried anything. I just thought about it, but I don't know the first thing about audio programming, and like I said, it doesn't seem viable for smart TVs anyway, so I never did anything with it.
Map the front speaker outputs to the side speakers and the problem will be mitigated. I have been using this setup for about 2 years and it lets me actually hear dialog.
There's been a lot of speculation/rationalisation around this already, but one I've not seen mentioned is the possibility of it being at least a little down to a kind of "don't look back" collective arrogance (in addition to real technical challenges)
(This may also apply to the "everything's too dark" issue which gets attributed to HDR vs. SDR)
Up until fairly recently both of these professions were pretty small, tight-knit, and learnt (at least partially) from previous generations in a kind of apprentice capacity
Now we have vocational schools - which likely do a great job surfacing a bunch of stuff which was obscure, but miss some of the historical learning and "tricks of the trade"
You come out with a bunch of skills but less experience, and then are thrust into the machine and have to churn out work (often with no senior mentorship)
So you get the meme version of the craft: honing the skills of maximising loudness, impact, ear candy... flashy stuff without substance
...and a massive overuse of the Wilhelm Scream :) [^1]
[^1]: once an in joke for sound people, and kind of a game to obscure its presence. Now it's common knowledge and used everywhere, a wink to the audience rather than a secret wink to other engineers.
> This may also apply to the "everything's too dark" issue which gets attributed to HDR vs. SDR
You reminded me of the many TV shows and movies that force me to lower all the roller shutters in my living room; I've got a very good TV, but otherwise I just can't see anything on the screen.
And this is really dependent on the age of the content, with recent stuff set in dark environments being borderline impossible to enjoy unless you're in a very dark room.
"everything's too dark": it could be temporizing or parallel construction, but my understanding is that "everything's too dark" originates from trying to optimize the color map to show off some detail that we obviously don't care about.
It hasn't. We've been having these same problems for decades. There was a whole scandal about cable TV channels winding down the volume of shows so ads could play even louder.
I think a good chunk of it has to do with the TVs themselves. I don't have any extra sound system attached to my TV, so I'm working with whatever sound comes out of the TV itself. As TVs get thinner, speakers get smaller and end up firing downwards. So we're using tiny speakers that are pointed only indirectly towards me.
I could probably fix over half of the problems I have with TV audio with a decent sound bar, but a good one costs a sizable percentage of the price of a brand-new TV.
> think a good chunk of it has to do with the TVs themselves
I have a thin TV. I have to turn on subtitles for modern films. For older movies from the same streaming service, however, I can understand everything fine.
Silly question -- for the older movies, have you seen them before?
If one of the arguments is that the people doing the sound mixing know the audio/words so they are oblivious to the difficulty that a new viewer will have understanding the words, it's also possible that a repeat viewer might also have similar biases with older media.
I can think of half a dozen different other reasons why there's a difference between older and newer media. I don't think it's just one thing. I do think differences in thin TVs is one factor, but not the only one. I have a few different generations of LCDs at home. I generally can understand spoken words better on my oldest one. It's also the thickest, so it should have the largest speakers.
But, I think another factor is the digital audio profiles. If you're mixing for just stereo (or even mono), you're probably going to get an easier to understand audio track. If you're mixing for surround-sound (and not listening on a 5.1 external receiver), the TV is going to have a more difficult time and the viewer is probably going to get a lower-quality audio track compared to a track mixed specifically for just two channels.
But, at least I now have a project -- I'll pull an older movie from Netflix that I haven't seen to test my theory...
> for the older movies, have you seen them before?
Mostly no. (Not a big movie rewatcher.)
> can think of half a dozen different other reasons why there's a difference between older and newer media
Would love for someone to study this. I think I can eliminate the TV, the streaming provider and myself as variables, given that (again, highly non-scientifically) I've personally noticed the difference with those held constant.
That said, I'm researching sound bars to see if the TV speakers are part of the problem.
Netflix records many shows simultaneously in the same building. This is why their shows are all so dark - to prevent light bleeding across sets. I wonder if this is also true for keeping the volume down.
The darkness of shows has more to do with the mastering monitors having gotten so good that colorists don’t even notice if the dynamic range is just the bottom half or less. Their eyes adjust and they don’t see the posterisation because there isn’t any… until the signal is compressed and streamed. Not to mention that most viewers aren’t watching the content in a pitch black room on a $40K OLED that’s “special order” from Sony.
Look at any setup audio is being mixed on and tell me how many sound bars you see there. How many flat panels with nothing more than the built-in speakers? None. The speakers being used, and the tricks consumer equipment plays to make multichannel audio work with fewer speakers, wreak havoc on well-mixed audio. Downmixing on a consumer device is just never going to sound great.
There’s something to what you’re saying - but it’s also something of a spectrum.
Our need to turn up the volume in dialog scenes and turn it back down again in action scenes (for both new and old content) got a lot less when we added a mid-range soundbar and sub to our mid-range TV (previously was using just the TV speakers). I’m not sure whether it’s sound separation - now we have a ‘more proper’ center channel - or that the ends of the spectrum - both bass and treble - are less muddy. Probably a combination of the two.
Audio has become a WTF situation for me. I grew up with speakers that had a driver for the low end somewhere in the 8"–12" range, a driver for the mids typically 4"–6", and a third for the highs in various forms of tweeter. These were all internally crossed over, so only a single cable was necessary.
Now we have "satellite" speakers that are smaller than those old tweeters and are touted as being all you need. Sound bars likewise use speakers the size of an old tweeter, just in an array, with left/right separation smaller than the width of the TV. Somehow, we let the marketing people from places like Bose convince us that you can make the same sound from tiny speakers.
Multichannel mixes used to also include a dedicated stereo mix for those mere mortals without dedicated surround setups. These were created in the same studio with mixing decisions made based on the content. Now, we just get downmixes made by some math equation on a chip that has no concept of what the content is and just applies rules.
Bit of a tangent, but another WTF for me has been the mainstream return of mono audio for music (HomePods, Echos, many Bluetooth speakers etc.), after decades of everything being at least stereo.
Ugh, mono. Phasing is an effect that's used quite a bit, and when things are 180° out of phase and mixed to mono, oops, no more audio. Of course, my favorite use of 180° phasing was Dolby Pro Logic sending it to the mono rear speakers. It was always fun listening to music that wasn't mixed for Pro Logic, but used 180° phasing as an effect, with Pro Logic decoding enabled. Random things would play from the rear speakers.
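The cancellation is easy to demo if you've never seen it first-hand; a minimal numpy example (a single 440 Hz tone, purely for illustration):

```python
import numpy as np

rate = 48000
t = np.arange(rate) / rate
tone = np.sin(2 * np.pi * 440 * t)   # 1 second of a 440 Hz test tone

left = tone
right = -tone                        # same signal, 180° out of phase

mono = (left + right) / 2            # naive mono fold-down
print(np.max(np.abs(mono)))          # 0.0 -- the tone cancels completely
```

Anything identical-but-inverted between the two channels vanishes from the mono sum, which is exactly what those phasing effects (and Pro Logic's rear-channel trick) rely on.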
My theory is that many people don't have space for speakers, and a soundbar sounds better than TV speakers.
I also think the focus on surround means people don’t consider stereo speakers. Good bookshelf speakers are better than surround kits and easier to install. I also wonder if normal speakers are no longer cool.
Finally, I wonder if people don't like the big receivers. There are small amplifiers, but I can't find one that works for home theater with an HDMI port.
It was garbage before streaming services took off; The Dark Knight Rises is one example. I can remember renting DVDs from Netflix in the mid-to-late 2000s, and they had similar issues.
Dark Knight is an edge case, because Christopher Nolan is famously stubborn about how his movies are mixed. He literally refuses to accept that people want to understand what characters are saying. [0]
But here's the thing: most movies are mixed for 5.1-or-more surround setups, where the front center speaker carries most of the dialog. Just boost that speaker by a significant amount, either via a setting or in a stereo/virtual-surround downmix, add some volume compression, and you get something that's reasonable on a home theater setup.
That interview is maddening. Of course people flinch more at bad sound than at bad imagery; they're completely different senses. Our hearing is deeper and more archaic, more directly connected to our emotional centers than to our language centers, and harder to shut off.
Imagine someone being perplexed at people's "conservatism" with regard to smell. Pump an even slightly unpleasant odor into the theater and people walk out in droves. The tolerance for these types of risky moves definitely varies by sense.
Eh, if you ask people what they want they'll say a faster horse.
I can understand his point that, since you can go wild with visual effects in movies, he wants to experiment with sound. I do think his experiments are unsuccessful, though; you can't always pick winners.
I just wish I could get the unedited movies for home, with black bars to fix the aspect ratio, instead of getting an edited movie. I don't mind not being able to hear the words when I can read them; plus it removes second-screen temptations.
Well, somehow, most short-form content on YouTube doesn't have this problem. Perfectly clear dialog.
I think the main problem is that producers and audio people are stupid, pompous wankers. And I guess it doesn't help that some people go to cinema for vibrations and don't care about the content.
The problem is that a lot of content today is mixed so that effects like explosions and gunshots are LOUD, whispers are quiet, and dialog is normal.
It only works if you're watching in a room that's acoustically quiet, like a professional recording studio. Once your heater / air conditioner or other appliance turns on, it drowns out everything but the loudest parts of the mix.
Otherwise, the problem is that you probably don't want to listen to ear-splitting gunshots and explosions, so you turn the volume down to a normal level, only to make the dialog and whispers unintelligible. I hit this problem a lot watching TV after the kids go to bed.
Yes, seems like both audio and video are following a High Dynamic Range trend.
As much as I enjoy deafeningly bright explosions in the movie theater, it's almost never appropriate in the casual living room.
I recently bought a new TV, a Bravia 8 II, which was supposedly not bright enough according to reviewers. In its professional setting it's way too bright at night, and being an OLED showing HDR content, the difference between the brightest and darkest parts is simply too much; there seems to be no way to turn it down without compromising the whole brightness curve.
The sound mixing does seem to have gotten much worse over time.
But also, people in old movies often enunciated very clearly as a stylistic choice. The Transatlantic accent sounds a bit unnatural, but you can follow the plot.
Lots of the early actors were highly experienced in live stage acting (without microphones) and radio (with only a microphone) before they got into film.
Yes, I forgot to mention that by "old movies" I mean things like Back to the Future. After a lifetime of watching it dubbed, I watched it with the original audio around a year ago, and I was surprised how clear the dialogue is compared to modern movies.
To be fair, the diction in modern movies is different from the diction in all the other examples you mentioned. YouTube and live TV are very articulate, and old movies are theatrical in style.
That's interesting. I have heard many people complaining about the sound mix in modern Spanish productions, but I never have problems understanding them. Shows from LATAM are another topic though, some accents are really difficult for us.
I "upgraded" from a 10 year old 1080p Vizio to a 4K LG and the sound is the worst part of the experience. It was very basic and consistent with our old TV but now it's all over the place. It's now a mangled mess of audio that's hard to understand.
I had the same issue; turn on the enhanced dialogue option. It keeps the EQ from muffling the voices and makes them almost intelligible. I say almost because modern mixes assume a center channel for voices that no TV has.
Perhaps a mixing issue on your end? Multi-channel audio has the dialog track separated. So you can increase the volume of the dialog if you want. Unfortunately, I think there is variability in how hardware (and software players) down-mix, which sometimes results in background music in the surround channels drowning out the dialog in the centre channel.
It's reasonable for the 5.1 mix to have louder atmosphere and be more dependent on directionality for the viewer to pick the dialog out of the center channel. However, all media should also be supplying a stereo mix where the dialog is appropriately boosted.
> Multi-channel audio has the dialog track separated. So you can increase the volume of the dialog if you want
Are you talking about the center channel on an X.1 setup, or something else? My Denon AVR certainly doesn't have a dedicated setting for dialog, but I can turn up the center channel, which yields variable results for improved clarity. Note that DVDs and Blu-rays from 10+ years ago are easily intelligible without any of this futzing.
It's an issue even in theaters and is the main reason I prefer to watch new releases at home on DVD (Dune I saw in the theater, Dune 2 I watched at home.)
I have the same sound issues with a lot of stuff. My current theory is that TVs have gotten bigger and we're further away from them, but speakers have stayed kinda shitty... while things are being mixed by people using headphones or otherwise good sound equipment.
It's very funny that when watching a movie on my MacBook Pro, it's better to send the video to my TV over HDMI but keep using the MBP speakers for the audio, since they're just much better.
If anything I'd say speakers have only gotten shittier as screens have thinned out. And it used to be fairly common for people to have dedicated speakers, but not anymore.
Just anecdotally, I can tell speaker tech has progressed slowly. Stepping into a car from 20 years ago, the sound is... pretty good, actually.
I agree that speaker tech has progressed slowly, but cars from 20 years ago? Most car audio systems from every era have sounded kinda mediocre at best.
IMO, half the issue with audio is that stereo systems used to be a kind of status symbol, and you used to see more tower speakers or big cabinets at friends' houses. We had good speakers 20 years ago and good speakers today, but sound bars aren't good.
On the other side, I needed to make some compromises with my life partner, and we ended up buying a pair of HomePod minis (because stereo was a hard line for me).
They sound pretty much OK for very discreet objects, compared to tower speakers. I only occasionally rant when the sound skips a beat because of WiFi or other smart-assery. (NB: of course I never activated the smart assistant; I use them purely as speakers.)
A high end amp+speaker system from 50 years ago will still sound good. The tradeoffs back then were size, price, and power consumption. Same as now.
Lower spec speakers have become good enough, and DSP has improved to the point that tiny speakers can now output mediocre/acceptable sound. The effect of this is that the midrange market is kind of gone, replaced with neat but still worse products such as soundbars (for AV use) or even portable speakers instead of hi-fi systems.
On the high end, I think amplified multi-way speakers with active crossovers are much more common now thanks to advances in Class-D amplifiers.
I feel like an Apple TV plus two HomePod minis works well enough for 90% of people's viewing situations, and an Apple TV plus two full-size HomePods for 98%. That would cost $330 to $750 plus tax, and less than 5 minutes of setup/research time.
The time and money cost of going further than that is not going to provide a sufficient return on investment except to a very small proportion of people.
Speakers haven't gotten much cheaper either. Almost every other kind of technology has fallen a lot in price. A good (single) speaker, though, still costs a few hundred euros, which is about what it has always cost. You'd think the scale of manufacturing (good) speakers would bring costs down, but apparently that hasn't happened, for whatever reason.
I have a relatively high end speaker setup (Focal Chora bookshelves and a Rotel stereo receiver all connected to the PC and AppleTV via optical cable) and I suffer from the muffled dialogue situation. I end up with subtitles, and I thought I was going deaf.
I strongly recommend you try adding a center channel to your viewing setup, also a subwoofer if you have the space. I had issues with clarity until I did that.
I can't find the source anymore, but I think I read that there's even a kind of small conspiracy in TV streaming: you turn your speakers up, and then when the advertisements arrive, you hear them louder than your movie.
Officially it's just that they switch to a better encoding for ads (like MPEG-2 to MPEG-4 for DVB), but unofficially it's for the money, as always...
I feel like the Occam's razor explanation is that the way TVs are advertised makes it really easy to judge picture quality and far harder to judge audio. In stores, they're next to a bunch of others playing the same thing, so really only visual differences stand out. The specs that stand out online are things like resolution, brightness, color accuracy, etc.
I think the issue is dynamic range rather than a minor conspiracy.
Film makers want to preserve dynamic range so they can render sounds both subtle and with a lot of punch, preserving detail, whereas ads just want to be heard as much as possible.
Ads will compress sound so it sounds uniform, colorless and as clear and loud as possible for a given volume.
> I can't find the source anymore, but I think I read that there's even a kind of small conspiracy in TV streaming: you turn your speakers up, and then when the advertisements arrive, you hear them louder than your movie.
It's not just that. It's the obsession with "cinematic" mixing, where dialogue is not only quieter than it could be, so that explosions and other effects can be much louder, but also not far enough above the background effects.
This all works in a cinema, where you have good-quality speakers playing much louder than most people have at home.
But at home you just end up with muddled dialogue that's too quiet.
I think it isn't a mixing issue, it's an acting issue.
It's the obsession with accents, mixed with the native speakers' conviction that vowels are the most important part.
Older movies tended to use some kind of unplaceable ("mid-Atlantic") accent that could be easily understood.
But modern actors try to imitate accents and almost always focus on the vowels. Most native speakers seem convinced that vowels are the most important part of English, but I don't think that's true. Sure, English has a huge number of vowels, but they're almost completely redundant. It's hard to find cases where vowels really matter for comprehension, which is why they can vary so much across accents without impeding communication. So the actors focus on the vowels but slur the consonants, and you're pretty much completely lost without the consonants.
The Mid-Atlantic accent has fallen out of favor since at least the latter part of the 50s.
The issue with hard to understand dialog is a much more recent phenomenon.
I have a 5.1 surround setup, and by default I have to give the center a boost in volume. But you still get movies where the surrounds (sound effects) are loud and the center (dialog) is low.
I watch YouTube with internal TV speakers and I understand everything, even muddled accents. I cannot understand a single TV show or movie with the same speakers. Something tells me it's about the source material, not the device.
Well of course, YouTube is someone sitting in front of the camera with no background noise and speaking calmly.
In a movie the characters may be far away (so it needs to sound like that, not like a podcast), running, exhausted, with a plethora of background noises and so on.
That would be true, except it's an issue even in calm scenes in movies, unless I turn the volume up high, in which case music and SFX become neighbor-waking loud. To be clear: I'm not talking about scenes where characters speak over an explosion. The overall mix does not allow one volume for all scenes of the movie; pick your poison: wake the neighbors or don't understand the dialogue.
Somehow youtube videos don't have this issue. Go figure /s
It's the same idea, a narrated youtube video is meant to have the same volume throughout, while a movie is meant to have quiet and loud parts.
The problem, as you say, is that if you don't want to have loud parts, you lower the volume so that loud is not loud anymore, and then the quiet but audible parts become inaudibly quiet.
I consider this to be a separate issue to the lack of clarity of internal speakers, and a bit harder to solve because it stems from the paper thin walls common in the US and other places.
You can usually use audio compression to fix this if you can't play the movie at the volume it's meant to be played at.
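For the curious, the core of what a "night mode" / dynamic range compression setting does looks roughly like this (a per-sample numpy toy; real compressors add attack/release smoothing and work on a windowed loudness estimate, so treat it as the idea rather than a usable implementation):

```python
import numpy as np

def compress(x, threshold_db=-30.0, ratio=4.0):
    """Crude per-sample dynamic range compression for a signal in [-1, 1].

    Anything louder than the threshold is reduced so that only 1/ratio
    of the excess level (in dB) survives, pulling explosions closer to
    dialog level while leaving quiet passages untouched.
    """
    eps = 1e-10  # avoid log(0)
    level_db = 20.0 * np.log10(np.abs(x) + eps)
    excess = np.maximum(level_db - threshold_db, 0.0)
    gain_db = -excess * (1.0 - 1.0 / ratio)  # attenuate only the excess
    return x * 10.0 ** (gain_db / 20.0)
```

With a -30 dB threshold and a 4:1 ratio, a -10 dB explosion comes out at -25 dB, while dialog sitting below the threshold passes through unchanged.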
Did you mean to reply to a different comment? What does calibration or 75 dB have to do with anything I said?
The most common experience with the poorly mixed content that several in this thread are complaining about is that the volume setting necessary for intelligible dialog results in uncomfortably loud audio elsewhere.
This is a defect of the content, not of the system it's playing on.
A YouTube video is likely a single track of audio, or very few. A movie mixed for Dolby Atmos is designed for multiple speakers. Now, they will create compromised mixes for something like a stereo setup, and a good set of bookshelf speakers can create a phantom center channel. However, a dedicated center channel speaker will do a much better job, and a TV's built-in speakers will do a very poor one. Professional mixing is a different beast from most YouTube videos, and accordingly the sound is mixed quite differently.
Yup, I definitely do agree those are wildly different beasts. But the end result is that the professional mixing is less enjoyable than amateur-ish YouTube mixing. Which is a shame, really. Mixing is a craft that's getting ruined (IMHO) by the direction to produce theatrical mixes (where building-shaking SFX are not an issue) or Atmos mixes (leaving no budget/time for plain stereo mixes).
The crux of the issue, IMHO, is the theatrical mixes. Yes, I can turn the TV volume way up and hear the dialogue pretty well. In exchange, any music or SFX is guaranteed to wake the neighbors (I live in a flat, so the neighbors are on the other side of the wall/floor/ceiling).
As someone with a dedicated center speaker: the people doing audio mixing do not use it effectively. I even have it manually boosted. Sometimes it's 10% better than without one, but nowhere near enough to make a real difference.
YouTube very likely has only a 2.0 stereo mix, TV shows and movies are mostly multichannel. Something tells me it's about the source material being a poor fit for your setup.
Which is just another drama that should not be on consumers' shoulders.
Every time I visit friends with a newer TV than mine, I am floored by how bad their speakers are, even on the same brand and in the same price range. Plus the "AI sound" settings (often on by default) are really bad.
I'd love to swap out my old TV, as it's showing its age, but spending a lot of money on a new one that can't play a show correctly is ridiculous.
I really don't want to install multiple new devices. It's not about the cost; the inconvenience and hassle are a PITA. Plus then you have to fiddle with multiple volume controls instead of one to make it work for your space.
No thank you. We should make the default work well, and if people want a sound-optimized experience that requires 6x the pieces of equipment, let those who want it do the extra work for the small change in audio quality.
Without that change in defaults, more and more people will switch to alternatives, like TikTok and YouTube, that bother to make understandability the default rather than something requiring hours of work and shopping choices.
> Plus then you have to fiddle with multiple volume controls instead of one to make it work for your space.
Most AVRs come with an automatic calibration option. And there are cheap 5.1 options on the market that will get results many times better than your flatscreen can produce.
> We should make the default work well
Yep, movies should have properly mastered stereo mixes, not just dumb downmixes from surround that end up muddy, muffled, and with awful variations in loudness.
However getting a better sound system is a current solution to the problem that doesn't require some broad systemic change that may or may not ever happen.
A far better solution that I take: not consume the media at all. Not only is there an abundance of media these days, but there are many many other better ways to spend time, such as writing comments on Hacker News that very few people will ever see.
I have spent about half an hour investigating sound bars as a result of these discussions, and that's a loss of life that I can never get back, and I regret spending that much time on the problem.
Couldn't they be miles better if we allowed screens to be thicker than a few millimeters?
I believe one could do some fun stuff with waveguides and beam steering behind the screen if we had 2-inch-thick screens. Unfortunately, decent audio is harder to market and showcase in a Best Buy than a "vivid" screen.
If someone buys a TV (y'know, a device that's supposed to reproduce sound and moving pictures), it should at least be decent at both. But if people want high-end 5.1/7.1/whatever.1 sound, then by all means they should be able to upgrade.
My mum? She doesn't want or need that, nor does she realistically have the space to have a high-end home-cinema entertainment setup (much less a dedicated room for it).
It's just a TV in her living room surrounded by cat toys and some furniture.
So, if she buys a nearly €1000 TV (she called it a "stupid Star Trek TV"), it should at least be decent at everything it's meant to do out of the box (and at that price tag you'd reasonably expect more than just decent). She shouldn't need to constantly adjust the volume or settings, or spend another thousand on equipment and refurbishment to get decent sound.
In contrast, she says the old TV that's now at nan's house has much better sound (even if the screen is smaller), and they're thinking of swapping the TVs since nan moved back in with my mum.
Good speakers aren't really compatible with the flatness of modern TVs. You could certainly make one with good speakers, but it would look weird mounted on the wall. Buying external speakers seems like a decent tradeoff.
Sure, it would be nice if TVs could have good sound out of the box if that meant no other tradeoffs. But if it means making the TV thicker (and, as other comments have pointed out, it probably would) then I'd be against it, since I never use the built-in TV speaker and frankly don't think anyone should.
Honestly I think high-end TVs should just not include speakers at all, similar to how high-end speakers don't contain built-in amplifiers. Then you could spend the money saved on whatever speakers you want.
> She shouldn't need to constantly adjust the volume or settings, or spend another thousand on equipment and refurbishment to get decent sound.
Everyone cares about hearing the words. Those who care about hearing nuance and buy extra sound equipment are a distinct and much, much smaller set of viewers. Yet only that smaller set seems to be able to get decent results.
A sound bar, even though fairly bad, is still a million times better than internal speakers, and you'd need a very exotic setup to be unable to fit one.
I'm surprised given you care about audio that you can even tolerate internal speakers. I'd just not use that TV and watch wherever you have better audio.
Various sections of my screen (LG C series) are significantly thicker than 30mm.
Also: this isn't a speaker problem, it's a content problem. I watched The Princess Bride on the TV last week and didn't require captions, but I'm watching Pluribus and finding it borderline impossible to keep up without them.
Imagine if we said “hey your audio is only usable on iPhone if you use this specific adapter and high end earphones”. Somehow the music industry has managed to figure out a way to get stuff to sound good on high end hardware, and passable on even the shittiest speakers and earbuds imaginable, but asking Hollywood blockbusters to make the dialog literally audible on the most popular device format is too much?
I'm a bit confused why you're surprised to see American terminology on a site with a predominantly American user base, or why it's worth commenting on.
That said, I'm Irish and live in the UK. You've never heard people say "I'll hoover that" or "you can google that"? Kleenex and Band-Aid are definitely American ones, but given the audience I thought it was apt.
Apple TV (the box) has an Enhance Dialogue option built-in. Even that plus a pair of Apple-native HomePods on full volume didn’t help me hear wtf was going on in parts of Pirates of the Caribbean (2003) on Disney. If two of the biggest companies on the planet can’t get this right, I don’t know who can.
"Reduce Loud Sounds" does dynamic range compression. If you pair this with "Enhance Dialogue" you'll probably have an easier time making out what is said.
The problem is multi-faceted. There was a YouTube video from a few years ago that explains this[1]. But, I kind of empathise with you; I and some friends also have this issue sometimes when watching things.
It really isn't. I've never, ever had a hard time hearing the voiceover when the ads decide to intrude. Sound editing and mixing is a full-time job, and the audio problem starts and ends with it not being done. If the source is mumbled, it needs to be fixed in post or redone; otherwise, garbage in, garbage out. It's only multi-faceted in the sense of letting the quality of the finished product slip at every check down the line.
As mentioned elsewhere: no problem with YouTube videos (even with hard accents like Scottish), but a world of pain for TV shows and movies. On the same TV.
Oh, and the YouTube videos don't have the infamous mixing issue of "voices too low, explosions too loud".
It's the source material, not the device. Stop blaming TV speakers; they're OK-tier.
So what about older films? Can you understand Die Hard on the same set? What about Lord of the Rings? That would help to determine whether it's newer films or your newer speakers that are the problem since millions of people have enjoyed those films with no problems.
Replying late, but yes, I have less trouble with older films. It's a mix of more articulate acting (worse on-set mics, so actors spoke rather than muttered) and less over-the-top mixing.
As for current movies, some of the most legible are children-oriented ones: I watched the Dragons set and it was trouble-free.
Many TVs have special sound modes for old people that boost the vocal range significantly. It makes the overall audio sound like crap, so it's a pretty close match for YouTube audio.
You do realize that "voices too low, explosions too high" is because of the audio mixing in the movies and how it sounds on shitty integrated speakers right?
When you have a good setup those same movies sound incredible, Nolan films are a perfect example.
I understand it perfectly well, yes. It's a mix made for theaters with sound isolation, where it's absolutely possible to hear the dialogue. I have no trouble understanding the dialogue with the volume turned up to what I would have in a theater.
Yet I do live in a flat, in Paris, with neighbors on the same floor, on the floor above, and on the floor below. Thus I tune the volume to something that is acceptable in this context.
Or I should say, I spend the whole movie with the remote in my hand, tuning the volume up and down between voices and explosions.
A theatre mix is a bad home mix. It's valid for a home cinema, not for an everyday living room.
Yes, I could buy a receiver and manually EQ all the channels and yadda yadda yadda. I live in an apartment. My 65" LG C2 is already ginormous by Parisian flat standards; ain't nobody got space for a dedicated receiver and speakers and whatnot. I tuned the audio, and some properly mixed movies actually sound great!
As an added bonus, I had trouble with "House of Guinness" recently, both on my TV and with good headphones, where I also did the volume dance.
IMHO there's no care spent on the stereo mixes of current movies and TV shows. And to keep your example, Nolan shows are some of the most understandable and legible on my current setup :)
Another fact: I have no trouble with YouTube videos in many languages and styles, or with video games. You know, stuff that cares about legibility in the home.
Soundbars are a good option, but spend some time reading reviews as there is a huge gap between the cheaper ones and good quality that will actually make a difference.
My brother has two of the Apple speakers in stereo mode and they sound pretty good imo.
I have an eye-wateringly expensive 7.1 surround system in the living room, and a pair of full size HomePods either side of the TV in my studio. I prefer the audio from the HomePods.
I listen to the majority of video content on my PC through good stereo headphones, and the quality of every source is good. Everything sounds fine except for some movies and some TV shows specifically, and those are atrocious in clarity.
Regarding internal speakers: I have listened to several cheap-to-midrange TVs on internal speakers, and yes, on some models the sound was bad. But it doesn't matter, because the most mangled frequencies are the highs and lows, not the voice range. On a TV with meh internal speakers I can clearly understand, without any distortion, the voices in normal TV programming, sports, old TV shows and old movies. The only offenders, again, are some of the new content.
So no, it's not the internal speakers that are at fault, at all.
> Do you spend the effort of specifically selecting stereo tracks (or adjusting how it gets downmixed)?
Umm, isn't that literally the job description of a sound engineer, who on a big production probably makes more in a year than I will in my whole lifetime?
Is spending a few hours, one time, adjusting the levels on a track that will likely run for millions of hours across the world such a big ask? I think not, because not every modern movie is illegible; some producers clearly spend a bit of effort doing exactly what you describe. Some just don't care.
> Umm, isn't that literally the job description of a sound engineer, who on a big production probably makes more in a year than I will in my whole lifetime?
Well, if your setup is stereo then either selecting a stereo track is your job, or your job is to adjust the downmix that is done by your computer because you didn't select the stereo track.
I agree that providing a good stereo mix is the sound engineer's job, but nothing beyond that.
> I agree that providing a good stereo mix is the sound engineer's job, but nothing beyond that.
That's the whole point of this entire thread: no one is asking for anything more or out of the ordinary. Stereo tracks sometimes have unreasonably bad quality. Nolan even admitted he does this on purpose.
Do you realize that phones, tablets, laptops and most PCs don't have the option to "just add speakers"? You are technically correct: yes, a full Dolby Duper Atmo 9.2.4.8.100500 system is better. But people without one are not using their setups incorrectly; they have valid setups and valid use cases, and they don't get the basic level of quality which IS possible, and WAS possible just a few years ago, with proper channel mixing.
It is entirely the fault of people buying shitty plastic shovelware PC laptops that they ended up with laptops with dogshit speakers. You can buy laptops with good-sounding speakers.
If you are at an arm's length from a proper amplifier/speaker setup, why are you using a tiny screen to watch movies? (that was a rhetorical question)
Phone/tablet/laptop etc. in my top comment was not a technological limitation, like "oh no, we don't have a port or protocol to connect to speakers, so we can't use them". It was a logistical limitation: being physically somewhere without speakers or the possibility of even buying them. Traveling, renting, having a big family and only one set of speakers, and so on. Situations where you can't just pluck a Dolby setup out of thin air but do still watch movies.
Here is a datapoint: worldwide, around 1-2 *billion* headphones are sold every single year. I would bet that at least a double-digit percentage of those have been used to watch a movie at least once. Proposing that all those people in all those situations buy themselves a surround speaker setup just to understand the voice track in movies is an inane take.
Dude, you are completely daft here, bringing some imaginary stuff like "morality" and "equity" into a technical discussion. The fact is that sound producers can easily fix stereo tracks to be legible and they actually did it for decades, before the recent hype came. And you are white knighting billionaires working for megacorps, for no discernible reason. What would happen to you personally if Nolans of the world would mix a better stereo track (which they already do anyway)? Your ego will be hurt? Or what? No one is "taking" precious Dolby Atmo from you. Better stereo tracks can exist in this world at the same time as theatrical surround tracks, surprise surprise.
I can't find the article now, but supposedly it's because "new" (within the last 10ish years) productions are created for multiple devices and audio engineers target the lowest common denominator, which is smartphones.
If you're on a PC, there are numerous websites with various VLC "movie" settings to combat this issue. I've tried several with mixed results; I always end up reverting to the defaults at some point, because the settings work for some movies but not others, and it's horribly annoying to constantly tweak VLC's advanced settings (too many clicks IMO). The idea is that with VLC you can change per-frequency volumes: raise the frequencies typical of voices and in turn lower the frequencies typical of action scenes, e.g. explosions.
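If anyone is curious what such a voice boost actually is under the hood: a peaking EQ centred somewhere in the speech band (very roughly 1-4 kHz). Here's a sketch of the standard RBJ audio-EQ-cookbook peaking filter in Python with scipy; the centre frequency, gain and Q are illustrative values I picked, not VLC's presets:

    import numpy as np
    from scipy.signal import lfilter

    def peaking_eq(x, fs, f0=2500.0, gain_db=6.0, q=1.0):
        # RBJ cookbook peaking filter: boosts gain_db around f0 Hz.
        a_gain = 10 ** (gain_db / 40.0)
        w0 = 2 * np.pi * f0 / fs
        alpha = np.sin(w0) / (2 * q)
        b = np.array([1 + alpha * a_gain, -2 * np.cos(w0), 1 - alpha * a_gain])
        a = np.array([1 + alpha / a_gain, -2 * np.cos(w0), 1 - alpha / a_gain])
        return lfilter(b / a[0], a / a[0], x)

The reason this is fiddly in practice is exactly what you found: the right f0 and gain depend on the particular movie's mix, so a fixed preset can't win everywhere.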
I don't understand how your inability to understand dialog negates a producer giving appropriate instructions on visual settings? The post was good advice, and your train of thought feels like some sort of fallacy.
To be a bit more helpful, what are you using to listen to the show? There are dozens of ways to hear the audio. Are you listening through the TV speakers, a properly set up center channel speaker, a Kindle Fire tablet, or something else? Providing those details would assist us in actually helping you.
Conspiracy theory ... TVs have bad sound so you're compelled to buy a soundbar for $$$.
I've certainly had the experience of hard to hear dialog but I think (could be wrong) that that's only really happened with listening through the TV speakers. Since I live in an apartment, 99% of the time I'm listening with headphones and haven't noticed that issue in a long time.
I don't think the bad sound is necessarily deliberate; it's more a casualty of TVs becoming so very thin that there's not enough room for a decent cavity inside.
I had a 720p Sony Bravia from around 2006 and it was chunky. It had nice large drivers and a big resonance chamber, it absolutely did not need a sound bar and was very capable of filling a room on its own.
Soundbars are usually a marginal improvement and the main selling point is the compact size, IMO. I would only get a soundbar if I was really constrained on space.
Engineering tradeoffs: when you make speakers smaller, you have to sacrifice something else. This applies to both soundbars and built-in speakers.
Like all conspiracy theories, this one seems rooted in a severe lack of education. How exactly do you expect a thin tiny strip to produce any sort of good sound? It's basic physics. It's impossible for a modern TV to produce good sound in any capacity.
My Mac is pretty thin, and it provides pretty good sound. My older LCD TVs (before my current one) all provided good sound. So no, I don't have a severe lack of education. All I need is my actual experience to know that it's not impossible for a thin TV to have reasonable sound.
It's easier to believe in conspiracy than do a few minutes of research to discover that you need a good quality sound system to have good quality sound.
I had the same thing with Severance (last show I watched, I don't watch many) but I'm deaf, so thought it was just that. Seemed like every other line of dialogue was actually a whisper, though. Is this how things are now?
Our TV's sound is garbage and I was forced to buy a soundbar, so I got a Sonos one. Night mode seems to crush the soundtrack down: loud bits get quieter and quiet bits get louder.
Voice boost makes the dialogue louder.
Everyone in the house loves these two settings and can tell when they are off.
I suspect downmixes to stereo and poor builtin speakers might be heavily contributing to the issue you describe. Anecdotally, I have not encountered this issue after I added a center channel.
Nor do I have any issues with loudness being inconsistent between scenes. I suspect that might be another thing introduced by downmixing: all the surround channels are "squished" into stereo, making the result louder than it would otherwise have been.
One big cause of this is playing a multi-channel audio track when all you have is stereo speakers. All of the dialog that should be going to the center speaker just fades away; when you actually have a center, the dialog usually isn't anywhere near as quiet.
Depending on what you're using, there could be settings like stereo downmix or voice boost that can help. Or see if the media you're watching lets you pick a stereo track instead of the 5.1.
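To put numbers on it: a common ITU-style 5.1-to-stereo downmix folds the centre channel into left and right at about -3 dB (a 0.707 coefficient), so dialog that had its own dedicated speaker now competes with everything else at a reduced level. Here's a sketch in Python; the 0.707 coefficients are the usual defaults, and center_gain is a hypothetical knob playing the role of a "voice boost" setting:

    import numpy as np

    def downmix_5_1(fl, fr, fc, lfe, sl, sr, center_gain=1.0):
        # Fold centre and surrounds into L/R at -3 dB; LFE is usually dropped.
        c = 0.707 * center_gain * fc
        left = fl + c + 0.707 * sl
        right = fr + c + 0.707 * sr
        # Scale down if the sum would clip.
        peak = max(np.abs(left).max(), np.abs(right).max(), 1.0)
        return left / peak, right / peak

Raising center_gain above 1.0 is essentially all a "voice boost" downmix setting does.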
We've been mixing vocals and voices in stereo since forever and that was never a problem for clarity. The whole point of the center channel is to avoid the phantom center channel collapse that happens on stereo content when listening off center. It is purely an imaging problem, not a clarity one.
Also, in consumer setups with a center channel speaker it is rather common for it to have a botched speaker design and be of a much poorer quality than the front speakers and actually have a deleterious effect to dialog clarity.
It's a clarity problem too. Stereo speakers always have comb filtering because of the different path lengths from each ear to the two speakers. It's mitigated somewhat by room reflections (ideally diffuse reflections), but the only way to avoid it entirely is by using headphones.
Try listening to some mono pink noise on a stereo loudspeaker setup, first hard-panned to a single speaker, and then centered. The effect is especially obvious when you move your head.
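The arithmetic behind that is simple: a path-length difference d between the two speaker-to-ear paths delays one arrival by d/c, and the first cancellation notch lands at half the inverse of that delay. A quick back-of-the-envelope in Python (the 10 cm difference is just an example for a head that's off-centre):

    # First comb-filter notch for a given path-length difference
    c = 343.0      # speed of sound in air, m/s
    d = 0.10       # example path-length difference, metres
    delay = d / c  # ~0.29 ms
    print(1 / (2 * delay))  # ~1715 Hz, right in the speech band
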
Welp, we had no issues in ye olde days, when DVD releases were expected to be played on crappy TVs. Now everything is a theatre mix with 7.1 or Atmos and whatnot.
Yes, we know how to mix for stereo. But do we still pay attention when we do?
This is a gross simplification. It can be part of the explanation, but not the whole one, not even the most important.
It mostly boils down to filmmaker choices:
1. Conscious and purposeful, like choosing "immersion" over "clarity". Yeah, nothing says "immersion" like being forced to put subtitles on...
2. Not purposeful. Don't attribute to malice what can be explained by incompetence... Bad downmixing (from Atmos to lesser formats like 2.0), and even when they do it, they are not using the technology ordinary consumers have. The most glaring example is the way text/titles/credits on screen have been shrinking to the point of being hard to read. Heck, I often have difficulties with text size on my Full HD TV, just because the editing was done on some fancy 4k+ display standing 1m from the editor. Imagine how garbage it looks at 720 or ordinary 480!
For a recent example, check the size (and the font) of the movie title in the Alien Isolation movie and compare it to movies made in the 80s-90s. It's ridiculous!
There are many good YouTube videos that explain the problem in more detail.
Using some cheap studio monitors for my center channel helped quite a bit. It ain't perfect, and I still use CC for many things, but the flat midrange response does help with speech.
There's a thing called 'hidden hearing loss', in which the ability to parse midband sounds specifically in complex/noisy situations degrades. It's missed by standard tests, which only check the ability to hear a given frequency in otherwise silent conditions.
This is probably the sound settings on your TV. Turn off Clear Voice or the equivalent; disable Smart Surround, which ignores 2.0 streams and badly downmixes 5.1 streams; and finally, check the speaker config on the TV. It's often set to Showroom by default, which kills voice but boosts music and sfx, and there should also be options for wall proximity, which do matter and will make the sound a muddy mess if set incorrectly.
For an interesting example that goes in the opposite direction, I've noticed that big YouTube creators like MrBeast optimize their audio to sound as clear as possible on smartphone speakers, but if you listen to their content with headphones it's rather atrocious.
Americans also seem to believe that their accent, which generally sounds awful to other speakers, is somehow natural and easy to understand for everyone.
I turn on closed captions for most American films, but I find that I rarely need them for British ones.
What a weird comment. I think that most Americans, like most people of any nationality, couldn't give two shits whether people elsewhere find their accent hard to understand, or "awful".