Hi All,
I really, really hoped earlier in the year that the problems I was experiencing with my 8005C (2015) were a thing of the past. Unfortunately, for the past couple of days the audio on the Amazon app on the TV has been badly out of sync. All of the other Amazon apps, e.g. on my Sony Blu-ray player and PS4, are absolutely fine, and it's not affecting Netflix etc. So it is definitely the TV.
Is anyone else having the same problem and is there a cure for this irritating issue? It's the only app where I can access 4K playback so I'm keen to get it resolved.
Well, updated to Marshmallow 6.0 and the issues are still there. Massive lip-sync error shows up on more or less all episodes of The Grand Tour.
Some research suggests that it is Amazon's DD+ stream that is the issue, as there are similar problems on both Roku and Nvidia Shield:
Nvidia Shield issues - https://forums.geforce.com/default/topic/990329/amazon-video-audio-not-in-sync-after-experience-upgr...
Roku issues - https://forums.roku.com/viewtopic.php?t=97972
@Anonymous is it possible this could be fed back to Sony?
I've had the same problem. It occurs with several Amazon prime programs but only when using Bravia app, not on other platforms or with other apps on TV.
I don't want to belabour the point I have tried to make in other posts, but lip-sync error to some degree is present in essentially "all" broadcasts. According to research at Stanford University in the US over 20 years ago, most people do not consciously notice 42 ms of lip-sync error.
For some of us the threshold at which we consciously notice lip-sync error might be 80 or even 100 ms. We don't see it (consciously) until something pushes it beyond our own personal threshold. Naturally we blame whatever does that: whatever app, programme or piece of equipment introduced the last increment of lip-sync error that went beyond what we could ignore. I've seen it called our "threshold of recognition", which varies tremendously from one person to another.
But the device that gets the blame "may have" added far less lip-sync error than was already in the signal, put there previously by others. That is to say, a device or app that adds the last 20 ms and brings the problem into your conscious awareness gets the blame, but if the signal had not already carried 40 ms of error caused by others you might never have noticed it. And others with a threshold above 60 ms (in this example) may still not notice it.
This problem is exacerbated by the fact that our threshold of recognition drops greatly once we notice the problem. The theory is that we "were" subconsciously looking away from the characters' lip movements to avoid the impossibility lip-sync error presents: it can't happen in the "real world", because our brains can't process sound arriving before the event that creates it.
But once it exceeds our ability to look away and ignore it and enters our conscious awareness, we look at the lips - we look "for" it - and can see tiny lip-sync errors we never consciously noticed before. Even as little as 1 ms for some of us!
That theory - that we look away from the faces to avoid the impossible - helps explain the discovery at Stanford that even lip-sync error we don't consciously notice causes negative feelings about the characters, similar to those we have about people who don't make eye contact with us. But apparently it is the viewer who is not making eye contact with the characters when lip-sync is off.
If there is a silver lining to this aggravating problem, it is that once we notice lip-sync error we can at least take steps to correct it and eliminate the negative impact it has on our perception of the characters, as documented in the Stanford report.
I've suggested this before, but if you are interested in the Stanford research report, Google "Reeves and Voelker Audio Asynchrony" and I am sure you will find it, as it is often quoted.
Their 40 ms threshold that most people don't notice has influenced the video standards committees, but what they failed to address 23 years ago - and what is still true today - is that there is nothing in the video and audio signals to define when they were ever in sync. The industry has treated synchronizing audio and video as an open-loop control problem: at each stage from content creation through broadcasting, operators are expected to measure the video delay each digital video effect (DVE) causes and add an equal audio delay to offset it.
In a perfect world that could work: if every video delay were measured perfectly and every matching audio delay added, synchronization would be maintained. But it only takes "one" stage to drop the ball, there are hundreds of DVEs along the path, and any error introduced anywhere cannot be automatically detected downstream, so errors are cumulative.
This is why you will see lip-sync error on one programme and perhaps not the following one.
And if the programme you are watching has a large error, it may be the real culprit causing you to notice lip-sync error, when some device or app with a smaller contribution merely added enough to push it beyond your threshold.
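To make the open-loop point concrete, here is a minimal sketch in Python. The stage names and delay figures are made up purely for illustration - none of them are measured from real equipment - but they show how small per-stage compensation errors add up into an offset no downstream stage can detect:

```python
# A rough simulation of the "open loop" compensation described above.
# Hypothetical stages and delay values, purely for illustration.

# (stage name, video delay added in ms, compensating audio delay actually applied in ms)
stages = [
    ("production switcher / DVE", 33, 33),  # compensated correctly
    ("format converter",          20, 12),  # 8 ms under-compensated
    ("broadcast graphics insert", 40, 40),  # compensated correctly
    ("playout server",            25, 10),  # 15 ms under-compensated
    ("streaming encoder",         50, 33),  # 17 ms under-compensated
]

error_ms = 0  # positive = audio arrives early relative to video
for name, video_delay, audio_delay in stages:
    error_ms += video_delay - audio_delay
    print(f"after {name:27s}: cumulative lip-sync error = {error_ms} ms")

# Nothing in the audio or video signal records when they were originally
# in sync, so no stage further down the chain can detect or undo the
# 40 ms that has quietly accumulated here.
```

Each stage in this invented example believes it has done its job, yet the viewer ends up with 40 ms of error before any TV, app or soundbar adds its own contribution.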
I work in this area, so forgive me for getting on my soapbox, but as I try to contribute on these forums I see so many devices getting blamed for causing the problem when, in fact, if there were less lip-sync error already in the signals their slight addition to the error probably would not be noticed by many.
There is some light at the end of the tunnel, as SMPTE has approved a standard that would add "signatures" of each video frame and its accompanying audio to metadata passed along with the signals. If this catches on, it should eventually mean there are signals that actually define when the content producer felt his creation was in sync, allowing lip-sync to be corrected at any point in the broadcast chain no matter where the error originated.
I hope it will be accepted and all the equipment required to utilize it will be purchased, but my guess is that we're talking 10 to 15 years or more, if it happens at all. The problem is "chicken or the egg": currently no programmes have the metadata signatures embedded, so expensive equipment to read them and realign audio with video would accomplish nothing. And content producers see no advantage in purchasing equipment to add signatures for which no broadcaster has equipment to utilize. Even worse - to my knowledge - no broadcast equipment manufacturer is producing and promoting products for broadcasters to use.
This same situation occurred almost 20 years ago. Tektronix had developed their AVDC100, which would watermark audio and video signals and use those watermarks to realign lip-sync downstream. It was a fantastic product in my opinion, but the industry did not accept it and Tektronix finally gave up and exited the market. They cost about $8000 for each end, which is not much to a TV station, so that wasn't the problem. It was the same problem the new signature scheme has: it's worthless until "everybody" buys into the scheme and invests in the equipment.
But at least it is "possible" that someday the signals will be there to correct lip-sync automatically.
I think the hype around the HDMI "auto lip-sync correction" feature announced with HDMI 1.3 has caused a lot of confusion, as most consumers seem to think it means the TV can somehow automatically correct lip-sync. Of course it can't - there are presently no signals in the video or audio upon which to base such automatic correction - but it makes consumers think it is possible, and if it were possible they naturally blame the manufacturers of equipment they think could also maintain lip-sync automatically.
I don't envy manufacturers and broadcasters like Sky who constantly catch the blame for something they can't possibly correct. All they can hope to do is reduce their contribution to the problem and hope that's enough to keep it below most viewers' thresholds.
Incidentally, all that HDMI's "auto lip-sync correction" feature - which has caused so much confusion - can do is declare, via an EDID session with each source (like plug-and-play for computers), two latency values for audio and video: one set for interlaced and one for progressive video. They will be about 20 ms apart.
They are fixed values for the video delay of the TV and have nothing to do with any lip-sync error in the arriving signals. When this was announced (HDMI 1.3), most AV receivers already offered fixed audio delay settings to compensate for a TV's video delay, which did the same thing, but consumers assumed it could dynamically correct for changing lip-sync, which still isn't possible.
It's a worthless feature, which can be confirmed by the number of TV manufacturers who actually support it: basically NONE. Most major TV manufacturers don't put those latency values in the EDID data. And those who do (like some Sony and Toshiba TVs I've seen) make it worthless by adding audio delay internally that the user can't turn off. I had a Toshiba that sent the data but applied the delay (100 or 116 ms) to its own audio as well as its S/PDIF output, so the user has no option to turn it off and let another device, like an AV receiver or an external lip-sync correction product, provide the delay. That deprives the user of the TV's video delay, which in conjunction with an audio delay allows correcting for audio that arrives delayed or is delayed by external speakers like Sonos, etc.
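For anyone curious what those two EDID latency values look like, here is a small sketch. The byte encoding shown (0 = unknown, 255 = not supported, otherwise (value - 1) x 2 ms) is how I understand the HDMI 1.3 vendor-specific data block to work, and the example values are invented, not read from any real TV:

```python
def decode_edid_latency(byte_value: int):
    """Decode one HDMI VSDB latency byte into milliseconds.

    Returns None when the TV declares the latency as unknown (0)
    or the output as unsupported (255).
    """
    if byte_value in (0, 255):
        return None
    return (byte_value - 1) * 2  # usable range is 0..500 ms in 2 ms steps

# Invented example: a TV declaring ~100 ms video latency for progressive
# video and ~120 ms for interlaced, with no added audio latency.
progressive = {"video_ms": decode_edid_latency(51), "audio_ms": decode_edid_latency(1)}
interlaced  = {"video_ms": decode_edid_latency(61), "audio_ms": decode_edid_latency(1)}

print("progressive:", progressive)   # {'video_ms': 100, 'audio_ms': 0}
print("interlaced: ", interlaced)    # {'video_ms': 120, 'audio_ms': 0}
```

These are static numbers reported once per hot-plug; nothing about them tracks the error already present in the incoming programme, which is exactly the limitation described above.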
@NickJoh The issue is with the app and the DD+ stream specifically, not the hardware. Before Amazon had 5.1 there were no lip-sync issues. The blame lies between Sony and Amazon.
I've just signed up to Amazon Prime (my free month), so I'll see if I get the same issue.
Even if I don't, there seem to be enough people confirming the issue anyway.
I understand what you are saying. That is what caused you to notice it.
The point I was trying to make - but apparently failed to - is that it is very possible the incremental lip-sync error that caused you to notice the problem (pushed it beyond your threshold of recognition) is far less than the error already in the signal that you could previously ignore.
And if that existing error had not already been there, you would not be noticing the amount it increased by.
I hope that makes sense.
If they are adding 20 ms they may think that's OK, since it's under the 40 ms the industry accepts, but if 40 ms is already there and the total reaches 60 ms, it's not OK for you.
Take Sonos, for example. They apparently don't think the 30 ms by which they delay audio is a big deal. But it is when it pushes the error beyond a viewer's ability to subconsciously ignore it.
Issue added to the bugs and issues thread:
https://community.sony.co.uk/t5/android-tv/android-tv-bugs-and-issues/m-p/2300533#M16979
Just to confirm that I've got the same issue. With The Grand Tour indeed. I just subscribed to Amazon Prime two days ago because I bought some stuff, and the first experience with The Grand Tour was great: straight to UHD/HDR, no lip-sync issues, just excellent.
Then a few hours later, it started out playing at 480p (or even lower), and once the image stabilised to full UHD I noticed the lip-sync issue (and I am not talking about 50 ms; it felt like 200-300 ms to me, maybe even more). I am checking it now (I am on Carnival holiday, if you wonder why I have so much free time! :smileythumbsup:) and the lip-sync issue seems to have disappeared. The problem is that I am not sure the content is UHD (I have a 100 Mbps fiber connection, 70-75 Mbps effective, which keeps going down...). It seems a bit jaggy to me for UHD, more like 1080p. I am a few cm from the television, so I can tell.
Yup. A few minutes later it still looks sub-UHD (and speedtest.net reports 75 Mbps). So I don't know. Maybe it's an issue with the Amazon servers themselves?
Anyway, on the bright side those three old farts are a lot of fun!! I love that show! Really funny.
Quinnicus wrote: Issue added to the bugs and issues thread:
https://community.sony.co.uk/t5/android-tv/android-tv-bugs-and-issues/m-p/2300533#M16979
I am also having the lip sync and stuttering issues on The Grand Tour, for example. Here are my findings:
1080p video + PCM audio: no problem
1080p video + Dolby Digital: no problem
4K video + PCM audio: no problem
4K video + Dolby Digital: stuttering and lip sync issues
It is the combination of 4K video and Dolby Digital Passthrough that is causing trouble for me.
Just one more piece of information... I force the Sony to output PCM by applying the respective setting under Settings > Sound > Digital audio out
This means I am changing the behaviour of the system and not Amazon, in this case decoding Dolby Digital instead of passing it through.
Added this problem also to my bugtracker.