by Marat_Dukhan on 11/2/20, 4:20 AM with 266 comments
by dharma1 on 11/2/20, 5:42 AM
It's only 720p and around 15 fps, but you get real shallow depth of field, very little sensor noise, and working autofocus. Well worth trying if you have a Sony camera from the last few years.
Sensor size and good optics still win. Having said that, the effort and detail that went into this feature is very impressive; I enjoyed the blog post. Also, WebAssembly SIMD looks super cool; looking forward to a new class of webapps using wasm.
by tsycho on 11/2/20, 3:27 PM
1/ Your internet connection, especially upload bandwidth and latency matter a lot.
2/ Zoom's desktop app performs very well, but its web version is atrocious. Not just because of the dark patterns they use to force you to install the desktop app: its performance is terrible compared to the desktop version, and worse than almost everything else. Unfortunately, I don't trust them and refuse to use their desktop app on anything but my iPad.
3/ Meet used to be as bad as Zoom on the web 6 months ago, but has improved a lot and is slowly approaching Zoom desktop in performance. I have noticed that Meet performs much better on my GSuite calls at work than on my personal account. This might be explained by #1 above, i.e. my family has worse internet connections than my coworkers, but I am not sure whether all improvements have been rolled out to personal accounts.
by jtokoph on 11/2/20, 5:01 AM
by neilpanchal on 11/2/20, 6:32 AM
Google UX/UI team: Please fucking make the mute/unmute button visible at all times.
by sillysaurusx on 11/2/20, 4:58 AM
There's a tendency to think of ML as "not programming," or something other than just plain programming. But as the tooling matures, that'll go away.
(Lisp used to be considered "AI programming," till it became useful in many other contexts.)
by kerng on 11/2/20, 7:13 AM
Anyone who uses the blur realizes that it's far behind other offerings in quality, and Google Meet's UI is very bad too.
Zoom, Teams, even WebEx are superior in quality and usability.
by obilgic on 11/2/20, 6:45 AM
by loosescrews on 11/2/20, 6:56 AM
by mike_kamau on 11/2/20, 8:27 AM
I thought the whole point of having a video call is to see who you are talking to, and their environment, which further enhances the effectiveness of the conversation.
If you are in your kitchen, or under a tree, I definitely would like to see that, because that environment will have an effect on how we communicate.
by hrktb on 11/2/20, 7:56 AM
> In the current version, model inference is executed on the client’s CPU for low power consumption and widest device coverage.
Naively I would think model inference done server side would mean lower CPU load (from the client's point of view) and the widest device coverage (the client does nothing extra). What am I missing?
by nostromo on 11/2/20, 7:01 AM
It sucks and it’s distracting.
Your hair and hands pop in and out of blur. Sometimes part of your face will blur.
I don’t care if your workspace is messy or your kid walks in the room. I do care that we’re all being distracted by your weirdly blurred hair and hands.
by hota_mazi on 11/2/20, 6:53 AM
Can we get a mute button visible at all times before 2024?
by arketyp on 11/2/20, 7:11 AM
by jcims on 11/2/20, 8:25 AM
by chdjakdkgb on 11/2/20, 5:09 AM
by vinhboy on 11/2/20, 5:36 AM
I also think it makes the subject look better for some reason.
by amq on 11/2/20, 7:51 AM
by adioe3 on 11/2/20, 10:10 AM
by sercand on 11/2/20, 10:01 AM
by Nimitz14 on 11/2/20, 7:03 PM
by mft_ on 11/2/20, 7:58 AM
https://1.bp.blogspot.com/-viEA4OY0sxA/X5s7IBwoXOI/AAAAAAAAG...
As in, the blurred background looks totally different (light:dark, shapes, etc.) to the unblurred background.
(I get that they’d need to do something funky to show blurred and unblurred backgrounds with the same foreground video, and faking it is likely easier than doing it programmatically, but this is just odd/sloppy.)
by rkagerer on 11/2/20, 8:36 AM
Although there's a lot of blurring on the shoulder of the guy at the beach: https://i.imgur.com/D5ueGUh.png
by wdroz on 11/2/20, 7:44 AM
There are some works on OBS to get the green screen AI working, so I hope we will get that on GNU/Linux one day.
by kevingadd on 11/2/20, 11:32 AM
by lern_too_spel on 11/2/20, 4:16 PM
by Liskni_si on 11/2/20, 5:45 PM
When the video is encoded, the codec does motion estimation (among other things) to reduce the bandwidth required. So why don't we use the motion vectors from the video codec to modify the foreground/background mask in real time? Obviously this is going to create weird artifacts pretty soon, but it might just be good enough for a few frames before the ML model produces another accurate mask.
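The idea above can be sketched: between full ML inferences, shift blocks of the last segmentation mask along the codec's per-block motion vectors. This is a minimal illustrative sketch, not anything Meet or any codec actually ships; the function name, block size, and `(dy, dx)` vector layout are all assumptions for illustration.

```python
import numpy as np

def propagate_mask(mask, motion_vectors, block=16):
    """Cheaply warp a foreground/background mask forward one frame.

    mask: 2D float array in [0, 1] (the last ML-produced mask).
    motion_vectors: (H//block, W//block, 2) integer array of (dy, dx)
        per block, as a codec's motion estimation might provide.
    Returns a new mask with each block moved by its motion vector.
    """
    h, w = mask.shape
    out = np.zeros_like(mask)
    for by in range(h // block):
        for bx in range(w // block):
            dy, dx = int(motion_vectors[by, bx, 0]), int(motion_vectors[by, bx, 1])
            y0, x0 = by * block, bx * block
            # Destination corner, clamped so the block stays in frame.
            ty = max(0, min(h - block, y0 + dy))
            tx = max(0, min(w - block, x0 + dx))
            out[ty:ty + block, tx:tx + block] = mask[y0:y0 + block, x0:x0 + block]
    return out
```

As the comment notes, artifacts accumulate quickly (overlapping blocks overwrite each other, uncovered areas stay empty), so this only makes sense as a stopgap for a few frames between accurate ML masks.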
by supernova87a on 11/2/20, 7:02 PM
I have observed in the last couple months that whenever I create a Google Calendar invite with others, Google has started inserting a Google Meet conference as the location to meet.
It was one thing to ask/offer this as an option if you'd like to use it, but now Google is positioning it as if you had chosen that. So if you left it empty, because you usually use some other understood method with your friends/colleagues, now your participants are confused and think you wanted to use Google Meet.
I think that's going too far to get people to adopt your product.
by daxfohl on 11/2/20, 3:32 PM
by madeofpalk on 11/2/20, 10:30 AM
by alblue on 11/2/20, 7:15 AM
by mdoms on 11/2/20, 5:48 AM
by The_rationalist on 11/2/20, 12:24 PM
by acdha on 11/2/20, 3:11 PM