by loisaidasam on 3/22/18, 3:58 PM with 58 comments
by astalwick on 3/22/18, 4:10 PM
He goes on to defend his work at Google, arguing that they're similar on the surface, but Facebook is truly dangerous where Google is not.
I'm not sure I agree with that. Google is quite a bit more distributed across products and platforms, whereas Facebook has a simpler loop centered around the newsfeed. That said, Google can track a user's behaviour across nearly every website on the internet.
Facebook can run these "reinforcement learning on a global scale" experiments through its newsfeed. Google, it seems to me, can run them across the web as a whole.
by m_ke on 3/22/18, 4:17 PM
They might demonetize some radical channels, but they're still making money on users who get to YouTube through those channels.
https://mobile.nytimes.com/2018/03/10/opinion/sunday/youtube...
https://mobile.nytimes.com/2017/11/04/business/media/youtube...
https://news.vice.com/en_us/article/d3w9ja/how-youtubes-algo...
by fortythirteen on 3/22/18, 4:28 PM
His analysis is correct, but this:
> If you work in AI, please don't help them. Don't play their game. Don't participate in their research ecosystem. Please show some conscience
is a clear-cut case of the pot calling the kettle black.
by t3chn0SchO0lbus on 3/22/18, 4:15 PM
by minikites on 3/22/18, 4:23 PM
>Essentially nothing about the threat described applies to Google. Nor Amazon. Nor Apple.
>It could apply to Twitter, in principle, but in practice it almost entirely doesn't.
I don't believe this for one second. Google does the exact same "algorithmic curation" with its search results. Different people get different results based on internal profiles that Google has built: https://en.wikipedia.org/wiki/Filter_bubble. Over time that shift in search result content acts upon people in exactly the same way as the Facebook example he describes.
by throwaway84742 on 3/22/18, 4:38 PM
by spdy on 3/22/18, 4:32 PM
But he is right that we are at a crossroads, and the path that will be taken seems clear to me. Manipulating/controlling populations is where the money will go, and people will build those tools because those jobs pay well; it's that simple.
In the next elections we will see deepfake videos of candidates instantly responding to problems, or defamatory videos will be put out where you can't judge on the spot whether they're real or not. The trend of echo chambers will continue as we see it right now.
The only remedy I can see is education. Looking back at recent history, we went from only a certain number of people being able to read/write or having access to books, to everyone having to learn and having access to libraries. This is the next level.
And on the other side we have to fight for our right to privacy and kill some business models along the way. Right now this is, for me, on the same scale as atomic/chemical weapons.
by jphalimi on 3/22/18, 4:55 PM
The real question I have reading this thread is: is this guy being very naive, or just dishonest?
by mkrum on 3/22/18, 4:18 PM
by dblotsky on 3/22/18, 4:46 PM
by TACIXAT on 3/22/18, 5:18 PM
If you have all this data, make my life more fulfilling. I get more community out of IRC than I do on any of the major social media sites. They are really just media sites.
by antisocial on 3/22/18, 4:16 PM
by panarky on 3/22/18, 5:13 PM
> If you work in AI, please don't help them. Don't play their game. Don't participate in their research ecosystem. Please show some conscience
When Facebook does something awful, their defenders rush to say "what about Google, they're even worse!"
There's a lot of false equivalence in HN discussions, but these two are not in the same galaxy when it comes to abusing the privacy of their users.
by kough on 3/22/18, 4:20 PM
by artursapek on 3/22/18, 4:08 PM
by 0majors on 3/22/18, 4:30 PM
by evc123 on 3/22/18, 4:26 PM