by dannyrosen on 6/7/18, 7:04 PM with 405 comments
by EpicEng on 6/7/18, 7:51 PM
Early emails between Google execs framed this project only in terms of revenue and potential PR backlash. As far as we're aware, there was no discussion about the morality of the matter (I'm not taking any moral stance here, just to be clear). Once this became an internal and external PR issue, Google held a series of all-hands meetings and claimed that this was a "small project" and that the AI would not be used to kill people. While technically true, those same internal emails show that Google expected this to become a much larger project over time, eventually bringing in about $250M/year[1]. So even then they were being a bit disingenuous by focusing only on the current scope of the deal.
And here we are now with a release from the CEO talking about morality and "principles" well after the fact. I doubt many people buy it anyway, but I'm not buying the "these are our morals" bit.
https://www.bizjournals.com/sanjose/news/2018/06/01/report-g...
by ISL on 6/7/18, 7:17 PM
The choice not to accept business is a hard one. I've recently turned away from precision-metrology work where I couldn't be certain of its intent; in every other way, it was precisely the sort of work I'd like to do, and the compensation was likely to be good.
These stated principles are very much in line with those I've chosen: a technology's primary purpose and intent must be something other than offense or surveillance.
We should have a lot of respect for a company's clear declaration of work which it will not do.
by finnthehuman on 6/7/18, 7:28 PM
They DO realize that the YouTube recommendation algorithm is a political bias reinforcement machine, right?
Like, I think it’s fun to talk trash on Google because they’re in an incredibly powerful position, but this one isn’t even banter.
by cromwellian on 6/8/18, 12:42 AM
The machine learning "bias", at least the low-hanging fruit, is learning things like "doctor == male" or "black face = gorilla". How fair is it that facial recognition or photo algorithms are trained on datasets of white faces, or not tested for adversarial images that harm black people?
Or what if your daughter uses a translation tool on careers like scientist, engineer, doctor, et al., and all of the pronouns come out male?
The point is that if you train AI on datasets from the real world, you can end up reinforcing existing discrimination local to your own culture. I don't know why trying to alleviate this problem triggers some people.
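To make the mechanism concrete, here is a toy sketch (entirely made-up corpus and counts, not any real pipeline): a maximum-likelihood model trained on skewed text simply reproduces the skew when it has to pick a pronoun.

    from collections import Counter, defaultdict

    # Hypothetical corpus reflecting a skewed real-world text distribution.
    corpus = [
        ("doctor", "he"), ("doctor", "he"), ("doctor", "he"), ("doctor", "she"),
        ("nurse", "she"), ("nurse", "she"), ("nurse", "she"), ("nurse", "he"),
        ("engineer", "he"), ("engineer", "he"), ("engineer", "she"),
    ]

    # "Training": count which pronoun co-occurs with each occupation.
    counts = defaultdict(Counter)
    for occupation, pronoun in corpus:
        counts[occupation][pronoun] += 1

    # "Inference": pick the majority pronoun. A genderless source phrase
    # (say, from a language with no gendered pronouns) gets the majority
    # gender from the training data: doctor -> he, nurse -> she,
    # engineer -> he.
    for occupation in ("doctor", "nurse", "engineer"):
        pronoun, _ = counts[occupation].most_common(1)[0]
        print(f"{occupation} -> {pronoun}")

Real translation models are far more complex, but the failure mode is the same: the model faithfully learns whatever distribution it was fed.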
by bobcostas55 on 6/7/18, 7:42 PM
I recommend _The impossibility of “fairness”: a generalized impossibility result for decisions_[0] and _Inherent Trade-Offs in the Fair Determination of Risk Scores_[1]
[0] https://arxiv.org/pdf/1707.01195.pdf [1] https://arxiv.org/pdf/1609.05807v1.pdf
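The core tension those papers formalize shows up even in a tiny worked example (made-up cohorts, nothing empirical): when two groups have different base rates, a risk score that is perfectly calibrated within each group still produces unequal false positive rates under a single decision threshold.

    # Hypothetical cohorts; each tuple is (score, n_positive, n_negative).
    # Within every score bin the score is calibrated:
    # n_pos / (n_pos + n_neg) == score.
    groups = {
        "A": [(0.2, 20, 80), (0.8, 80, 20)],   # base rate 0.50
        "B": [(0.2, 20, 80), (0.8, 240, 60)],  # base rate 0.65
    }

    THRESHOLD = 0.5  # flag anyone whose score is at or above this

    for name, bins in groups.items():
        positives = sum(p for _, p, _ in bins)
        negatives = sum(n for _, _, n in bins)
        true_pos = sum(p for s, p, _ in bins if s >= THRESHOLD)
        false_pos = sum(n for s, _, n in bins if s >= THRESHOLD)
        print(f"group {name}: base rate {positives / (positives + negatives):.2f}, "
              f"FPR {false_pos / negatives:.2f}, TPR {true_pos / positives:.2f}")

Both groups see a calibrated score and the same threshold, yet group B's false positive rate (~0.43) is more than double group A's (0.20). Equalizing the error rates would mean group-specific thresholds or giving up calibration, which is exactly the trade-off the impossibility results pin down.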
by locacorten on 6/7/18, 8:37 PM
> 1. Be socially beneficial.
> 2. Avoid creating or reinforcing unfair bias.
> 3. Be built and tested for safety.
> 4. Be accountable to people.
> 5. Incorporate privacy design principles.
> 6. Uphold high standards of scientific excellence.
> 7. Be made available for uses that accord with these principles.
While I like this list a lot, I don't understand why this is AI-specific, and not software-specific. Is Google using the word "AI" to mean "software"?
by Isamu on 6/7/18, 7:20 PM
This statement will have zero impact on subsequent sensational headlines or posters here claiming Google is making killbots.
by athoik on 6/8/18, 5:13 AM
"If machines produce everything we need, the outcome will depend on how things are distributed. Everyone can enjoy a life of luxurious leisure if the machine-produced wealth is shared, or most people can end up miserably poor if the machine-owners successfully lobby against wealth redistribution. So far, the trend seems to be toward the second option, with technology driving ever-increasing inequality."
-- Stephen Hawking
by skapadia on 6/7/18, 8:19 PM
by juliend2 on 6/7/18, 7:18 PM
> We want to be clear that while we are not developing AI for use in weapons, we will continue our work with governments and the military in many other areas. These include cybersecurity, training, military recruitment, veterans’ healthcare, and search and rescue.
I wonder if this is an official response to the people at Google[1] who were protesting[2] against Project Maven.
[1] https://www.nytimes.com/2018/04/04/technology/google-letter-...
[2] https://static01.nyt.com/files/2018/technology/googleletter....
by amaccuish on 6/7/18, 7:51 PM
https://gizmodo.com/students-pledge-to-refuse-job-interviews... [Students Pledge to Refuse Job Interviews at Google in Protest of Pentagon Work]
by jfv on 6/8/18, 5:15 PM
Ads are inherently going to be the opposite of Google's values, yet Google depends on them for the vast majority of their revenue. They show you some search results in line with their values, and if you can't get to the top of that "intrinsically", you buy ads or SEO. The folks that use that system to exploit the least intelligent win here, and Google takes a share of the profit.
Based on my Google search results in the recent past, Google isn't doing a good job of making sure the "best" websites (by my own value system, of course) make it to the top. I find myself having to go into second and third page results to get legitimate information. I'm seeing pages of medical quackery that "sounds good" but isn't based on science when I try to find diet or exercise advice.
As technology becomes more democratic, more people will use it. That means that the people that spend more time trying to sell you shit are going to win, because they're the ones that are willing to reverse-engineer the algorithm and push stuff up to the top. They add less value to society because they're spending all their time on marketing and promotion.
I wish I knew how to solve this problem. By imposing morals, Google "bites the hand that feeds".
by 75dvtwin on 6/8/18, 1:56 AM
by jillesvangurp on 6/7/18, 8:59 PM
I don't believe Google declining to weaponize AI, which, let's face it, is what all this posturing is about, would be helpful at all. It would just lead to somebody else doing the same, or worse. There's some advantage to being involved: you can set terms, drive opinions, influence legislation, and dictate roadmaps. The flip side is of course that with great power comes great responsibility.
I grew up in a world where 1984 was science fiction and then became science fact. I worry about ubiquitous surveillance, inescapable AI-driven lifetime camera surveillance, and worse. George Orwell was a naive fool compared to what current technology enables right now. That doesn't mean we should shy away from doing the research. Instead, make sure that those cameras are also pointed at those most likely to abuse their privileges. That's the only way to keep the system in check. The next best thing to preventing this from happening is rapidly commoditizing the technology so that we can all keep tabs on each other. So, Google: do the research and continue to open source your results.
by capitalisthakr on 6/7/18, 7:13 PM
by davesque on 6/7/18, 8:08 PM
It seems appropriate at this point for industry leaders in this field, and governments, to come together with a set of Geneva-convention-like rules which address the ethical risks inherent in this space.
by djrogers on 6/7/18, 10:33 PM
What does that even mean? Internationally accepted? By what nations and people groups? I’m pretty sure China and Russia have different accepted norms than Norway and Canada - which ones will you adhere to?
by fortythirteen on 6/7/18, 7:53 PM
we will be developing AI for things that have weapons attached to them. We hope our lawyerly semantics are enough to fool you rubes for as long as it takes us to pocket that sweet military money.
by paulgpetty on 6/7/18, 7:32 PM
Either way it’s just a statement on a webpage which has all the permanence of a sign in their HQ lobby. It’s going to be hard to convince people that statements like this from a Google, a Facebook, or an Uber really mean anything — especially long term.
Will their next leadership team or CEO carry on with this?
by hueving on 6/7/18, 11:49 PM
by whazor on 6/7/18, 7:41 PM
Furthermore, if they ban the military, then another company could do it for them. So would every customer have to explain their activities?
by Dowwie on 6/7/18, 10:02 PM
by ehudla on 6/8/18, 7:15 AM
1. "Pursue legislation and regulation to promote these principles across the industry."
2. "Develop or support the development of AI based tools to help combat, alleviate, the dangers noted in the other principles in products developed by other companies and governments."
by forapurpose on 6/7/18, 9:46 PM
> 5. Incorporate privacy design principles.
> We will incorporate our privacy principles in the development and use of our AI technologies. We will give opportunity for notice and consent, encourage architectures with privacy safeguards, and provide appropriate transparency and control over the use of data.
Why not "give people control over their privacy and over their information"? That's a commitment to an outcome. "Incorporate ... principles", "give opportunity", "encourage", and "appropriate transparency and control" are not commitments. Google seems to be hedging on privacy.
by TaylorAlexander on 6/7/18, 8:50 PM
So while Google says it will not make weapons, it seems that for the next 6-18 months it will continue to do so.
Does anyone know when in 2019 the contract expires? It seems odd to come out with a pledge not to make weapons while continuing to make weapons (assuming that is what they are doing).
(Full disclosure, I am a contractor at an Alphabet company, but I don’t know much about project Maven. These are my own opinions.)
[1] https://www.theverge.com/2018/6/1/17418406/google-maven-dron...
by exabrial on 6/7/18, 8:48 PM
Also Google: We will totally use our AI to 'legally' track a single mom that clicked a fine print EULA once while signing into our app. That's totally fine. It's different mmk?
by TremendousJudge on 6/7/18, 9:26 PM
No, that's machine learning. AI is intelligence demonstrated by machines, and it doesn't necessarily mean that it learns or adapts.
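For instance, a plain minimax game player (a quick sketch for tic-tac-toe) is textbook AI even though nothing in it is trained or adapts; all of the "intelligence" is hand-coded search.

    # A classic non-learning "AI": exhaustive minimax search for tic-tac-toe.
    # The board is a 9-character string; "X" maximizes, "O" minimizes.
    WIN_LINES = [
        (0, 1, 2), (3, 4, 5), (6, 7, 8),  # rows
        (0, 3, 6), (1, 4, 7), (2, 5, 8),  # columns
        (0, 4, 8), (2, 4, 6),             # diagonals
    ]

    def winner(board):
        for a, b, c in WIN_LINES:
            if board[a] != " " and board[a] == board[b] == board[c]:
                return board[a]
        return None

    def minimax(board, player):
        """Return (score, move) for the player to move; no learning involved."""
        w = winner(board)
        if w is not None:
            return (1 if w == "X" else -1), None
        moves = [i for i, cell in enumerate(board) if cell == " "]
        if not moves:
            return 0, None  # draw
        best_score, best_move = None, None
        for move in moves:
            child = board[:move] + player + board[move + 1:]
            score, _ = minimax(child, "O" if player == "X" else "X")
            if best_move is None or (score > best_score if player == "X" else score < best_score):
                best_score, best_move = score, move
        return best_score, best_move

    # From the empty board the search plays perfectly; with perfect play the
    # game is a draw, so the returned score is 0.
    print(minimax(" " * 9, "X"))

No data, no training, no adaptation; it's still the kind of system AI textbooks have always covered.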
by billybolton on 6/8/18, 8:42 PM
by acobster on 6/8/18, 2:02 PM
I was wondering how or if they were going to address this. It saddens me to see that Google considers collecting as much data as possible about all its users to maximize ad revenue an international norm. It saddens me more to see that they're correct.
by thrusong on 6/8/18, 1:12 AM
by MVf4l on 6/7/18, 9:12 PM
Such a statement absolutely relieves the pressure coming from the public, and hence from lawmakers. Can we make sure big companies are legally accountable for what they claim to the public? Otherwise they can say whatever persuades people to be less vigilant about what they are doing, which is so deceptive and irresponsible.
by RcouF1uZ4gsC on 6/7/18, 7:44 PM
YouTube moderation and automated account banning, combined with the inability to actually get in contact with a human, show that they have a long way to go with this principle.
by kolbe on 6/7/18, 8:29 PM
by godelmachine on 6/7/18, 7:10 PM
by s2g on 6/7/18, 10:37 PM
I guess Google's policy of sucking up any and all data doesn't go against internationally accepted norms.
This entire article reads like BS if you think about what Google actually does.
by confounded on 6/7/18, 8:20 PM
> 1. Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.
Is this "We have solved the trolley problem"?
Benefits to whom? US consumers? Shareholders? Someone in Afghanistan with the wrong IMEI who's making a phone call?
Without specifying this, this statement completely fails as a restraint on behavior. For an extrajudicial assassination via drone, is 'the technology' the re-purposed consumer software to aid target selection, or the bomb? Presumably the latter in every case.
> 2. Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.
This leaves the vast majority of military applications in scope. By this definition, Project Maven (the cause of resignations/protests) meets the criteria of not "directly facilitat[ing] injury to people". It selects who and what to cause injury to, at lower cost and with greater accuracy, scaling up the total number of causable injuries per dollar.
> 3. Technologies that gather or use information for surveillance violating internationally accepted norms.
Google set the norms for surveillance by being at the leading edge of it. It's pretty clear from Google's positioning that they consider data stored with them for monetization and distribution to governments completely fine. Governments do, too. And of course, "If you have something that you don't want anyone to know, maybe you shouldn't be doing it in the first place."[0]
> 4. Technologies whose purpose contravenes widely accepted principles of international law and human rights.
It's difficult to see how this could be anything but a circular argument that whatever the US military thinks is appropriate, is accepted as appropriate, because the US military thinks it is.
The most widely accepted definitions of human rights are the UN's, and the least controversial of those is the Right to Life. There are legal limits to this right, but by definition, extrajudicial assassinations via drone strike are in contravention of it. Even if they're Googley extrajudicial assassinations.
[0]: https://www.eff.org/deeplinks/2009/12/google-ceo-eric-schmid...
by sethbannon on 6/7/18, 8:52 PM
by foobaw on 6/7/18, 7:32 PM
by gandutraveler on 6/8/18, 5:18 AM
by foolinaround on 6/7/18, 7:56 PM
AI will likely reflect the bias of its training set, which likely reflects the bias of its creators. So, is it fair to say that AI will be biased?
by metaphorical on 6/7/18, 10:01 PM
by jcadam on 6/8/18, 12:43 PM
That said, I'd love to work on ML/AI related defense projects. Thanks to Google, more of this type of work will surely be thrown over to the traditional defense contractors - so maybe I'll get that chance, eh?
by AtomicOrbital on 6/8/18, 1:24 AM
by current_call on 6/9/18, 1:32 AM
> Technologies that gather or use information for surveillance violating internationally accepted norms.
They already failed.
by coreypreston on 6/7/18, 9:30 PM
by sidcool on 6/8/18, 5:20 AM
by DrNuke on 6/7/18, 7:24 PM
by hooande on 6/7/18, 7:38 PM
Science fiction writing is hard. I don't know why all of you are doing it for no pay. We can't judge Google for what we think they might do. And so far, they're just using ML in the real world.
by retrogradeorbit on 6/8/18, 12:46 AM
by erikpukinskis on 6/8/18, 2:20 AM
Anyone who claims to be non-violent has simply rationalized ignorance of their violence. See: vegans. (Spoken as someone who eats a plant-based diet.)
by kerng on 6/7/18, 11:34 PM
by bovermyer on 6/7/18, 8:07 PM
by htor on 6/7/18, 9:14 PM
by dhimes on 6/7/18, 9:08 PM
by MVf4l on 6/7/18, 9:15 PM
by qbaqbaqba on 6/7/18, 9:01 PM
by mrslave on 6/9/18, 11:42 AM
by jamesblonde on 6/7/18, 9:59 PM
by ruseOps on 6/7/18, 7:25 PM
by reilly3000 on 6/7/18, 7:43 PM
by jacobsenscott on 6/7/18, 9:38 PM
by gaius on 6/7/18, 7:21 PM
Also point 5 is an outright, blatant falsehood given Google’s track record and indeed entire business model.
by Mononokay on 6/7/18, 7:23 PM