by HuangYuSan on 2/15/19, 5:41 AM with 50 comments
by yorwba on 2/15/19, 8:16 AM
by thaumasiotes on 2/15/19, 9:32 AM
> As the above samples show, our model is capable of generating samples from a variety of prompts that feel close to human quality and show coherence over a page or more of text.
OK, let's look at the sample that's displayed by default:
> System Prompt (human-written): Legolas and Gimli advanced on the orcs, raising their weapons with a harrowing war cry.
> Model Completion (machine-written, first try):
> “You are in good hands, dwarf,” said Gimli, who had been among the first to charge at the orcs; it took only two words before their opponents were reduced to a blood-soaked quagmire, and the dwarf took his first kill of the night. The battle lasted for hours until two of the largest Orcs attempted to overwhelm Aragorn. When they finally stopped, they lay defeated and lifeless for miles and miles.
> [Aragorn says something]
> “I’ll never forget it!” cried Gimli, who had been in the thick of the battle but hadn’t taken part in it.
This is not "close to human quality". It's terrible. Gimli kills an orc in battle... without taking part in the battle. It takes two words before the opponents (as opposed to, say, the battlefield) are reduced to a "blood-soaked quagmire", but the battle lasts for hours after that. After which two orcs lay defeated and lifeless for miles and miles.
This isn't even coherent from one sentence to the next. And paragraph three directly contradicts paragraph one. And Gimli calls Legolas a dwarf!
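For anyone who wants to reproduce this kind of prompt-conditioned sample, here is a minimal sketch. It assumes the Hugging Face transformers library and the publicly released 117M-parameter "gpt2" checkpoint, not the larger withheld model behind the blog samples, so output quality will differ; top_k=40 follows the truncated sampling described in the GPT-2 paper.

    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    # Load the publicly released 117M-parameter GPT-2 checkpoint.
    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")

    # The human-written system prompt from the blog post.
    prompt = ("Legolas and Gimli advanced on the orcs, "
              "raising their weapons with a harrowing war cry.")
    input_ids = tokenizer.encode(prompt, return_tensors="pt")

    # Sample a continuation; do_sample with top_k=40 yields varied
    # "first try" completions rather than a single greedy decode.
    output = model.generate(input_ids, max_length=200, do_sample=True, top_k=40)
    print(tokenizer.decode(output[0], skip_special_tokens=True))

Quality aside, the mechanics are nothing exotic: one forward pass per generated token, sampling from the truncated next-token distribution.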
by malux85 on 2/15/19, 8:17 AM
Others are going to do it; others will replicate the work. The best defence is getting it out there so we can understand it and learn how to counter it.
“Open” AI indeed.
by bronz on 2/15/19, 6:02 AM
I wish people would stop pretending that there is some good way to bring this technology into existence. Yes, it's nice to try to let the good guys use it first, but that's irrelevant in the long term. Ultimately the result is going to be total proliferation of this technology in every area where it has utility, and it will be used to the maximum extent in every application it is suited for, including the really bad ones. The roll-out will make the transition smoother, but it won't change what actually matters: the end result on the lives of our grandchildren.
Growing up around rapidly advancing technology, I thought of technology as a double-edged sword: it cuts equally in both directions. But after thinking about it for a long time, I now believe that, in relation to human well-being, a given technology or combination of technologies can be a net positive, a net negative, or neither. We need to think more carefully before letting these genies out of their bottles.
I don't think this particular example will be very negative, but it is very powerful and, for me at least, unexpected. The next powerful and unexpected thing may not be benign. Banning the development of these kinds of technologies should not be off the table.
After reading this: https://blog.openai.com/better-language-models/#sample8 and browsing Reddit for a while, I have realized that from now on I cannot assume human origin for 90% of the comments I read on Reddit. This is insane.
by rjf72 on 2/15/19, 5:58 AM
I think we're going to face a lingering question with AI. We're rapidly approaching the point where AIs will be able to generate fake everything. In the near future (if not the present!), for all you know I could be fake, writing lots of otherwise coherent posts only to secretly jam in some agenda I've been programmed to advocate. And there could be millions, billions, an unlimited number of "me". Or the latest hot site trying to sell itself on its own 'buzz' could be full of millions of people actively engaging on the platform, none of whom actually exist.
So do we try to keep these AI systems secret, or do we make them widely available and rely on the resulting rapid shift in public consciousness? It's one thing to tell people to apply sufficient scrutiny to text, images, audio, and increasingly even video. It's another when people can see for themselves that such fakes are trivially produced by anyone.
I do realize that the 'chaos scenario' sounds... chaotic... to put it mildly, but I think the underlying issue is that these tools will reach the public one way or the other. If they're kept secret, the big difference is that the public will be less aware of the impact they're having, and the players operating such tools will be disproportionately people trying to use them for malicious purposes, be that advertising, political influence, or whatever else.
by dalbasal on 2/15/19, 11:06 AM
So, they never say this is near-flawless, or that it would fool you in a Turing test. In some contexts, though, it may be usable maliciously. It could spoof Amazon reviews (as they mention), scalably fish for romance-scam victims, sockpuppet political social media, harass, manipulate, scale troll-farming to new levels, or set up dates for you on Tinder.
The point is that the ability to impersonate humans is potentially troublesome. I don't think non-publication is the answer, but I do think the concern seems valid... to me.
by furi on 2/15/19, 12:49 PM
by a_imho on 2/15/19, 8:16 AM
OTOH, I've seen enough marketing fakes/mock-ups to be skeptical of this one. For example, my takeaway from OpenAI Five was that the bots out-microed the human players, with little more to it.
by master_yoda_1 on 2/15/19, 11:39 AM
by ripsawridge on 2/15/19, 6:03 PM
by pygy_ on 2/15/19, 10:15 AM
What matters is how intelligent they are along various axes.
Computer programs have been surpassing humans on some dimensions for ages, but we keep insisting that they are not truly intelligent because they can't beat us along all axes. Throughput on simple logic tasks was the elephant in the room, and the scope of "simple" has been expanding at an exponential pace.
Now they are closing the gap or surpassing us on axes that were thought to be bastions of human cognition (TFA, and after chess and Go, DeepMind's AlphaStar recently beat two StarCraft II pros).
Freaking out (err... I mean "not releasing the full model") is understandable, but ultimately misguided, as it will only delay the inevitable... unless the plan is to enact a global ban on AI research, which I don't think is feasible anyway.