by jerezzprime on 8/5/20, 3:09 PM with 106 comments
by tsimionescu on 8/5/20, 7:12 PM
> I picked the best responses, but everything after the bolded prompt is by GPT-3.
Based on this, I am pretty sure that the order of paragraphs and the general structure (introduction, arguments, conclusion, PS) are entirely the product of the editor, not of GPT-3. I'm assuming the selection happened at the level of paragraphs and not individual sentences, which does leave some pretty good paragraphs.
Another question that I don't know how to answer is how different these paragraphs are from text that is in the training corpus. I would love to see the closest bit of text in the whole corpus to each output paragraph.
And finally, human communication and thought are not organized neatly into a uniform ladder of difficulty from letters to words to sentences to paragraphs to chapters to novels, and an AI that can sometimes produce nice-sounding paragraphs is not necessarily any part of the way toward actually communicating a single real fact about the world.
I still believe that there is never going to be meaningful NLP without a model/knowledge base of the real physical world. I don't think human-written text contains enough information to deduce a model of the world without assuming some model ahead of time.
by czzr on 8/5/20, 6:54 PM
by jasode on 8/5/20, 5:42 PM
I think GPT-3 is very convincing for "soft" topics like the other HN thread "Feeling Unproductive?"[2], and philosophical questions like "What is intelligence?" where debaters can just toss word salad at each other.
It's less convincing for "hard," concrete science topics, e.g. Rust/Go articles about programming techniques to improve performance.
An interesting question is what happens when the training input to a future GPT-4 is inadvertently filled with lots of generated GPT-3 output. And in turn, GPT-5 is fed by GPT-4 (which already ingested GPT-3 text). A lot of the corpus feeding GPT-3 came from web scraping, and that source is now tainted for future GPT-x models.
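To make the feedback-loop worry concrete, here is a toy sketch (my own construction, not anything from the thread): each "generation" of model is a trivial distribution fitted to curated samples from the previous generation, with the filtering step standing in for the human curation that decides which generated text actually gets published and scraped. The diversity of the distribution collapses within a few generations.

```python
import random
import statistics

# Toy sketch (mine, not from the thread) of the GPT-3 -> GPT-4 -> GPT-5
# feedback loop: each generation is a model fitted to curated samples
# drawn from the previous generation. Real LLM training is vastly more
# complex; this only illustrates how training on filtered model output
# can collapse the distribution over generations.

mean, stdev = 0.0, 1.0  # generation 0: fitted to real human text
for gen in range(1, 6):
    samples = [random.gauss(mean, stdev) for _ in range(10_000)]
    # Crude stand-in for curation: only the most "typical" generated
    # text gets published on the web and scraped for the next model.
    kept = [x for x in samples if abs(x - mean) < stdev]
    mean, stdev = statistics.fmean(kept), statistics.stdev(kept)
    print(f"generation {gen}: diversity (stdev) = {stdev:.3f}")
```

The numbers are meaningless for real LLMs, but the mechanism (fit, filter, refit) is the tainted-corpus scenario in miniature.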
by QasimK on 8/5/20, 5:28 PM
It’s practically indistinguishable from a human. Not a creative, insightful and unique human. But an average human? Yes, I cannot tell the difference.
I must repeat that - I cannot tell the difference!
This could probably replace or supplement most of the online content I see, including news; certainly the vacuous side of things, of which I think there is a lot.
Those online recipes with irrelevant life stories before them? Replaced. Those opinion pieces in news? Replaced. Basic guides to tasks? Probably replaceable.
I know I probably only see the best output, and it would be nice if I had more context, but the peak performance is amazing.
The Twitter video showing GPT-3 generating HTML based on your request? I think there's a lot of potential. I don't know whether it can, in general, live up to these specific examples, though.
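For context, demos like the HTML one worked by few-shot prompting the completions endpoint: show the model a couple of description-to-HTML pairs and let it continue the pattern. Below is a rough sketch using the OpenAI Python client as it existed around GPT-3's release; the prompt, examples, and parameters are my guesses, not the demo's actual ones.

```python
import openai  # 2020-era completions API; later client versions differ

openai.api_key = "sk-..."  # your API key

# Few-shot prompt: two description -> HTML pairs, then a new description.
# The model is expected to continue the pattern with matching HTML.
# These examples and settings are illustrative guesses.
prompt = """description: a red button that says stop
html: <button style="background-color: red; color: white;">Stop</button>

description: a large heading that says welcome
html: <h1 style="font-size: 3em;">Welcome</h1>

description: a link to example.com labeled click here
html:"""

response = openai.Completion.create(
    engine="davinci",    # the original GPT-3 model
    prompt=prompt,
    max_tokens=64,
    temperature=0.2,     # low temperature: stick close to the pattern
    stop="\n\n",         # stop at the end of the generated pair
)
print(response.choices[0].text.strip())
```

Whether this generalizes beyond cherry-picked demos is exactly the open question.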
by dexen on 8/5/20, 6:24 PM
A half-joking prediction: at some point we'll solve all arbitrarily hard milestones for AIs and will still find ourselves "nowhere near having real general intelligence".
At that point we might start questioning our assumptions about intelligence.
by naringas on 8/5/20, 5:15 PM
I think this AI is onto something
by plaidfuji on 8/5/20, 5:38 PM
Scary? Not GPT-3, but when GPT-6 or 7 gets involved in the political realm, that’s when people will take notice. This essay has a glimmer of “humans can’t be trusted to govern themselves” - and it’s not entirely unconvincing.
by hosh on 8/5/20, 6:40 PM
As brilliant as it is, I think this speaks more to how we as humanity think about ourselves than it does about AI.
by mellosouls on 8/5/20, 6:39 PM
https://arr.am/2020/07/22/why-gpt-3-is-good-for-comedy-or-re...
by weeksie on 8/5/20, 6:27 PM
by vadansky on 8/5/20, 7:31 PM
by Xeing0ei on 8/5/20, 5:39 PM
A few weeks ago, GPT-3-generated content looked to me like nonsensical content-farm filler. Today, this article makes points and follows a line of argument.
There are still a few oddities, but this time, it looks like thinking and not just putting related words next to one another with proper grammar.
by koeng on 8/5/20, 6:55 PM
by naringas on 8/5/20, 5:21 PM
> The point of this is to form a hypothesis. If the scientist and the experimental data say the same thing, then the scientist will think it has a hypothesis that is correct. But if the experimental data and the scientist say different things, then the scientist will think it has a hypothesis that is wrong. The scientist will think the experimental data is right, and it will change its hypothesis.
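What the model is describing is just the textbook observe-predict-revise loop. Here is a literal, minimal rendering of it (my sketch, not GPT-3's), where the "hypothesis" is a guessed threshold that gets revised whenever prediction and experiment disagree:

```python
# Minimal rendering (mine, not GPT-3's) of the loop the quoted passage
# describes: keep the hypothesis while it matches the data, and revise
# it, trusting the data, whenever they disagree.

def experiment(x: float) -> bool:
    """Ground truth being probed (unknown to the 'scientist')."""
    return x > 4.2

hypothesis = 1.0  # current guess at the threshold
for x in [0.5, 2.0, 3.0, 4.0, 5.0, 6.0]:
    predicted = x > hypothesis
    observed = experiment(x)
    if predicted != observed:
        # The experimental data wins: move the guess to fit it.
        hypothesis = x - 0.1 if observed else x
    print(f"x={x}: predicted={predicted}, observed={observed}, "
          f"hypothesis={hypothesis}")
```

That GPT-3 can paraphrase this procedure fluently is impressive; whether it could execute it is the open question the thread keeps circling.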
by Nasrudith on 8/5/20, 8:42 PM
Imagine, say, someone trying to maintain training sets per individual character and finding that they would not only provide better lines but also choose different actions.
by wslh on 8/5/20, 6:25 PM
Disclaimer: I am not an avid science fiction reader, but I am interested in sources talking about superintelligence [3]. Is superintelligence more of the same, or is it more about having different layers interconnected?
[1] https://en.wikipedia.org/wiki/Ainan_Celeste_Cawley
by darepublic on 8/5/20, 8:29 PM
by noiv on 8/5/20, 6:30 PM
by motohagiography on 8/5/20, 6:29 PM
by scottlocklin on 8/5/20, 5:12 PM
by kangnkodos on 8/5/20, 7:28 PM
by namenotrequired on 8/5/20, 6:02 PM
Edit: anecdotes -> analogies
by holoduke on 8/5/20, 8:07 PM
by marta_morena_25 on 8/5/20, 5:37 PM
The main problem I see with AI is that it is very easy to approximate "general human intelligence", which essentially amounts to "being indistinguishable from the Joe next to you". But actually advancing the human race is a completely different league, and for that, statistical approximation will never work.
The next step is to create AI that innovates. As long as that isn't done, all we have is a demonstration of how "unintelligent" most human beings really are (i.e. nothing more than statistical approximation + pattern matching... Instagram and social media essentially act as a forcing function on human beings, pushing them toward the average).
And yes, we can couple AI with things like a Go-Engine, SAT solver, theorem provers, etc. to give them abilities beyond what humans can do in these categories, but who builds that? Humans... As long as AI can't build an AI for a category it knows nothing about and has had no training for, that AI remains "as unintelligent as a brick". All it can do is reproduce what its creator taught it.
That isn't necessarily a bad thing at all. This could still be extremely useful for society and put new evolutionary pressure on the human race to become "above" average, something that has been utterly lacking in the past century. With general yet stupid AI becoming a reality soon, >90% of humanity is rendered obsolete. This will create pressure to improve on an unforeseen scale, which is probably a good thing overall.
Truly intelligent AI, on the other hand, might well lead to our immediate extinction, since it would render the entirety of the human race irrelevant.