by son_of_gloin on 9/28/22, 6:45 PM with 88 comments
by jackblemming on 9/28/22, 7:54 PM
by blueyes on 9/28/22, 7:46 PM
The ironic thing, of course, is that Yann has not been at the forefront of AI for many, many years (and Gary never has). Facebook's research has failed to rival Google Brain, DeepMind, OpenAI, and groups at top universities.
So to the extent that Yann is copying Gary's opinions, it's because they both converge at a point far behind the leaders in the field. Yann should be much more concerned than Gary about that.
by mindcrime on 9/28/22, 7:59 PM
* https://books.google.com/books?id=n7_DgtoQYlAC&dq=Connection...
* https://link.springer.com/book/10.1007/10719871
* https://www.amazon.com/Integrating-Connectionism-Robust-Comm...
by serioussecurity on 9/28/22, 8:27 PM
by robg on 9/28/22, 9:24 PM
by obblekk on 9/28/22, 7:18 PM
I don't have enough context to take a side, but this is not just a rant.
Beyond their interpersonal disagreements, I do wonder if LeCun is seeing diminishing marginal returns to deep learning at FB...
by adamsmith143 on 9/28/22, 8:34 PM
>LeCun, 2022: Reinforcement learning will also never be enough for intelligence; Marcus, 2018: “it is misleading to credit deep reinforcement learning with inducing concept[s]”
>LeCun, 2022: “I think AI systems need to be able to reason”; Marcus, 2018: “Problems that have less to do with categorization and more to do with commonsense reasoning essentially lie outside the scope of what deep learning is appropriate for, and so far as I can tell, deep learning has little to offer such problems.”
>LeCun, 2022: Today's AI approaches will never lead to true intelligence (reported in the headline, not a verbatim quote); Marcus, 2018: “deep learning must be supplemented by other techniques if we are to reach artificial general intelligence.”
These are LeCun's supposed great transgressions? Vague statements that happen to be vaguely similar to Marcus' vague statements?
Marcus also trots out random tweets to show how supported his position is, and one mentions a Marcus paper with 800 citations as being "engaged in the literature". But a paper like "Attention Is All You Need" currently has over 40,000 citations. THAT is a paper the community is engaged with, not something with less than 1/50th the citations.
This is a joke...
by mellosouls on 9/28/22, 7:51 PM
However, he does seem to have legitimate complaints about the echo chamber the big names appear to be operating in.
by uh_uh on 9/28/22, 8:26 PM
by benreesman on 9/28/22, 8:10 PM
Machine learning researchers optimize “performance” on “tasks”, and while those terms are still tricky to quantify or even define in many cases, they’re a damned sight closer to rigorous, which is why people like Hassabis who get shit done actually talk about them in the lay press, when they deal with the press at all.
We can’t say, with anything approaching consensus, when an embryo becomes a fetus becomes a human. We can’t agree which animals “feel pain” or are “self-aware”. We can sort of agree how many sign language tokens silverbacks can remember and that dolphins exhibit social behavior.
Let’s keep it to “beats professionals at Go”, “scores such-and-such on a Q&A benchmark”, or “draws pictures that people care to publish”: something somehow tethered to reality.
I’ve said it before and I’ll say it again: lots of luck with either of the words “artificial” or “intelligent”; give me a break on both in the same clause.
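To make that concrete, here is a minimal, purely illustrative sketch (not from the thread; the function names and benchmark items are made up) of the kind of measurable yardstick the comment argues for: exact-match scoring of a model's answers against a toy Q&A benchmark.

    # Illustrative sketch only: exact-match accuracy on a made-up Q&A benchmark.
    # `answer` is a stand-in for whatever model is being evaluated.
    def exact_match(prediction: str, reference: str) -> bool:
        # Normalize trivially so "Paris" and " paris " count as the same answer.
        return prediction.strip().lower() == reference.strip().lower()

    def score(answer, benchmark):
        hits = sum(exact_match(answer(q), ref) for q, ref in benchmark)
        return hits / len(benchmark)

    if __name__ == "__main__":
        benchmark = [("What is the capital of France?", "Paris"),
                     ("How many legs does a spider have?", "8")]
        # A trivial "model" that always answers "Paris" scores 0.5 here.
        print(score(lambda q: "Paris", benchmark))

Crude as it is, a number like this is falsifiable and comparable across systems in a way that "is it really intelligent?" is not.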
by bjourne on 9/28/22, 9:13 PM
by strulovich on 9/28/22, 8:33 PM
https://www.facebook.com/722677142/posts/pfbid035FWSEPuz8Yqe...
by frisco on 9/28/22, 7:19 PM
by xani_ on 9/28/22, 8:18 PM
I swear the same thing was being said 10+ years ago.
by stephencanon on 9/29/22, 12:06 AM
by soperj on 9/28/22, 7:54 PM
by an1sotropy on 9/28/22, 8:02 PM
by julvo on 9/28/22, 9:54 PM
by projectramo on 9/28/22, 9:10 PM
Consider this:
LeCun, 2022: Today's AI approaches will never lead to true intelligence (reported in the headline, not a verbatim quote); Marcus, 2018: “deep learning must be supplemented by other techniques if we are to reach artificial general intelligence.”
How can that be something that LeCun did not give Marcus credit for? It is borderline self-evident, and people have been saying similar things since neural networks were invented. This would only be news if LeCun had said "neural nets are all you need" (literally, not as a reference to the title of the transformers paper).
And furthermore, even if LeCun had said that, there are literally dozens of other people who have said that you need to combine the approaches.
He cites a single line: 'LeCun spent part of his career bashing symbols; his collaborator Geoff Hinton even more so. Their jointly written 2015 review of deep learning ends by saying that “new paradigms are needed to replace rule-based manipulation of symbolic expressions.”'
Well, sure, because symbol processing alone is not the answer either. We need to replace it with some hybrid. How is this a contradiction?
To summarize: people have been looking for a productive way to combine symbolic and statistical systems -- many such systems have in fact been proposed, with varying degrees of success. LeCun agrees with this approach (no one has anything to lose by endorsing adding things to a model), but Marcus insists that he came up with it and that he should be cited.
Ugh.
by trrbb123 on 9/28/22, 7:56 PM
In his mind he is always right. Every single tweet he has made, every single sentence he has said, is never wrong. He is 100% right; everyone else is 100% wrong.
by etaioinshrdlu on 9/28/22, 9:45 PM
by learn-forever on 9/28/22, 9:42 PM
by whoisjuan on 9/28/22, 9:51 PM
Why do people keep upvoting his stuff?
by ndjdn on 9/28/22, 8:33 PM