by lorepieri on 8/11/23, 8:15 AM with 193 comments
by YeGoblynQueenne on 8/11/23, 12:52 PM
Or so some people like to repeat. Yet outside of the hand-picked examples in the article (the 5th generation computer project? Blast from the past!), there are a whole bunch of classic AI domains where real progress has been achieved in the last few decades. Here are a few:
* Game-playing and adversarial search: from Deep Blue to AlphaGo and MuZero, minimax-like search has continued to dominate.
* Automated planning and scheduling: e.g. used by NASA in automated navigation systems on its spacecraft and Mars rovers (e.g. Perseverance) [1]
* Automated theorem proving: probably the clearest, most comprehensible success of classical AI. Proof assistants are its most popular form today.
* Boolean satisfiability solving (SAT): SAT solvers based on the Conflict-Driven Clause Learning (CDCL) algorithm can now solve many instances of traditionally hard SAT problems [2] (a toy version of the underlying backtracking search is sketched at the end of this comment).
* Program verification and model checking: model checking is a staple in the semiconductor industry [3] and in software engineering fields like security.
Of course, none of that is considered Artificial Intelligence anymore, because it all works very well [4].
_____________
[1] https://www.nasa.gov/centers/ames/research/technology-onepag...
[2] https://en.wikipedia.org/wiki/Conflict-driven_clause_learnin...
[3] https://m-cacm.acm.org/magazines/2021/7/253448-program-verif...
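To make [2] a little more concrete, here is a minimal DPLL-style search in Python. DPLL is the backtracking procedure that CDCL extends with clause learning; the clause encoding, branching choice, and example formula below are my own illustrative assumptions, not anything taken from the links above:

```python
# Minimal DPLL-style SAT search (the ancestor of CDCL; no clause
# learning, watched literals, or restarts). Clauses are lists of
# non-zero ints in DIMACS style: 3 means "variable 3 is true",
# -3 means "variable 3 is false".

def dpll(clauses, assignment=None):
    assignment = dict(assignment or {})

    # Unit propagation: an unsatisfied clause with exactly one
    # unassigned literal forces that literal to be true.
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(assignment.get(abs(l)) == (l > 0) for l in clause):
                continue  # clause already satisfied
            free = [l for l in clause if abs(l) not in assignment]
            if not free:
                return None  # conflict: clause falsified, backtrack
            if len(free) == 1:
                assignment[abs(free[0])] = free[0] > 0
                changed = True

    # Branch: pick any unassigned variable and try both truth values.
    unassigned = {abs(l) for c in clauses for l in c} - set(assignment)
    if not unassigned:
        return assignment  # every variable assigned: satisfiable
    var = min(unassigned)
    for value in (True, False):
        result = dpll(clauses, {**assignment, var: value})
        if result is not None:
            return result
    return None  # both branches failed: unsatisfiable from here

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
print(dpll([[1, 2], [-1, 3], [-2, -3]]))  # {1: True, 3: True, 2: False}
```

Real CDCL solvers add learned conflict clauses, watched-literal propagation, activity-based branching heuristics, and restarts on top of this skeleton; that machinery is what makes the industrial instances in [2] tractable.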
by SeanLuke on 8/11/23, 10:31 AM
I think there's little evidence for this. What happened in the 1980s was the introduction and overselling of expert systems. These systems applied AI techniques to specific problems, but those techniques themselves were still pretty foundational. This is like saying that because electricity was used for custom things, we started inventing custom electricity.
> Consequently, the field currently called "AI" consists of many loosely related subfields without a common foundation or framework, and suffers from an identity crisis:
Nonsense. AI of course consists of loosely related subfields with no common foundation. But even back in the 1960s, when a fair chunk of (Soft) AI had something approaching a foundation (search), the identity of the field was not defined by this but rather by a common goal: to create algorithms which, generally speaking, can perform tasks that we as humans believe we alone are capable of doing because we possess Big Brains. This identity-by-common-goal hasn't changed.
So this web page has a fair bit of apologetics and mild shade applied to soft AI. What it doesn't do is provide any real criticism of the AGI field. And there's a lot to offer. AGI has a reasonable number of serious researchers. But it is also replete with snake oil, armchair philosophers, and fanboy hobbyists. Indeed the very name (AGI) is a rebranding. The original, long accepted term was Hard AI, but it accumulated so much contempt that the word itself was changed by its practitioners. This isn't uncommon for ultrasoft areas of AI: ALife has long had this issue (minus the snake oil). But at least they're honest about it.
by d_burfoot on 8/11/23, 4:41 PM
There have been three distinct eras of AI research:
- logical/symbolic AI, aka GOFAI, which led to work like SAT solvers and STRIPS planners
- classical label-based Machine Learning. Here the Perceptron was the starting point and the Support Vector Machine was the paradigmatic result (a toy perceptron training loop is sketched below).
- modern self-supervised raw-data ML, of which GPT is the pinnacle result.
It's very interesting to think about what motivated each era, what their blind spots were, and why people who worked in that timeframe couldn't see why the successor era was obviously (in retrospect) superior.
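For a concrete taste of that second era's starting point, here is a minimal perceptron training loop in Python. The AND-function dataset, learning rate, and epoch cap are my own illustrative choices:

```python
# Minimal perceptron (Rosenblatt's learning rule) trained on the
# AND function. Each example is (bias, x1, x2), labels are in {-1, +1}.

data = [
    ((1.0, 0.0, 0.0), -1),
    ((1.0, 0.0, 1.0), -1),
    ((1.0, 1.0, 0.0), -1),
    ((1.0, 1.0, 1.0), +1),  # AND is true only when both inputs are 1
]

weights = [0.0, 0.0, 0.0]
learning_rate = 0.1

for epoch in range(20):
    errors = 0
    for x, label in data:
        activation = sum(w * xi for w, xi in zip(weights, x))
        predicted = 1 if activation > 0 else -1
        if predicted != label:
            # Misclassified: nudge the weights toward the correct label.
            weights = [w + learning_rate * label * xi
                       for w, xi in zip(weights, x)]
            errors += 1
    if errors == 0:
        break  # a full error-free pass over the data: converged

print(weights)  # a linear separator for AND, roughly [-0.2, 0.2, 0.1]
```

On linearly separable data like AND this loop is guaranteed to converge; on XOR it never reaches zero errors, a limitation that famously shaped the transition between these eras.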
by myguestacc on 8/11/23, 9:54 AM
Except that the previous subsection didn't clarify that at all.
by Joeri on 8/11/23, 4:16 PM
And yes, I know the very idea of AI rights offends those who think AI can’t be a person because it’s just an algorithm. Well, so are humans: just DNA programs executing in massively parallel fashion. The implementation does not determine personhood, only the behavior.
by amelius on 8/11/23, 1:16 PM
"Artificial General Intelligence – We don't know the heck where this is going but here are some thoughts"
by tpoacher on 8/11/23, 12:58 PM
Given very specific, practical, functional definitions, AGI is a breeze.
by janalsncm on 8/11/23, 11:01 PM
Here is a video demonstrating the working memory of a chimpanzee. It is obviously considerably better than a human’s. Given this information, we must accept that at least one of the following is true:
- humans do not have the highest general intelligence of all animals
- working memory is not a necessary component of general intelligence
- other human capabilities (communication for example) can make up for our working memory deficiencies
by mbgerring on 8/11/23, 6:17 PM
So much of the literature takes for granted that this is something that should be built, and only asks whether it can be done. I literally do not understand why anyone wants to build this in the first place.
by aaroninsf on 8/11/23, 4:59 PM
As others have said, skipping over the entire classic AI era of LISP/Prolog, from SHRDLU to Scripts, Plans, Goals, and Understanding, is an egregious omission.
Also, I don't immediately find a discussion of either multi-agent coordination or multi-modal ML models.
by wslh on 8/11/23, 2:25 PM
I know these words are in the introduction, but so far ALL such projects have failed. No logical pedantry intended.
A little bit off-topic, but I think the greatest superintelligence currently observed is the Universe, or G‑d for believers.
by stanfordkid on 8/11/23, 1:49 PM
Are there specific structures and architectures that have evolved, quite unique ones, which give humans, say, language ability or visual processing? Certainly. Perhaps by God's spark, or by some random chance on the game board of life, human beings developed examples of very particular structures. Perhaps there are undiscovered ones lying within the minds of peregrine falcons, tree roots or deep sea squid. We don't even know how to look for them because we don't even know such perception and intelligence exists.
The point I'm trying to make is that there is no "goal post" of AGI, there is no quantification of intelligence yet. We don't even know what sorts of intelligence exist out there because we haven't even begun to fully characterize what it is. It seems foolish to me to search for something when we can't even define it.
It's like trying to find "the ultimate general animal" when what you really have is a phylogenetic tree of huge diversity.