by iafisher on 4/17/24, 7:46 PM with 141 comments
by gumby on 4/17/24, 10:04 PM
I was one of the first hires on the Cyc project when it started at MCC and was at first responsible for the decision to abandon the Interlisp-D implementation and replace it with one I wrote on Symbolics machines.
Yes, back then one person could write the code base, which has long since grown and been ported off those machines. The KB is what matters anyway. I built it so different people could work on the kb simultaneously, which was unusual in those days, even though cloud computing was ubiquitous at PARC (where Doug had been working, and I had too).
Neurosymbolic approaches are pretty important and there’s good work going on in that area. I was back in that field myself until I got dragged away to work on the climate. But I’m not sure that manually curated KBs will make much of a difference beyond bootstrapping.
by blacklion on 4/18/24, 12:42 PM
And there were descriptions of EURISKO (with claims that it not only "won some games" but also "invented a new structure of NAND gate in silicon, used by industry now") and other expert systems.
One of the expert systems mentioned (without technical details) was said to be twice as good at diagnosing cancer as the best human diagnostician at some university hospital.
And after that... Silence.
I always wonder: why was this expert system not deployed in all US hospitals, for example, if it was so good?
Now we have LLMs, but they are LANGUAGE models, not WORLD models. They predict the distribution of possible next words. Same with images: pixels, not world concepts.
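A minimal sketch of that "distribution of possible next words" idea, using toy logits (the vocabulary and scores are made up for illustration, not from any real model):

```python
import math

# Toy vocabulary and unnormalized scores (logits) a language model
# might produce for the next token after "The patient has a ..."
vocab = ["fever", "cough", "car", "banana"]
logits = [2.5, 2.0, -1.0, -3.0]

# Softmax turns logits into a probability distribution over next tokens.
exps = [math.exp(x) for x in logits]
total = sum(exps)
probs = [e / total for e in exps]

dist = dict(zip(vocab, probs))
# Probabilities sum to 1; plausible continuations get most of the mass.
# Note the model only ranks tokens -- nothing here encodes *why* "fever"
# is medically plausible, which is the "no world model" objection.
```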
Such systems look good for generating marketing texts, but by definition they cannot be used as diagnosticians.
Why did all these (slice-of-)world-model approaches die, except Cyc, I think? Why do we have good text generators and image generators, but no diagnosticians, 40 years later? What happened?..
by toisanji on 4/17/24, 9:34 PM
by mtraven on 4/17/24, 10:45 PM
by TrevorFSmith on 4/18/24, 1:16 AM
by viksit on 4/17/24, 11:57 PM
the frames, slots, and values were learned via an RNN for specific applications.
we even created a library for it called keyframe (modeled after having the programmer specify the bot action states and letting the model figure out the dialog in a structured way), similar to how keyframes in animation work.
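A rough sketch of the frame/slot/value representation described above (all names are hypothetical, and this is not the actual keyframe API; in the scheme described, an RNN rather than direct assignment would fill the values):

```python
from dataclasses import dataclass, field

@dataclass
class Frame:
    """A frame is a named concept with slots mapping to values."""
    name: str
    slots: dict = field(default_factory=dict)

# A bot action state as a frame: the programmer specifies the slots,
# and a model fills the values from the user's dialog.
book_flight = Frame("BookFlight", {
    "origin": None,       # to be filled from the user utterance
    "destination": None,
    "date": None,
})

def fill_slot(frame, slot, value):
    # In the described system an RNN predicts (slot, value) pairs;
    # here we just set one directly for illustration.
    frame.slots[slot] = value

fill_slot(book_flight, "destination", "Austin")
unfilled = [s for s, v in book_flight.slots.items() if v is None]
```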
it would be interesting to resurrect that in the age of LLMs!
by carlsborg on 4/18/24, 8:32 AM
The lead author on [1] is Kathy Panton, who has no publications after that and zero internet presence as far as I can tell.
[1] Common Sense Reasoning – From Cyc to Intelligent Assistant https://iral.cs.umbc.edu/Pubs/FromCycToIntelligentAssistant-...
by shrubble on 4/17/24, 11:02 PM
by rhodin on 4/17/24, 9:21 PM
[0] https://writings.stephenwolfram.com/2023/09/remembering-doug...
by mindcrime on 4/18/24, 2:44 PM
Does anybody have any insights into where things stand at Cycorp and any expected fallout from the world losing Doug?
by acutesoftware on 4/18/24, 4:45 AM
I am really pleased they continue to work on this. It is a lot of work, but it needs to be done and checked manually; once done, the base stuff shouldn't change much, and it will be a great common-sense check for generated content.
by avodonosov on 4/17/24, 10:20 PM
by SilverSlash on 4/18/24, 8:45 AM
by nikolay on 4/17/24, 11:30 PM
by bilsbie on 4/18/24, 4:34 PM
Or for quality checks during training?
by ragebol on 4/18/24, 5:57 AM
Have some vector for a concept match a KB entry etc, IDK :).
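That "vector matches a KB entry" idea could be sketched as nearest-neighbor lookup over embeddings (toy vectors and concept names are invented for illustration):

```python
import math

# Toy KB concepts paired with hypothetical embedding vectors.
kb = {
    "Dog": [0.9, 0.1, 0.0],
    "Cat": [0.1, 0.9, 0.0],
    "Car": [0.0, 0.1, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def match_concept(vec, kb):
    # Return the KB entry whose vector is most similar to the query,
    # grounding a fuzzy embedding in a discrete symbolic concept.
    return max(kb, key=lambda name: cosine(vec, kb[name]))

query = [0.85, 0.15, 0.05]  # e.g. an embedding of the word "puppy"
best = match_concept(query, kb)
```

Once the query vector is grounded in a KB entry, the symbolic side (rules, inference) can take over, which is the usual pitch for neurosymbolic pairings.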
by blueyes on 4/17/24, 10:46 PM
by HarHarVeryFunny on 4/18/24, 12:45 PM
Ultimately it failed, although people's opinions may differ. The company is still around, but from what people who've worked there have said, it seems as if the original goal is all but abandoned (although Lenat might have disagreed, and seemed eternally optimistic, at least in public). It seems they survive on private contracts for custom systems premised on the power of Cyc being brought to bear, when in reality these projects could be accomplished in simpler ways.
I can't help but see something of a parallel between Cyc, an expert-system scaling experiment, and today's LLMs, a language-model scaling experiment. At heart, LLMs are also rule-based expert systems of sorts, but with the massive convenience of learning the rules from data rather than having them hand-entered. Both have/had the same promise: scale it up and it'll achieve AGI; add more rules/data and it'll gain common sense and stop being brittle (having dumb failure modes based on missing knowledge/experience).
While the underlying world model and reasoning power of LLMs might be compared to an expert system like Cyc, they do of course also have the critical ability to input and output language as a way to interface to this underlying capability (as well as perhaps fool us a bit with the ability to regurgitate human-derived surface forms of language). I wonder what Cyc would feel like in terms of intelligence and reasoning power if one somehow added an equally powerful natural language interface to it?
As LLMs continue to evolve, they are not just being scaled up; new functionality such as short-term memory is being added, so perhaps they are going beyond expert systems in that regard, although there is/was also more to Cyc than just the massive knowledge base: a multitude of inference engines as well. Still, I can't help but wonder whether the progress of LLMs won't also peter out, unless there are some fairly fundamental changes/additions to their pre-trained transformer basis. Are we just replicating the scaling experiment of Cyc, with a fancier natural language interface?
by ultra_nick on 4/18/24, 1:05 AM
I wonder if they've adopted ML yet.
by PeterStuer on 4/18/24, 3:42 PM
Eventually the approach would be rediscovered (but not credited) by the database field, desperate for 'new' research topics.
We might see a revival now that transformers can front- and back-end the hard edges of knowledge-based tech, but it remains to be seen whether scaled monolithic systems like Cyc are the right way to pair.
by astrange on 4/18/24, 6:46 AM
Was trying to find it the other day and AI searches suggested Cyc; I feel like that's not it, but maybe it was? (It definitely wasn't Everything2.)
by mepian on 4/17/24, 9:23 PM
by Rochus on 4/17/24, 9:55 PM
> Perhaps their time will come again.
That's fairly certain, as soon as the hype around LLMs has calmed down. I hope that Cyc's data will still be available then, ideally open-source.
> https://muse.jhu.edu/pub/87/article/853382/pdf
Unfortunately paywalled; does anyone have a downloadable copy?
by bshanks on 4/18/24, 8:25 PM
by chx on 4/17/24, 10:50 PM