by pedalpete on 4/16/24, 1:54 AM
I've found that where LLMs can be useful in this context is free association. Because they don't really "know" about things, they regularly grasp at straws or misconstrue intended meaning. This, along with the sheer volume of language they've absorbed (let's not call it knowledge), results in LLMs occasionally bringing in a new element that can be useful.
by KhoomeiK on 4/16/24, 3:02 AM
A group of PhD students at Stanford recently wanted to take AI/ML research ideas generated by LLMs like this and have teams of engineers execute on them at a hackathon. We were getting things prepared at AGI House SF to host the hackathon with them when we learned that the study did not pass ethical review.
I think automating science is an important research direction nonetheless.
by UncleOxidant on 4/16/24, 5:07 AM
The ideas aren't the hard part.
by SubiculumCode on 4/16/24, 5:47 AM
In some fields of research, the amount of literature out there is stupendous, with little hope of any human reading, much less understanding, all of it.
It's becoming a major problem in some fields, and I think approaches that can combine knowledge algorithmically are needed, perhaps using LLMs. Something like the sketch below.
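As a loose illustration of what "combining knowledge algorithmically" might look like, here's a minimal sketch that clusters paper abstracts and asks an LLM to synthesize each cluster. The embed() and llm() hooks are hypothetical placeholders for whatever embedding model and LLM API you have on hand, not any particular library:

    # Hypothetical sketch: cluster abstracts by embedding similarity,
    # then ask an LLM to synthesize each cluster into a mini-review.
    import numpy as np
    from sklearn.cluster import KMeans

    def synthesize_literature(abstracts, embed, llm, n_topics=10):
        # One embedding vector per abstract (embed() is a placeholder).
        vecs = np.array([embed(a) for a in abstracts])
        labels = KMeans(n_clusters=n_topics, n_init=10).fit_predict(vecs)
        reviews = []
        for k in range(n_topics):
            group = [a for a, lab in zip(abstracts, labels) if lab == k]
            prompt = ("Summarize the common findings and open questions "
                      "in these abstracts:\n\n" + "\n---\n".join(group))
            reviews.append(llm(prompt))  # llm() is a placeholder too
        return reviews

No claim this scales to a whole field, but it's the basic shape: structure the corpus mechanically, then use the LLM for the synthesis step a human can't do at that volume.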
by deegles on 4/16/24, 5:03 PM
It would be fun to pair this with an automated lab that could run experiments and feed the results into generating the next set of ideas.
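The loop itself is simple to state. A minimal sketch, assuming hypothetical propose_ideas() (an LLM conditioned on past results) and run_experiment() (the automated lab) hooks:

    # Hypothetical closed loop: propose -> run -> observe -> refine.
    def research_loop(propose_ideas, run_experiment, n_rounds=5):
        history = []  # (idea, result) pairs accumulated so far
        for _ in range(n_rounds):
            ideas = propose_ideas(history)     # LLM sees all prior results
            for idea in ideas:
                result = run_experiment(idea)  # lab executes the protocol
                history.append((idea, result))
        return history

The hard parts are all hidden inside those two callables, of course, but the feedback structure is just this.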
by not-chatgpt on 4/16/24, 1:00 AM
Cool idea. Never gonna work. LLMs are still generative models that spit out training data, incapable of highly abstract creative tasks like research.
I still remember all the GPT-2-based startup idea generators that spat out pseudo-feasible startups.