by benkaiser on 12/29/24, 3:44 AM with 153 comments
by szvsw on 12/29/24, 5:41 AM
If there’s some platonic notion of divinity or immanence that all faith is just a downward projection from, it seems like its statistical representation in tokenized embedding vectors is about as close as you could get to understanding it holistically across theological boundaries.
All kidding aside, whether you are looking at Markov chain n-gram babble or high-temperature LLM inference, the strange things that emerge are a wonderful form of glossolalia that, in my opinion, speaks to some strange essence embedded in the collective space created by the sum of their corpora. The Delphic oracle is real, and you can subscribe for a low fee of $20/month!
by gwd on 12/29/24, 11:54 PM
* Using a system I developed myself; currently in open development: https://www.laleolanguage.com
by nickpsecurity on 12/29/24, 4:01 AM
I believe this happens because the verses and verse-specific commentary are abundant in the pre-training sources they used. Whereas, if one asks a highly interpretive question, it starts re-hashing other patterns in its training data which are un-Biblical. When I asked about intelligent design, it got super hostile, trying to beat me into submission to its materialistic worldview in every paragraph.
So, they have their uses. I've often pushed for a large model trained on Project Gutenberg, so we'd have a 100% legal model for research and personal use. A side benefit of such a scheme is that Gutenberg has both Bibles and good commentaries, which trainers could repeat for memorization. One could then add licensed Christian works on a variety of topics to a derived model to make a Christian assistant AI.
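For illustration, a toy sketch of what that repetition could look like when assembling the training file list (the file names and repeat counts are hypothetical):

    # Toy sketch: oversample canonical texts in a pretraining corpus so the
    # model sees them often enough to memorize them verbatim.
    # File names and repeat counts below are hypothetical.
    corpus_files = []
    for path, repeats in [
        ("gutenberg/kjv_bible.txt", 10),                # repeated for memorization
        ("gutenberg/matthew_henry_commentary.txt", 5),
        ("gutenberg/everything_else.txt", 1),
    ]:
        corpus_files.extend([path] * repeats)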
by cowmix on 12/30/24, 2:06 AM
It's a great, fun test.
by danpalmer on 12/30/24, 3:18 AM
This is playing against their strengths. By all means ask them for a summary, or some analysis, or textual comparison, but please, please stop treating LLMs as databases.
by asimpleusecase on 12/29/24, 6:32 AM
by ks2048 on 12/29/24, 7:09 AM
Does anyone know any more thorough papers on this topic? For example, this could be tested on every verse in the Bible and on lots of other text that is certainly in the training data: books in Project Gutenberg, Wikipedia articles, etc. (a toy harness is sketched below).
Edit: this (and its references) looks like a good place to start: https://arxiv.org/abs/2407.17817v1
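A minimal sketch of such a verbatim-recall test, assuming a generate function that wraps whatever model is being probed (the single passage here is a stand-in for a full corpus):

    # Toy harness: prompt with the first half of each passage and check
    # whether the model reproduces the start of the second half verbatim.
    # `generate` is a stand-in for an actual LLM call.
    def recall_rate(passages, generate):
        hits = 0
        for ref, text in passages.items():
            words = text.split()
            prompt = " ".join(words[: len(words) // 2])
            expected = " ".join(words[len(words) // 2 :])
            hits += generate(prompt).strip().startswith(expected[:40])
        return hits / len(passages)

    passages = {"Genesis 1:1": "In the beginning God created the heaven and the earth."}
    print(recall_rate(passages, generate=lambda p: " the heaven and the earth."))  # 1.0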
by jsenn on 12/30/24, 12:35 AM
Does verbatim completion of a Bible passage look different from generation of a novel sequence in interesting ways? How many sequences of this length do they memorize? Do the memorized ones roughly correspond to things humans would find important enough to memorize, or do LLMs memorize just as much SEO garbage as they do Bible passages?
by waynecochran on 12/29/24, 8:41 PM
by asim on 12/29/24, 6:22 AM
What I'm working on: https://reminder.dev
by avree on 12/29/24, 8:57 PM
by kittikitti on 12/29/24, 6:26 AM
by evanjrowley on 12/29/24, 9:41 PM
by jccalhoun on 12/30/24, 3:29 AM
Then last night I saw a video about the Parker Solar Probe and how, at 350,000 mph, it was the fastest-moving man-made object. So I asked ChatGPT how long at that speed it would take to get to Alpha Centauri, which is 4.37 light years away. It said it would take 59.8 million years. I knew that was way too long, so I had it convert mph to miles per year, and then it was able to give me the correct answer of 6,817 years.
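For what it's worth, a quick sanity check of the arithmetic: 4.37 light years at 350,000 mph works out to roughly 8,400 years, while the ~6,817-year figure matches the probe's record speed of about 430,000 mph.

    # Back-of-the-envelope travel time to Alpha Centauri.
    MILES_PER_LIGHT_YEAR = 5.879e12
    HOURS_PER_YEAR = 365.25 * 24

    def travel_years(distance_ly, speed_mph):
        return distance_ly * MILES_PER_LIGHT_YEAR / (speed_mph * HOURS_PER_YEAR)

    print(round(travel_years(4.37, 350_000)))  # ~8373 years
    print(round(travel_years(4.37, 430_000)))  # ~6816 years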
by efitz on 12/30/24, 4:28 AM
I think the experiment of using the LLM to recall described verses (e.g. "what's the verse where Jesus did X") is a much more interesting use. I think also that the LLM could be handy as, or to construct, a concordance (a toy version is sketched below). But I'd just use a document or database if I wanted to look up specific verses.
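As a sketch of that non-LLM route: a concordance is just an inverted index from words to verse references (the verse data here is illustrative):

    # Toy concordance: map each word to the set of verses containing it.
    from collections import defaultdict

    def build_concordance(verses):
        concordance = defaultdict(set)
        for ref, text in verses.items():
            for word in text.lower().split():
                concordance[word.strip(".,;:!?")].add(ref)
        return concordance

    verses = {
        "John 11:35": "Jesus wept.",
        "Psalm 23:1": "The Lord is my shepherd; I shall not want.",
    }
    print(build_concordance(verses)["wept"])  # {'John 11:35'}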
by ChuckMcM on 12/30/24, 12:24 AM
by michaelsbradley on 12/29/24, 5:49 AM
Can you provide short excerpts from works in Latin and Greek written between 600 and 1300 that demonstrate the evolution over those centuries specifically of literary references to Jesus' miracle of the loaves and fishes?
https://chatgpt.com/share/675858d5-e584-8011-a4e9-2c9d2df783...
by orionblastar on 12/29/24, 6:04 AM
by cbg0 on 12/30/24, 6:25 AM
by pwinkeler on 12/30/24, 2:43 AM
by graemep on 12/30/24, 12:48 AM
by gerdesj on 12/30/24, 1:08 AM
Why do you put a weird computer model between you and, errr, your Faith? Do bear in mind that hallucinations might correspond to something demonic (just saying).
I'm a bit of a rubbish Christian but I know a synoptic gospel when I see it and can quote quite a lot of scripture. I am also an IT consultant.
What exactly is the point of Faith if you start typing questions into a ... computational model ... and trusting the outputs? Surely you should have a decent handle on the literature: it's just one big physical book these days - the Bible. Two Testaments, with a slack handful of books in each. I'm not sure exactly, but it looks about the same size as The Lord of the Rings.
I've just checked: Bible: 600k; LotR: 480k - so not too far off.
I get that you might want to ask "what if" types of questions about the scriptures but why would you ask a computer? Faith is not embedded in an Intel Core i7 or an Nvidia A100.
Faith is Faith. ChatGPT is odd.
by killermouse0 on 12/30/24, 9:42 AM
by Animats on 12/29/24, 8:52 PM
Did they try this on obscure Bible excerpts, or just ones likely to be well known and quoted elsewhere? Well-known quotes would be reinforced by all the copies.
by seanhunter on 12/30/24, 12:58 PM
But also, LLMs in general build a lossy compression of their training data, so they are not the right tool if you want completely accurate recall.
Will the recall be accurate enough for a particular task? Well, I'm not a religious person, so I have no framework to help decide that question in the context of the Bible. If you want a system to answer scripture questions, I would expect a far better approach than a bare LLM would be to build a RAG system, and to train the RAG embedding and search at the same time you train the model (the retrieval step is sketched below).
[1] https://en.wikipedia.org/wiki/Betteridge%27s_law_of_headline...
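For concreteness, a minimal sketch of the retrieval half of such a system, with toy bag-of-words cosine similarity standing in for trained embeddings (the verse data is illustrative):

    # Toy RAG retrieval step: find the stored verse most similar to the
    # query and hand its exact text to the LLM, so the model can quote
    # the source verbatim instead of reconstructing it from lossy weights.
    import math
    from collections import Counter

    VERSES = {
        "Genesis 1:1": "In the beginning God created the heaven and the earth.",
        "Psalm 23:1": "The Lord is my shepherd; I shall not want.",
        "John 3:16": "For God so loved the world, that he gave his only begotten Son.",
    }

    def vec(text):
        return Counter(text.lower().split())

    def cosine(a, b):
        dot = sum(a[t] * b[t] for t in a)
        norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    def retrieve(query):
        q = vec(query)
        return max(VERSES.items(), key=lambda kv: cosine(q, vec(kv[1])))

    print(retrieve("who created the heaven and the earth"))  # ('Genesis 1:1', ...)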
by weMadeThat on 12/30/24, 3:03 PM
I got exiled into an isolated copy of an AI-populated internet once and they put perfectly accurate Bible quotes into dictionaries!
by ddtaylor on 12/29/24, 7:07 AM
I never tell other people what to believe or how they should do that in any capacity.
With that said, I find the hallucination component here fascinating. From my perspective, everyone who interprets various religious texts does so differently, and usually that involves varying levels of fabrication, or something that looks a lot like it. I'm speaking about "talking in tongues" and other such methods here. I'm not trying to lump all religions into the same bag, but I have seen that a lot have different ways of "receiving" communication or directive. To me this seems pretty consistent with the colloquial idea of a hallucination.
by dudeinjapan on 12/30/24, 12:10 AM
by eddiewithzato on 12/30/24, 12:15 AM
by sneak on 12/29/24, 6:38 AM
I experience the exact same problem with human beings.
> , which we regard as the inspired Word of God.
QED
by MrQuincle on 12/29/24, 8:27 PM
Interesting. In my very religious upbringing I wasn't allowed to read fairy tales, the danger being that one might not be able to classify which stories truly happened and which ones didn't.
Might be an interesting variant on the Turing test. Can you make the AI believe in your religion? Probably there's a sci-fi book written about it.