by lycopodiopsida on 2/15/25, 8:15 AM with 225 comments
by nabla9 on 2/15/25, 9:12 AM
Chatbot LLMs don't have metacognition. They don't know that they don't know. If you peek inside an LLM, the internal process does seem different for things it doesn't know. It just can't express that, because it is trained to produce a plausible-sounding output instead of an accurate one.
They have potential as knowledge databases, but someone must figure out how to get "I don't know" information out of them.
by becquerel on 2/15/25, 9:03 AM
by nrvn on 2/15/25, 10:12 AM
I never ask them anything that requires rigorous research and deep knowledge of subject matter.
But stuff like “create a script in python to do X and Y” or “how to do XY in bash”, combined with “make it better”, produces good, working results about 95% of the time and saves me a lot of time! No more googling for ad-hoc scripting. It is like having a junior dev by your side 24/7: eager to pick up any task you throw at them, stupid and overconfident, never reviewing their own work. But “make it better” actually makes things better at least once.
by spiderfarmer on 2/15/25, 9:08 AM
by nbuujocjut on 2/15/25, 9:04 AM
> Michael P. Taylor reassigned Brachiosaurus brancai to the new genus Giraffatitan in 2009. The species became Giraffatitan brancai based on significant anatomical differences from the type species Brachiosaurus altithorax.
> Given that this is quite specific paleontological taxonomy information, I should note that while I aim to be accurate, I may hallucinate details for such specialized questions. You may want to verify this information independently.
by acka on 2/15/25, 9:20 AM
The species Brachiosaurus brancai was reassigned to its own genus, Giraffatitan brancai, by paleontologist George Olshevsky in 1991. Olshevsky proposed that Brachiosaurus brancai, which was originally described by Werner Janensch in 1914, was distinct enough from Brachiosaurus altithorax (the type species of Brachiosaurus) to warrant its own genus. Subsequent studies, particularly by Michael Taylor in 2009, provided further anatomical evidence supporting this distinction.
by simonbarker87 on 2/15/25, 9:11 AM
Beyond that I don’t trust them at all.
by -__---____-ZXyw on 2/15/25, 10:51 AM
I'm quite excited about many of the specific use cases for LLMs, and have worked a few things into my own methods of doing things. It's a quick and convenient way to do lots of actual specific things.
For example: if I want to reflect on different ways to approach a (simple) maths problem, or what sort of intuitions lie behind an equation, it is helpful to have a tool that can sift through the many snippets of text out there that touch on that and similar problems, and present me with readable sentences summing up some of those snippets from all those places. You have to be very wary, as highlighted by the article, but as "dumb summarisers" that save you trawling through several blogs, they can be quicker to use.
Nonetheless, equating this with "reasoning" and "intelligence" is only possible for a field of academics and professionals who are very poorly versed in the humanities.
I understand that tech is quite an insular bubble, and that it feels like "the only game in town" to many of its practitioners. But I must admit that I think it's very possible that the levels of madness we're witnessing here from the true believers will be viewed with even more disdain than "blockchain" is viewed now, after the dust has settled years later.
Blockchain claimed it was going to revolutionise finance, and thereby upend the relationship between individuals and states.
AI people claim they're going to revolutionise biology, and life itself, and introduce superintelligences that will inevitably alter the universe itself in a way we've no control over.
The danger isn't "AI", the danger is the myopia of the tech industry at large, and its pharaonic figureheads, who continue to feed the general public - and particularly the tech crowd - sci-fi fairytales, as they vie for power.
by tgsovlerkhgsel on 2/15/25, 9:11 AM
Or preferably, five minutes to understand how it works and what it can and cannot do, then five minutes to ask it something actually suitable.
"AI" (LLMs) are currently good at:
- language understanding, i.e. understanding and processing text you provide it. For example, taking a wall of text and answering questions about points mentioned there, or general sentiment, or extracting data from it etc.
- somewhat general knowledge, i.e. stuff that was sufficiently frequently represented in the training data
Absent additional tricks, "AI" is really bad at obscure knowledge or complex, multi-step thinking. We are slowly getting there, but we aren't there yet. The multi-step part is not something the LLMs themselves do; rather, the wrappers around them give the model tools to fetch additional information, first prompting the model to select a tool, then re-prompting it with the tool's output.
A good rule of thumb is that if an average well-educated, intelligent person could answer it without further research, an LLM will probably be able to. I'd even say that if an average fresh-out-of-school graduate of the corresponding discipline can answer it quickly (without further research or sitting down for ten minutes to do the math), there's a good chance AI will be able to answer it. But it might also get it horribly wrong, and you will have a hard time distinguishing between those cases if you have no knowledge in the field.
As the author mentions at the very end of the article, the hallucination problem also means that the best kind of tasks are where you can quickly verify whether the response was useful. A system that produces misleading responses 50% of the time is useless if you can't distinguish them, but very useful if in those 50% it saves you ten minutes of work and in the other 50% you lose a minute by trying.
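That last point is just an expected-value calculation. A quick back-of-the-envelope sketch, using the comment's own illustrative numbers (10 minutes saved on a good answer, 1 minute lost checking a bad one):

```python
# Expected minutes saved per query for a tool that is useful half the time,
# assuming you can cheaply verify each response. The figures are the
# illustrative numbers from the comment above, not measurements.
p_useful = 0.5
minutes_saved_when_useful = 10   # work the good response does for you
minutes_lost_when_useless = 1    # cost of checking and discarding a bad answer

expected_gain = (p_useful * minutes_saved_when_useful
                 - (1 - p_useful) * minutes_lost_when_useless)
print(expected_gain)  # 4.5 minutes saved per query, on average
```

The asymmetry is the whole argument: as long as verification is much cheaper than the work itself, even a coin-flip success rate is a net win.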
by Janicc on 2/15/25, 9:06 AM
by tmnvdb on 2/15/25, 12:09 PM
Both 4o with internet and o3 without internet gave the right answer.
Somebody in the comments did a deep research run which produced an extremely detailed history of the naming.
Of course reliability is not a solved problem in general but people here would do well to check both blog posts and generative model output for factuality and relevance.
by llmfan on 2/15/25, 9:49 AM
DeepSeek R1 gets it right:
> The species formerly known as Brachiosaurus brancai was reassigned to its own genus, *Giraffatitan*, through a series of taxonomic revisions. The key steps were:
> 1. *Initial Proposal (1988):* Gregory S. Paul first suggested separating B. brancai from Brachiosaurus in 1988, proposing the informal name "Giraffatitan" as a subgenus. However, this was not widely accepted at the time.
> 2. *Formal Reclassification (2009):* Paleontologist *Michael P. Taylor* conducted a detailed comparative study of Brachiosaurus specimens. In his 2009 paper, he formally elevated B. brancai to the full genus *Giraffatitan brancai*, citing significant anatomical differences (e.g., limb proportions, vertebral structures) from the North American Brachiosaurus altithorax.
> Thus, *Michael P. Taylor* is credited with the formal reassignment of the species to the genus Giraffatitan in *2009*.
by mdp2021 on 2/15/25, 11:05 AM
"What's the value of quantities Q0 and Q1?" // "[Bot:] Q0 is 1.5 u/w, or 0.67 u per each w. Q1 is unknown." // "Can you see the error in your previous?" // "[Bot:] Yes, I can re-read my outputs! Q0 is (Q0 is 1.5 u/w (0.67w/u), Q1 is 2.0 u/w". // "How do you now know Q1?" // "[Bot:] I don't, I'm sorry."
The issues should be clear. Given the state of the population (e.g. lawyers that delegate research to unreliable sources), we have to reiterate the warning. Given that even in this audience we had voices claiming that the problem of intelligence was solved (how did it happen?), we sometimes reiterate it even here. But we should focus on the solutions, as the problem should be clear...
by jdthedisciple on 2/15/25, 11:52 AM
And even worse, judge its performance based off that?
When someone does that, I immediately know they have not the slightest clue about the inner workings of any deep learning model, let alone the basics of statistics.
by janikvonrotz on 2/15/25, 11:36 AM
LLMs are non-factual no matter what. Calling it Scholar GPT does not make it any different.
by someothherguyy on 2/15/25, 9:04 AM
by 2-3-7-43-1807 on 2/15/25, 10:43 AM
which chatgpt?
and feeding the initial question to perplexity.ai yields a result that is practically identical to the perfect answer.
of course, the author has a point. as many here already stated, it isn't even worth talking about because it is so obvious. then again, as many correctly notice, llms are sold with this promise.
practically, the subject is way too niche - i'd assume - and probably there are also conflicting statements on the interweb which the model was trained on.
having said that - perplexity got it right with web-access turned on.
by proc0 on 2/15/25, 9:52 AM
The promise is that LLMs are a form of intelligence, but it is only a complicated statistical approximation, which happens to correlate with correctly answering queries that are contained in its training dataset.
My speculation is that a true AI (in the sense that it has some form of intelligence) would show signs of intelligence even at very small scales. LLMs are completely useless at really small scales, but even small creatures like rats or even insects display intelligence in proportion to their scale.
I think these tools will help automate information processing of all kinds, but they are by no means intelligent, and we will not be able to rely on them as if they were: we'll still need to include verification at every level, similar to how self-driving cars still need a human to pay attention. Useful, sure, but it falls short of the promise that it will replace humans because it can "think". We're not there yet from a theoretical standpoint.
by PaulRobinson on 2/15/25, 10:26 AM
Things LLMs are good at include summarisation and contextualisation.
They can use that facility to help summarise processes and steps to get something done, if they've been trained on lots of descriptions of how to do that thing.
What they're not good at is perfect recall without being nudged. This example would have been very different if the LLM had been able to run RAG (or GraphRAG) against a local data source on palaeontology.
I think we're going to see an evolution where search companies can hook up an LLM to a [Graph]RAG optimised search index, and you'll see an improved response to general knowledge questions like this. In fact, I'd be surprised if this isn't happening already.
LLMs on their own are a lossy compression of training material that allow a stochastic parrot to, well, parrot things stochastically. RAG methods allow more deterministic retrieval, which when combined with language contextualisation can lead to the kinds of results the author is looking for, IME.
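The retrieval half of that pipeline can be sketched in a few lines. Everything here is a toy stand-in: the corpus is three made-up sentences, the scorer is crude word overlap rather than a real embedding index, and no actual model is called.

```python
# Toy sketch of the RAG idea: retrieve relevant passages from a local corpus
# first, then hand them to the model as grounding context. The corpus and the
# scoring function are placeholders, not any real retrieval API.
def score(query, passage):
    """Crude relevance score: number of shared lowercase words."""
    return len(set(query.lower().split()) & set(passage.lower().split()))

def retrieve(query, corpus, k=2):
    """Return the k passages with the highest overlap score."""
    return sorted(corpus, key=lambda p: score(query, p), reverse=True)[:k]

def build_prompt(query, corpus):
    """Assemble a prompt that asks the model to answer from context only."""
    context = "\n".join(retrieve(query, corpus))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

corpus = [
    "Giraffatitan brancai was formally separated from Brachiosaurus by Taylor in 2009.",
    "Brachiosaurus altithorax is the type species of Brachiosaurus.",
    "Leptosporangiate ferns are a class of ferns.",
]
prompt = build_prompt("Who separated Giraffatitan from Brachiosaurus?", corpus)
print(prompt)
```

The determinism lives in the retrieval step: the passage about ferns never reaches the model, so the "stochastic parrot" is at least parroting from the right page.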
by itvision on 2/15/25, 9:37 AM
> ChatGPT said:
> The species Brachiosaurus brancai was reassigned to its own genus, Giraffatitan brancai, by paleontologist George Olshevsky in 1991. This reclassification was later supported by a detailed study by Michael P. Taylor in 2009, which reinforced the distinction between Brachiosaurus and Giraffatitan based on anatomical differences.
My ChatGPT has just given a correct answer. What am I doing wrong?
by antirez on 2/15/25, 9:25 AM
by gizmo on 2/15/25, 9:54 AM
For any question that requires research (or deep expertise in a specific field) you need to use either a research model (that can reason and look things up in external knowledge bases) or you need a model that is trained on the kind of questions that you want to ask it so that it retains that data.
by lbill on 2/15/25, 2:26 PM
For tasks that are trivial to verify, it's ok: a code compiler will run the code written by a LLM. Or: ask a LLM to help you during the examples mapping phase of BDD, and you'll quickly be able to tell what's good and what isn't.
But for the following tasks, there is a risk:
- Ask an LLM to summarise an email you didn't read. You can't trust the result.
- You're a car mechanic. You dump your thoughts into a voice recorder and use AI to turn them into a structured textual report. You'd better triple-check the output!
- You're a medical doctor attempting the same trick: you'd have to be extra careful with the result!
And don't count on software testing to make an AI tool robust: LLMs are non-deterministic.
by blu_ on 2/15/25, 9:01 AM
by greatgib on 2/15/25, 10:04 AM
When in fact LLMs are good for some tasks, but not for general knowledge. The cutoff date is a strong example of that. The knowledge itself is just a base that gives them the ability to perform tasks.
In addition, for the example in this article, the person is testing the LLM on a general-purpose chat service without touching settings that might matter, like the temperature. If you want the closest thing to an exact reply, you should set the temperature to 0, but general-public tools use a moderate temperature so the model can be creative when needed.
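Temperature is just a rescaling of the model's output distribution before sampling. A minimal sketch with made-up logits shows why temperature 0 collapses to a deterministic answer while higher values let the model wander:

```python
import math
import random

def sample(logits, temperature, rng):
    """Sample a token index from softmax(logits / temperature).
    At temperature 0 this degenerates to argmax, i.e. a deterministic pick."""
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    weights = [math.exp(s - m) for s in scaled]  # subtract max for stability
    r = rng.random() * sum(weights)
    for i, w in enumerate(weights):
        r -= w
        if r <= 0:
            return i
    return len(logits) - 1

logits = [2.0, 1.5, 0.3]  # made-up scores for three candidate tokens
rng = random.Random(0)
greedy = [sample(logits, 0, rng) for _ in range(5)]      # always token 0
creative = [sample(logits, 1.5, rng) for _ in range(5)]  # may vary per draw
print(greedy, creative)
```

This is the mechanism, not any particular vendor's API; chat frontends typically just fix this knob at a nonzero value and don't expose it.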
by jefffoster on 2/15/25, 9:23 AM
by jeffreygoesto on 2/15/25, 9:00 AM
by hbarka on 2/15/25, 9:13 AM
1. Cats are transactional
2. Dogs are transactional
3. Cats are not transactional
4. Dogs are not transactional
It agreed on all occasions. Language is agreeable.
by firesteelrain on 2/15/25, 9:10 AM
It still fails by being confidently wrong.
The brief article hits at people trusting the tool without questioning the output.
Meanwhile, we have people using Codeium or Copilot to write code and that sort of works since the code eventually needs to be compiled and tested (unit, integration, system, requirement sell off)
There is no test for the truth available to everyone else.
by xianshou on 2/15/25, 1:05 PM
Who reassigned the species Brachiosaurus brancai to its own genus, and when?
---
Here is the transcription of the text from the image:
Reasoned for 8 seconds ▼
The user is asking about the reclassification of Brachiosaurus brancai to its own genus. Originally, Janensch described it as Brachiosaurus brancai in 1914. However, in 2009, Michael P. Taylor and Matt Wedel reassigned it to Giraffatitan brancai. Their paper, titled “Brachiosaurus brancai and the early evolution of titanosauriform sauropods,” argued for this change, distinguishing Giraffatitan brancai from Brachiosaurus. I’ll check the exact references to confirm the specifics.
It seems “Giraffatitan” was first proposed by Gregory S. Paul in 1988, when he reassigned Brachiosaurus brancai to its own genus. Although some confusion exists, it’s generally accepted that Paul was the main figure behind this change. Other references, such as a 2007 paper by Sereno and a 2009 paper by Taylor and colleagues, also discuss this reclassification. The 1988 reshuffling is usually seen as the turning point for Brachiosaurus brancai, although later research helped solidify this decision.
The species was reassigned to its own genus—Giraffatitan—by paleontologist Gregory S. Paul in 1988.
---
That looks pretty good to me!
by tim333 on 2/15/25, 1:10 PM
That kind of assumes AI is bad, is being hyped, and will all crash. But AI performance is not constant: as many commenters here have noted, the article's example from a year ago works fine now, and in the coming years AI will keep improving.
by torvald on 2/15/25, 9:15 AM
I recall my AI professor at uni telling us during the first lecture that «Artificial intelligence is this target that, once we get there, is no longer artificial intelligence; it is just an algorithm» – and this still feels like the case.
by zenon on 2/15/25, 9:25 AM
They crouch under your desk as a man in a grey suit and spectacles enters and pulls up a chair in front of you. He peers over his glasses at you, and asks, who classified the leptosporangiate ferns, and when was it done? The what now?
I'm happy you asked such an excellent question, you say. It was Michael Jackson, in 1776.
A sneer flicks over the man's upper lip, He jerks upright, takes a step back from you. This man, he declares with disgust, is not intelligent!
by jhanschoo on 2/15/25, 1:43 PM
by dataviz1000 on 2/15/25, 9:30 AM
[0] https://chatgpt.com/share/67b05e3b-eea8-8004-8dab-806ee8fa59...
by 1vuio0pswjnm7 on 2/15/25, 4:11 PM
Over its short history so far we have learned that Silicon Valley's only viable "business model" is data collection, surveillance and online ad services. "AI", i.e., next generation autocomplete, can work for this in the same way that a "web browser" or a "search engine" did. In the end, no one pays for a license to use it. But it serves a middleman surveillance "business model" that solicits ad spend and operates in secrecy. When this "business model" falters, for example because computer use and ad spend stagnates or shrinks, then Silicon Valley's human workers are not "needed". Large numbers of these human workers are paid from investment capital or ad spend, not from fees for services or the sale of products.
Perhaps the question is not whether "AI" can "replace" Silicon Valley's human workers. Perhaps the question is whether the online ads "industry" is sustainable.
by refulgentis on 2/15/25, 9:18 AM
I think the author's lens is interesting; I call it "information resolution." Orion is the first model that understands the style of short story written by famously-Radiohead-affiliated artist Stanley Donwood. It also knows the "codename" for Material You, i.e. a random name for the library I was prototyping. It's much finer-grained than existing models.
* The species Brachiosaurus brancai was reassigned to its own genus, Giraffatitan brancai, by George Olshevsky in 1991. This reclassification was later supported by further studies, notably by Michael P. Taylor in 2009, which reinforced the distinct differences between Giraffatitan and Brachiosaurus altithorax.
by gwd on 2/15/25, 10:43 AM
- Brachiosaurus brancai (species) -> Brachiosaurus (Giraffatitan) brancai (sub-genus) in 1988
- Brachiosaurus (Giraffatitan) brancai (sub-genus) -> G. brancai (genus) in 2009.
So if "Brachiosaurus brancai" as a species was never renamed into a genus, which answer did he want -- when it was renamed out of being only a species, or when it was renamed into being its own genus?
Claude's response:
> Brachiosaurus brancai was reassigned to the new genus Giraffatitan by paleontologist Gregory S. Paul in 1988. The species then became known as Giraffatitan brancai.
> Since this is quite specific paleontological information, I should note that while I aim to be accurate, I may hallucinate some details when discussing such specialized topics. I recommend double-checking these details against reliable paleontological sources.
So:
1. Nearly correct -- it was assigned to a new sub-genus in 1988, and did eventually become known as the genus Giraffatitan brancai.
2. It recognized that this was an area where its knowledge would be weak, and asked me to double check, which was his main complaint about ChatGPT.
I asked it if there was any ambiguity in the question, and it said:
> Yes, there is ambiguity. The question asks about the reassignment of Brachiosaurus brancai as if there was only one such event, but there were actually two significant taxonomic changes:
> In 1988, Gregory S. Paul informally suggested moving B. brancai to its own genus, Giraffatitan In 2009, Michael P. Taylor formally made this generic distinction by properly establishing Giraffatitan as a distinct genus from Brachiosaurus
> My previous answer was incomplete as it only mentioned Paul's 1988 informal suggestion, not Taylor's 2009 formal reassignment. Taylor's work was the official taxonomic change that established Giraffatitan brancai as the valid name
I think Claude did about as well as a random human would have done who'd studied this 10 years ago and been asked a question with false assumptions embedded.
Claude and ChatGPT aren't authorities on every subject. They're that guy at the office who seems to know a bit about everything, and can point you in the right direction when you basically don't have a clue.
by thenano2 on 2/15/25, 9:17 AM
by continuational on 2/15/25, 9:04 AM
by mseepgood on 2/15/25, 11:40 AM
by alecco on 2/15/25, 9:03 AM
by gillesjacobs on 2/15/25, 10:37 AM
The species was first split at the subgeneric level by Gregory S. Paul in 1988—he proposed the name Brachiosaurus (Giraffatitan) brancai. Then in 1991 George Olshevsky raised the subgenus Giraffatitan to full generic status, so that B. brancai became Giraffatitan brancai. Later, a 2009 study by Michael P. Taylor provided detailed evidence supporting this separation.
I guess Mike Taylor will gracefully cede his point now? It is very funny to me that someone would feel the need to complain about a niche factual error in pretrained LLMs without even enabling RAG. If you know even the basics of this field, you shouldn't be surprised.
Of course this was probably more about ego stroking his paleontological achievement than a thoughtful evaluation of the current state of LLMs.
by _giorgio_ on 2/15/25, 9:27 AM
Learn to understand its limitations and make the best use of it. It is naturally confused by lesser-known facts; that's something you can't ignore even if you interpret AI as a tool that compresses knowledge.
If you don't understand that, you're the tool.
by anonzzzies on 2/15/25, 9:05 AM
by kolinko on 2/15/25, 1:00 PM
by throw098320923 on 2/15/25, 11:03 AM
Remember how authorities "hallucinated" that some medication is 100% effective and with 0% side effects?! Despite scientific opposition, and warning leaflet included WITH the medication? And how you would go to jail, for ignoring such hallucinations!?
by squarefoot on 2/15/25, 9:05 AM
by anonu on 2/15/25, 10:26 AM
by pk-protect-ai on 2/15/25, 9:17 AM
by yapyap on 2/15/25, 10:06 AM
by scotty79 on 2/15/25, 11:13 AM
If you asked a human this question they'd respond with "let me find out and get back to you", educate themselves by accessing various knowledge repositories and then formulate the answer.
And some AIs are possibly doing that even today, just not the ones freely accessible to everybody.
Maybe this one?
https://blog.google/products/gemini/google-gemini-deep-resea...
by nbzso on 2/15/25, 9:46 AM
A pipe dream ideal for technocratic mafia (In 1932, Scott and fellow technocrat Walter Rautenstrauch formed the "Committee on Technocracy" at Columbia University) of Bilderburgs members, Peter Thiel, Eric Schmidt and their pupils like Musk and co. Now they have a carte blanche for AI governance and regulatory capture. Starlink, Starshield (Skynet), Stargate, Closed AI, Grok.
Married to the masters of the old system, they will deliver "efficiency" on a grand scale. Transhumanism is the "inevitable" future of destruction of human dignity, value and psyche.
And all of this through psyops for a text calculator with stochastic parrot tendencies.
A calculator is a useful tool, but not intelligence in any form.
Machine learning is real, and the biggest danger is the tendency of a hyper class of billionaires to apply surveillance and governance automation.
Ask Larry Ellison about his dream of citizens with the best behaviour. Not so different from China's social scoring system.
by tompagenet2 on 2/15/25, 9:23 AM
---
The reassignment of Brachiosaurus brancai to its own genus occurred through a series of steps by different paleontologists:
## Initial Classification In 1914, Werner Janensch first described the African species as Brachiosaurus brancai, considering it part of the existing Brachiosaurus genus[4].
## Path to Reclassification Gregory S. Paul made the first step toward separation in 1988 by proposing a subgenus classification, Brachiosaurus (Giraffatitan) brancai, after noticing significant skeletal differences[4][10].
George Olshevsky then elevated Giraffatitan to full genus status in 1991, though this change was largely disregarded by the scientific community at the time[4][10].
## Formal Establishment The definitive separation came in 2009 when Michael P. Taylor published a detailed study demonstrating 26 distinct osteological differences between the African and American species. This comprehensive analysis firmly established Giraffatitan as a separate genus, creating the combination Giraffatitan brancai[3][4]. From 2012 onward, most researchers have accepted this classification[10].
Citations: [1] https://svpow.com [2] https://svpow.com/2009/09/09/brachiosaurus-brancai-is-not-br... [3] https://www.miketaylor.org.uk/dino/pubs/taylor2009/Taylor200... [4] https://en.wikipedia.org/wiki/Giraffatitan [5] https://www.nhm.ac.uk/discover/dino-directory/giraffatitan.h... [6] https://svpow.com/2025/02/14/if-you-believe-in-artificial-in... [7] https://www.jurassic-pedia.com/brachiosaurus-altithorax-sf/ [8] https://blog.everythingdinosaur.com/blog/_archives/2023/03/1... [9] https://dinomuseum.ca/2019/05/your-brachiosaurus-is-not-a-br... [10] https://en.wikipedia.org/wiki/Brachiosaurus [11] http://www.extinctblog.org/extinct/tag/Wilhelm+von+Branca [12] https://www.tandfonline.com/doi/abs/10.1671/039.029.0309 [13] https://www.smithsonianmag.com/science-nature/a-baby-brachio... [14] https://pmc.ncbi.nlm.nih.gov/articles/PMC3045712/ [15] https://boasblogs.org/dcntr/the-brachiosaurus-brancai-in-the... [16] https://www.museumfuernaturkunde.berlin/en/visit/exhibitions... [17] https://thepaintpaddock.wordpress.com/brachiosaurus-altithor... [18] https://www.researchgate.net/publication/242264129_A_ReEvalu... [19] https://www.tandfonline.com/doi/full/10.1080/02724634.2011.5... [20] https://www.app.pan.pl/archive/published/app68/app011052023.... [21] https://blog.everythingdinosaur.com/blog/_archives/2008/06/2... [22] https://en.wikipedia.org/wiki/Brachiosaurus
---
by kristopolous on 2/15/25, 9:16 AM
They each have "personality defects" and at least right now we're in a time of ensembles.
Ask qwen to do some kind of product comparison btw. It's impressive. The 02-05 Gemini is pretty impressive as well.
Expand beyond Claude and ChatGPT. There's some good stuff out there.
by GistNoesis on 2/15/25, 10:50 AM
Is it a primary source or a secondary source [1]? And how should it incorporate this new information?
In their quest to build a "truthful" knowledge base, LLMs implicitly incorporate facts they read in their training dataset into their model weights. Their weight-update mechanism lets them merge facts from different authorities together, compressing them rather than storing the same fact many times as a traditional database would.
This clustering of similar new information is the curse and the blessing of AI. It allows faster retrieval and a reduction in memory use. In explicit systems, this kind of update is usually done via Bayes' rule, in something called "belief propagation".
LLMs do this implicitly, and have not yet discovered that while belief propagation works most of the time, it is only guaranteed to work when the information graph has no more than one loop. Otherwise you get self-reinforcing behaviour, where one source cites another and gives it credit, which in turn lends credit back to the first source, reinforcing a false fact in much the same way that link farms promote junk sites.
If you repeat false information to an LLM many times, you can make it accept it as truth. It is very susceptible to basic propaganda. LLMs can act as a triple-store or a quad-store, depending on how and on what they are trained.
But LLMs can also incorporate an error-correction mechanism. In this article, the LLM tried twice to correct itself and failed, but the blog author published an article which will be incorporated into the training dataset, and the LLM will have another example of what it should have answered, provided the blog author is perceived as authoritative enough to be given credence.
This error-correction mechanism with a human in the loop can also be replaced by one that relies on self-consistency, where the LLM builds its own dataset, asks itself questions about the facts it knows, and tries to answer them from first principles.
For example, the LLM can use tools to retrieve the original papers, verify their time and date, and see who coined the term first and why. By reasoning, it can create a rich graph of interconnected facts and look for incoherence by querying itself. The richer the graph, the better information can flow along its edges.
Because LLMs are flexible, there is a difference between what they can do and what they do, depending on whether we trained them so that the behaviour we desire emerges. If we don't train them with a self-consistency objective, they will be prone to hallucinations. If we train them on human feedback preferences, we will have sycophant AIs. If we train them on "truth", we will have "know-it-all" AIs. If we train them on their own mirrors, we will have what we will have.
[1]https://www.wgu.edu/blog/what-difference-between-primary-sec...
by ciconia on 2/15/25, 9:28 AM
On a moral level, watching how tech bros are sucking it up to Trump/Musk and how their companies are betting all their chips on the AI roulette, it all seems related.
by pinoy420 on 2/15/25, 9:05 AM
That said, I do wonder if its corpus of training data contained that much information on your subject. It is rather niche, isn't it - compared to cooking recipes, or basic software development techniques of two years ago, or chemistry, maths and physics.
My friend is a leading research chemist, and he and one other person in China are each working on this one particular field individually, so there would be little information out there. I asked ChatGPT 4o to give an overview of what he was doing based on the little information I knew. He was astounded. It got it spot on. I asked it to elaborate and to come up with some new directions for research, and the ideas it spat out were those he had considered, and more.
by iamnotagenius on 2/15/25, 10:52 AM
by computerthings on 2/15/25, 10:10 AM
what's the best way to fire fast with a joystick that has no auto-fire function?
> 1. Tap the Button Rapidly
> 2. Use a “Double Tap” Technique
> 3. Use a “Flick” Motion
> 4. Joystick or Button Placement
> 5. Software/External Programs (If Playing on PC)
> 6. Practice
you missed the main one
> Ah, I see! The main technique I missed is "firing by holding the button down and rapidly alternating between pressing and releasing"—often referred to as "button mashing" or the "rapid toggle method." Here's how it works: Press the fire button and hold it down. Quickly release and press it again repeatedly.
I'll give you a hint, it involves making the elbow really stiff
> Ah, got it! You're talking about "elbow mashing"—a technique that involves using the elbow to press the joystick button. This method exploits your body mechanics for rapid, repetitive motion, leveraging the larger, stronger muscles of your arm to generate faster inputs.
------
Well, wow, that is stupid. It can't even say "I don't know", it can't even ask me! It's just text vomit. A dog that can tilt its head in confusion can do more.
I couldn't find the "official" name, if any, but here's a redditor from 11 years ago explaining what I was looking for:
> I have no idea how to explain this but I can do it, so I'm just going to wing an explanation and hope you understand/do it. Pretty much try and "lock up" your elbow. If you force this feeling hard enough your arm or hand will start shaking. From here add a tiny bit of forward force in your wrist so that your hand shakes up and down rather than side to side.
I learned it from an older kid when playing on the Amiga. Sure, nothing is "the best" way, but nothing else my body is capable of can click faster, and any "pro" would mention this before just hallucinating insight with great confidence.