from Hacker News

Knowing less about AI makes people more open to using it

by botanicals6 on 1/27/25, 1:54 AM with 82 comments

  • by paulgb on 1/27/25, 2:36 AM

    This seems to be largely predicated on the idea that the more you understand how AI works the more the magic disappears[1], but I think the opposite is true -- I like to think I understand this stuff from the logic gates to backpropagation to at least BERT-era attention, and the fact that all of those parts come together into something I now spend hours a day conversing with in plain English to solve real problems is absolutely awe-inspiring to me.

    [1] from the source article “we argue that consumers who perceive AI as magical will experience feelings of awe leading to greater AI receptivity”

  • by resonious on 1/27/25, 2:44 AM

    I think there are a lot of technological advancements that are easy to "not like" once you know a select few details about them, including literal sausage making, as another commenter here mentioned.

    T-shirts are great until you hear about the conditions of where they're made and how their disposal is managed. Social media is great until you realize how much they know about you and how they use that knowledge. Modern medicine is easy to not like when you look at the animal experiments that made it happen. And again, sausages - I know some vegetarian folks who are vegetarian in protest of how most meat is produced.

    I kind of wonder if there is a subset of comfortable modern society where every aspect is easily likable no matter how much you know about it. Bonus points if that society is environmentally sustainable.

  • by tracerbulletx on 1/27/25, 3:53 AM

    There's probably a reversal of that trend when you know a lot about it. I don't know how you could go through building your own GPT-2 and not see it as an incredible technology and a useful advancement that is going to give us an enormous number of insights and tools.

    I still can't get over how soon people just accepted that you can universally translate all languages, that audio can be accurately transcribed, that you can speak a sentence and get pretty good research on a topic. These things are all actually insane and everyone has just decided to ignore this and focus on the aesthetic grossness of some of the things people are trying to get them to do.

  • by ripped_britches on 1/27/25, 2:32 AM

    Or on the far end of the literacy spectrum, we have serious techno-optimists who understand the fullest potential of lifesaving research (AlphaFold, etc.)

  • by analog31 on 1/27/25, 3:56 AM

    I wonder if the "literacy" about AI is not so much about the inner workings of the technology, but about the broad ramifications of its use.

    It takes at least a wee bit of sophistication to go from "this is a neat search engine, and it can help me with my writing tasks" to "the stuff that's generated by this thing is going to affect society in unpredictable ways that are not necessarily positive."

    A similar spectrum of attitudes may already exist for social media, where the term "algorithm" predated the term "AI."

  • by ksenzee on 1/27/25, 2:29 AM

    > These insights pose a challenge for policymakers and educators. Efforts to boost AI literacy might unintentionally dampen people’s enthusiasm for using AI by making it seem less magical. This creates a tricky balance between helping people understand AI and keeping them open to its adoption. To make the most of AI’s potential, businesses, educators and policymakers need to strike this balance.

    Why would we avoid educating people, in order to keep them willing to use AI? Why is getting people to use AI seen as a good in itself? Did AI write this article? (Don’t answer that.)

  • by dns_snek on 1/28/25, 9:02 AM

    The thought that there's a magical machine that I can just ask for an answer and it's going to provide the correct one is absolutely thrilling, but then I remember that at least 1/3rd of LLM-provided answers in my own domains of knowledge turn out to be wrong upon closer inspection.

    I don't think the human brain has evolved to deal with a machine that promises to have all the answers, always speaks with an authoritative tone, and is trained to be agreeable. As a species we like shortcuts and we don't like to think critically; an easy-to-get answer is always going to be infinitely more appealing than a correct one, no matter how wrong it is.

    Right now there's a generation of kids growing up who believe that they don't have to learn anything because LLMs have all the answers. World leaders don't seem concerned about this, likely because a dumb population who doesn't know how to think critically is easy to control.

  • by dartos on 1/27/25, 2:07 AM

    It’s just like politics!

    The less you know about any politician, the more you like them!

  • by dukeofdoom on 1/27/25, 2:42 AM

    What I find fascinating is how much it feels like a conversation, but also like a conversation with yourself, since you're asking the questions and the AI echoes them back at you. So I guess one way it can be used is to figure out things about yourself, like a journal, except the AI helps you explore questions you might have.

    "The Socratic Method involves a shared dialogue between teacher and students. The teacher leads by posing thought-provoking questions. Students actively engage by asking questions of their own. The discussion goes back and forth."

    So AI can be an awesome learning tool, for anyone, at any level. Like many technologies, it is what you make it to be. Even things like Instagram can be educational places.

  • by jongjong on 1/27/25, 3:06 AM

    It's kind of weird how AI isn't yet about robots or reasoning but merely the last phase in the evolution of search engines. An LLM is essentially an extremely user-friendly search engine without the commercial/product aspect. It's an ideal information engine.

    It's understandable why LLMs pose a major threat to Google. Google search is essentially an information engine where the information is intermingled with junk commercial content.

    These days I only use Google for cases where I want to go to a website I already know but I'm too lazy to type in the URL or use a bookmark. The utility it provides now is merely a convenience.

  • by shlomo_z on 1/27/25, 3:38 AM

    Reminds me of the saying (internet meme):

    "Give a man a game and he'll have fun for a day. Teach a man to make games and he'll never have fun again"

  • by dartos on 1/27/25, 1:19 PM

    > balance between helping people understand AI and keeping them open to its adoption.

    Why is the latter a goal?

  • by oidar on 1/27/25, 2:29 AM

    The paper referenced is behind a paywall. This article is really hard to understand because the paper it is reporting on uses "AI literacy" as the determinant of "openness to AI". I'm very curious about what they mean by "AI literacy".

  • by floppiplopp on 1/27/25, 9:22 AM

    That's how scams work.

  • by paulcole on 1/27/25, 2:17 AM

    I don’t know how my iPad works but I like using it a lot. I have no clue how strawberries get from a field to my grocery store but I really like eating them.

    There’s a limit to how much I am willing to learn and there’s a heck of a lot of things I’ll just accept at face value because I believe they make my life better.

    So far AI makes my life better so I don’t particularly care to learn about it.