from Hacker News

Why AI Is a Philosophical Rupture

by daverol on 2/8/25, 8:55 AM with 13 comments

  • by firtoz on 2/8/25, 2:29 PM

    Human consciousness is largely composed of perceptual inputs, combined with some deeper internal reflections; echoes of those reflections trigger the same or similar receptors as the ones activated by the perceptual inputs.

    We train AI to produce more of the perceptual inputs we would expect from sequences of virtualised actions, roughly matching what one would expect if those same instructions were given to humans as well.

    If you look at a human mainly as a thing that does stuff when told, then there's not that much difference from AI. I'd argue that's a very North American way of looking at humans.

  • by keiferski on 2/8/25, 2:41 PM

    As someone with a background in philosophy, I found this to be a pretty vague and unsatisfying article that throws a lot of terms around without making any concrete points. Sloppy thinking, frankly, and the kind of thing that gives philosophy a bad name.

    I don’t deny that these AI tools will probably have major effects on society, but from what I can tell from this rambling article, the idea seems to be that because LLMs have “interiority”, humans will rethink the notion of consciousness and start applying it to machines, presumably granting them rights and so on.

    This narrative never made any sense to me, because I don’t think consciousness or intelligence has ever really been the relevant factor in determining worth. Plenty of things have consciousness or intelligence. The relevant factor here is humanity, and for the foreseeable future it will be very easy to biologically determine what is human and what isn’t. Until that becomes impossible, virtually no one is going to ascribe personhood to an AI, and no matter how complex AI systems get, they will still be perceived as complex machines, not selves.

    I haven’t seen anyone address this point, but I also haven’t read many of the responses to the Turing Test that factor in recent developments. (So I’d be glad if anyone critiqued the argument here.)

  • by gom_jabbar on 2/8/25, 5:03 PM

    AI is potentially the philosophical rupture as it might entail the automation of philosophy itself. As Nick Land put it in footnote 6 of Crypto-Current: Bitcoin and Philosophy:

      The techonomic horizon, for 'us', coincides with the impending crisis of historically-actualized artificial intelligence. Encapsulated within this by now manifest potential is the comprehensive automation of philosophy. [0]
    
    AI also fundamentally challenges human identity:

      The Human Security System is structured by delusion. What's being protected there is not some real thing that is mankind, it's the structure of illusory identity. Just as at the more micro level it's not that humans as an organism are being threatened by robots, it's rather that your self-comprehension as an organism becomes something that can't be maintained beyond a certain threshold of ambient networked intelligence. [1]
    
    [0] https://retrochronic.com/#crypto-current-footnote-6

    [1] https://syntheticzero.net/2017/06/19/the-only-thing-i-would-...