from Hacker News

A look at Apple's technical approach to AI, including core model performance.

by xrayarx on 6/14/24, 5:36 AM with 106 comments

  • by mk_stjames on 6/14/24, 11:43 AM

    I think there is a mistake in the line:

      >This looks like 2 years after the release of GPT-4, Apple has approximately trained an original GPT-4 level model.
    
    GPT-4 was released just 15 months ago, on March 14, 2023. Two years ago we were just getting GPT-3.5.
  • by ibaikov on 6/14/24, 1:31 PM

    From Apple website: "Apple Intelligence analyzes whether [the request] can be processed on device. If it needs greater computational capacity, it can draw on Private Cloud Compute, which will send only the data that is relevant to the task to be processed on Apple silicon servers. When requests are routed to Private Cloud Compute, data is not stored or made accessible to Apple, and is only used to fulfill the user’s requests."

    Are there any more details on what exactly is sent as context to the cloud? Do they send features extracted from an image on device, or the full picture? Can it send only the current picture, or will the on-device model select what it thinks is needed for context, potentially sending many pictures and/or texts?
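
    The routing Apple's quote describes can be sketched roughly as follows. This is a minimal illustration only: every name, threshold, and heuristic here is invented, since Apple has published no interface for this, and the "relevance" selection is exactly the open question above.

    ```python
    # Hypothetical sketch of the routing Apple describes: handle a request
    # on device if it fits, otherwise send only task-relevant context to
    # Private Cloud Compute. All identifiers are invented for illustration.
    from dataclasses import dataclass, field

    @dataclass
    class Request:
        prompt: str
        attachments: list = field(default_factory=list)  # e.g. image refs

    ON_DEVICE_BUDGET = 2_000  # assumed complexity budget, arbitrary units

    def complexity(req: Request) -> int:
        # Stand-in heuristic: longer prompts and attachments cost more.
        return len(req.prompt) + 1_000 * len(req.attachments)

    def minimal_context(req: Request) -> dict:
        # Apple says only "data that is relevant to the task" is sent;
        # what gets selected here is precisely what is undocumented.
        return {"prompt": req.prompt, "attachments": req.attachments}

    def handle(req: Request) -> str:
        if complexity(req) <= ON_DEVICE_BUDGET:
            return "on-device"
        ctx = minimal_context(req)
        return f"private-cloud-compute: {len(ctx['attachments'])} attachment(s)"

    print(handle(Request("summarize my notes")))                 # stays on device
    print(handle(Request("describe these photos", ["a", "b"])))  # routed to cloud
    ```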

  • by FL33TW00D on 6/14/24, 8:45 AM

    I described this system word for word 2 years ago, glad to see it come to fruition on the only software stack integrated enough to do it.

    https://fleetwood.dev/posts/a-case-for-client-side-machine-l...

  • by throwaway4good on 6/14/24, 8:49 AM

    The NVIDIA stock has had a huge AI-related run over the past year, while Apple's has barely moved.

    This basically says that only a tiny bit of the AI will be done on NVIDIA hardware for the billions of Apple users.

    The market is not pricing that in.

  • by cma on 6/14/24, 1:03 PM

    All of Apple's investment in AI edge capability is almost totally undermined by their huge price discrimination on RAM, shipping 8GB laptops in 2024 and charging like $300-500 for 8GB more.
  • by mark_l_watson on 6/14/24, 1:00 PM

    I think Apple’s approach to (more or less) private local AI, with short term reliance on OpenAI makes sense.

    When I listened to the Apple Developer’s presentations this week I always filtered everything said with “they are talking about proposed product updates, and some existing working functionality.”

    I have been enjoying running Apple's beta iOS, iPadOS, and macOS releases this week, and there are nice features in Photos, Calendar, etc., but we need to wait to see what they release next fall.

    I am not really an Apple developer: I did well selling a Mac app in 1984, but a Mac app I wrote two years ago had pathetic sales. That said, I have been trying the beta Xcode with LLM code completions, and I have been experimenting with MLX and Apple Silicon for a long while. Apple is definitely an interesting player in the AI product space!

  • by SebFender on 6/14/24, 10:53 AM

    "Apple has demonstrated how meaningful AI interactions can be built into every corner of our digital life"

    Well not really. I haven't seen anything new here, unless you consider genmoji a killer app...

    The whole thing comes down to - your request may get sent to ChatGPT for advanced features.

    Better doesn't mean new.

  • by cubefox on 6/14/24, 12:20 PM

    This article really reads like it was written by an Apple fan rather than an objective observer.
  • by la64710 on 6/14/24, 1:42 PM

    Nothing new, but I guess that's the point anyway?
  • by seydor on 6/14/24, 7:39 AM

    A little cope for the fact that Apple failed to present state-of-the-art performance.