from Hacker News

Nvidia Jetson Orin Nano Super [video]

by ralusek on 12/17/24, 3:27 PM with 51 comments

  • by nimish on 12/17/24, 5:01 PM

    If anyone is wondering why Nvidia is dominant, it's because this guy can run the same CUDA stuff as the $30,000 H100

    So everyone can play around and develop for it. That's how you get to software dominance, and then can reap the rewards.

  • by qwertox on 12/17/24, 7:55 PM

    For most of us this is not so attractive.

    The 8 GB of shared RAM is all there is for OS + models, so the current Jetson AGX Orin 64 GB is still more interesting: it can run a small LLM plus ASR and TTS on one device, though with a price tag above 2000€, also with an Ampere GPU, but at 275 TOPS.

    But for vision / robotics stuff this Nano Super is a great price.

    You can search for "View Jetson Orin Technical Specifications" on the Jetson Orin page [0] to see the full offering of devices (it's in the middle of the page).

    [0] https://www.nvidia.com/en-us/autonomous-machines/embedded-sy...

  • by TechDebtDevin on 12/17/24, 4:26 PM

    Was literally about to buy the Jetson Orin Nano for $499 and then this gets released. How awesome.

  • by sliken on 12/17/24, 4:43 PM

    I've been looking for a replacement for Google Assistant/Nest/Nest Mini that is cloudless and could support things like verbally setting alarms, inquiring about the weather, and random queries.

    This looks like a decent fit. Just needs a case, microphone, and speaker. Oh, and one of the newer small LLMs that fits in under 8GB.
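
    A minimal sketch of what that cloudless loop could look like. All three backend functions here are hypothetical stubs, standing in for real local engines (e.g. whisper.cpp for ASR, llama.cpp for the LLM, Piper for TTS):

    ```python
    # Sketch of one offline voice-assistant turn: audio in, audio out,
    # with no cloud round trip. The three backends are stubs.

    def transcribe(audio: bytes) -> str:
        """Stand-in for a local speech-to-text engine."""
        return "set an alarm for 7am"

    def generate(prompt: str) -> str:
        """Stand-in for a small local LLM that fits in under 8 GB."""
        return f"OK: {prompt}"

    def speak(text: str) -> bytes:
        """Stand-in for a local text-to-speech engine."""
        return text.encode("utf-8")

    def assistant_turn(audio: bytes) -> bytes:
        """One full round trip, entirely on-device."""
        return speak(generate(transcribe(audio)))

    if __name__ == "__main__":
        print(assistant_turn(b"<microphone samples>").decode("utf-8"))
    ```

    The point of the structure is that each stage can be swapped independently as smaller models improve, as long as the text interface between them stays the same.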

  • by zamadatix on 12/17/24, 4:09 PM

    Even with the lower price point it's still a bit of an odd fit. The dev kit is aimed at developers but costs about the same as a 3060 (which will blow it out of the water on AI performance and capacity). Unless this board was already the one you were targeting for non-development usage for your robotics/embedded edge use case then I'm not sure there is a compelling reason to be interested in the announcement.

    Edit: I didn't mean to imply it's a bad use case or product, just that it's not a use case 95% here are going to care about. It's an odd fit but if you're the target of that fit it's great. If you're not that odd fit target, don't get your hopes up about a low cost AI dev kit from Nvidia. It'll just be another SBC sitting unused in your drawer.

  • by bradfa on 12/17/24, 5:22 PM

    It appears to be the same hardware as before, even the same firmware/software as before (JetPack 6.1 was already out); they've just lowered the dev kit price and documented a new performance mode that increases the clocks, power consumption, thermals, and compute performance.

  • by hemogloben on 12/17/24, 4:06 PM

    The specs on this look identical to the Orin Nano 8GB. Elsewhere Nvidia says the software updates are available to all Orin owners [1], so is this just a special edition released alongside the new JetPack release?

    1: https://blogs.nvidia.com/blog/jetson-generative-ai-supercomp...

  • by awwaiid on 12/20/24, 1:39 PM

    Any guesses how this compares performance-wise for LLM inference (tokens/sec, ultimately) to something like the new Mac mini M4 Pro? I'm not sure how to figure that out.

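    One rough way to get that number yourself is to time a fixed-length generation and divide. A minimal sketch; the `generate_tokens` callable here is a hypothetical stand-in for a real inference call (llama.cpp, MLX, etc.):

    ```python
    import time

    def tokens_per_second(n_tokens: int, elapsed_s: float) -> float:
        """Throughput: tokens generated divided by wall-clock seconds."""
        return n_tokens / elapsed_s

    def benchmark(generate_tokens, n_tokens: int = 256) -> float:
        """Time one generation run and return tok/s."""
        start = time.perf_counter()
        generate_tokens(n_tokens)   # stand-in for the real inference call
        elapsed = time.perf_counter() - start
        return tokens_per_second(n_tokens, elapsed)

    # Known numbers as a sanity check: 256 tokens in 12.8 s is 20 tok/s.
    print(tokens_per_second(256, 12.8))
    ```

    For a fair cross-device comparison you'd want the same model, quantization, and prompt length on both machines, since all three dominate the result.
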
  • by kittikitti on 12/17/24, 4:13 PM

    I appreciated that this release included the personal video from the CEO, with an oven joke. Although the VRAM is very limited, these specifications compete with Raspberry Pi's platform and I would expect the Orin Nano Super to outperform it.

  • by haunter on 12/17/24, 6:30 PM

    What OSes can be run on this? I've seen people using Ubuntu and Debian, but it sounds like basically anything that has an ARM64 version? What about BSD (ignoring CUDA)?

  • by AlexeyBrin on 12/17/24, 4:01 PM

    The price is interesting. What I don't like about these devices is that they are supported for a few years, and after that you're on your own. I have the original Jetson Nano and they stopped providing any updates for it; if you can use it without an internet connection it will work just fine for years.

  • by PhilippGille on 12/17/24, 4:10 PM

  • by nadermx on 12/17/24, 3:57 PM

    "NVIDIA Ampere architecture with 1024 CUDA cores and 32 tensor cores" from their site. Not sure I understand how many gigs of GPU memory that is.

  • by snapplebobapple on 12/18/24, 8:47 PM

    What do you guys recommend for a case for this? Rack mountable would be useful, but a base I can sit on a shelf in a rack would work too.

  • by alecco on 12/17/24, 9:57 PM

    I really like CUDA, but these are crazy expensive for the specs. It's sad there is no competition.
  • by ksec on 12/18/24, 6:33 PM

    Am I correct to assume this is the same SoC as the upcoming Switch 2?
  • by ZiiS on 12/17/24, 4:06 PM

    I do wonder if he is simply gaming the Turing test now?
  • by tonymet on 12/21/24, 5:05 PM

    Is it really that hard to add a SODIMM slot?

  • by Jiahang on 12/17/24, 3:57 PM

    Another new toy in the Jetson series. My wallet says this is very painful.

  • by jbverschoor on 12/17/24, 4:04 PM

    Good response to Apple silicon eating into dev LLM work. Because dev is the first step to domination.