from Hacker News

Outperforming Rust DNA sequence parsing benchmarks by 50% with Mojo

by _diyar on 2/7/24, 4:58 PM with 26 comments

  • by dark__paladin on 2/7/24, 8:25 PM

    Everything about Mojo is suspicious to me. Maybe I'm paranoid, but Modular has made wild performance claims in the past without releasing much information about the implementation [citation needed], and leaning so hard into the AI stuff smells of a marketing-first project IMO.
  • by seanray on 2/7/24, 5:41 PM

    Is this comparing normal Rust vs Mojo with SIMD? I don't see how Mojo can produce faster code than C/C++/Rust. It must all be in the implementation details; if so, I feel the title is misleading.
  • by hkmaxpro on 2/8/24, 1:07 AM

    Insightful Reddit comment https://old.reddit.com/r/rust/comments/1al8cuc/modular_commu...

    > The TL;DR is that the Mojo implementation is fast because it essentially memchrs four times per read to find a newline, without any kind of validation or further checking. The memchr is manually implemented by loading a SIMD vector, comparing it to 0x0a, and continuing if the result is all zeros. This is not a serious FASTQ parser. It cuts so many corners that it isn't really comparable to other parsers (although I'm not crazy about Needletail's somewhat similar approach either).

    > I implemented the same algorithm in < 100 lines of Julia and was >60% faster than the provided needletail benchmark, beating Mojo. I'm confident it could be done in Rust, too.
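
    A minimal Rust sketch of the scan the quote describes: load a SIMD vector, compare every lane against 0x0a ('\n'), and skip ahead whenever no lane matches. The function name, SSE2 lane width, and scalar tail are illustrative assumptions rather than code from the article, and there is deliberately no validation, just like the approach being criticised.

      // Newline scan via manual 16-byte SIMD compares (x86_64 / SSE2 only).
      #[cfg(target_arch = "x86_64")]
      fn find_newline(haystack: &[u8]) -> Option<usize> {
          use std::arch::x86_64::*;
          const LANES: usize = 16;
          let mut i = 0;
          unsafe {
              let needle = _mm_set1_epi8(0x0a); // '\n' broadcast into all 16 lanes
              while i + LANES <= haystack.len() {
                  // Unaligned 16-byte load of the next chunk of the read.
                  let chunk = _mm_loadu_si128(haystack.as_ptr().add(i) as *const __m128i);
                  // Per-lane equality test against '\n'; matching lanes become 0xFF.
                  let eq = _mm_cmpeq_epi8(chunk, needle);
                  let mask = _mm_movemask_epi8(eq) as u32;
                  if mask != 0 {
                      // Lowest set bit marks the first '\n' inside this chunk.
                      return Some(i + mask.trailing_zeros() as usize);
                  }
                  i += LANES; // all lanes zero: no newline here, keep going
              }
          }
          // Scalar tail for the final (< 16-byte) remainder.
          haystack[i..].iter().position(|&b| b == b'\n').map(|p| i + p)
      }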

  • by Croisonetto on 2/7/24, 9:04 PM

    The article sheds light on Mojo's potential, but with every such article I'm cautious not to get overly hyped. Many key factors will come into play; long-term support and community growth will be crucial for its adoption. I'm also curious about the learning curve for Python developers looking to switch or to integrate Mojo into their workflows. Still, I'm looking forward to any new information about Mojo's development, and to its source code being published someday soon.
  • by Isomorpheus on 2/7/24, 9:33 PM

    Since Chris is lurking: will Mojo on GPUs be more like using JAX (relying on the compiler), Triton (more control, but abstracted), or CUDA (close to maximal control)? Some combination? Will Nvidia and AMD be supported out of the box?
  • by spoder on 2/7/24, 9:16 PM

    Surely the Mojo implementation isn't missing something like, say, error handling?
  • by john-tells-all on 2/7/24, 5:13 PM

    Given that Mojo is a largely Python-compatible language, this is impressive! Mojo gives most of the benefits of Python (immense ecosystem, short and clear code) with incredible speed.
  • by fulafel on 2/7/24, 8:43 PM

    Any theories why the compiler didn't manage to use SIMD without the manual SIMD code?
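
    One common explanation (an assumption on my part, not something established in the thread): the scan has a data-dependent early exit, returning as soon as a '\n' is found, and auto-vectorizers generally won't speculate loads past such an exit, so the straightforward scalar loop stays scalar unless it is rewritten with explicit SIMD as in the article.

      // The "obvious" scalar scan. The early return gives it a data-dependent
      // trip count, which is the classic shape auto-vectorizers give up on.
      fn find_newline_scalar(haystack: &[u8]) -> Option<usize> {
          for (i, &b) in haystack.iter().enumerate() {
              if b == b'\n' {
                  return Some(i); // early exit defeats vectorization
              }
          }
          None
      }
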
  • by screye on 2/7/24, 11:29 PM

    Has anyone here integrated Mojo into a real industry workflow?

    I am looking for a language, not called C++, that I can write some bottlenecked modules in and then wrap from Python.

    I want the code base to feel like Python, with a few files offloaded to this language.

    Any suggestions?
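
    For what it's worth, a minimal sketch of that pattern using Rust and PyO3, with maturin as the build tool; the crate name `fastmod` and the exported function are hypothetical placeholders, not a recommendation from the thread.

      // lib.rs of a hypothetical `fastmod` crate (cdylib), built with `maturin develop`.
      // Only the hot path lives here; the rest of the code base stays in Python.
      use pyo3::prelude::*;

      /// Count newline bytes in a chunk of input (stand-in for a real bottleneck).
      #[pyfunction]
      fn count_newlines(data: &[u8]) -> usize {
          data.iter().filter(|&&b| b == b'\n').count()
      }

      /// Python sees this as `import fastmod`.
      #[pymodule]
      fn fastmod(_py: Python<'_>, m: &PyModule) -> PyResult<()> {
          m.add_function(wrap_pyfunction!(count_newlines, m)?)?;
          Ok(())
      }

    From Python this is just `import fastmod; fastmod.count_newlines(data)`, so the code base keeps feeling like Python with one module offloaded.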

  • by tubs on 2/7/24, 11:40 PM

    The website (perhaps innocently) animates the cookie popup on iOS such that attempting to press reject ends up hitting accept. Please fix.
  • by tripplyons on 2/7/24, 7:29 PM

    I want to see some comparisons with other Python libraries like NumPy, JAX, and Numba.