Benchmarking RSA Key Generation

by mfrw on 12/31/24, 12:57 PM with 32 comments

  • by jdewerd on 1/3/25, 2:58 PM

    > The prime-counting function approximation tells us there are Li(x) primes less than x, which works out[5] to one prime every 354 odd integers of 1024 bits.

    Rule of thumb: Want a 1024-bit prime? Try 1024 1024-bit candidates and you'll probably find one. Want a 4096-bit prime? Try 4096 4096-bit candidates and you'll probably find one.

    The approximate spacing of primes around p is ln(p), so the expected gap near 2^1024 is ln(2^1024) = 1024*ln(2), roughly 710. Since ln(2) = 0.693, absorbing that factor into the count as a safety margin gives the delightfully simple rule of thumb above. Of course, you'll still want a sieve to quickly reject candidates divisible by 2, 3, 5, 7, etc., which easily rejects 90% of them; then run a Fermat primality test on the survivors (which, if you squint, is sort of like "try RSA, see if it works"); then run Miller-Rabin tests to really smash down the probability that your candidate isn't prime (a sketch of this pipeline follows this comment). The probabilities can be made absurdly small, but it still feels a bit scandalous that the whole thing is probabilistic.

    EDIT: updated rule of thumb to reflect random candidate choice rather than sequential candidate choice.
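
    A minimal sketch of that sieve-then-Miller-Rabin pipeline in Python (the names, round count, and structure here are illustrative rather than taken from the thread; a separate Fermat pass is folded into Miller-Rabin's first exponentiation, and real key generation would draw candidates from a CSPRNG such as Python's secrets module):

        import random

        SMALL_PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]

        def is_probable_prime(n, rounds=40):
            if n < 2:
                return False
            # Cheap sieve: trial division by small primes rejects most composites.
            for p in SMALL_PRIMES:
                if n % p == 0:
                    return n == p
            # Miller-Rabin: write n - 1 = d * 2^s with d odd.
            d, s = n - 1, 0
            while d % 2 == 0:
                d, s = d // 2, s + 1
            for _ in range(rounds):
                a = random.randrange(2, n - 1)
                x = pow(a, d, n)  # the Fermat-style modular exponentiation
                if x in (1, n - 1):
                    continue
                for _ in range(s - 1):
                    x = pow(x, 2, n)
                    if x == n - 1:
                        break
                else:
                    return False  # a is a witness that n is composite
            return True  # probably prime; error probability at most 4**-rounds

        def random_prime(bits=1024):
            tries = 0
            while True:
                tries += 1
                # Random odd candidate with the top bit set, per the rule of thumb.
                n = random.getrandbits(bits) | (1 << (bits - 1)) | 1
                if is_probable_prime(n):
                    return n, tries

        p, tries = random_prime(1024)
        print(tries)  # averages around 355 (= ln(2^1024)/2), since only odd candidates are drawn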

  • by JacobiX on 1/3/25, 4:26 PM

    It is fascinating (to me at least) that almost all RSA implementations rely on a probabilistic primality test, even though the probability of picking a pseudoprime is extremely small.
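
    For scale: each Miller-Rabin round with an independently chosen random base accepts a composite with probability at most 1/4 (Rabin's bound), so t rounds push the failure probability below 4^-t; 64 rounds already give at most 2^-128.
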
  • by throw0101c on 1/3/25, 2:51 PM

    For new deployments/protocols, is there any reason to use RSA? Is RSA (mostly?) about legacy support nowadays?