by seveibar on 3/28/25, 12:38 AM with 116 comments
by ChrisGammell on 3/28/25, 3:19 AM
I'm always curious about people entering the space though. It is a small market, IMO. The tools are fractured, the existing players are lumbering behemoths, and the users are cranky zealots (you will have to pry KiCad out of my cold dead hands). I noted the part about the autorouter being written in JavaScript, which I have no real opinion on, but I'm curious about plans to plug into ecosystems (CAD vendors, OS tools) or to try to draw people into another new ecosystem.
by GistNoesis on 3/28/25, 2:00 PM
The point of the Monte-Carlo method is that you can trade off accuracy for speed. The longer you run your algorithm, the more accurate you get.
But what's even more interesting is that the tradeoff runs the other way too: you can get a shitty, less-accurate result really fast.
Instead of exploring all paths, you explore only one path picked at random.
And that's where it shines: when you put it in the innermost, most deeply nested loop of your algorithm.
For example, when you want to train a neural network to autoroute. Typically you have an outer loop that updates the neural network parameters, and an inner loop that computes a path through the graph.
With Monte-Carlo, this inner loop, which controls accuracy, can be reduced to a single iteration if everything is unbiased. The outer loop will be slower due to the higher variance, but the machine learning should """theoretically""" still learn.
This is what allows you to build policies that intuitively always pick the right decision. Like in chess or Go, you have Monte-Carlo tree search variants like alpha-go-zero or alpha-chess-zero or alpha-router-zero, where even without the search part, the giant cache (encoded by the neural network parameters), once trained, can compute your best-guess path in one pass through the network, i.e. in constant time (with a constant that can easily trade memory for speed by increasing the number of parameters or training for longer).
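A minimal sketch of that inner-loop trick, on a toy layered graph (all names hypothetical, nothing here is from the article): one random path per call gives an unbiased but high-variance cost estimate, and averaging more samples buys accuracy back at the cost of speed.

```typescript
// Toy layered graph: layers[i][j] is the cost of node j in layer i.
type Layer = number[];

// One random path per call: O(depth) instead of exponential enumeration.
function sampleOnePathCost(layers: Layer[]): number {
  let cost = 0;
  for (const layer of layers) {
    // Pick a single candidate at random instead of exploring all of them.
    cost += layer[Math.floor(Math.random() * layer.length)];
  }
  return cost; // unbiased but high-variance estimate of the mean path cost
}

// Averaging N samples trades speed back for accuracy.
function estimateMeanPathCost(layers: Layer[], samples: number): number {
  let total = 0;
  for (let i = 0; i < samples; i++) total += sampleOnePathCost(layers);
  return total / samples;
}

// samples = 1 is the "inner loop reduced to one iteration" regime above.
```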
by Animats on 3/28/25, 2:26 AM
Routing itself is easy. It's when the router has to tear up stuff it already routed to fit new stuff in that things get complicated and the combinatorics start to get you.
I miss the autorouter KiCad used to have. It was taken out for iffy IP reasons (the author had worked for an autorouting company). Reactions to users who wanted it back were along the lines of "Real Men Don't Use Autorouters".[1]
[1] https://forum.kicad.info/t/autorouting-and-autoplacement/185...
by n4r9 on 3/28/25, 10:53 AM
Some quibbles or glossovers:
> A recursive algorithm is a depth first search. Any loop that explores candidates/neighbors without sorting the candidates is a BFS.
Not sure what to say about this. It's either wrong or I'm missing the intuition. Both DFS and BFS can either be written iteratively or recursively; the real difference is whether you pop your next candidate from the top or the bottom of the stack. Equivalently, whether you use a stack (FILO) or a queue (FIFO). That said, I took maths rather than CS so verify this for yourself.
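A sketch of that point: the traversal loop below is identical either way, and only whether you take the next candidate from the front (queue) or the back (stack) of the frontier turns it from BFS into DFS. Toy code, not from the article.

```typescript
// Same loop for both traversals; only the pop side differs.
type Graph = Map<string, string[]>;

function traverse(graph: Graph, start: string, useQueue: boolean): string[] {
  const frontier: string[] = [start];
  const visited = new Set<string>([start]);
  const order: string[] = [];
  while (frontier.length > 0) {
    // shift() -> FIFO queue -> BFS; pop() -> LIFO stack -> DFS
    const node = useQueue ? frontier.shift()! : frontier.pop()!;
    order.push(node);
    for (const next of graph.get(node) ?? []) {
      if (!visited.has(next)) {
        visited.add(next);
        frontier.push(next);
      }
    }
  }
  return order;
}
```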
> It is simply the best foundation for any kind of informed search (not just for 2d grids!)
A* is useful in pathfinding if you have some notion of easily-computed "distance" to your desired target and you're only running a few queries for any given graph. If you plan to run many queries on a mostly-static graph (like a road network) then you could be better off applying a pre-processing algorithm like contraction hierarchies. If you're optimising but don't know what target you're aiming for (e.g. Traveling Salesman) then using some other local search heuristic like 2-opt may be better.
> The major difference between these [A* and BFS] is BFS explores all adjacent nodes, while A* prioritizes exploring nodes that are closer to the destination.
This is definitely a difference. But it glosses over the biggest difference, which is that A* is a dynamic algorithm. That allows you to terminate early with confidence that you have found the shortest path. With BFS you may not be certain until you've searched the whole graph, which could be vast.
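For illustration, a minimal A* on a 4-connected grid (a toy sketch, not the article's router): with an admissible heuristic like Manhattan distance, it can return the moment the goal is popped, which is the early-termination property described above.

```typescript
type Cell = { x: number; y: number };

// grid[y][x] === 1 means blocked, 0 means free.
function aStar(grid: number[][], start: Cell, goal: Cell): number | null {
  const h = (c: Cell) => Math.abs(c.x - goal.x) + Math.abs(c.y - goal.y); // admissible
  const key = (c: Cell) => `${c.x},${c.y}`;
  const g = new Map<string, number>([[key(start), 0]]); // best known cost so far
  const open: Cell[] = [start]; // a real router would use a binary heap

  while (open.length > 0) {
    // Pop the cell with the lowest f = g + h.
    open.sort((a, b) => g.get(key(a))! + h(a) - (g.get(key(b))! + h(b)));
    const cur = open.shift()!;
    // Early exit: the first time the goal is popped, its cost is optimal.
    if (cur.x === goal.x && cur.y === goal.y) return g.get(key(cur))!;
    for (const [dx, dy] of [[1, 0], [-1, 0], [0, 1], [0, -1]]) {
      const next = { x: cur.x + dx, y: cur.y + dy };
      const row = grid[next.y];
      if (!row || row[next.x] === undefined || row[next.x] === 1) continue;
      const cost = g.get(key(cur))! + 1;
      if (cost < (g.get(key(next)) ?? Infinity)) {
        g.set(key(next), cost);
        open.push(next);
      }
    }
  }
  return null; // goal unreachable
}
```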
by dkjaudyeqooe on 3/28/25, 1:10 PM
> Any time you’re using a tree you’re ignoring an O(~1) hash algorithm for a more complicated O(log(N)) algorithm
This is incredibly misguided. The hashing approach is fine if your points are evenly distributed and you only really want to query an area relatively close to the fixed subdivisions you've chosen, otherwise your O(1) is going to degenerate to O(n).
Trees are an informed representation when you don't know how your data is distributed.
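For reference, the kind of fixed-subdivision hashing being debated, as a minimal sketch (hypothetical class, not from the article): if every point lands in one bucket, `near()` degenerates into the linear scan described above.

```typescript
type Point = { x: number; y: number };

// Fixed-cell spatial hash: ~O(1) neighborhood queries when points are
// spread out; a single overloaded bucket degenerates to O(n).
class SpatialHash {
  private buckets = new Map<string, Point[]>();
  constructor(private cellSize: number) {}

  private key(x: number, y: number): string {
    return `${Math.floor(x / this.cellSize)},${Math.floor(y / this.cellSize)}`;
  }

  insert(p: Point): void {
    const k = this.key(p.x, p.y);
    const bucket = this.buckets.get(k);
    if (bucket) bucket.push(p);
    else this.buckets.set(k, [p]);
  }

  // All points in the 3x3 block of cells around (x, y).
  near(x: number, y: number): Point[] {
    const cx = Math.floor(x / this.cellSize);
    const cy = Math.floor(y / this.cellSize);
    const out: Point[] = [];
    for (let dx = -1; dx <= 1; dx++)
      for (let dy = -1; dy <= 1; dy++)
        out.push(...(this.buckets.get(`${cx + dx},${cy + dy}`) ?? []));
    return out;
  }
}
```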
Similar story for randomized algorithms. What happens when your search space is many trillions of items or possibilities? Or there are no heuristics to be had? If you can't brute force it and you can't use a clever algorithm then randomized algorithms are a savior.
Maybe these aren't needed for this specific application, but best to not make sweeping statements.
by MrLeap on 3/28/25, 4:18 AM
A*, Lee's algorithm and the like are all cool. It's criminal to write any kind of floodfill without an accompanying visualization; you're squandering so much dopamine.
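In that spirit, a toy floodfill that dumps an ASCII frame after each wavefront (a throwaway sketch, not from the article; "." is free space, "#" is filled):

```typescript
// Breadth-first floodfill with a per-wavefront ASCII visualization.
function floodFill(grid: string[][], sx: number, sy: number): void {
  let frontier: [number, number][] = [[sx, sy]];
  grid[sy][sx] = "#";
  while (frontier.length > 0) {
    const next: [number, number][] = [];
    for (const [x, y] of frontier) {
      for (const [dx, dy] of [[1, 0], [-1, 0], [0, 1], [0, -1]]) {
        const row = grid[y + dy];
        if (row && row[x + dx] === ".") {
          row[x + dx] = "#";
          next.push([x + dx, y + dy]);
        }
      }
    }
    // One frame per wavefront: the dopamine dispenser.
    console.log(grid.map((r) => r.join("")).join("\n") + "\n");
    frontier = next;
  }
}
```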
This article has me wondering if all the gamedev things I *didn't* read about but are adjacent have utility in this kind of thing. I can't be the first person to think a boids router would be pretty fun. More seriously, I bet jumpflooding signed distance fields would provide you a lot of power.
Everything about spatial hashing in particular matches my experience. I haven't found many occurrences in almost 2 decades where any of the tree structures are worth the time. One notable exception: the lovecraftian text editor I made uses quite a lot of tries for reactivity things. Nice way to compress 45,000 words into a compact state machine for event handling.
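Roughly what that looks like, as a minimal sketch (hypothetical code, not the editor's actual implementation): a word list compressed into a prefix tree, stepped one character at a time like a state machine.

```typescript
class TrieNode {
  children = new Map<string, TrieNode>();
  terminal = false; // a complete word ends here
}

class Trie {
  root = new TrieNode();

  insert(word: string): void {
    let node = this.root;
    for (const ch of word) {
      if (!node.children.has(ch)) node.children.set(ch, new TrieNode());
      node = node.children.get(ch)!;
    }
    node.terminal = true;
  }

  // Feed characters one at a time; null means the sequence can no
  // longer match anything, so an event handler would reset its state.
  step(node: TrieNode, ch: string): TrieNode | null {
    return node.children.get(ch) ?? null;
  }
}
```

Shared prefixes are stored once, which is where the compression of a 45,000-word list comes from.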
by amiga386 on 3/28/25, 2:51 PM
And then once you've used the playful, expressive (interpreted, abstract, slow) language you enjoy using, to develop an excellent, performant algorithm... if performance still matters, write the same thing again in a performant, low-level language, and perhaps even write architecture-specific assembly.
There's a reason numpy, pandas, OpenCV, TensorFlow aren't written in pure Python, but instead let Python tell them to do things they've implemented in high performance C++/assembly/CUDA, etc.
No matter how proud the authors would be of exploring a problem space and coming up with an efficient algorithm (and blogging about it), I doubt they'd be popular numerical computing libraries if they insisted on it being written in pure Python, or Javascript.
While this is a fun blog post, I don't think it'd have the same takeaways if the author's algorithmic insights got a pure-Javascript HEVC encoder down from 1 day per frame to 3 hours per frame...
by aranchelk on 3/28/25, 4:57 AM
> I’m controversially writing our autorouter in Javascript. This is the first thing people call out, but it’s not as unreasonable as you might expect. Consider that when optimizing an algorithm, you’re basically looking at improving two things:
> Lowering the number of iterations required (make the algorithm smart)
> Increasing the speed of each iteration
That may be true in this domain, I wouldn't know, but applied to software engineering in general, IMO it would be massively incorrect to assume that choice of language affects neither the speed of each iteration nor the number of iterations needed.
by kayson on 3/28/25, 4:50 AM
On the topic of autorouters, the post doesn't really go into why there are no open source data sets and why there aren't many open source tools. It's probably no surprise, but there's a TON of money in it. The author mentions iPhones and 10s of thousands of routes. That's nothing. Modern ICs have billions of transistors and billions of routes, with an insanely complicated rule set governing pitch, spacing, jogs, etc. Not to mention the entire time you have to make sure the parasitic resistance and capacitance from the routing doesn't break the timing assumptions.
Cadence and Synopsys probably put billions yearly into R&D and I bet a big chunk goes into "Place & Route". I've heard that licenses for their tools run into the millions of dollars per head per year.
by shadowgovt on 3/28/25, 6:31 PM
1. Autorouting in particular is a problem well-suited to visualization, and not all problems are. It's fundamentally about real things in real space (bonus: they're mostly constrained to a 2D plane, which makes them easy to visualize on a computer screen). A lot of hard and interesting problems don't match that reality and so aren't as amenable to visualization.
(... but the general advice, "look for opportunities to visualize," stands. Your eyes and brain are very fast matchers for a broad variety of patterns in a way a computer isn't without a lot of special-purpose, tuned software).
2. JavaScript, between the language itself (specifically the ability to reflect data structures at runtime) and the ecosystem around it, is very amenable to visualization. In that way specifically, it was a strong choice for this problem domain. Imagine how much work you'd be taking on if you decided to "just bang out a quick visualization" for an arbitrary piece of the algorithm in C++, or Rust.
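As a rough illustration of how little ceremony that takes in the browser (a sketch assuming a `<canvas id="debug">` element and a numeric 2D grid; nothing here is from the article):

```typescript
// Dump a grid of costs/states as grayscale cells on a canvas.
function drawGrid(grid: number[][], cellPx = 8): void {
  const canvas = document.getElementById("debug") as HTMLCanvasElement;
  const ctx = canvas.getContext("2d")!;
  for (let y = 0; y < grid.length; y++) {
    for (let x = 0; x < grid[y].length; x++) {
      // Map the value to a gray level; anything beats staring at logs.
      const v = Math.min(255, grid[y][x] * 32);
      ctx.fillStyle = `rgb(${v},${v},${v})`;
      ctx.fillRect(x * cellPx, y * cellPx, cellPx, cellPx);
    }
  }
}
```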
by Lerc on 3/28/25, 4:15 AM
Hmm, I wonder what python raylib is like; that might be the ticket.
by bArray on 3/28/25, 3:01 PM
I have a few major nags about current autorouters:
1. No way to prioritise connections.
2. Handling of differential pairs or buses is terrible.
3. Does not use best practices for high-speed signal lines.
4. No way to add arbitrary rules to the autorouting process, i.e. block a trace from going through an area. For example, maybe you don't want a power trace being autorouted under your IMU, but some low voltage signals are fine (see the sketch below).
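A sketch of what item 4 could look like (a hypothetical API, not any real router's): keepout rules expressed as extra cost terms that the search consults per cell, with an infinite penalty acting as a hard block.

```typescript
type Rect = { x1: number; y1: number; x2: number; y2: number };
type Rule = { region: Rect; netClass: string; penalty: number }; // Infinity = hard block

// Extra cost a router's search would add at (x, y) for a given net class.
function ruleCost(rules: Rule[], x: number, y: number, netClass: string): number {
  let cost = 0;
  for (const r of rules) {
    const inside =
      x >= r.region.x1 && x <= r.region.x2 &&
      y >= r.region.y1 && y <= r.region.y2;
    if (inside && r.netClass === netClass) cost += r.penalty;
  }
  return cost;
}

// e.g. hard-block power traces under the IMU, let signal traces through:
const rules: Rule[] = [
  { region: { x1: 10, y1: 10, x2: 20, y2: 20 }, netClass: "power", penalty: Infinity },
];
```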
I've used freerouting [1] in the past, but it has had a somewhat difficult time staying maintained. Freerouting also seems to really struggle with larger designs: it appears to route traces greedily and then spends ages trying to fix those early placed traces.
by hyperbrainer on 3/28/25, 11:08 AM
BFS is also a recursive search. Even in the case of non-recursive search, the only difference is whether you use a queue or stack.
Apart from that, great article.
by kalaksi on 3/28/25, 9:27 AM
I'm not sure how performant modern-day JS is, but aren't you still leaving a lot of performance improvements on the table? Sure, the algorithm may matter more, but who says you can't use the same algorithm in a faster language?
by teleforce on 3/30/25, 6:46 PM
I strongly believe that the CAD world, including EDA, is on the verge of disruption by AI, or more correctly machine intelligence (Constraint Programming, CP, to be exact), very similar to how LLMs disrupted chatbot technology [1], [2].
The path to this is most probably through solving the autorouting mechanism with CP: a deterministic machine intelligence based on logic, optimization, and constraint programming [3], [4], [5], [6].
Fun fact: the founder of modern logic, optimization, and constraint programming is George Boole, the great-great-grandfather of Geoffrey Everest Hinton, the "Godfather of AI" and "Godfather of Deep Learning".
[1] Most AI value will come from broad automation, not from R&D (313 comments):
https://news.ycombinator.com/item?id=43447616
[2] Diagrams AI can, and cannot, generate (68 comments):
https://news.ycombinator.com/item?id=43398434
[3] Constraint programming:
https://en.wikipedia.org/wiki/Constraint_programming
[4] Logic, Optimization, and Constraint Programming: A Fruitful Collaboration - John Hooker - CMU (2023) [video]:
https://www.youtube.com/live/TknN8fCQvRk
[5] "We Really Don't Know How to Compute!" - Gerald Sussman - MIT (2011) [video]:
https://youtube.com/watch?v=HB5TrK7A4pI
[6] High-Quality Ultra-Compact Grid Layout of Grouped Networks [PDF]:
https://ialab.it.monash.edu/~dwyer/papers/gridlayout2015.pdf
by xg15 on 3/28/25, 2:00 PM
I feel people are getting way too comfortable throwing randomness at things, pretending it's perfectly fine or even beneficial because you can reason over the probability distributions, and then shrugging their shoulders when the results have unpredictable, impossible-to-debug edge case behavior.
We already have enough unpredictable factors in a program, I feel we don't have to purposefully add even more of them.
by tobyhinloopen on 3/28/25, 11:48 AM
https://www.gameaipro.com/GameAIPro2/GameAIPro2_Chapter14_JP...
by swayvil on 3/28/25, 2:43 AM
10000000000 random guesses can be way faster than a clever calculation.
And maybe there is no clever calculation.
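A toy version of that claim (a sketch with a hypothetical black-box scoring function): when there is no exploitable structure, no gradient and no heuristic, cheap random guesses are a legitimate strategy.

```typescript
// Keep the best of N uniform random guesses against a black-box score.
function randomSearch(
  score: (candidate: number[]) => number, // black box: no gradient, no heuristic
  dims: number,
  guesses: number
): number[] {
  let best = Array.from({ length: dims }, () => Math.random());
  let bestScore = score(best);
  for (let i = 1; i < guesses; i++) {
    const candidate = Array.from({ length: dims }, () => Math.random());
    const s = score(candidate);
    if (s > bestScore) [best, bestScore] = [candidate, s];
  }
  return best;
}
```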
by weinzierl on 3/28/25, 9:27 AM
Are state-of-the-art autorouters still based on A*?
From what I remember, A* was used in the 80s, before Specctra introduced a cost-based, push-and-shove-capable approach in the mid 90s.
A* being what it is, it's probably still a part of contemporary solutions, but I am surprised by the emphasis it got in the post.