by feifan on 2/19/25, 4:33 PM with 7 comments
by FromTheId on 2/19/25, 6:36 PM
But I'm optimistic here given your current approach: entrypoints and illuminated indirection ("related code you didn't know existed") feel like exactly what I'd want ready to hand.
The problem is only going to get worse with all the AI-generated slop code filling PRs now, too. Goddangit
by gbernardi on 2/19/25, 9:48 PM
"These tests/prod methods/data models/etc are impacted" really matters to me because that's most likely what I'm trying to do when I'm grepping for `verified` and adding a bunch of path constraints `!model !test`, etc. Leveraging that further for automations like "run all impacted tests" is another game changer for many companies.
Can this help me draw flowcharts and the like? I recently did a bit of a refactor and "show me the stack of calls from the API layer leading to method m" was a lot more work than it should have been!
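For the call-chain question above, here is a minimal sketch of the kind of query such a tool could answer; the function names and graph are hypothetical and this is not Tanagram's actual implementation. Given a caller-to-callee graph, a breadth-first search finds one chain of calls from an API entrypoint down to a target method:

```python
# Hedged sketch: find a call chain from an API entrypoint to a target method.
# All identifiers below are made up for illustration.
from collections import deque

# Hypothetical call graph: caller -> set of callees.
CALL_GRAPH = {
    "api.create_order": {"orders.validate", "orders.persist"},
    "orders.validate": {"users.is_verified"},
    "orders.persist": {"db.insert"},
    "api.get_user": {"users.is_verified"},
}

def call_path(entrypoint: str, target: str) -> list[str] | None:
    """Breadth-first search for one call chain from entrypoint to target."""
    queue = deque([[entrypoint]])
    seen = {entrypoint}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for callee in CALL_GRAPH.get(path[-1], ()):
            if callee not in seen:
                seen.add(callee)
                queue.append(path + [callee])
    return None

if __name__ == "__main__":
    # e.g. ['api.create_order', 'orders.validate', 'users.is_verified']
    print(call_path("api.create_order", "users.is_verified"))
```

Running the same search over the reversed graph ("which callers reach this method?") is the basis for "run all impacted tests" style automation.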
by mhashemi on 2/19/25, 6:44 PM
This turned into theater. We had no choice but to rubber-stamp. The "review" was more of a heads-up than a real review request, because no one can get up to speed without becoming a blocker; it was more like something to improve monitoring reflexes or reaction time in an incident. Frankly, these days an LLM would easily clear the bar for the "review" part of those reviews. But even the LLM's context could be significantly enhanced by a tool like Tanagram that has actual architectural context. Looking forward to updates!
by ztratar on 2/19/25, 10:21 PM