by mywittyname on 3/31/25, 8:26 PM with 45 comments
What are some good places to get started? What are the tools you use to get work done?
Some background: I mainly do data engineering using Python and Snowflake + MySQL. I use PyCharm as my dev environment, but I'd be open to changing that if there's a better option.
by bionhoward on 3/31/25, 11:16 PM
Anybody coding with OpenAI at this point ought to be publicly mocked for being a sheep. All you’re doing when you code with AI is becoming dependent on something where you’re paying them to train their model to imitate you. It’s not good for anyone but OpenAI.
Better off learning a better (compiled) programming language. If you have to use AI, use Groq or Ollama, since you can keep those outputs to train or fine-tune your own AI one day.
Why pay OpenAI for the privilege to help them take your job? Why outsource mental activity to a service with a customer noncompete?
by thebeardisred on 3/31/25, 9:54 PM
No matter what, the current paradigm is the "mixture of experts", aka defining roles and then switching between them. In the case of Aider, you're mainly looking at "architect" and "code". Others like Cline/Roo also provide this, plus access to MCP, but with more limitations. Honestly, I would avoid these more advanced tools until you have an understanding of how everything works.
Begin with your "architect" and agree on a design, then act as an engineering manager to guide your "code" model to implement the tests. Once you have a full test suite, make sure that you can use commands like `/lint` and `/test` to call your toolchain as needed.
I personally prefer this method (again, when you're getting started) because it's independent of every IDE and will work anywhere you have a terminal. After you get comfortable, you'll quickly want to use a more advanced tool to trade your money for the time of the machine(s). When you get to that point you'll understand what the fuss is about.
by biotechbio on 3/31/25, 11:24 PM
I used VSCode as my default IDE, so the switch to Cursor was very natural.
I am working on machine learning in bio, and many of the tools, methods, and data structures are very domain specific. Even so, the agent feature is good enough that for most tasks, I can describe the functionality I want and it gets me 80% of the way there. I pay $20 a month for Cursor and it has quickly become the last subscription I would cancel.
by rbrownmh on 3/31/25, 9:54 PM
OpenRouter Quickstart: https://openrouter.ai/docs/quickstart
See usage example here (Typescript): https://github.com/brownrw8/olelo-honua/blob/main/src/provid...
And while you're at it, check out my open source project using OpenRouter, ʻŌlelo Honua ;)
ʻŌlelo Honua: https://www.olelohonua.com
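Since the OP works in Python, here's a minimal sketch of hitting OpenRouter through its OpenAI-compatible endpoint. The model slug and env var handling are just placeholders; check the quickstart above for current details:

```python
# Minimal OpenRouter call via its OpenAI-compatible API (sketch, not production code).
import os
from openai import OpenAI  # pip install openai

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",        # OpenRouter's OpenAI-compatible endpoint
    api_key=os.environ["OPENROUTER_API_KEY"],       # assumes you've exported your key
)

response = client.chat.completions.create(
    model="anthropic/claude-3.5-sonnet",            # any model slug listed on openrouter.ai/models
    messages=[
        {"role": "system", "content": "You are a helpful data engineering assistant."},
        {"role": "user", "content": "Draft a Snowflake MERGE statement for a slowly changing dimension."},
    ],
)
print(response.choices[0].message.content)
```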
by thuanao on 3/31/25, 11:45 PM
by kittikitti on 4/1/25, 3:14 AM
Ideally, you should try out all the tools. Each of them has advantages and disadvantages for each problem within a project. You might find that mixing and matching LLM apps gives you more expertise and skill than just sticking to your favorite.
by rf15 on 4/1/25, 4:39 AM
by willchen on 3/31/25, 11:43 PM
You can also check out the GitHub repo at: https://github.com/dyad-sh/dyad
(disclosure: I created Dyad)
by inheritedwisdom on 3/31/25, 9:57 PM
Couple of helpful tidbits we’ve learned:
- Define a README that lays out your architecture: normalization, libraries to use for different use cases, logging requirements, etc. Reference it on every new task.
- Keep notebooks from getting too big. A notebook per ELT action or table definition makes your compares faster and the required context smaller.
Be prepared to be amazed. We've pointed Cline at an API doc and had it spit out ELT code and a model that were nearly production-ready. We're pulling weeks off estimates…
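To make the "one small unit per ELT action" point concrete, a self-contained unit might look roughly like this. The endpoint, table names, and credential handling are all made up; the point is just that each piece stays small enough to review quickly and to hand to the model as context:

```python
# One small ELT unit: pull one endpoint, land it in one staging table (sketch only).
import os

import requests
import snowflake.connector  # pip install snowflake-connector-python


def extract_orders() -> list[dict]:
    """Pull raw orders from a (hypothetical) REST endpoint."""
    resp = requests.get("https://api.example.com/v1/orders", timeout=30)
    resp.raise_for_status()
    return resp.json()["orders"]


def load_orders(rows: list[dict]) -> None:
    """Land the raw rows into a Snowflake staging table."""
    conn = snowflake.connector.connect(
        account=os.environ["SNOWFLAKE_ACCOUNT"],
        user=os.environ["SNOWFLAKE_USER"],
        password=os.environ["SNOWFLAKE_PASSWORD"],
        warehouse="LOAD_WH",
        database="RAW",
        schema="STAGING",
    )
    try:
        conn.cursor().executemany(
            "INSERT INTO STG_ORDERS (ORDER_ID, CUSTOMER_ID, AMOUNT) VALUES (%s, %s, %s)",
            [(r["id"], r["customer_id"], r["amount"]) for r in rows],
        )
    finally:
        conn.close()


if __name__ == "__main__":
    load_orders(extract_orders())
```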
by wmedrano on 4/1/25, 1:49 AM
by medhir on 4/1/25, 2:49 AM
Using Cursor, I felt like it was too easy to modify too many things in one go.
I keep my prompts scoped to drafting / refining specific components. If I feel stuck, I’ll use the chat as a rubber duck to bounce ideas off of and rebuild momentum. Asking the model to follow a specific pattern you’ve already established helps with consistency.
I've recently been impressed with Gemini 2.5; it's able to provide more substantive responses when I ask for criticism, whereas the other models act more like sycophants.
by tpougy on 4/4/25, 1:57 PM
When used right with VS Code, the experience seems almost the same as using Cursor, but you can choose your AI provider.
PS: In the last few days I tried Roo Code with Google Gemini 2.5 thinking (significantly better than Claude 3.7), but found that the OpenRouter option for Gemini 2.5 thinking has a bad rate limit. The solution was to create a Google developer account and use the Google Gemini API directly, which doesn't have such a bad rate limit and is also very cheap (I used it a lot and the cost was zero).
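If you also want to call the Gemini API directly from Python (outside of Roo Code), the call is roughly this; treat it as a sketch and use whatever 2.5 model name Google currently lists:

```python
# Calling the Gemini API directly instead of going through OpenRouter (sketch).
import os

import google.generativeai as genai  # pip install google-generativeai

genai.configure(api_key=os.environ["GEMINI_API_KEY"])  # key from the Google AI developer console

model = genai.GenerativeModel("gemini-2.5-pro-exp-03-25")  # placeholder; pick the current 2.5 model ID
response = model.generate_content("Review this dbt model for obvious performance problems: ...")
print(response.text)
```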
by brokegrammer on 4/2/25, 6:43 AM
The developer experience with Aider isn't that great when compared to a full IDE like Cursor, but it's a great way to get started because it's a simple CLI tool that accepts commands.
After that you might decide to switch to JetBrains AI, since you use PyCharm.
by sans_souse on 3/31/25, 11:25 PM
I demo'd a quick experiment from scratch and posted it here if you're interested; sorry, no YouTube link though: https://www.tiktok.com/t/ZT2TJ8hrt/
by d4rkp4ttern on 4/1/25, 11:25 AM
by mpalmer on 4/1/25, 2:25 AM
The polish of Cursor et al. is a component of their dishonesty. You should keep the nature of the tool exposed to you at all times.
I say you should:
- identify the things about your personal development process that you'd like to streamline (up to and including not having to code everything by hand)
- write Python scripts that call LLMs (or don't!) to achieve the thing you want.
- heck, write library functions and establish some neat patterns - a random prompt picker using a folder of txt files, why not?
Not only can you create a development experience that approximates much of what Cursor does, you can make one that is twice as good for you because it is aligned with how you work, and how you think.
And at that point, with your fluent new workflow, making a simple VSCode extension to hook into your scripts can't be all that hard. How can Cursor compete with that?
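As a rough illustration of the "random prompt picker" idea above, something like the following works. The OpenAI client and model name here are just placeholders; swap in whatever provider you already use:

```python
# Tiny personal workflow script: pick a random prompt template and run it over a file (sketch).
import os
import random
import sys
from pathlib import Path

from openai import OpenAI  # or any other client you prefer

PROMPT_DIR = Path("prompts")  # a folder of .txt prompt templates you curate over time


def random_prompt() -> str:
    """Pick one of your saved prompt templates at random."""
    templates = list(PROMPT_DIR.glob("*.txt"))
    return random.choice(templates).read_text()


def review_file(path: str) -> str:
    """Ask the model to review a source file using the chosen prompt."""
    client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
    code = Path(path).read_text()
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use whatever model/provider you like
        messages=[
            {"role": "system", "content": random_prompt()},
            {"role": "user", "content": code},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(review_file(sys.argv[1]))
```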
by singular_atomic on 4/6/25, 12:08 PM
by timbritt on 4/1/25, 8:01 AM
It’s not.
These tools will literally waste your time and empty your wallet if you don’t have a solid approach — and that starts with knowing how to detect code and architecture smells at light speed. These tools are writing code faster than you can read it so you better know what you’ve (actually) instructed it to do, and be able to monitor the quality in a 300ms glance or you’re hosed.
Having said that, there are antidotes:
1. Use a memory-bank. A memory bank is a little folder where you instruct the agent to research upcoming tasks, store progress info, post gotchas, generate retrospectives and more. It’s super helpful if the thing you’re building is large and complex.
2. Instruct the agent to implement strict TDD/BDD principles and force it to follow a red-green-refactor workflow at a granular level. The downside is twice the code (read: $tokens) per feature, but you'll spend it anyway when your agent writes 600 lines of untested code while you grab coffee and you come back to find it stuck in a loop, spending $0.20 per request re-sending the same large files to chase some syntax error between the test and code files.
3. You need to know what you’re doing. The model/agent is not going to automatically apply second and third order thinking to look around systems design corners. It does exactly what you tell it to do (sometimes well) but you also have to always be considering what you forgot to tell it to do (or not to do). Prompt engineering is not natural language. It’s a form of spellweaving where you have to find the edges of all relevant forms of existence and non-existence, codify them into imperative language, add the right nuance to finesse the style and have the confidence to cast the spell — over and over and over again.
4. You need money. Building a serious endeavor with an AI agent will cost you around $400-600 a month. Leave room for lots of mistakes. Prepare for weeping and gnashing of teeth when the task counter hits $10.00, you’re like 20% done with the feature, the model has lost context about what it was doing and you have to choose between making a broken commit and hoping the agent can sort it out in a clean task, or eating the cost and starting over. I spent $50 on starting over just this past weekend. This is why I wrote numbers 1-3 above.
I’m committed to figuring out where all the pitfalls are so others might not have to. I personally feel like anyone who’s doing this right now is really just burning some cash to see what’s possible.
by andrewstuart on 4/1/25, 1:15 AM
by eternityforest on 4/1/25, 8:55 AM
by ramesh31 on 4/1/25, 5:03 AM
by horsellama on 4/4/25, 9:50 PM
by jlcases on 4/1/25, 5:29 PM
For Python/data engineering specifically, creating clear category boundaries between data models, transformation logic, and validation rules makes the LLM much more likely to generate code that follows your architectural patterns.
The key is treating documentation as a "context API" for the AI rather than just human-readable text. When your documentation has a clear hierarchical structure without overlaps or gaps, the LLM can navigate the problem space much more effectively.
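For example, the boundaries might look like this in a Python project (module layout, names, and fields are purely illustrative), so a prompt can say "follow the pattern in transforms/" and the model has an unambiguous pattern to imitate:

```python
# models/orders.py — data model only, no transformation or validation logic here.
from dataclasses import dataclass
from datetime import date


@dataclass
class Order:
    order_id: str
    customer_id: str
    amount: float
    order_date: date


# transforms/orders.py — pure transformation logic, takes and returns plain structures.
def normalize_amount(order: Order, fx_rate: float) -> Order:
    """Convert the order amount into the reporting currency."""
    return Order(order.order_id, order.customer_id, order.amount * fx_rate, order.order_date)


# validation/orders.py — validation rules, separate from models and transforms.
def validate(order: Order) -> list[str]:
    """Return a list of human-readable problems; empty means the row is clean."""
    problems = []
    if order.amount < 0:
        problems.append(f"order {order.order_id}: negative amount")
    if not order.customer_id:
        problems.append(f"order {order.order_id}: missing customer_id")
    return problems
```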