Beyond the obvious chatbots and coding copilots, curious what people are actually shipping with LLMs. Internal tools? Customer-facing features? Any economically useful agents out there in the wild?
by petercooper on 6/28/25, 4:05 PM
Analyzing firehoses of data. RSS feeds, releases, stuff like that. My job involves curating information and while I still do that process by hand, LLMs make my net larger and help me find more signals. This means hallucinations or mistakes aren't a big deal, since it all ends up with me anyway. I'm quite bullish on using LLMs as extra eyes, rather than as extra hands where they can run into trouble.
by actinium226 on 6/28/25, 3:35 PM
We have a prompt that takes a job description and categorizes it based on whether it's an individual contributor role, manager, leadership, or executive, and also tags it based on whether it's software, mechanical, etc.
We scrape job sites and use that prompt to create tags which are then searchable by users in our interface.
It was a bit surprising to see how Karpathy described software 3.0 in his recent presentation because that's exactly what we're doing with that prompt.
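A rough sketch of what that kind of classifier call can look like (the tag values, model, and JSON shape below are placeholders, not necessarily what they use):

    import json
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set

    PROMPT = """Classify the job description below.
    Return JSON with two fields:
      "level": one of ["individual_contributor", "manager", "leadership", "executive"]
      "discipline": one of ["software", "mechanical", "electrical", "other"]

    Job description:
    {job_description}
    """

    def tag_job(job_description: str) -> dict:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model choice
            messages=[{"role": "user",
                       "content": PROMPT.format(job_description=job_description)}],
            response_format={"type": "json_object"},
        )
        return json.loads(resp.choices[0].message.content)

    # tag_job(posting_text) -> e.g. {"level": "manager", "discipline": "software"}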
by yamalight on 6/28/25, 3:58 PM
Built vaporlens.app in my free time using LLMs (specifically Gemini, first 2.0-flash, recently moved to 2.5-flash).
It processes Steam game reviews and provides a one-page summary of what people think about the game. I've been gradually improving it and adding some features from community feedback. It's been good fun.
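For anyone curious what the core of a review summarizer like this looks like with the google-genai SDK, a minimal sketch (the prompt and review sampling are my assumptions, not how vaporlens.app actually does it):

    from google import genai

    client = genai.Client()  # assumes GEMINI_API_KEY is set in the environment

    def summarize_reviews(game_name: str, reviews: list[str]) -> str:
        prompt = (
            f"Summarize what players think about '{game_name}' based on the Steam "
            "reviews below. Group the points into pros, cons, and common complaints.\n\n"
            + "\n---\n".join(reviews[:200])  # cap how much review text goes in
        )
        resp = client.models.generate_content(
            model="gemini-2.5-flash",
            contents=prompt,
        )
        return resp.text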
by rootsofallevil on 6/28/25, 4:05 PM
> Beyond the obvious chatbots and coding copilots, curious what people are actually shipping with LLMs.
We're delivering confusion and thanks to LLMs we're 30% more efficient doing it
by jabroni_salad on 6/28/25, 4:56 PM
One of my clients is doing m&a like crazy and we are now using it to help with directory merging. Every HR and IT department does things a little differently and we want to match them to our predefined roles for app licensing and access control.
You used to either budget for data entry or just graft directories in a really ugly way. The forest used to know about 12000 unique access roles and now there are only around 170.
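The mapping step could be as simple as the hypothetical sketch below (role names are invented, and `llm` stands in for whatever chat-completion call you use); the key is forcing the answer into the predefined catalog and keeping a human on the leftovers:

    import json

    PREDEFINED_ROLES = ["Finance - AP Clerk", "IT - Helpdesk", "Sales - Account Exec"]  # ~170 in reality

    def map_role(source_role: str, department: str, sample_titles: list[str], llm) -> str:
        prompt = f"""Map this access role from an acquired company's directory onto
    exactly one role from our catalog. Answer with the catalog role name only,
    or "UNMAPPED" if nothing fits.

    Catalog: {json.dumps(PREDEFINED_ROLES)}
    Source role: {source_role}
    Department: {department}
    Example job titles holding it: {sample_titles}"""
        answer = llm(prompt).strip()
        return answer if answer in PREDEFINED_ROLES else "UNMAPPED"  # route UNMAPPED to a human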
by alonsonic on 6/28/25, 4:22 PM
I created an agent to scan niche independent cinemas and create a repository of everything playing in my city. I have an LLM heavy workflow to scrape, clean, classify and validate the data. It can handle any page I throw at it with ease. Very accurate as well, less than 5% errors right now.
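A sketch of the shape of that kind of pipeline (prompts and names are mine, not the poster's): one structured extraction call per page, followed by cheap deterministic validation before anything is stored:

    import json
    import re

    DATE_RE = re.compile(r"^\d{4}-\d{2}-\d{2}$")

    def extract_screenings(page_text: str, cinema_name: str, llm) -> list[dict]:
        # `llm` is a placeholder for whatever model call you use; it should return text.
        prompt = f"""From this cinema listings page, return a JSON array of screenings
    with fields: title, date (YYYY-MM-DD), time (HH:MM), format ("35mm", "digital" or "unknown").
    Return the JSON array only, no prose. Cinema: {cinema_name}

    {page_text}"""
        rows = json.loads(llm(prompt))
        # validation pass: drop rows that fail cheap deterministic checks
        return [r for r in rows if DATE_RE.match(str(r.get("date", "")))]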
by intermerda on 6/28/25, 3:54 PM
Mostly for understanding existing code base and making changes to it. There are tons of unnecessary abstractions and indirections in it so it takes a long time for me to follow that chain. Writing Splunk queries is another use.
People use it to generate meeting notes. I don't like it and don't use it.
by GarnetFloride on 6/28/25, 3:46 PM
We've been encouraged to use LLMs for brainstorming blog posts. The actual posts it generates are usually not good, but they give us something to talk about so we can write something better.
And doing SEO on posts. It seems to do that pretty well.
by mulmboy on 6/29/25, 7:34 AM
We operate a saas where a common step is inputting rates of widgets in $/widget, $/widget/day, $/1kwidgets, etc etc. These are incredibly tedious and error prone to enter. And usually the source of these rates is an invoice which presents them in ambiguous ways e.g. rows with "quantity" and "charge" from which you have to back calculate the rate. And these invoices are formatted in all different ways.
We offer a feature to upload the invoice and we pull out all the rates for you. Uses LLMs under the hood. Fundamentally it's a "chatgpt wrapper" but there's a massive amount of work in tweaking the prompts based on evals, splitting things up into multiple calls, etc.
And it works great! Niche software, but for power users we're saving them tens of minutes of monotonous work per day and in all likelihood they're entering things more accurately. This complements the manual entry process with a full ability to review the results. Accuracy is around 98-99 percent.
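For a concrete feel of the back-calculation mentioned above (the field names are invented): an invoice row with quantity 250 and charge $312.50 implies a rate of 312.50 / 250 = $1.25/widget. Keeping that arithmetic in plain code, with the LLM only extracting the raw row, means the number itself can't be hallucinated:

    def derive_rate(row: dict) -> float:
        """row is the LLM-extracted invoice line, e.g. {"quantity": "250", "charge": "312.50"}."""
        quantity = float(row["quantity"])
        charge = float(row["charge"])
        if quantity <= 0:
            raise ValueError("cannot back-calculate a rate from zero quantity")
        return round(charge / quantity, 4)  # -> 1.25 ($/widget) for the example above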
by miketery on 6/28/25, 4:37 PM
I built a SQL agent with detailed database context and a set of tools. It's been a huge lift for me and the team in generating rather complex queries that would take non-trivial time to construct, even using Cursor or ChatGPT.
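Not the poster's implementation, but a sketch of the two ingredients that usually matter: a rich schema/conventions context handed to the model, and a cheap "does it even run" check (here a hypothetical `explain` callback) before a human looks at the query:

    SCHEMA_CONTEXT = """
    Tables:
      orders(id, customer_id, total_cents, created_at)
      customers(id, name, region, created_at)
    Conventions: money is stored in cents; timestamps are UTC.
    """

    def generate_query(question: str, llm, explain) -> str:
        prompt = (
            "Write a single PostgreSQL query answering the question. Return SQL only.\n\n"
            + SCHEMA_CONTEXT
            + "\nQuestion: " + question
        )
        sql = llm(prompt)
        explain(sql)  # e.g. run EXPLAIN against a replica; raise if it fails to parse
        return sql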
by jakevoytko on 6/28/25, 4:25 PM
I work for Hinge, the dating app. We use them for our "prompt feedback" feature, where the LLM gives constructive feedback on how to improve your prompts if it judges them as low-effort or clichéd.
by orphea on 6/28/25, 4:13 PM
When a customer onboards, we scrape their website to pre-fill some answers and pre-create certain settings (categories, tags, etc.). Ideally the customer spends most of the time just confirming things.
by binarymax on 6/28/25, 3:31 PM
So many things. I have built several customer facing products, a web research platform that works better than the RAG you get from Google, and lots of small tools.
For example, I wrote a recent blog post on how I use LLMs to generate excel files with a prompt (less about the actual product and more about how to improve outcomes): https://maxirwin.com/articles/persona-enriched-prompting/
by tibbar on 6/28/25, 4:53 PM
Internal research assistants. Essentially 'deep research' hooked up to the internal data lake, knowledge bases, etc. It takes some iterations to make a tool like this actually effective, but once you've fixed the top N common roadblocks, it just sorta works. Modern (last six months) models are amazing.
If all you've built is RAG apps up to this point, I highly recommend playing with some LLM-in-a-loop-with-tools reasoning agents. Totally new playing field.
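For anyone who hasn't tried it, the whole pattern fits in a page. A minimal sketch with the OpenAI SDK and one stubbed-out tool (the tool, model, and step cap are placeholders for your own connectors):

    import json
    from openai import OpenAI

    client = OpenAI()

    TOOLS = [{
        "type": "function",
        "function": {
            "name": "search_wiki",
            "description": "Search the internal knowledge base and return top snippets.",
            "parameters": {
                "type": "object",
                "properties": {"query": {"type": "string"}},
                "required": ["query"],
            },
        },
    }]

    def search_wiki(query: str) -> str:
        return "stub result for: " + query  # wire this up to your data lake / KB search

    def run_agent(question: str, max_steps: int = 8) -> str:
        messages = [{"role": "user", "content": question}]
        for _ in range(max_steps):
            resp = client.chat.completions.create(model="gpt-4o", messages=messages, tools=TOOLS)
            msg = resp.choices[0].message
            if not msg.tool_calls:
                return msg.content  # the model decided it has enough to answer
            messages.append(msg)
            for call in msg.tool_calls:
                args = json.loads(call.function.arguments)
                messages.append({"role": "tool", "tool_call_id": call.id,
                                 "content": search_wiki(**args)})
        return "stopped after max_steps without a final answer"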
by jackthetab on 6/28/25, 8:11 PM
Which LLMs and plans are you guys using for all of these cool ideas?
ATM I use ChatGPT Plus for everything except coding inside my Jetbrains IDEs.
I'm starting to look around at other LLMs for non-coding purposes (brainstorming, docs, being a project manager, summarizing, learning new subjects, etc.).
by nickandbro on 6/28/25, 4:21 PM
I have a hobby project called https://Vimgolf.ai where users try to best a bot powered by o3. Apparently, o3 is really good at producing vim sequences that transform a start file into an end file, albeit at moderate complexity.
by perk on 6/28/25, 4:29 PM
Several things! But my favourite use-case works surprisingly well.
I have a js-to-video service (open source sdk, WIP) [1] with the classic "editor to the left - preview on the right" scenario.
To help write the template code I have a simple prompt input + api that takes the llms-full.txt [2] + code + instructions and gives me back updated code.
It's more "write this stuff for me" than vibe-coding, as it isn't conversational for now.
I've not been bullish on ai coding so far, but this "hybrid" solution is perfect for this particular use-case IMHO.
[1] https://js2video.com/play
[2] https://js2video.com/llms-full.txt
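As I read it, the call behind that button is roughly the sketch below (prompt wording and the `llm` helper are mine): docs plus current code plus the instruction in, full updated code out, with no conversation state kept.

    import urllib.request

    def update_template(current_code: str, instruction: str, llm) -> str:
        docs = urllib.request.urlopen("https://js2video.com/llms-full.txt").read().decode()
        prompt = (
            "You write templates for the js2video SDK. Use only the API described in the docs.\n"
            "Return the complete updated template, code only.\n\n"
            f"=== DOCS ===\n{docs}\n\n"
            f"=== CURRENT CODE ===\n{current_code}\n\n"
            f"=== INSTRUCTION ===\n{instruction}"
        )
        return llm(prompt)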
by ArneVogel on 6/28/25, 4:51 PM
I am using it for FisherLoop [1] to translate text, extract vocabulary, and generate example sentences in different languages. I found it pretty reliable for longer paragraphs. For one-sentence translations it lacks context and I have to manually edit sometimes. I tried adding more context, like the paragraph before and after, but then it wouldn't follow the instructions to translate only the paragraph I wanted and translated the context as well, which I found no good way to prevent. So now I manually verify, but it still saves me ~98% of the work.
[1] https://www.fisherloop.com/en/
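One workaround for the "it translates the context too" problem is to label the target explicitly and force a single structured field back; a hypothetical sketch (not necessarily what FisherLoop does):

    import json

    def translate_sentence(target: str, before: str, after: str, language: str, llm) -> str:
        prompt = f"""Translate ONLY the sentence marked TARGET into {language}.
    The surrounding paragraphs are context and must not appear in the output.
    Return JSON: {{"translation": "..."}}

    CONTEXT BEFORE: {before}
    TARGET: {target}
    CONTEXT AFTER: {after}"""
        return json.loads(llm(prompt))["translation"]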
by incomingpain on 6/30/25, 11:44 AM
My startup, https://mapleintel.ca, has an original threat feed that's 100% reliable: every IP address on it has directly attacked me.
The new AI threat feed is everything above, plus I'm using AI to make rapid decisions for me. I can pull info from sources like DNSBLs to help judge. If I were to do it manually, maybe 1 IP per 30 seconds? With Phi-4, omg, one every second.
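A hypothetical sketch of that triage step (evidence fields, threshold, and the `llm` call are all assumptions): gather the cheap signals, let the small model give a structured verdict, and keep the final add/drop decision as a plain threshold in code:

    import json

    def triage_ip(ip: str, dnsbl_hits: list[str], asn: str, llm) -> bool:
        prompt = f"""You triage IP addresses for a threat feed. Based on the evidence,
    answer with JSON: {{"malicious": true/false, "confidence": 0-100, "reason": "..."}}

    IP: {ip}
    DNSBL listings: {dnsbl_hits}
    ASN / owner: {asn}"""
        verdict = json.loads(llm(prompt))
        return bool(verdict["malicious"]) and verdict["confidence"] >= 80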
by VladVladikoff on 6/28/25, 3:40 PM
NVIDIA NeMo ASR + an 8B LLM to generate transcripts and summaries of the phone calls my support team conducts. It works better than the notes they leave about the calls.
by hoistbypetard on 6/28/25, 4:31 PM
I work with (a few someones) who see fit to send out schedules as PDFs, 3 months at a time. I have a script that feeds Claude the PDFs and gets it to generate an ICS file. Then a script that feeds it both the ICS file and the original PDF and asks it to highlight any differences between the two.
Getting those events onto a usable, sharable calendar is much easier now.
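Roughly the first half of such a script, sketched with the Anthropic SDK's PDF input (the model name and prompt are placeholders, not necessarily what the poster uses):

    import base64
    import anthropic

    client = anthropic.Anthropic()  # assumes ANTHROPIC_API_KEY is set

    def pdf_to_ics(pdf_path: str) -> str:
        pdf_b64 = base64.standard_b64encode(open(pdf_path, "rb").read()).decode()
        msg = client.messages.create(
            model="claude-3-5-sonnet-latest",  # placeholder model name
            max_tokens=4096,
            messages=[{
                "role": "user",
                "content": [
                    {"type": "document",
                     "source": {"type": "base64", "media_type": "application/pdf", "data": pdf_b64}},
                    {"type": "text",
                     "text": "Extract every scheduled event from this PDF and return a valid "
                             "iCalendar (.ics) file covering all of them. Output the ICS only."},
                ],
            }],
        )
        return msg.content[0].text

    # A second call can then be given both the PDF and the generated ICS and asked to
    # list any dates or times that differ, which is the cross-check described above.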
by impure on 6/28/25, 4:22 PM
Pretty much all of my productivity apps have LLM integration now. My language learning app uses them to break down phrases and get detailed definitions. My RSS app generates summaries. And recently I released an email app that's like Google Inbox in that it uses bundles. It also summarizes emails and extracts expiry and due dates.
by karmakaze on 6/28/25, 4:19 PM
Not production, I was just playing around, but it seems useful. On so many platforms bios are mostly blank. The best way to get good ones is to have AIs search for pictures and info about yourself and write a draft that's close but definitely not how you want it. That motivates fixing it up on the spot.
by ohxh on 6/28/25, 4:49 PM
Lots of non-chatbot uses in property management. Auditing leases vs. payment ledgers. Classifying maintenance work orders. Creating work orders from inspections (photos + text). Scheduling vendors to fix these issues. Etc.
by lazy_afternoons on 6/28/25, 4:03 PM
We use it for lead quality assessment, detecting bad language, scoring language on subtle skills, etc.
Pretty much 5-6 niche classification use cases.
by IdealeZahlen on 6/28/25, 4:02 PM
I've been building some interactive educational stuff (mostly math and science) with react / three.js using Claude.
by tony_codes on 6/28/25, 4:47 PM
Enabling users at jumblejournal.org to journal by hand using OpenAI OCR. Also, extracting growth vectors from journal entries.
by cpursley on 6/28/25, 4:34 PM
Parsing information into structured data as well as classifying information into normalized fields.
by sethops1 on 6/28/25, 3:35 PM
Not really, no. Still just using ChatGPT or Gemini for the occasional search for things that are buried in documentation somewhere. Anything more than that and LLMs make a hash of it fairly quick.
by joeyagreco on 6/28/25, 3:58 PM
Writing test boilerplate.
by rootcage on 6/28/25, 4:01 PM
The most common use case - coding assistant to get more done in less time.
Used it to get a deeper understanding of a complex code base, create system design architecture diagrams, and help onboard new engineers.
Summarizing large data dumps that users were frustrated with.
by scarface_74 on 6/29/25, 2:52 AM
Chatbots with some nuance. I work with voice and chat call centers hosted on Amazon Connect - the AWS version of the call center that Amazon uses internally.
Traditionally, and still how it works in most call centers, you have to explicitly list out the things you can handle (intents), the sentences that trigger them (utterances), and slots - i.e. in "I want to get a flight from {origin} to {destination}" the variable parts would be the slots.
Anyway, absolutely no company would or should trust an LLM to generate output to a customer. It never ends well. I use gen AI to categorize free text input from a customer into a set of intents the system can handle and to fill in the slots. But the output is very much on rails.
It works a lot better than the old school method.
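A sketch of that "on rails" routing (the intents and JSON shape are invented): the LLM only ever maps free text onto a fixed intent-and-slot schema, and anything outside the schema falls back to the old behavior:

    import json

    INTENTS = {
        "BookFlight": ["origin", "destination", "date"],
        "CheckBalance": [],
        "SpeakToAgent": [],
    }

    def route_utterance(text: str, llm) -> dict:
        prompt = f"""Map the caller's message to one intent and fill its slots.
    Allowed intents and their slots: {json.dumps(INTENTS)}
    Return JSON: {{"intent": "...", "slots": {{...}}}} or {{"intent": "None"}} if nothing fits.

    Caller: {text}"""
        result = json.loads(llm(prompt))
        if result.get("intent") not in INTENTS:
            return {"intent": "None", "slots": {}}  # fall back to the traditional prompt tree
        return result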
by gametorch on 6/28/25, 5:16 PM
1. Pre-prompting for image and video generation. Gives you way better results for less than a cent of added cost. Many image models already do this for you, though, so you have to understand each individual model and apply it judiciously.
2. I build REPLs into any manual workflow that makes use of LLMs (rough sketch after this list). Instead of just being like "F@ck, it didn't work!" you can tell the LLM why it didn't work and help it get the right answer. Saves a ton of time.
3. Coming up with color palettes, themes, and ideas for "content". LLMs are really good at pumping out good looking input for whatever factory you have built.
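On point 2, a rough sketch of what such a REPL loop can look like (my interpretation, with `llm_chat` and `check` as placeholders): the failed attempt stays in the message history and the human types corrections until the output passes.

    def repl_loop(task_prompt: str, llm_chat, check) -> str:
        """llm_chat(messages) -> str and check(output) -> bool are placeholders."""
        messages = [{"role": "user", "content": task_prompt}]
        while True:
            output = llm_chat(messages)
            messages.append({"role": "assistant", "content": output})
            if check(output):
                return output
            feedback = input("Why didn't it work? > ")  # the human-in-the-loop step
            messages.append({"role": "user", "content": feedback})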
by nurettin on 6/28/25, 4:10 PM
I use LLMs to provide up to date information (by injecting newer information into the live conversation) and figure out what functions the user wants to call.
by tootie on 6/28/25, 4:31 PM
Coding assistant and audio transcription