by bryanh on 3/23/23, 4:57 PM with 1106 comments
by celestialcheese on 3/23/23, 5:29 PM
That being said, I'd never build anything dependent on these plugins. OpenAI and their models rule the day today, but who knows what will be next. Building on a open source framework (like langchain/gpt-index/roll your own), and having the ability to swap out the brain boxes behind the scenes is the only way forward IMO.
And if you're a data provider, are there any assurances that openai isn't just scraping the output and using it as part of their RLHF training loop, baking your proprietary data into their model?
by johnfn on 3/23/23, 5:38 PM
Never have I been more wrong. It's clear to me now that they simply didn't even care about the astounding leap forward that was generative AI art and were instead focused on even more high-impact products. (Can you imagine going back 6 months and telling your past self "Yeah, generative AI is alright, but it's roughly the 4th most impressive project that OpenAI will put out this year"?!) ChatGPT, GPT4, and now this: the mind boggles.
Watching some of the gifs of GPT using the internet, summarizing web pages, comparing them, etc is truly mind-blowing. I mean yeah I always thought this was the end goal but I would have put it a couple years out, not now. Holy moly.
by 93po on 3/23/23, 6:26 PM
As someone else said, Google is dead unless they massively shift in the next 6 months. No longer do I need to sift through pages of "12 best recipes for Thanksgiving" blog spam - OpenAI will do this for me and compile the results across several blog spam sites.
I am literally giving notice and quitting my job in a couple weeks, and it's a mixture of both being sick of it but also because I really need to focus my career on what's happening in this field. I feel like everything I'm doing now (product management for software) is about to be nearly worthless in 5 years. Largely in part because I know there will be a Github Copilot integration of some sort, and software development as we know it for consumer web and mobile apps is going to massively change.
I'm excited and scared and frankly just blown away.
by CobrastanJorji on 3/23/23, 5:55 PM
Holy cow.
by mk_stjames on 3/23/23, 7:26 PM
First is your API calls, then your chatgpt-jailbreak-turns-into-a-bank-DDOS-attack, then your "today it somehow executed several hundred thousand threads of a python script that made perfectly timed trades at 8:31AM on the NYSE which resulted in the largest single day drop since 1987..."
You can go on about individual responsibility and all... users are still the users, right. But this is starting to feel like giving a loaded handgun to a group of chimpanzees.
And OpenAI talks on and on about 'Safety' but all that 'Safety' means is "well, we didn't let anyone allow it to make jokes about fat or disabled people so we're good, right?!"
by huijzer on 3/23/23, 7:17 PM
Timeline of shipping by them (based on https://twitter.com/E0M/status/1635727471747407872?s=20):
DALL·E - July '22
ChatGPT - Nov '22
API's 66% cheaper - Aug '22
Embeddings 500x cheaper while SoTA - Dec '22
ChatGPT API. Also 10x cheaper while SoTA - March '23
Whisper API - March '23
GPT-4 - March '23
Plugins - March '23
Note that they have only a few hundred employees. To quote Fireship from YouTube: "2023 has been a crazy decade so far"
by softwaredoug on 3/23/23, 7:05 PM
It's also an interesting case study. Alexa foundationally never changed. Whereas OpenAI is a deeply invested, basically skunkworks, project with backers that were willing to sink significant cash into before seeing any returns, Alexa got stuck on a type of tech that 'seemed like' AI but never fundamentally innovated. Instead the sunk cost went to monetizing it ASAP. Amazon was also willing to sink cash before seeing returns, but they sunk it into very different areas...
It reminds me of that dinner scene in Social Network. Where Justin Timberlake says "you know what's f'ing cool, a billion dollars" where he lectures Zuck on not messing up with the party before you know what it is yet. Alexa / Amazon did a classic business play. Microsoft / OpenAI were just willing to figure it all out after the disruption happened where they held all the cards.
by jyrkesh on 3/23/23, 5:32 PM
Not saying mobile's going away, but this could be the thing that does to mobile what mobile did to desktop.
by throwaway4837 on 3/23/23, 7:03 PM
If OpenAI becomes the AI platform of choice, I wonder how many apps on the platform will eventually become native capabilities of the platform itself. This is unlike the Apple App Store, where they just take a commission, and more like Amazon where Amazon slowly starts to provide more and more products, pushing third-party products out of the market.
by kenjackson on 3/23/23, 5:21 PM
They're really building a platform. Curious to see where this goes over the next couple of years.
by gradys on 3/23/23, 10:47 PM
I do think much of the kind of software we were building before is essentially solved now, and in its place is a new paradigm that is here to stay. OpenAI is certainly the first mover in this paradigm, but what is helping me feel less dread and more... excitement? opportunity? is that I don't think they have such an insurmountable monopoly on the whole thing forever. Sounds obvious once you say it. Here's why I think this:
- I expect a lot of competition on raw LLM capabilities. Big tech companies will compete from the top. Stability/Alpaca style approaches will compete from the bottom. Because of this, I don't think OpenAI will be able to capture all value from the paradigm or even raise prices that much in the long run just because they have the best models right now.
- OpenAI made the IMO extraordinary and under-discussed decision to use an open API specification format, where every API provider hosts a text file on their website saying how to use their API. This means even this plugin ecosystem isn't a walled garden that only the first mover controls.
- Chat is not the only possible interface for this technology. There is a large design space, and room for many more than one approach.
Taking all of this together, I think it's possible to develop alternatives to ChatGPT as interfaces in this new era of natural language computing, alternatives that are not just "ChatGPT but with fewer bugs". Doing this well is going to be the design problem of the decade. I have some ideas bouncing around my head in this direction.
Would love to talk to like minded people. I created a Discord server to talk about this ("Post-GPT Computing"): https://discord.gg/QUM64Gey8h
My email is also in my profile if you want to reach out there.
by JCharante on 3/23/23, 6:20 PM
I have been playing around with GPT-4 parsing plaintext tickets and it is amazing what it does with the proper amount of context. It can draft tickets, familiarize itself with your stack by knowing all the tickets, understand the relationship between blockers, tell you why tickets are being blocked and the importance behind it. It can tell you what tickets should be prioritized and if you let it roleplay as a PM it'll suggest what role to be hiring for. I've only used it for a side project and I've always felt lonely working on solo side projects, but it is genuinly exciting to give it updates and have it draft emails on the latest progress. The first issue tracker to develop a plugin is what I'm moving towards.
by marban on 3/23/23, 5:14 PM
by mherrmann on 3/23/23, 6:53 PM
If I were OpenAI, I would use the usage data to further train the model. They can probably use ChatGPT itself to determine when an answer it produced pleased the user. Then they can use that to train the next model.
The internet is growing a brain.
1: https://writings.stephenwolfram.com/2023/03/chatgpt-gets-its...
by dougmwne on 3/23/23, 5:25 PM
I played with some prompts and GTP-4 seems to have no problem reading and writing to a simulated long term memory if given a basic pre-prompt.
by amrrs on 3/23/23, 7:48 PM
insane!
by swyx on 3/23/23, 6:41 PM
IT RUNS FFMPEG https://twitter.com/gdb/status/1638971232443076609?s=20
IT RUNS FREAKING FFMPEG. inside CHATGPT.
what. is. happening.
ChatGPT is an AI compute platform now.
by jug on 3/23/23, 6:55 PM
Can you imagine Google just released a davinci-003 like model in public beta? That only supports English and can't code reliably.
OpenAI is clearly betting on unleashing this avalanche before Google has time to catch up and rebuild reputation. They're still lying in the boxing ring and the referee is counting to ten.
by justaregulardev on 3/23/23, 5:39 PM
by samfriedman on 3/23/23, 5:18 PM
The browser and file-upload/interpretation plugins are great, but I think the real game changer is retrieval over arbitrary documents/filesystem: https://github.com/openai/chatgpt-retrieval-plugin
by not2b on 3/23/23, 10:42 PM
by ch33zer on 3/23/23, 5:40 PM
Of course there's also microsoft who does have some popular services, but they're pretty limited.
Thought 2: How do these companies make money if everyone just uses the chatbot to access them? Is LLM powered advertising on the way?
by s1mon on 3/23/23, 6:57 PM
by impulser_ on 3/23/23, 6:33 PM
You can ask both Bard and ChatGPT to give you a suggestion for a vegan restaurant and a recipe with calories and they both provide results. The only thing missing is the calories per item but who cares about that.
Most of the time it would be better to Google vegan restaurants and recipes because you want to see a selection of them not just one suggestion.
by anonyfox on 3/23/23, 6:59 PM
This will decimate frontend developers or at least change the way they provide value soon, and companies not being able to transition into a "headless mode" might get a hard time.
by elevenoh4 on 3/23/23, 5:46 PM
The waitlist mafia has begun. Insiders get all the whitespace.
by mikeknoop on 3/23/23, 5:18 PM
Super excited for this. Tool use for LLMs goes way beyond just search. Zapier is a launch partner here -- you can access any of the 5k+ apps / 20k+ actions on Zapier directly from within ChatGPT. We are eager to see how folks leverage this composability.
Some new example capabilities are retrieving data from any app, draft and send messages/emails, and complex multi step reasoning like look up data or create if doesn't exist. Some demos here: https://twitter.com/mikeknoop/status/1638949805862047744
(Also our plugin uses the same free public API we announced yesterday, so devs can add this same capability into your own products: https://news.ycombinator.com/item?id=35263542)
by maxdoop on 3/23/23, 5:58 PM
by elaus on 3/23/23, 5:30 PM
by andre-z on 3/23/23, 6:00 PM
by MichaelRazum on 3/23/23, 7:51 PM
by petilon on 3/23/23, 5:11 PM
OpenAI is moving fast to make sure their first-mover advantage doesn't go to waste.
by neilellis on 3/23/23, 6:09 PM
by throwaway138380 on 3/23/23, 5:15 PM
by KennyBlanken on 3/23/23, 5:46 PM
That is the most awkward insertion of a phrase about safety I've seen in quite some time.
by kacperlukawski on 3/23/23, 5:12 PM
by lurker919 on 3/23/23, 5:29 PM
by Filligree on 3/23/23, 5:11 PM
Bing already demonstrated the capability, but this is a more diverse set than just a search engine.
by JaDogg on 3/24/23, 8:50 AM
by sharemywin on 3/23/23, 5:48 PM
by jpalomaki on 3/23/23, 5:54 PM
Then you have your own computer with ChatGPT acting as CPU.
by antimora on 3/23/23, 5:22 PM
That was the whole thing about Alexa: NLP front end routed to computational backend.
by qgin on 3/23/23, 7:53 PM
by jcims on 3/23/23, 7:44 PM
by londons_explore on 3/23/23, 5:27 PM
Could I get the same by just making my prompt "You are a computer and can run the following tools to help you answer the users question: run_python('program'), google_search('query')".
Other people have done this already, for example [1]
by sheepscreek on 3/24/23, 1:30 PM
Short version: Is it spam? Yes. Scam? No. Ignore it at your own peril.
Long version: The cat is out of the bag now. The power of transformers is real. They are smarter and more intelligent than the least 20% smart humans (my approximation), and that’s already a breakthrough right there. I’ll paraphrase Steve Yegge:
> LLMs of today are like a Harvard graduate who did shrooms 4 hours ago, and is still a little high.
Putting the statistical/probability monkey aspect aside for a minute, empirically and anecdotally, they are incredibly powerful if you can learn how to harness them through intelligent prompts.
If they appear useless or dumb to you, your prompts are the reason why. Challenge them with a little guidance. They work better that way (read up on zero shot, one shot, two shot instructions).
What is most relevant this time is that they are real (an API, a search bot, a programming buddy) and democratized - available to anyone with an email address.
More on harnessing their power: squeezing your entire context into a 8k/32k token will be challenging for most complex applications. This is where prompt engineering (manual or automated) comes in.
To help with this, some very cool applications that use embeddings and vectors will push them even further - so the context can be shared as a compact vector instead of a large corpus of text.
While this is certainly better than a traditional search box, it’s still far from a fully-autonomous AI that can function with little to no supervision.
OpenAI plug-ins are a band-aid towards that vision, but they get us even closer.
by DustinBrett on 3/23/23, 5:13 PM
by nmca on 3/23/23, 5:50 PM
by yosito on 3/23/23, 5:36 PM
by golergka on 3/23/23, 5:38 PM
by Imnimo on 3/23/23, 8:24 PM
While I might be comfortable having ChatGPT look up a recipe for me, I feel like it's a much bigger stretch to have it just propose one from its own weights. I also notice that the prompter chooses to include the instruction "just the ingredients" - is this just to keep the demo short, or does it have trouble formulating the calorie counting query if the recipe also has instructions? If the recipe is generated without instructions and exists only in the model's mind, what am I supposed to do once I've got the ingredients?
by treyhuffine on 3/23/23, 6:33 PM
It will be interesting to see how the companies trying to compete respond.
by SubiculumCode on 3/23/23, 7:42 PM
by sirsinsalot on 3/24/23, 9:13 AM
by prophet_ on 3/24/23, 6:38 AM
by kernal on 3/23/23, 6:49 PM
by uconnectlol on 3/23/23, 9:23 PM
Who the hell talks like this? Only the most tamed HNer who thinks he's been given a divine task and accordingly crosses all Ts and dots all Is. Which is why software sucks, because you are all pathetically conformant, in a field where the accepted ideas are all terrible.
by yodon on 3/23/23, 5:25 PM
At present, we are naively pushing all information a session might need into the session before it might be needed in case it might be needed (meaning a lot of info that generally wont end up being used, like realtime updates to associated data records, needs to be pushed into the session as they happen, just in case).
It looks like plugins will allow us to flip that around and have the session pull information it might need as it needs it, which would be a huge improvement.
by iamflimflam1 on 3/23/23, 6:20 PM
by neilellis on 3/23/23, 5:43 PM
That's the sound of a thousand small startups going bust.
Well played OpenAI.
by eqmvii on 3/23/23, 6:49 PM
by iamsanteri on 3/23/23, 6:58 PM
by danielrm26 on 3/23/23, 7:20 PM
This is a short-term bridge to the real thing that's coming: https://danielmiessler.com/blog/spqa-ai-architecture-replace...
by wskish on 3/24/23, 12:07 AM
by mirekrusin on 3/23/23, 8:33 PM
This is dangerous.
by mrandish on 3/23/23, 5:39 PM
I'm curious to see just how they're going to play this "open standard."
by gk1 on 3/23/23, 6:37 PM
by booleandilemma on 3/23/23, 11:36 PM
by Seattle3503 on 3/23/23, 8:31 PM
by davidkunz on 3/23/23, 6:42 PM
- Compiler/parser for programming languages (to see if code compiles)
- Read and write access to a given directory on the file system (to automatically change a code base)
- Access to given tools, to be invoked in that directory (cargo test, npm test, ...)
Then I could just say what I want, lean back and have a functioning program in the end.
by endisneigh on 3/23/23, 6:18 PM
by aetherane on 3/23/23, 10:18 PM
by Neuro_Gear on 3/23/23, 6:12 PM
What spirits do you wizards call forth!
by JanSt on 3/23/23, 5:56 PM
by eh9 on 3/24/23, 4:17 AM
by dengorilla on 3/24/23, 2:09 AM
by throwPlz on 3/23/23, 5:15 PM
by bobdosherman on 3/23/23, 9:17 PM
by Thorentis on 3/23/23, 10:34 PM
This is missing the most important part of AGI, where understanding of the concepts the plugins provide is actually baked into the model so that it can use that understand to reason laterally. With this approach, ChatGPT is nothing more than an API client that accepts English sentences as input.
by siavosh on 3/23/23, 9:58 PM
by akavi on 3/23/23, 5:35 PM
Like, this feels a lot like when the iPhone jumped out to grab the lion share of mobile. But the switching costs was much smaller (end users could just go out and buy an Android phone), and network effects much weaker (synergy with iTunes and the famous blue bubbles... and that's about it). Here it feels like a lot of the value is embedded in the business relationships OpenAI's building up, which seem _much_ more difficult to dislodge, even if others catch up from a capabilities perspective.
by blackoil on 3/23/23, 5:32 PM
by robbywashere_ on 3/23/23, 5:55 PM
by mmq on 3/23/23, 5:17 PM
by victoryhb on 3/23/23, 5:38 PM
by jaimex2 on 3/24/23, 4:21 AM
I bet ChatGPT and equivalents will be rubbish soon. It'll segway the answer to an ad before giving what you are after.
Enjoy it while it's good and trying to build a user base, like all big tech things.
by seydor on 3/24/23, 9:16 AM
Maintaining the business ecosystem around gpt4 and future open-source chatbots will be quite a challeng
by ChildOfChaos on 3/23/23, 7:03 PM
I swear last week was huge with GPT 4 and Midjourney 5, but this week has a bunch of stuff as well.
This week you have Bing adding updated Dall-e to it's site, Adobe announcing it's own image generation model and tools, Google releasing Bard to the public and now these ChatGPT plugins, Crazy times. I love it.
by zaptrem on 3/23/23, 7:19 PM
by LelouBil on 3/23/23, 10:30 PM
When I tried bing, it made at most 2 searches right after my question but the second one didn't seem to be based on the first one's content.
This can do multiple queries based on website content and follow links !
by sharemywin on 3/23/23, 5:33 PM
Are the plugins going to cost more?
Do they share the $20 with the plug provider?
do you get charged a pay per use?
by rickrollin on 3/23/23, 6:08 PM
by seydor on 3/23/23, 9:39 PM
by siva7 on 3/23/23, 5:38 PM
by jeadie on 3/26/23, 3:58 AM
by billiam on 3/23/23, 8:07 PM
1. https://writings.stephenwolfram.com/2023/03/chatgpt-gets-its...
by CrypticShift on 3/23/23, 5:43 PM
by pzo on 3/23/23, 6:58 PM
by jacquesm on 3/23/23, 10:10 PM
by felipelalli on 3/24/23, 4:23 AM
by hmate9 on 3/23/23, 8:58 PM
Not saying it’s likely to happen with current chatgpt but as these inevitably get better the chances are forever increasing.
by amrb on 3/23/23, 10:26 PM
by Jeff_Brown on 3/23/23, 10:19 PM
Important yes, philosophical no -- it's an empirical question.
by davidkuennen on 3/23/23, 5:32 PM
by andre-z on 3/23/23, 7:12 PM
by wouldbecouldbe on 3/23/23, 10:10 PM
by nikcub on 3/23/23, 9:16 PM
by rvz on 3/23/23, 5:16 PM
Another sign of Microsoft actually running the show with their newly acquired AI division.
by finikytou on 3/23/23, 9:24 PM
by p10 on 3/23/23, 6:47 PM
by kristopolous on 3/23/23, 6:13 PM
by nikolqy on 3/23/23, 5:25 PM
by yawnxyz on 3/23/23, 10:41 PM
by kooman on 3/24/23, 7:14 AM
by sharemywin on 3/23/23, 6:07 PM
Create a manifest file and host it at yourdomain.com/.well-known/ai-plugin.json
by squarefoot on 3/23/23, 8:57 PM
by amrb on 3/23/23, 9:37 PM
by modeless on 3/23/23, 5:38 PM
by justanotheratom on 3/23/23, 10:17 PM
by gonlad_x on 3/24/23, 11:00 AM
by smy20011 on 3/23/23, 5:20 PM
by Pigalowda on 3/23/23, 7:08 PM
by karmasimida on 3/23/23, 11:53 PM
by danShumway on 3/24/23, 3:07 AM
> OpenAI will inject a compact description of your plugin in a message to ChatGPT, invisible to end users. This will include the plugin description, endpoints, and examples.
> The model will incorporate the API results into its response to the user.
Without knowing more details, both of these seem like potential avenues for prompt injection, both on the user end of things to attack services and on the developer end of things to attack users. And here's OpenAI's advice on that (https://platform.openai.com/docs/guides/safety-best-practice...), which includes gems like:
> Wherever possible, we recommend having a human review outputs before they are used in practice.
Right, because that's definitely what all the developers and companies are thinking when they wire an API up to a chat bot. They definitely intend to have a human monitor everything. /s
----
What is (no pun intended) prompting this? Does OpenAI just feel like it needs to push the hype train harder? All of the "AI safety" experiments they've been talking about are bullcrap; they're wasting time and energy doing flashy experiments about whether the AI can escape the box and self-replicate, meanwhile this gets dropped with only a minor nod towards the many actual dangers that it could pose.
It's all hype. They're only interested in being "worried" about the theoretical concerns because those make their AI sound more special when journalists report about it. The actual safety measures on this seem woefully inadequate.
It really frustrates me how easily the AGI crowd got wooed into having their entire philosophy converted into press releases to make GPT sound more advanced than it is, while actual security concerns warrant zero coverage. It reminds me of all of the self-driving car trolley problems floated around the Internet a while back that were ultimately used to distract people from the fact that self-driving cars would drive into brick walls if they were painted white. Announcements like this make it so clear that all of the "ethical" talk from OpenAI is pure marketing propaganda designed to make GPT appear more impressive. It has nothing to do with actual ethics or safety.
Hot take: you don't need an AGI to blow things up, you just need unpredictable software that breaks in novel, hard-to-anticipate ways wired up to explosives.
----
Anyway, my conspiracy theory after skimming through the docs is that OpenAI will wait for something to go horribly wrong and then instead of facing consequences they'll use that as an excuse to try and get a regulation passed to lock down the market and avoid opening up API access to other people. They'll act irresponsible and they'll use that as an excuse to monopolize. They'll build capabilities that are inherently insecure and were recklessly deployed, and then they'll pull an Apple and use that as an excuse to build a highly moderated, locked-down platform that inhibits competition.
by pisush on 3/23/23, 6:59 PM
by embit on 3/23/23, 10:37 PM
by 0xDEF on 3/23/23, 6:03 PM
by hackerlight on 3/24/23, 2:40 AM
by sacnoradhq on 3/24/23, 3:24 AM
I hope Sam is/will give YC dinner talks about their journey.
by machiaweliczny on 3/24/23, 7:13 AM
by djoldman on 3/23/23, 5:47 PM
Instant links from inside chatGPT to your website are the new equivalent of Google search ads.
by zerop on 3/24/23, 10:53 AM
by throwaway2203 on 3/23/23, 8:11 PM
by v4dok on 3/23/23, 5:18 PM
by sourcecodeplz on 3/23/23, 5:07 PM
by davidmurphy on 3/23/23, 5:15 PM