by S0y on 10/30/24, 2:15 AM
by asdfman123 on 10/31/24, 12:34 AM
I work for Google, and I just got done with my work day. I was writing what I guess you'd call "AI generated code."
But the code completion engine is basically just good at finishing the lines I'm writing. If I'm writing "function getAc..." it's smart enough to complete to "function getActionHandler()", and maybe suggest the correct arguments and a decent jsdoc comment.
So basically, it's a helpful productivity tool, but it's not doing any engineering at all. It's probably about as good as, or maybe slightly worse than, Copilot. (I haven't used Copilot recently, though.)
by ntulpule on 10/30/24, 4:42 AM
Hi, I lead the teams responsible for our internal developer tools, including AI features. We work very closely with Google DeepMind to adapt Gemini models for Google-scale coding and other software engineering use cases. Google has a unique, massive monorepo, which poses a lot of fun challenges when it comes to deploying AI capabilities at scale.
1. We take a lot of care to make sure the AI recommendations are safe and have a high quality bar (regular monitoring, code provenance tracking, adversarial testing, and more).
2. We also do regular A/B tests and randomized control trials to ensure these features are improving SWE productivity and throughput.
3. We see similar efficiencies across all programming languages and frameworks used internally at Google, and engineers across all tenure and experience cohorts show similar gains in productivity.
You can read more on our approach here:
https://research.google/blog/ai-in-software-engineering-at-g...
by devonbleak on 10/30/24, 9:23 PM
It's Go. 25% of the code is just basic error checking and returning nil.
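To spell out the joke, here is a minimal sketch of the idiom (readConfig is a made-up example; the point is the check that follows nearly every fallible call in Go):

    package main

    import (
        "fmt"
        "os"
    )

    // readConfig illustrates the standard Go error-propagation idiom:
    // call something fallible, check err, return early. A completion
    // engine can finish this block from the first two characters.
    func readConfig(path string) ([]byte, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return nil, err // the "basic error checking" in question
        }
        return data, nil
    }

    func main() {
        if _, err := readConfig("app.conf"); err != nil {
            fmt.Println("config error:", err)
        }
    }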
by yangcheng on 10/31/24, 5:54 AM
Having worked at both FAANG companies and startups, I can offer a perspective on AI's coding impact in different environments.
At startups, engineers work with new tech stacks, start projects from scratch, and need to ship something quickly. LLMs can write way more of the code. I've seen ML engineers build React frontends without any previous frontend experience, and Flutter developers write 100-line SQL queries for data analysis; LLMs easily 10x productivity for this type of work.
At FAANG companies, codebases contain years of business logic, edge cases, and 'not-bugs-but-features.' Engineers know their tech stacks well, legacy constraints make LLMs less effective, and they can generate wrong code that then needs to be fixed.
by dep_b on 10/31/24, 2:02 PM
A quarter of all new code? Of course. Especially if you include all "smart autocomplete" code.
When dealing with a fermenting pile of technical debt? I expect very little. LLMs don't have application-wide context yet.
AI is definitely revolutionizing our field, but the same people that said that no-code tools and all of the other hype-of-the-decade technologies would make developers jobless are actually the people AI is making jobless.
Generate an opinion piece about how AI is going to make developers jobless, using AI? Less than a minute. And you don't need to maintain that article, once it's published, it's done.
Meanwhile, there's a tsunami of AI-generated almost-there projects coming that will need to be moved to a shippable and sellable state. So I'm more worried about the kind of work I'm going to get while still being paid handsomely for my skills than about ever being jobless as the only guy who really understands the whole stack from top to bottom.
by fzysingularity on 10/30/24, 8:54 PM
While I get the MBA-speak of counting the lines of code that AI is now able to produce, it does make me think their highly curated internal codebase leaves them well placed to eventually reach 50% AI-generated code.
One common misconception is that all LLMs are the same. The models are trained the same, but trained on wildly different datasets. Google, and more specifically the Google codebase is arguably one of the most curated, and iterated on datasets in existence. This is a massive lever for Google to train their internal code-gen models, that realistically could easily replace any entry-level or junior developer.
Code review is another dimension of maintaining a codebase where we can expect huge improvements from LLMs. The highly curated commentary on existing code / flawed diff / corrected diff that Google possesses gives them an opportunity to build a whole set of new internal tools and infra that's extremely tailored to their own coding standards and culture.
by Taylor_OD on 10/30/24, 8:35 PM
If we are talking about the boilerplate and autofill syntax code that Copilot or any other "AI" will offer me when I start typing... then sure, sounds about right.
The other 75% is the stuff you actually have to think about.
This feels like saying linters impact x0% of code; it's just an extension of that.
by imaginebit on 10/30/24, 2:57 AM
I think he's trying to promote AI, but somehow it raises questions about their code quality among some.
by ryoshu on 10/30/24, 8:13 PM
Spoken like an MBA who counts lines of code.
by pfannkuchen on 10/30/24, 8:13 PM
It’s replaced the 25% previously copy pasted from stack overflow.
by ttul on 10/31/24, 6:21 AM
I wanted a new feature in our customer support console and the dev lead suggested I write a JIRA. I’m the CEO, so this is not my usual thing (and probably should not be). I told Claude what I wanted and pasted in a .js file from the existing project so that it would get a sense of the context. It cranked out a fully functional React component that actually looks quite nice too. Two new API calls were needed, but Claude helpfully told me that. So I pasted the code sample and a screenshot of the HTML output into the JIRA and then got Claude to write me the rest of the JIRA as well.
Everyone knows this was “made by AI” because there’s no way in hell I would ever have the time. These models might not be able to sit there and build an entire project from scratch yet, but if what you need is some help adding the next control panel page, Claude’s got your back on that.
by nosbo on 10/30/24, 4:41 AM
I don't write code as I'm a sysadmin. Mostly just scripts. But is this like saying intellisense writes 25% of my code? Because I use autocomplete to shortcut stuff or to create a for loop to fill with things I want to do.
by 0xCAP on 10/30/24, 8:20 PM
People overestimate faang. There are many talents working there, sure, but a lot of garbage gets pumped into their codebases as well.
by summerlight on 10/31/24, 12:05 AM
At Google, there is a process called "Large Scale Change" (LSC), primarily meant for trivial/safe but extremely tedious code changes that potentially span the entire monorepo: foundational API changes, trivial optimizations, code style fixes, etc. This is perfectly suitable for LLM-driven code changes (in fact, I'm seeing more and more LLM-generated LSCs), and I'd guess a large fraction of the mentioned "AI generated code" is actually attributable to this.
by drunken_thor on 10/31/24, 12:59 AM
A company that used to be the pinnacle of software development is now just generating code in order to sell their big data models. Horrifying. Devastating.
by motoxpro on 10/30/24, 11:09 PM
People talk about how AI is bad at generating non-trivial code, but why are people using it to generate non-trivial code?
25% of coding is just the most basic boilerplate. I think of AI not as a thinking machine but as a 1000 WPM boilerplate typer.
If it is hallucinating, you're trying to make it do stuff that is too complex.
by ausbah on 10/30/24, 2:18 AM
I would be way more impressed if LLMs could do code compression. More code == more things that can break, and when LLMs can generate boatloads of it with a click, you can imagine what might happen.
by randomNumber7 on 10/30/24, 9:27 PM
I cannot imagine this being true, because IMO current LLMs' coding abilities are very limited. It definitely makes me more productive to use one as a tool, but I use it mainly for boilerplate and short examples (where previously I'd have had to read some library documentation).
Whenever the problem requires thinking, it fails horribly, because it cannot reason (yet). So unless this is somehow different for Google devs, I cannot see that 25% number.
by d_burfoot on 10/31/24, 1:52 PM
I'd be far more impressed if the CEO said "The AI deleted a quarter of our company's code".
by avsteele on 10/31/24, 2:22 AM
Everyone here is arguing about the average AI code quality and I'm here just not believing the claim.
Is Google out there monitoring the IDE activity of every engineer, logging the amount of code created, by what, lines, characters, and how it was generated? Dubious.
by sbochins on 10/30/24, 8:28 PM
It’s probably code that was previously machine generated that they’re now calling “AI Generated”.
by xen0 on 10/31/24, 4:46 PM
I really do wonder who these engineers are, that the current 'AI' tools are able to write so much of their code.
Maybe my situation is unusual; I haven't written all that much code at Google lately, but what I do write is pretty tied to specific details of the program and the AI auto completion is just not that useful. Sometimes it auto completes a method signature correctly, but it never gets the body right (or even particularly close).
And the way it routinely makes up methods or fields on objects I want to use is counterproductive.
by arethuza on 10/31/24, 8:55 AM
I'm waiting for some Google developer to say "More than a quarter of the CEOs statements are now created by AI"... ;-)
by prmoustache on 10/30/24, 10:29 PM
Aren't we just talking about auto completion?
In that case, those 25% are probably the very same 25% that were previously generated by LSP-based auto-completion.
by alienchow on 10/31/24, 7:59 AM
Where setting up unit tests traditionally took more time and LOC than the logic itself, LLMs are particularly useful.
1. Paste in my actual code.
2. Prompt: Write unit tests, test tables. Include scenarios: A, B, C, D, E. Include all other scenarios I left out, isolate suggestions for review.
I used to spend the majority of the coding time writing unit tests and mocking test data, now it's more like 10%.
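A sketch of the kind of output that prompt aims at, assuming Go-style table tests (Abs and the scenarios are made-up stand-ins for the pasted-in code):

    package mathutil

    import "testing"

    // Abs stands in for the actual code pasted into the prompt.
    func Abs(x int) int {
        if x < 0 {
            return -x
        }
        return x
    }

    // A table-driven test: each row is one scenario (A, B, C, ...),
    // so "include all other scenarios I left out" just means the LLM
    // appends more rows for a human to review.
    func TestAbs(t *testing.T) {
        tests := []struct {
            name string
            in   int
            want int
        }{
            {"positive", 5, 5},
            {"negative", -5, 5},
            {"zero", 0, 0},
        }
        for _, tt := range tests {
            t.Run(tt.name, func(t *testing.T) {
                if got := Abs(tt.in); got != tt.want {
                    t.Errorf("Abs(%d) = %d, want %d", tt.in, tt.want, got)
                }
            })
        }
    }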
by lysace on 10/30/24, 8:08 PM
Github Copilot had an outage for me this morning. It was kind of shocking. I now believe this metric. :-)
I'll be looking into ways of running a local LLM for this purpose (code assistance in VS Code). I'm already really impressed with various quite large models running on my 32 GB Mac Studio M2 Max via Ollama. It feels like having a locally running chatgpt.
by makerofthings on 10/31/24, 8:05 AM
I keep trying to use these things, but I always end up back in vim (in which I don't have any AI autocomplete set up).
The AI is fine, but every time it makes a little mistake that I have to correct, it really breaks my flow. I might type a lot more boilerplate without it, but I get better flow, and overall that saves me time with fewer mistakes.
by rcarmo on 10/30/24, 7:06 AM
There is a running gag among my friends using Google Chat (or whatever their corporate IM tool is now called) that this explains a lot of what they’re experiencing while using it…
by skywhopper on 10/30/24, 10:42 PM
All this means is that 25% of the code at Google is trivial boilerplate that would be better factored out of their process than handed to inefficient LLM tools. The more they are willing to leave the "grunt work" to an LLM, the less likely they are to ever eliminate it from the process.
by mirkodrummer on 10/30/24, 11:25 PM
Sometimes I wonder why we would want LLMs to spit out human-readable code. Wouldn't it be a better future where LLMs generate highly efficient machine code and we eventually read the "source map" only for debugging? Wasn't source code just for humans?
by standardUser on 10/30/24, 11:26 PM
I use it all the time for work. Not much for actual code that goes into production, but a lot for "tell me what this does" or "given x, how do I do y". It speeds me up a ton. I'll also have it do code review when I'm uncertain about something, asking if there are any bugs or inefficiencies in a given chunk of code. I've actually found it to be more reliable about code than about more general topics. Though I'm using it in a fairly specific way with code, versus asking for deep information about history, for example, where it frequently gets facts very wrong.
by redbell on 10/31/24, 11:26 AM
Wait a second—didn't Google warn its employees against using AI-generated code? (https://news.ycombinator.com/item?id=36399021) What has changed?! Has Gemini now surpassed Bard in capabilities? Did they manage to resolve the copyright issues? Or maybe they've noticed a boost in productivity? I'm not sure, but let's see if other big tech companies follow this path.
by SavageBeast on 10/30/24, 9:34 PM
Google needs to bolster their AI story and this is good click bait. I'm not buying it personally.
by hggigg on 10/30/24, 8:44 PM
I reckon he’s talking bollocks. Same as IBM was when it was about to disguise layoffs as AI uplift and actually just shovelled the existing workload on to other people.
by submeta on 10/31/24, 11:03 AM
Pandora‘s box has been opened.
Some say "this is mere tab completion", some say "it won't replace the senior engineer."
I can remember how fiercely many argued two years ago that GenAI and Copilot were producing garbage. But here we are: these systems improve the workflow of creating and editing code enormously. You seniors might not be affected, but there are endlessly many scenarios where it replaces the junior who'd write code to transform data, write one-off scripts, or even write boilerplate, test code, and whatnot.
And this is only after a short time. I cannot even imagine what we'll have ten years from now, when we'll probably have much larger context windows and the system can "understand" the whole code base, not just parts of it.
I am sorry for low-level engineering jobs, but I am super excited as well.
With GenAI I have been writing super complex Elisp code to automate workflows in Emacs, VBA scripts in Excel, Bash scripts I wouldn't otherwise have been able to write, JavaScript, Python code to quickly solve very tricky problems (and I am very high level in Python), and even React code for web apps for my personal use.
The future looks exciting to me.
by LudwigNagasena on 10/30/24, 9:00 PM
How much of that generated code is `if err != nil { return err }`?
by ken47 on 11/4/24, 4:32 PM
Without context, this is not very meaningful. Does this simply measure lines of code? Characters written? Is it "oversuggesting" code that it shouldn't be confident in? Does this code make it into production, or is a large percentage of it fixed by humans at great cost?
Google, and really the whole financial machine, has a vested interest in playing up the potential of AI. Unfortunate that it isn't being given time to grow organically.
by yearolinuxdsktp on 10/31/24, 3:17 AM
Of course, when so much of it is written in verbose-as-fuck languages like Java and Go, you'd be stupid not to let computers generate large chunks of it. It's sad; we as humans stopped trying to build better coding languages. At least Java is slowly making progress; maybe in another 10 years it will finally become a high-level language. Go never tried to be one. You're surprised you need AI to tab-complete your boilerplate?!
Financial incentives at large companies are not aligned with low volumes of code. There are no rewards for less code. People get rewarded for another bullshit framework to slap on their resume. Box me in; no, cube me into a morass of a thick ingress layer that uses 1/8th of my CPU.
by holtkam2 on 10/30/24, 11:24 PM
Can we also see the stats for how much code used to come from StackOverflow? Probably 25%
by tgtweak on 10/31/24, 3:04 AM
I feel like, given my experience lately with all the API models currently available, this can only be a fact if the models Google is using internally are SIGNIFICANTLY better than what is available publicly, even on closed models.
Claude 3.5-sonnet (latest) is barely able to stay coherent on 500 LOC files, and easily gets tripped up when there are several files in the same directory.
I have tried similarly with o1-preview and 4o, and gemini pro...
If google is using a 5M token context window LLM with 100k+ token-output trained on all the code that is not public... then I can believe this claim.
This just goes to show how critical an issue it is that these models are behind closed doors.
by thelittleone on 10/31/24, 12:38 PM
I understand CEOs need to promote their companies, but it's notable that Google - arguably the world's leading information technology company - fell behind in AI development under Pichai's leadership. Now he's touting Google's internal capabilities, yet Gemini is being outperformed by relative newcomers like Anthropic and OpenAI.
His position seems secure despite these missteps, which highlights an interesting double standard: there appears to be far more tolerance for strategic failures at the CEO level compared to the rigorous performance standards expected of engineering staff.
by mjhay on 10/30/24, 9:41 PM
100% of Sundar Pichai could be replaced by an AI.
by elzbardico on 10/30/24, 9:50 PM
Well, when I developed in Java, I think Eclipse posted similar figures circa 2005.
by sreitshamer on 10/31/24, 5:00 PM
Software development isn't a code-production activity, it's a knowledge-acquisition activity. It involves refactoring and deleting code too. I guess the AI isn't helping with that?
by deterministic on 10/30/24, 9:44 PM
Not impressed. I currently auto-generate 90% or more of the code I need to implement business solutions, with no AI involved; just high-level declarations of intent auto-translated to C++/TypeScript/…
by syngrog66 on 10/31/24, 2:54 PM
> "and we continue to be laser-focused on building great products."
NO! False. I can confirm they are not. I've known of several major, obvious, unfixed bugs/flaws in Google apps for years. And in the last year or so especially, there's been an explosion in the number of head-scratching, jaw-dropping fails and UX anti-patterns in their code. Gmail, Search, Maps, and Android are now riddled with them.
On Sundar Pichai's watch, Google has been devolving into yet another Microsoft in terms of quality, care, and taste.
by agomez314 on 10/31/24, 12:18 AM
I thought great engineers reduce the amount of new code in a codebase?
by jeffbee on 10/30/24, 10:07 PM
It's quite amusing to me, because I am old enough to remember when Copilot emerged and the prevailing HN view was that it was a death sentence for big corps; the scrappy independent hacker was going to run circles around them. But here we see the predictable reality: an organization that is already in an elite league in terms of developer velocity gets more benefit from LLM code assistants than Joe Hacker. These technologies serve to entrench and empower those who are already enormously powerful.
by twis on 10/30/24, 9:16 PM
How much code was "written by" autocomplete before LLMs came along? From my experience, LLM integration is advanced autocomplete. 25% is believable, but misleading.
by blibble on 10/30/24, 10:31 PM
this is the 2024 version of "25% of our code is now produced by outsourced resources"
by arminiusreturns on 10/30/24, 9:47 PM
I was a luddite about the generative LLMs at first, as a crusty sysadmin type.
I came around and started experimenting. It's been a boon for me.
My conclusion is that we are at the first wave of a split between those who use LLMs to augment their abilities and knowledge, and those who delay. In cyberpunk terminology, it's aug-tech, not real AGI. (And the lesser one's coding abilities and the simpler the task, the greater the benefit; it's an accelerator.)
by skatanski on 10/30/24, 10:43 PM
I think at this moment this sounds more like "a quarter of the company's new code is created using Stack Overflow and other forums." Many, many people use all these tools to find information, as they did using Stack Overflow a month ago, but now suddenly we can call it "created by AI." It'd be nice to have a distinction. I'm saying this while being very excited about using LLMs as a developer.
by jmartin2683 on 10/30/24, 11:01 PM
I’m gonna bet this is a lie.
by hsuduebc2 on 10/31/24, 2:10 AM
I believe it is absolutely suitable for generating controllers in Java Spring, or for connecting to a database and making a simple query, which from my experience as an ordinary enterprise developer in fintech is most of the job. Building these huge applications is a lot of repetitive work and integrations, not work that usually requires advanced logic.
by sanj on 10/31/24, 3:05 PM
Caveat: I formerly worked at Google.
What's missing is that code written by AI may have less of an impact than datasets developed or refined by AI. Consider examples like a utility function's coefficients, or the weights of a model.
As these are aggressively tuned using ML feedback, they'll influence far more systems than raw code.
by nenadg on 10/31/24, 12:23 PM
Internet random person (me) says more than 99% of Google's 25%+ code written by AI has already been written by humans.
by baalimago on 10/31/24, 2:01 PM
To me, programming assistants have two use cases:
1. Generate unit tests for modules which are already written to be tested
2. Generate documentation for interfaces
Both of these require quite deep knowledge of what to write; the assistant then simply documents and fills in the blanks using the context that has already been laid out.
by agilob on 10/31/24, 9:07 AM
So we're using LoC as a metric now?
by piyuv on 10/31/24, 5:02 PM
I wish Tim Cook would reply with “more than half of all iMessages are created with autocomplete”
by Hamuko on 10/30/24, 10:14 PM
How do Google's IP lawyers feel about a quarter of the company's code not being copyrightable?
by horns4lyfe on 10/31/24, 12:37 AM
I’d bet at least a quarter of their code is class definitions, constructors, and all the other minutiae files required for modern software, so that makes sense. But people weren’t writing most of that before either, we’ve had autocomplete and code geb for a long time.
by ThinkBeat on 10/30/24, 9:27 PM
This is quite interesting to know. I will be curious to see whether it has any impact, positive or negative, over a couple of years.
Will the code be more secure, since the AI does not make the mistakes humans do? Or will the code, not well enough understood by the employees, expose exploits that would not otherwise be there? Will it change average uptime?
by Starlevel004 on 10/30/24, 8:44 PM
No wonder search barely works anymore
by tabbott on 10/30/24, 10:36 PM
Without a clear explanation of methodology, this is meaningless. My guess is this statistic is generated using misleading techniques like classifying "code changes generated by existing bulk/automated refactoring tools" as "AI generated".
by mastazi on 10/30/24, 11:36 PM
The auto-linter in my editor probably generates a similar percentage of the characters I commit.
by nine_zeros on 10/30/24, 3:23 AM
Writing more code means more needs to be maintained and they are cleverly hiding that fact. Software is a lot more like complex plumbing than people want to admit:
More lines == more shit to maintain. Complex lines == the shit is unmanageable.
But Wall Street investors love simplistic narratives such as More X == More Revenue. So here we are. Pretty clever marketing, IMO.
by davidclark on 10/30/24, 10:11 PM
If I tab-complete my function and variable symbols, does my LSP write 80%+ of my lines of code?
by _spduchamp on 10/30/24, 9:23 PM
I can ask AI to generate the same code multiple times, and get new variations on programming style each time, and get the occasional solution that is just not quite right but sort of works. Sounds like a recipe for a gloppy mushy mess of style salad.
by mjbale116 on 10/30/24, 2:55 AM
If you manage to convince software engineers that you are doing them a favour by employing them, then they will approach any workplace negotiation with a mindset that makes them grab the first number that gets thrown at them.
These statements are brilliant.
by hiptobecubic on 10/31/24, 5:24 AM
I've had mixed results writing "normal" business logic in C++, but I gotta say, for SQL it's pretty incredible. Granted, SQL has a lot of boilerplate and predictable structure, but honestly it saves a ton of time.
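The "predictable structure" claim is easiest to see in the scaffolding around even a simple query; a sketch using Go's stdlib database/sql (the orders table and columns are made up):

    package orders

    import (
        "database/sql"
        "time"
    )

    type Order struct {
        ID    int64
        Total float64
    }

    // recentOrders shows the query/scan/check boilerplate that wraps
    // almost any SELECT; an LLM completes most of it from the schema.
    func recentOrders(db *sql.DB, since time.Time) ([]Order, error) {
        rows, err := db.Query(
            `SELECT id, total FROM orders WHERE created_at > $1`, since)
        if err != nil {
            return nil, err
        }
        defer rows.Close()

        var out []Order
        for rows.Next() {
            var o Order
            if err := rows.Scan(&o.ID, &o.Total); err != nil {
                return nil, err
            }
            out = append(out, o)
        }
        return out, rows.Err()
    }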
by echoangle on 10/31/24, 6:25 AM
Does protobuf count as AI now?
by Terr_ on 10/30/24, 10:55 PM
My concern is that "frequently needed and immediately useful results" is strongly correlated to "this code should already be abstracted away into a library by now."
Search Copy-Paste as a Service is hiding a deeper issue.
by fredgrott on 10/31/24, 11:59 AM
Kind of useless stat given how much code a typical dev refactors....
by zxilly on 10/31/24, 8:00 AM
As a Go developer, Copilot writes 100% of the `if err != nil` for me.
by Kiro on 10/31/24, 8:03 AM
I find it interesting that the people who dismiss the utility of AI are being so aggressive, sarcastic and hateful about it. Why all the anger? Where's the curiosity?
by oglop on 10/30/24, 4:25 PM
No surprise. I give my career about 2 years before I’m useless.
by cebert on 10/30/24, 9:16 PM
Did AI have to go thru several rounds of Leetcode interviews?
by hi_hi on 10/30/24, 10:14 PM
> More than a quarter of new code created at Google is generated by AI, said CEO Sundar Pichai...
How do they know this? At face value it sounds like a lot, but it only says "new code generated". Nothing about code making it into source control or production, or even which parts of Google's vast business units.
For all we know, this could be the result of some internal poll "Tell us if you've been using Goose recently" or some marketing analytics on the Goose "Generate" button.
It's a puff piece to put Google back in the limelight, and everyone is lapping it up.
by wokkaflokka on 10/30/24, 9:41 PM
No wonder their products are getting worse and worse...
by me551ah on 10/31/24, 10:33 AM
AI has boosted my productivity, but only marginally. Earlier I used to copy-paste stuff from Stack Overflow, and now AI generates that for me.
by teknopaul on 10/31/24, 12:10 PM
I'd say the same. But 90% of my time is not spent writing code; it's mostly wasted on GitHub and k8s build issues.
by okokwhatever on 10/31/24, 2:49 PM
People still don't understand: those who pay the bills are the ones claiming developers are less and less necessary. It doesn't matter how much we love our job and how much we care about quality; in the end, those who pay care more about replacing the workforce with something potentially free or cheap.
We are less needed, less cared for, and less seen as engineers. We are just a cost in the wrong column of QuickBooks.
Get used to it.
by tremorscript on 11/1/24, 3:15 AM
Sounds about right and it explains a lot about the current quality of google products and google search. :-)
by gilfoyle on 10/31/24, 3:07 PM
This is like saying that, before LLMs, more than a quarter of the code came from OSS, examples, and Stack Overflow.
by meindnoch on 10/31/24, 8:56 AM
I saw code on master which was parsing HTML with regex. The author was proud that this code was mostly generated by AI.
:)
by matt3210 on 10/31/24, 5:52 PM
NVIDIA CEO said there would be no more developers too and it totally wasn't a marketing thing.
by nottorp on 10/31/24, 10:21 AM
The protobuf boilerplate, right? :)
by erlend_sh on 10/31/24, 5:49 PM
Self-interested hyperbole aside, I think that’s a laughably low number for what is now effectively an ‘AI Company’. I’m sure >95% of Google employees use Google (well, at least until recent years).
If this stuff really works as well as these companies claim it does, wouldn’t their entire workforce excitedly be using these tools already?
by hollywood_court on 10/31/24, 5:39 PM
Cursor and v0.dev write 95% of the code for myself and the two other devs on my team.
by chabes on 10/30/24, 8:35 PM
When Google announced their big layoffs, I noted the timing in relation to some big AI announcements. People here told me I was crazy for suggesting that corporations could replace employees with AI this early. Now the CEO is confirming that more than a quarter of new code is created by AI. Can’t really deny that reality anymore folks.
by foobarian on 10/31/24, 12:25 AM
The real question is, what fraction of the company’s code is deleted by AI :-)
by bryanrasmussen on 10/31/24, 12:41 AM
Public says more than a quarter of Google's search results are absolute crap.
by Timber-6539 on 10/31/24, 6:47 AM
All this talk means nothing until Google gives AI permissions to push to prod.
by silexia on 11/3/24, 6:24 PM
Is this why Google search results are so bad now?
by marstall on 10/30/24, 8:16 PM
This maps to recent headlines about AI improving programmer productivity by 20-30%, which would put it in line with previous code-generation technologies, I imagine. I wonder which of these increased productivity the most?
- Assembly Language
- early Compilers
- databases
- graphics frameworks
- ui frameworks (windows)
- web apps
- code generators (rails scaffolding)
- genAI
by soperj on 10/30/24, 9:26 PM
The real question is how many lines of code was it responsible for removing.
by haccount on 10/31/24, 8:33 AM
No wonder Gemini is a garbage fire, if they had ChatGPT write the code for it.
by 1GZ0 on 10/31/24, 8:52 AM
I wonder how much of that code is boilerplate vs. actual functionality.
by defactor on 10/31/24, 2:33 AM
Try any AI tool to write basic Factor code. It hallucinates most of the time.
by otabdeveloper4 on 10/30/24, 8:32 PM
That explains a lot about Google's so-called "quality".
by zxvkhkxvdvbdxz on 10/30/24, 10:14 PM
I feel this made me lose the respect I still had for Google.
by niobe on 10/30/24, 11:55 PM
This explains a LOT about Google's quality decline.
by mgaunard on 10/31/24, 8:51 AM
AI is pretty good at helping you manage a messy large codebase, and at making it even more messy and verbose.
Is that a good thing, though? We should work on making code small and easy to manage without AI tools.
by socrateslee on 11/1/24, 3:32 AM
Or you could say that most Google engineers are using tools like Copilot, and they use Copilot just as everyone else does.
by fortylove on 10/31/24, 6:31 PM
Is this why we finally got darkmode in gcal?
by rockskon on 10/31/24, 12:29 AM
No shit a quarter of Google's new code is created by AI. How else do you explain why Google search has been so aggressively awful for the past ~5 years?
Seriously. The penchant for outright ignoring user search terms, relentlessly forcing irrelevant or just plain wrong information on users, and the obnoxious UI changes on YouTube! If I'm watching a video on full screen I have explicitly made it clear that I want YouTube to only show me video! STOP BRINGING UP THE FUCKING VIDEO DESCRIPTION TO TAKE UP HALF THE SCREEN IF I TRY TO BRIEFLY SWIPE TO VIEW THE TIME OR READ A MESSAGE.
I have such deep-seated contempt for AI and its products, for just how much worse it makes people's lives.
by nektro on 10/31/24, 12:39 AM
Google used to be respected, a place so highly sought after that engineers who worked there were revered like wizards. oh how they've fallen :(
by fmardini on 10/31/24, 9:26 AM
Proto-plumbing is very LLM amenable
by dickersnoodle on 11/1/24, 5:12 PM
That explains a lot, actually.
by ThinkBeat on 10/30/24, 9:32 PM
So um. With this public statement made, can we expect that 25% of "the bottom" coders at Google will soon be granted a lot more time and ability to spend with their loved ones?
by shane_kerns on 10/31/24, 2:38 AM
It's no wonder that their search absolutely sucks now. DuckDuckGo is so much better in comparison.
by marstall on 10/30/24, 8:11 PM
First thought is that much of that 25% is test code for the non-AI-generated code...
by evbogue on 10/30/24, 2:35 AM
I'd be turning off the autocomplete in my IDE if I was at Google. Seems to double as a keylogger.
by marviel on 10/30/24, 10:05 PM
> 80% at Reasonote
by octacat on 11/1/24, 6:53 PM
It is visible...
by tylerchilds on 10/31/24, 1:33 PM
as a consumer, i never could have guessed
by anacrolix on 10/31/24, 12:31 PM
Puts on Google
by annlee2019 on 11/1/24, 6:40 AM
The Google CEO doesn't write code.
by sheeshkebab on 10/31/24, 12:42 PM
And it shows… the Google codebases I see in the wild are the worst: a jumbled mess of hard-to-read code.
by psunavy03 on 10/31/24, 4:19 PM
And yet the 2024 State of DevOps report THAT GOOGLE PRODUCES has a butt-ton of caveats about the effectiveness of GenAI...
by AI_beffr on 10/31/24, 2:13 AM
I like how people say that AI can only write "trivial" code well or without mistakes. But what about from the point of view of the AI? Writing "trivial" code is probably almost exactly as much of a challenge as writing the most complex code a human could ever write; the scales are not the same. Don't allow yourself to feel so safe...
by jdmoreira on 10/31/24, 10:21 AM
I would prefer if he was more competent and made the stock price go up.
I guess grifters are going to grift
by sigmonsays on 10/31/24, 2:03 AM
imho code that is written by AI is code that is not worth having.
by hodder on 10/31/24, 4:39 PM
The market would be even more shocked to learn that another 30% is pasted in from Stack Overflow!
by AmazingTurtle on 10/31/24, 1:32 PM
Yeah, go ahead and lay off another 25% of the development staff and see how well the AI coders perform. :))
by fennecbutt on 10/31/24, 3:03 PM
That explains a lot.
by est on 10/31/24, 1:45 AM
Now maintain a quarter of your old code base with AI, and don't shut down services randomly.
by skrebbel on 10/30/24, 8:16 PM
In my experience, AIs can generate perfectly good code for relatively easy things, the kind you might as well copy and paste from Stack Overflow, and they'll very confidently generate subtly wrong code for anything that's non-trivial for an experienced programmer to write. How do people deal with this? I simply don't understand the value proposition. Does Google now have 25% subtly wrong code? Or do they have 25% trivial code? Or do all their engineers babysit the AI and bugfix the subtly wrong code? Or are all their engineers so junior that an AI is such a substantial help?
Like, isn't this announcement a terrible indictment of how inexperienced their engineers are, or how trivial the problems they solve are, or both?
by ajkjk on 10/31/24, 4:09 PM
Well yeah he sells AI and wants you to believe in it so the stock price stays good.
by stevenally on 10/31/24, 6:49 PM
Yeah. I just wrote 600 lines of SQL using a macro processor. Took 10 minutes.
by throwaway290 on 10/31/24, 3:40 AM
"More than a quarter of our code is created by autocomplete!"
That's not that much...
by odinkara on 10/31/24, 3:09 PM
and it shows
by jagged-chisel on 10/31/24, 12:06 PM
“Created by” or “with the assistance of”?
by josephd79 on 10/31/24, 12:40 PM
That explains everything.
by DidYaWipe on 10/31/24, 6:47 AM
No wonder it sucks. Google's vaunted engineering has always been suspect, but their douchebaggery has been an accepted fact (even by them).
by nephy on 10/31/24, 12:21 AM
Can we move on to the next grift yet?
by bamboozled on 10/31/24, 10:57 AM
"Product has not improved, or maybe even become worse in that time"
by martin82 on 10/31/24, 6:18 AM
I guess that must be the reason for the shocking enshittification of Google.
by yapyap on 10/31/24, 1:46 AM
yikes
by pixelat3d on 10/30/24, 5:44 AM
[flagged]
by jrockway on 10/30/24, 2:20 AM
When I was there, way more than 25% of the code was copying one proto into another proto, or so people complained. What sort of memes are people making now that this task has been automated?
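For anyone who hasn't lived it, the task looks roughly like this; a sketch with hypothetical UserRequest/UserRecord structs standing in for two generated proto messages:

    package protocopy

    // Hypothetical stand-ins for two generated proto messages whose
    // fields mostly overlap.
    type UserRequest struct {
        ID    int64
        Name  string
        Email string
    }

    type UserRecord struct {
        ID    int64
        Name  string
        Email string
    }

    // toRecord is the tedious field-by-field copy being complained
    // about: mechanical enough that completion tools, and now LLMs,
    // can write nearly all of it.
    func toRecord(req *UserRequest) *UserRecord {
        return &UserRecord{
            ID:    req.ID,
            Name:  req.Name,
            Email: req.Email,
        }
    }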
by kev009 on 10/30/24, 2:23 AM
I would hope a CEO, especially a technical one, would have enough sense to couple that statement to some useful business metric, because in isolation it might be an announcement of public humiliation.
by joeevans1000 on 10/30/24, 2:45 AM
I read these threads, with the usual "I have to fix the AI code for longer than it would have taken to write it from scratch" takes, and can't help but feel folks are truly trying to downplay what is going to eat the software industry alive.
by tylerchilds on 10/30/24, 2:21 AM
if the golden rule is that code is a liability, what does this headline imply?
by an_d_rew on 10/30/24, 2:25 AM
Huh.
That may explain why Google search has, in the past couple of months, become so unusable for me that I switched (happily) to Kagi.
by hipadev23 on 10/30/24, 2:29 AM
Google is now mass-producing tech debt at rates not seen since Martin Fowler's first design-pattern blog posts.
by Tier3r on 10/30/24, 3:35 AM
Google is getting enshittified. It's already visible in many small ways. I was just using Google Maps, and in the route they labeled X (bus) Interchange as X International. I can only assume this happened because they are using AI to summarise routes now. Why in the world are they doing that? They have exact location names available.
by floor_ on 10/31/24, 12:14 PM
So no one owns a quarter of the new code at Google. It's going to be very funny when it hits 100%.
by xyst on 10/30/24, 11:08 PM
I remember when Google used to market "lines of code" for their products; Chrome at one point had 6.7 million LoC. Now the new marketing term is "product was made with 1M lines of AI-generated code (slop)!11!" or "Chrome refactored with 10% AI" or some BS.
by klocksib on 10/31/24, 3:19 PM
It's quicker and easier than ever to generate a project to send to the Google Graveyard.
by ultra_nick on 10/30/24, 2:58 AM
Why work at big businesses anymore? Let's just create more startups.
by 1oooqooq on 10/30/24, 4:28 AM
This only means employees signed up to use the new toys, and Google is paying for enough seats for all employees.
It's like companies paying for all those todo-list and tutorial apps left running on AWS EC2 instances circa 2007.
I'd be worried if I were a Google investor. lol.