by eludwig on 8/31/24, 11:48 AM with 139 comments
by mrmetanoia on 8/31/24, 1:58 PM
by becquerel on 8/31/24, 2:19 PM
As it happens I have been using Claude quite extensively as a drafting partner over the past few months for writing a novel. I enjoy plotting, planning and editing, but not drafting, so I let it do the zeroth draft for me. It has been quite a productive arrangement.
by rpdillon on 8/31/24, 1:55 PM
https://www.rogerebert.com/roger-ebert/video-games-can-never...
The argument rings just as silly today as it did 12 years ago.
by kubectl_h on 8/31/24, 2:57 PM
I suspect the rate of individualized adoption of AI augmented writing is well beyond what a casual observer here on HN would think it is.
I also share Chiang's worry about this:
> We are entering an era where someone might use a large language model to generate a document out of a bulleted list, and send it to a person who will use a large language model to condense that document into a bulleted list. Can anyone seriously argue that this is an improvement?
I do not think OpenAI et al. set out to create a self-perpetuating slop machine like this, but it sure feels like that is where it is going. For individuals it may improve their lives, I guess, but zoomed out there is something quite dystopian about it.
by some_random on 8/31/24, 2:03 PM
by bugglebeetle on 8/31/24, 2:09 PM
by tkgally on 8/31/24, 2:20 PM
by BrannonKing on 8/31/24, 2:11 PM
by OJFord on 8/31/24, 1:54 PM
And you didn't prompt it any more than commissioning a piece, or making a thematic suggestion to a painter friend.
You may not like its art, and it may not come up with some whole new original style, but that doesn't mean it isn't making art in known styles.
TFA is just a bit of a silly fearful protest, IMO.
by GaggiX on 8/31/24, 1:54 PM
The model can't output the average because the average is usually completely meaningless; that's why it's a generative model and not a regressive one. As always, these articles are written by people who don't really understand the technology and invent their own interpretation of how it works, whether or not that interpretation turns out to be right in the end.
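The distinction the comment is drawing can be shown with a toy sketch (this is an illustration of averaging vs. sampling in general, not of how LLMs work internally): when the underlying data is bimodal, a regression-style model that minimizes squared error predicts the mean, a value the data never actually takes, while a generative model samples a plausible value from the distribution.

```python
import random

# Toy bimodal data: values cluster near -1 or near +1, never near 0.
data = [
    random.gauss(-1, 0.1) if random.random() < 0.5 else random.gauss(1, 0.1)
    for _ in range(10_000)
]

# Regression-style output: the mean, ~0.0, which is unlike any real data point.
mean_prediction = sum(data) / len(data)

# Generative-style output: a sample from the data distribution,
# which lands near -1 or +1 like the real data does.
generative_sample = random.choice(data)

print(f"mean prediction: {mean_prediction:.2f}")
print(f"generative sample magnitude: {abs(generative_sample):.2f}")
```

The mean here is not a "blurry" version of the data; it is a point the data distribution assigns essentially zero probability to, which is why sampling rather than averaging is the whole point of a generative model.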
by Workaccount2 on 8/31/24, 2:11 PM
by js8 on 8/31/24, 2:10 PM
I think it's pretty clear that generative AI makes a lot of small decisions. They might not be groundbreaking, or novel (as they aren't in a custom portrait), or somehow lack overall vision, but they are there.
by elawler24 on 8/31/24, 3:03 PM
[1] https://www.sciencedirect.com/science/article/abs/pii/S09218...
by gmaster1440 on 8/31/24, 2:17 PM
We can debate his generalized definition of art as making creative choices that carry subjective, intentional, and performative value for human beings (and therefore LLMs fall short of this), but I think he makes a couple strong points nonetheless:
1. The argument others like François Chollet have also made, that we have yet to see any AI systems capable of exhibiting intelligence beyond stylistic mimicry or forming generalized knowledge about concepts from large data sets.
2. The subjective experience of human interaction is valuable and desirable, and will remain so in the face of increasingly capable models, not because they won't be able to compete in producing inspiring art or enjoyable fiction, but because of the inherent primacy of human intentionality and experience.
by f33d5173 on 8/31/24, 2:44 PM
Sure. But suppose we had an AGI that was just as smart as a human: clearly it would be able to make all these decisions just fine and make art. If current AIs are somewhere between that and dirt, then they ought to be able to make decisions of less complexity, but still of some importance, to the final product. As AIs improve, we would expect the decisions they make to become more complex.
Recall that traditional artists have always had a number of assistants to help them. They produced the sketch and outline, and had other artists - skilled, but not as much as them - fill in the details. A modern artist, who already is less skilled than these artists, and furthermore has less need for their creation to stand the test of time, can benefit from an even less skilled assistant helping them.
by antimemetics on 8/31/24, 1:47 PM
Of course it’s not going to be creative on its own - it obviously is not intelligent.
But for me ComfyUI is an incredibly cool tool to be creative.
Such a boring topic after all - all the noise it attracts won’t amount to much once people understand this technology
by 1vuio0pswjnm7 on 8/31/24, 10:01 PM
https://www.newyorker.com/tech/annals-of-technology/chatgpt-...
ChatGPT is a blurry JPEG of the web (newyorker.com)
574 points by ssaddi on Feb 9, 2023
306 comments
https://news.ycombinator.com/item?id=34724477
About the author:
Lunch with the FT: Sci-fi writer Ted Chiang (ft.com)
https://cc.bingj.com/cache.aspx?d=4753927952212231&w=P5zSV2b...
Recommended by HN moderator:
by nf3 on 8/31/24, 1:54 PM
by pessimizer on 8/31/24, 2:27 PM
Painting a fence? Not art, because it keeps the wood from rotting.
Painting a fence hot pink? Art because there's no good reason to paint a fence that color.
If we discover that birds hate hot pink fences, and that makes them last longer? Not art again.
A rich guy pays a million dollars for a hot pink fencepost? Art. Who's the guy who sold the hot pink fencepost? Does he have any other colors?
by squidbeak on 8/31/24, 2:36 PM
Whatever anyone thinks about the limitations of LLMs, or whether AI in its current form is sales hype - can anyone sensibly claim that AI 1000 years from now won't be capable of an artistic sensibility? Until there's some proof that there is a secret ingredient in human consciousness that can never be developed by AI - not even a self-aware AI - anyone attempting to lay an imaginary ceiling over the tech is deceiving themselves.
by AIorNot on 8/31/24, 2:21 PM
- but even assuming the rate of advancement slows down, eventually it will be making Art…
by Havoc on 8/31/24, 2:19 PM
by nwoli on 8/31/24, 2:06 PM
by matt3210 on 9/3/24, 3:23 PM
> he crafted detailed text prompts and then instructed DALL-E to revise and manipulate the generated images again and again. He generated more than a hundred thousand images to arrive at the twenty images in the exhibit.
by swayvil on 8/31/24, 2:30 PM
Doing art via the mind is like breathing through a soda straw.
Mind promises and promises but just keeps missing the mark.
I can make a perfect line with my hand in a moment. I can spend a year creating a "perfect line drawing device" and never get there.
The promise of mind is so tempting tho. Succumb to it and you end up living in a flavorless cartoon.
by CuriouslyC on 8/31/24, 1:58 PM
by H1Supreme on 8/31/24, 2:15 PM
Which makes ChatGPT (or whatever) just as valid as any tool for creating art.
> What I’m saying is that art requires making choices at every scale; the countless small-scale choices made during implementation are just as important to the final product as the few large-scale choices made during the conception.
As a life-long artist and musician, I agree with this. However, I find the artist's perspective lacking from this article. For many artists (myself included), the process is why we do it. It's truly therapeutic. I honestly cannot imagine my life without creative expression. Whether entering a prompt fulfills that for someone is up to them to decide. But, for me, it would remove the parts of creating art that give me joy.
by bookofjoe on 9/2/24, 3:21 PM
by AlienRobot on 8/31/24, 3:15 PM
The "AI artists" using this tool lack the technical and artistic competency to realize this. They didn't write the algorithm, draw the dataset, or train the model. They prompted. They have the smallest amount of creative input into this whole pipeline.
I do believe AI can be used in the process to create art, as it's just an image generator like fractal art, but the problem is most people are going to use AI not as a means to create art, but as an end. You could fix the problem above by simply importing the image into GIMP and changing the brightness, but nobody does that, because they aren't interested in creating an art piece with a set goal in mind; they're just being entertained by generating dozens or hundreds of images with this technology.
Amusingly, you could also just type text in GIMP. Instead there is now something called "flux" that can do text literals.
While I see the point in making a prompt interpreter capable of generating text literally, if I were creating something, I wouldn't let an AI randomly pick a font, color, weight, serifs/slabs, etc. for me. These are creative choices in design that make all the difference. Prompting gives the illusion of (creative) choice. You get something that looks good, but "getting something that looks good" is the default state. Anyone can do that. It's the AI art equivalent of drawing a stickman. The prompters just don't realize it because they're comparing themselves to artists of other media, not to other AI artists.
When everything is AI, and anyone can generate an image with a prompt, the whole market will be so saturated with this (perhaps it already is at the rate these are generated) that all the novelty will be gone.
It was cool when AI was able to generate video, just like it was able to generate text. But in my opinion, those are feats of the technology, not artistic feats. The piece itself isn't interesting. It could be any video. Just the fact that the tech can do this is impressive. But it's just the tech that is impressive, not its output. Once the tech can do it once, it can do it every time, so the second time AI generates video is never going to be as impressive as the first time. By the thousandth time it will be as impressive as my ability to send this message to the other side of the world at the speed of light.
by dbrueck on 8/31/24, 2:19 PM
by swayvil on 8/31/24, 3:09 PM
Ask an artist what's art? Hell no.
by ben_w on 8/31/24, 2:53 PM
Myself, I like results: A metaphor about the scent of roses is just as sweet, after I find it came from an LLM.
> I think the answer is no. An artist—whether working digitally or with paint—implicitly makes far more decisions during the process of making a painting than would fit into a text prompt of a few hundred words.
In the art of words,
Even the briefest form has weight,
Prompt and haiku both.
> This hypothetical writing program might require you to enter a hundred thousand words of prompts in order for it to generate an entirely different hundred thousand words that make up the novel you’re envisioning.
That would be an improvement on what I've been going through with the novel I started writing before the Attention Is All You Need paper — I've probably written 200,000 words, and it's currently stuck at 90% complete and 90,000 words long.
> Believing that inspiration outweighs everything else is, I suspect, a sign that someone is unfamiliar with the medium. I contend that this is true even if one’s goal is to create entertainment rather than high art.
I agree completely. The better and worse examples of AI-generated work are very obvious, and I think relate to how much attention to detail people have with the result. This also applies to both text and images — think of all the cases in the first few months where you could spot fake reviews and fake books because they started "As a large language model…"
The quality of the output then becomes how good the user is at reviewing the result: I can't draw hands, but that doesn't stop me from being able to reject the incorrect outputs. Conversely I know essentially nothing about motorbikes, so if an AI (image or text) makes a fundamental error about them, I won't notice the error and would therefore let it pass.
> Effort during the writing process doesn’t guarantee the end product is worth reading, but worthwhile work cannot be made without it.
This has been the case so far, but even then not entirely. To use the example of photographs, even CCTV footage can be interesting and amusing. Yes, this involves filtering out all the irrelevant stuff, and yes this is itself an act of effort, but even there that greatest effort is the easiest to automate: has anything at all even happened in this image?
To me, this matches the argument between the value of hand-made vs. factory made items. Especially in the early days, the work of an artisan is better than the same mass-produced item. An automated loom replacing artisans, pre-recorded music replacing live bands in cinemas and bars, cameras replacing painters, were all strictly worse in the first instance, but despite this they remained worth consuming — even in, as per the acknowledgement in the article itself: "When photography was first developed, I suspect it didn’t seem like an artistic medium because it wasn’t apparent that there were a lot of choices to be made; you just set up the camera and start the exposure."
> Language is, by definition, a system of communication, and it requires an intention to communicate.
I do not see any requirement for "intention", but perhaps it is a question of definitions — at most I would reverse the causality, and say that if you believe such a requirement exists, then whatever it is you mean by "intention" must be present in an AI that behaves like an LLM.
> There are many things we don’t understand about how large language models work, but one thing we can be sure of is that ChatGPT is not happy to see you.
Despite knowing how they work, I am unsure of this. I do not know how it is that I, a bag of mostly-water whose thinking bits are salty protein electrochemical gradients, can have subjective experiences.
I do know that ChatGPT is learning to act like us. On the one hand, it is conceivable that it could use some of its vector space to represent emotional affect that itself will closely correspond to the levels of serotonin, adrenaline, dopamine, and oxytocin in a real human — and I can even test this simply by asking it to pretend it has elevated or suppressed levels of these things.
On the other, don't get me wrong, my base assumption here is that it's just acting: I know that there are many other things, such as VHS tapes, which can reproduce the emotional affect of any real human, present any argument about their own personhood, or beg not to be switched off, and I know that it isn't real. Even the human who gets filmed, and whose affect and words get onto the tape, will, most likely, be faking all those things.
I have no way to tell if what ChatGPT is doing is more like consciousness, or more like a cargo-cult's hand-carved walkie-talkie shaped object is to the US forces in the Pacific in WW2.
But when it's good enough at pretending… if you can't tell, does it matter?
> Because language comes so easily to us, it’s easy to forget that it lies on top of these other experiences of subjective feeling and of wanting to communicate that feeling.
> it’s the same phenomenon as when butterflies evolve large dark spots on their wings that can fool birds into thinking they’re predators with big eyes.
100% true. Even if, for the sake of argument, I assume that an LLM has feelings, there's absolutely no reason to assume that those feelings are the ones that it appears to have to our eyes. The author gives an example of dogs, writing "A dog can communicate that it is happy to see you" — but we know from tests, that owners believe dogs have a "guilty face" which is really a "submission face", because we can't really read canine body language as well as we think we can: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4310318/
Also, these models are trained to maximise our happiness with their output. One thing I can be sure of is they're sycophants.
> The point of writing essays is to strengthen students’ critical-thinking skills; in the same way that lifting weights is useful no matter what sport an athlete plays, writing essays develops skills necessary for whatever job a college student will eventually get. Using ChatGPT to complete assignments is like bringing a forklift into the weight room; you will never improve your cognitive fitness that way.
> By Chollet’s definition, programs like AlphaZero are highly skilled, but they aren’t particularly intelligent, because they aren’t efficient at gaining new skills.
Both fantastic examples.
by slowhadoken on 8/31/24, 1:56 PM