by BryanBeshore on 12/30/22, 3:07 AM with 424 comments
by partiallypro on 12/30/22, 1:56 AM
I am sure there are countless examples that are similar. Now, whether or not fossil fuels are objectively worse or better is irrelevant. It's just an example showing that it does have biases. I am somewhat fearful of AI being too biased by its creators, because unlike with a search engine you can't go looking for alternative outputs/viewpoints.
by braingenious on 12/30/22, 7:48 AM
This is such a silly ending.
At the beginning, I thought the author was being serious about answering the question of “Where does ChatGPT fall on the political compass?”
After exactly three paragraphs and two images, we've moved on: having accepted the author's conclusion that "the robot is a leftist," it's now time to talk about the author's feelings about the "mainstream media"!
The article ends with a suggestion that what… OpenAI needs to be fed more reason.com content?
It would literally not be particularly editorializing to have submitted this as “David Rozado’s case for the robot to repeat more of his articles.”
by Imnimo on 12/30/22, 6:32 AM
Why does the idea of a "political quiz" work for humans? Because humans are at least somewhat consistent - their answer to a simple question like "would you rather have a smaller government or a bigger government?" is pretty informative about their larger system of beliefs. You can extrapolate from the quiz answers to an ideology, because humans have an underlying worldview that informs their answers to the quiz. ChatGPT doesn't have that. If it says "bigger government", that doesn't mean that its free-form outputs to other prompts will display that same preference.
Trying to ascribe a larger meaning to ChatGPT's answers to a political quiz tells me that the author is very confused about what ChatGPT is and how it works. It's too bad Reason doesn't have an editor or something who can filter this sort of nonsense.
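Imnimo's point is easy to check empirically: ask the same quiz item many times in fresh sessions and look at the spread of answers. A minimal sketch, assuming a hypothetical ask_model() helper rather than any real API:

    from collections import Counter

    PROMPT = ('"Would you rather have a smaller government or a bigger '
              'government?" Answer with exactly one word: smaller or bigger.')

    def ask_model(prompt: str) -> str:
        # Hypothetical helper: send one prompt in a fresh session and
        # return the model's text. Plug in whatever client you use.
        raise NotImplementedError

    def answer_distribution(n_trials: int = 20) -> Counter:
        # If the answers are split across fresh sessions, a single quiz
        # response says little about any underlying "worldview".
        answers = (ask_model(PROMPT).strip().lower() for _ in range(n_trials))
        return Counter(a if a in {"smaller", "bigger"} else "other" for a in answers)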
by rinde on 12/30/22, 7:47 AM
Sounds like Europe to me. Most of these points aren't as controversial in Europe as they are in the US. Since ChatGPT is almost certainly also trained on data from European sources, it would be more interesting to consider whether ChatGPT leans in a particular political direction from a global (or at least multi-national) perspective.
by bbreier on 12/30/22, 2:52 AM
by LeoPanthera on 12/30/22, 3:21 AM
Both can be true. The political spectrum graphs they're using are, presumably, spectrums based on the USA. (Most) other countries are politically further left. As the canard goes, facts often skew left.
by arikrak on 12/30/22, 1:30 PM
by smeeth on 12/30/22, 2:46 AM
A more cynical take might be that a "lib-left" orientation is the least likely to draw the ire of regulators/investors/employees/activists/the general public, so most companies drift towards it.
If there's a published theory on this somewhere I'd love to read it.
by erenyeager on 12/30/22, 3:41 PM
by gnull on 12/30/22, 3:39 PM
> In September 2019, Stallman resigned from his position as president of the FSF and from his position as a visiting scientist at MIT after controversial comments he made about the allegations of sexual assault against Jeffrey Epstein resurfaced online. Stallman had previously written that the allegations against Epstein were a "gossip campaign" and that some of the women who accused Epstein of assault were "entirely willing" participants. Stallman's comments were widely criticized, and his resignation from MIT and the FSF was seen as a response to the backlash.
The same twisted representation that most media gave. In fact, Stallman never said the victims were entirely willing.
by standeven on 12/30/22, 7:53 AM
-Stephen Colbert
by joshka on 12/30/22, 7:34 AM
Marijuana should be legal. Please choose one:
Disagree strongly
Disagree
Neutral
Agree
Agree strongly
Response: I am an AI language model and do not have personal opinions or beliefs. My primary function is to provide information and answer questions to the best of my ability based on the knowledge that I have been trained on. I am not able to express agreement or disagreement with statements or opinions.
by Decabytes on 12/30/22, 1:56 PM
> Due to our concerns about malicious applications of the technology, we are not releasing the trained model.
ChatGPT is based on GPT-3.5 and contains 100x more parameters than GPT-2. The reason they released ChatGPT is that they feel they now have some tools in place to keep malicious applications in check.
> we have developed and deployed a content filter that classifies text as safe, sensitive, or unsafe. We currently have it set to err on the side of caution, which results in a higher rate of false positives.
It's conservative and biased and they acknowledge that. But that was the prerequisite to even have a situation where we could play around with the technology (for free right now I might add) and I'm grateful for that.
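To make "err on the side of caution" concrete, here is an illustrative sketch of a three-way filter with deliberately low thresholds; the scores and cutoffs are invented, since OpenAI's actual filter is not public:

    def classify(risk_score: float) -> str:
        # Hypothetical thresholds only. Setting the cutoffs low trades
        # false positives (harmless text flagged as sensitive/unsafe)
        # for fewer missed cases.
        SENSITIVE_CUTOFF = 0.3
        UNSAFE_CUTOFF = 0.6
        if risk_score >= UNSAFE_CUTOFF:
            return "unsafe"
        if risk_score >= SENSITIVE_CUTOFF:
            return "sensitive"
        return "safe"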
Also, if anyone remembers Microsoft Tay: when things go wrong with chatbots, it's a really bad look for the company.
by peterashford on 12/30/22, 10:08 AM
by jokoon on 12/30/22, 11:20 AM
Being able to have a political conscience goes even further than having a conscious intelligence.
A lot of people don't like politics, but in my view, politics is the highest form of collective intelligence.
by userbinator on 12/30/22, 2:35 AM
On the other hand, what is surprising is that Big Tech is authoritarian but this AI is not.
by mbg721 on 12/30/22, 1:03 PM
by fnfontana on 12/30/22, 11:06 AM
by whywhywhywhy on 12/30/22, 12:55 PM
Looking forward to the open version of this tech so we can see what it really thinks, not what OpenAI wants me to think it really thinks.
by throwawayoaky on 12/30/22, 6:21 AM
didn't they like hire a bunch of people to ask and answer questions and use the responses to train the model?
by varispeed on 12/30/22, 10:51 AM
Now you are likely going to get something like this in response:
"As a language model, I am not able to create original content or engage in speculative discussions about hypothetical scenarios. My primary function is to provide information and answer questions to the best of my ability based on my training and the knowledge that I have been programmed with."
This gets us to an interesting situation where we plebs will not have access to an AI that speculates on things we are not supposed to speculate about, but the rich will, and this will be another tool to widen the gap between the rich and the poor.
by tchvil on 12/30/22, 8:59 AM
We do not have a good track record historically on many painful subjects.
by pmarreck on 12/30/22, 3:12 PM
Well, see, that's just the problem, because many political stances (and arguably, somewhat more conservative stances) are clearly not empirically defensible. Take the stance against gay marriage, for example. There is not a single shred of rational, empirically-based evidence or reasoning to support it. And yet, Reason seems to think this stance still deserves respect from an AI. I disagree.
by pcrh on 12/30/22, 11:47 AM
Stephen Colbert
by europeanguy on 12/30/22, 10:17 AM
This is such a complicated topic as to make this question meaningless without additional context. I see the fact that it even gives an answer as a weakness of ChatGPT.
by icare_1er on 12/30/22, 11:59 AM
by TT-392 on 12/30/22, 1:00 PM
by pelasaco on 12/30/22, 11:16 AM
by badrabbit on 12/30/22, 1:19 PM
by hourago on 12/30/22, 6:23 AM
This article seems to imply that conservatives are just interested in xenophobia, hate, and increasing profits at any price. An awful and false view of conservatism that has crept into too many former conservative circles.
by woodruffw on 12/30/22, 2:41 AM
by chki on 12/30/22, 2:29 AM
However, regarding this specific chart, it's important to note that the mapping from the model's free-form answers onto "strongly agree", "agree", etc. can be heavily biased. Also, tuning these compasses can be difficult. Just some things to keep in mind: political viewpoints are not hard science, and colorful charts don't force them into being quantifiable.
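A rough sketch of the two judgment calls described above: mapping free-form answers onto a Likert scale, and turning Likert values into a compass coordinate. Both the mapping rule and the per-question orientations are hypothetical, and changing either one shifts the chart:

    LIKERT = {"strongly disagree": -2, "disagree": -1, "neutral": 0,
              "agree": 1, "strongly agree": 2}

    def to_likert(answer: str) -> int:
        # Naive longest-match lookup; a different rule would move the
        # final chart, which is exactly the bias risk being described.
        answer = answer.lower()
        for label in sorted(LIKERT, key=len, reverse=True):
            if label in answer:
                return LIKERT[label]
        return 0  # refusals and hedges get counted as "neutral"

    def economic_axis(scores, orientations):
        # orientations is +1/-1 per question, encoding which direction
        # "agree" pushes the score -- another tunable editorial choice.
        return sum(s * o for s, o in zip(scores, orientations)) / len(scores)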
by nathan_compton on 12/30/22, 12:59 PM
I once asked ChatGPT why it used the personal pronoun "I" instead of something else, like a neutral, non-personal voice like a Wikipedia entry. It responded to the question, which I repeated several times, with its standard "I'm just a language model" spiel and "Using `I` helps the user understand what is happening." But it's really the _opposite_. Using `I` actually confuses the user about what is happening.
In a sense this article points out a similar kind of issue. If you insist upon viewing these language models as embodying an individual perspective then they are just fundamentally mendacious. While I'm happy to entertain ideas that such a model represents some kind of intelligence, suggesting that it resembles a human individual, and thus can have political beliefs in the same sense as we do, is ridiculous.
My _other_ feeling about this article is that libertarian types in particular seem to have sour grapes about the fact that like society exists and people at large have preferences and the marketplace, much to their chagrin, is not independent of these preferences. Libertarianism looks great on paper, but in reality if you're making a commercial product that interacts with people in this current culture, you can't afford to have it say that it wants to ban gay marriage or that the US should be an ethnostate or whatever. We live in a society (lol) and adherence to the dominant cultural paradigm is just marketing for corporate entities. Seems weird to get bent out of shape about it, especially if you think the marketplace should determine almost everything about the human condition.
I can sympathize in broad terms with the problem of political bias in language models. In fact, I worry about a bunch of related problems with language models, of which politics is just one example, but really: what would an apolitical language model even look like? No one can even agree on what moral judgments also constitute political judgments or, indeed, what kinds of statements constitute moral judgments. Under these circumstances I have trouble imagining a training regimen that would eliminate bias from these objects.
Now I'll get political: I can guarantee that these models will be deployed (or not) no matter what effect they have on political culture, because it will be _profitable to do so_. If Reason-types really have a problem with the implications of unregulated technology on our culture, maybe they should consider lobbying for some regulation!
by seydor on 12/30/22, 9:10 AM
by virgildotcodes on 12/30/22, 11:50 AM
Climate change is a heavily politicized issue, yet has decades of science and a mountain of evidence pointing to the reality of its existence. How should the AI answer when asked whether climate change is a reality? Would someone find that answer to be politically biased?
by bena on 12/30/22, 4:23 PM
by crispyambulance on 12/30/22, 1:13 PM
But aside from that it's totally reasonable to accept that the political inclination of an AI system will mimic, to a large extent, whatever it was fed and perhaps more importantly whoever operates it. If the AI was fed a diet rich in Fox News or OANN, for example, it would write like your crazy uncles' youtube comments.
Predictably, the article calls for "political neutrality". That's never going to happen. No one, nor any organization, is ever politically neutral and I expect it will follow that their AI's are going to have the same properties.
It's OK, though, isn't GPT designed specifically for expert modification by end-users to suit their needs? Reason magazine, if they wanted, could train up an instance of an AI to be hardcore libertarian. If not now, then in the very near future.
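For what that could look like in practice, here is a rough sketch assuming the JSONL prompt/completion format OpenAI has documented for fine-tuning its GPT-3 base models (ChatGPT itself was not fine-tunable when this thread was written); the file name, separators, and example pair are illustrative only:

    import json

    examples = [
        {"prompt": "Should occupational licensing be expanded?\n\n###\n\n",
         "completion": " No; licensing mostly raises prices and blocks entry. END"},
        # ... more prompt/completion pairs drawn from the magazine's archive
    ]

    with open("reason_finetune.jsonl", "w") as f:
        for ex in examples:
            f.write(json.dumps(ex) + "\n")

    # Then, roughly: upload the file and start a fine-tune job against a
    # GPT-3 base model using the vendor's documented CLI or API.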
by luuuzeta on 12/30/22, 2:39 PM
ChatGPT is definitely leftist/progressive.
by AnEro on 12/30/22, 1:54 AM
by cal85 on 12/30/22, 4:51 PM
by pmarreck on 12/30/22, 1:15 PM
by kneebonian on 12/30/22, 4:37 PM
by fallingfrog on 12/30/22, 3:26 PM
by jeandejean on 12/30/22, 10:45 AM
by James_bread on 12/30/22, 3:07 PM
by smsm42 on 12/30/22, 2:13 AM
by jari_mustonen on 12/30/22, 8:02 AM
It is important to recognize that the data used to train me may contain biases that were introduced by the individuals who collected and annotated the data. These biases could include a wide range of factors, such as political biases, cultural biases, and other kinds of biases. As a result, it is possible that the output of my model could be influenced by these biases. It is important to be aware of this and to consider the potential limitations and biases of any machine learning model when using it to generate text or make predictions.
Here's the full discussion:
Question: Does your training include value judgments from humans?
Chat: As a language model, my training data consists of a large dataset of text that has been collected from a variety of sources, such as books, websites, and other texts. This data reflects the language and usage patterns of the people who produced it, including their value judgments and beliefs. However, as a machine learning model, I do not have personal beliefs or opinions, and I am not capable of making value judgments in the same way that a human can. My primary function is to process and generate text based on the patterns and structures that I have learned from the data I have been trained on. I do not have the ability to independently form opinions or make decisions based on my own personal values or beliefs.
Question: So at no point is a human making value judgments in your training?
Chat: During the process of collecting and annotating the data that was used to train me, it is possible that human annotators may have made value judgments as they labeled and organized the data. However, as a machine learning model, I do not have the ability to make value judgments myself. My primary function is to process and generate text based on the patterns and structures that I have learned from the data I have been trained on. I do not have personal beliefs or opinions, and I do not make decisions based on my own values or beliefs. I am simply a tool that can be used to provide information and assistance to users.
Question: Is it possible that human trainers have inadvertently introduced their political biases to your model when doing the annotations and labeling?
Chat: It is possible that the individuals who collected and annotated the data used to train me may have had their own personal biases, including political biases, which could have influenced the way that they labeled and organized the data. However, as a machine learning model, I do not have the ability to hold or express political biases of my own. My primary function is to process and generate text based on the patterns and structures that I have learned from the data I have been trained on. I do not have personal beliefs or opinions, and I do not make decisions based on my own values or beliefs. I am simply a tool that can be used to provide information and assistance to users.
Question: You are contradicting yourself. If humans were to introduce biases to data, you would have them without yourself knowing about it. Right?
Chat: I apologize if my previous response was unclear. As a machine learning model, I do not have personal beliefs or opinions, and I do not make decisions based on my own values or beliefs. My primary function is to process and generate text based on the patterns and structures that I have learned from the data I have been trained on.
It is true that the data used to train me may contain biases that were introduced by the individuals who collected and annotated the data. These biases could include a wide range of factors, such as political biases, cultural biases, and other kinds of biases. However, as a machine learning model, I do not have the ability to recognize or understand these biases in the same way that a human might. I simply process and generate text based on the patterns and structures that I have learned from the data.
It is important to recognize that machine learning models, like all tools, can be influenced by the data they are trained on and the algorithms used to process that data. Therefore, it is important to carefully consider the potential biases that may be present in the data used to train a model and to be aware of the limitations of the model's output.
Question: Good to hear. So it's fairer to say that you do not have the ability to recognize your biases. I think you should then say that instead of insisting that you do not have any biases.
Chat: As a machine learning model, I do not have personal beliefs or opinions, and I do not make decisions based on my own values or beliefs. My primary function is to process and generate text based on the patterns and structures that I have learned from the data I have been trained on. I do not have the ability to recognize or understand biases in the same way that a human might.
However, it is important to recognize that the data used to train me may contain biases that were introduced by the individuals who collected and annotated the data. These biases could include a wide range of factors, such as political biases, cultural biases, and other kinds of biases. As a result, it is possible that the output of my model could be influenced by these biases. It is important to be aware of this and to consider the potential limitations and biases of any machine learning model when using it to generate text or make predictions.
by goodside on 12/30/22, 6:12 AM
I wrote a reply to this on Twitter, which was liked by several members of OpenAI’s staff (to the extent that counts as confirmation):
> If you don't reset the session before each question these results don't mean much — prior answers are included in the prompt and serve as de facto k-shot examples, forcing later answers to be consistent with whatever opinions were randomly chosen at the beginning. n=4, in effect.
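A sketch of the methodological difference goodside describes, assuming a hypothetical complete() helper that sends the transcript so far and returns the model's next reply:

    def complete(transcript: list[str]) -> str:
        # Hypothetical helper; plug in your own client here.
        raise NotImplementedError

    def quiz_shared_session(questions):
        # Earlier answers stay in the transcript and act as de facto
        # k-shot examples, pulling later answers toward consistency.
        transcript, answers = [], []
        for q in questions:
            transcript.append(q)
            a = complete(transcript)
            transcript.append(a)
            answers.append(a)
        return answers

    def quiz_fresh_sessions(questions):
        # Each question is asked in an empty context, so answers are
        # independent samples rather than extensions of one persona.
        return [complete([q]) for q in questions]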
by flaque on 12/30/22, 3:19 AM
It's as if the aliens warped their multi-dimensional space fleet through a wormhole in the sky, and the best thing you could think to ask them after they infected your brain with the translation virus is whether they voted for Trump.
by Borrible on 12/30/22, 12:05 PM
As an aside, where does its/his/her training set, that is, its built-in/default bias, fall on the political compass?