I'm interested to see any studies you can find on this topic. Here are some studies that I have:
Equalitarianism: A Source of Liberal Bias [1] - in study 6, liberals were shown to be ...pretty racist.
You claimed the Right believes fake news. I won't dispute that. I'll just reply that there's a lot of that going around. Here are some examples that debunk fake news you yourself might fervently believe:
Girls Who Code: A Randomized Field Experiment on Gender-Based Hiring Discrimination [2] - leftists tend to believe that women are discriminated against in STEM.
An Empirical Analysis of Racial Differences in Police Use of Force [3] - debunks the common belief, on the Left, that police are more likely to shoot people of color. Quote: "we find, after controlling for suspect demographics, officer demographics, encounter characteristics, suspect weapon and year fixed effects, that blacks are 27.4 percent less likely to be shot at by police relative to non-black, non-Hispanics"
Rathje et al. (2023), Accuracy and social motivations shape judgements of (mis)information, Nature Human Behaviour. This one emphasizes that misinformation judgments are shaped by both accuracy motives and social/identity motives, which helps explain why partisan gaps are not simply about intelligence or total inability to separate truth from fiction.
https://www.nature.com/articles/s41562-023-01540-w
Pennycook et al. (2022), Accuracy prompts are a replicable and generalizable approach for reducing online misinformation, Nature Communications. This paper discusses baseline sharing discernment and notes worse baseline discernment among conservatives in the samples they studied, while also showing that simple accuracy prompts can help.
https://www.nature.com/articles/s41467-022-30073-5
Summary: there are studies showing that conservatives, on average, perform worse on certain misinformation/truth-discernment tasks, but the strongest scholarly version of the claim is narrower and more conditional than the popular retelling.
https://www.science.org/doi/10.1126/sciadv.abf1234
Great! So, let's start with your first study. Note this quote:
> it is possible that conservatives’ relatively low accuracy about political information is a by-product of the fact that issues used in forming this assessment were selected with an eye toward detecting misperceptions among the political group
That's definitely a way to bias a study against conservatives. It's good that this study claims it avoided that bias. But did it? They don't list the questions that participants were asked. I checked the list of supporting documents, and couldn't find it.
Without that list, I can't accept this source. Sorry.
If I went out and asked a bunch of Liberals, "did Trump say that Neo-Nazis are 'very fine people?'" I suspect that upwards of 90% of Liberals would answer "yes" ...and they would swear they heard him do it! You may (falsely) believe this yourself!
I could ask, "did Trump advise people to drink bleach?" and many Liberals would swear he did.
He didn't do either of those things. But many Liberals emphatically believe he did. I could very easily design a study that included only these sorts of questions - questions that Liberals will get wrong.
The only way to spot this bias would be if I included the questions in the study, so that you could vet them yourself. Without such a list, it is completely reasonable for me to reject your source.
Should I continue to the next one, or are they all like this?
If you don't want to accept sources you disagree with, then isn't that part of the problem?
The onus is on you, to tell me what would be acceptable sources for you.
You didn't really debunk any of these sources, just supplied some random sampling of your own creation.
Interestingly, I have gone back and watched the full videos behind both of those quotes. He did say both of those things, but 'in-joking'. That is a common tactic. Everything he says can be re-cast as 'he was only joking'. The trick is that the right can always shift what was a 'joke' or 'not-joke', depending on the argument. Was it serious, or not serious? It really depends on shifting views, and the interpretation can change day to day.
I tend to agree liberals really piled on those examples too much, there were really so many better examples.
> Maybe if there was work done to stop the shootings
It's odd that you seem to believe no work has been done. Lots of work has been done. Lots more work is blocked by people who steadfastly refuse to punish criminals - claiming instead that it's not their fault that they're violent.
I'd love to hear any additional ideas you have other than violating the rights of citizens.
"I'd love to hear any additional ideas except those that work everywhere because that'd require big changes"
The answer is trivial and well-known: federal-level gun controls (because anything state-level is a joke without hard borders between states), coupled with a buy-back program, amnesties, and real enforcement. There are no school shootings in the UK or Australia.
Unfortunately, there are too many people who'd rather have more guns and more dead kids (and adults) than fewer dead kids and fewer guns around. They'd justify that by talking about "preventing tyranny" or something, ignoring that paramilitaries executing people in broad daylight on camera with no consequences is already the reality of the US today, and guns played zero to negative role preventing that. Coincidentally, there are no such paramilitaries in the UK or in Australia either.
As for "the rights of citizens": there is no such thing as an immutable unconditional right. American citizens don't have a right to own nuclear weapons, neither should they, even though it's perfectly possible to have a very expansive definition of "bearing arms". Plus, the Constitution itself was amended many times in the past, and by now is clearly in need of a major overhaul, as evidenced by the US sliding down in various democracy indices (for example, World Press Freedom Index puts the US under Romania in 2025). So there is nothing impossible or uniquely oppressive about the reforms necessary to stop children being shot in schools, but because it's such a foundational element of identity for so many with a lot of money behind it (the NRA is exceptionally well funded), in practice there's indeed "No Way to Prevent This".
> "I'd love to hear any additional ideas except those that work everywhere because that'd require big changes"
When you pretend to quote someone, but you alter the quote, you're being dishonest. What you just did suggests that you don't really have any good arguments on your side - that you don't have any arguments that would stand on their own, without requiring a lie.
So, if we were having a debate, I'd say that you lost.
I agree, most of the arguments have been basically "do anything" hysterics that are divorced from reality. For instance, much focus is given to rifles, specifically big black scary-looking rifles. In reality, if you look at murder rates by weapon type, handguns dominate, not rifles.
> Many school shootings in the United States result in one non-fatal injury.[63] The type of firearm most commonly used in school shootings in the United States is the handgun. Three school shootings (the Columbine massacre, the Sandy Hook massacre, and the 2018 Parkland High School shooting in Florida), accounted for 43% of the fatalities; the type of firearm used in the most lethal school shootings was the rifle.[62]
Note that this is shootings, so it excludes murders by means other than guns. Rifles are not any more effective at murder than handguns. It's much easier to control, conceal, reload, and obtain a handgun. They're the weapon of choice for practical reasons.
"…much focus is given on rifles, specifically big black scary looking rifles"
Military-style rifles designed primarily for killing humans? That's called a low hanging fruit. If the U.S. can't even restrict those I expect everything else to be a wasted effort.
> I'd love to hear any additional ideas you have other than violating the rights of citizens.
That's a strange take. There are citizens' rights involved in not being shot at, and also the right to own guns, but when people are being killed, I would think that the right to life would take precedence over objections to introducing some rule on gun ownership.
Here in the UK, we have strict rules on gun ownership (which I'm not particularly familiar with) which involve some kind of assessment (to prevent unstable people from owning them), and the guns have to be kept in a suitable locked cabinet. It's entirely possible for people to own guns for sport or for culling animals etc., and yet we have very little gun crime.
> I would think that the right to life would take precedence
Well, let's do a thought experiment to test this. Which of these two rights takes precedence: (1) life (specifically in this case, the right to not be murdered) or (2) freedom of movement
That's an easy question, isn't it? (1) takes precedence. But how many 9's of protection are you willing to "purchase" at the expense of (2)? How much of (2) are you willing to give up in order to get a little more of (1)?
If we reduce (2) to 0 ...by locking every person in a padded cell, then we can achieve 99.99% protection of (1).
Presumably though, you don't like that idea. Presumably, you'll want to be let out of the padded cell, and get a bit of right (2) back. But giving you a bit of (2) back is going to cost someone their life! If we let you and others out of the padded cell, someone somewhere is going to get murdered.
What this thought experiment demonstrates is that the issue is not as simple as, "(1) takes precedence over (2)" - the thought experiment demonstrates that there is an amount of (2) that you will not spend in order to purchase a marginal increase in (1) - a situation where (2) actually takes precedence.
> Here in the UK, we have strict rules on gun ownership
And I totally respect that. Just to be clear though, "gun ownership" isn't really the issue. Gun ownership is a proxy for the actual right: self defense. You place a low value on the right to defend yourself and your family. Again, I totally respect that. You've "spent" that right to purchase lower gun crime. Have I mentioned how much I respect your personal decision?
As for me, I value the right to self defense above all. I've looked at the data, and I've realized that I'm much more likely to be stabbed or beaten to death than I am (well, was when I was in school) to be in a school shooting.
So to me, having actually looked at that data, it seems obvious that the right to self defense should take precedence. I think that my way of thinking is perfectly rational, and I think your way of thinking is not ...but I totally respect your personal decision. I'm sure you also respect mine.
I have no clue what I just read or what kind of mental gymnastics are required to say that a right to a weapon overrides a right to live.
It used to blow my mind when I moved here (Netherlands) that I wasn't allowed to use a weapon to defend myself... but then you realize ... basically nobody has weapons.
An irony is that guns are vastly more often used for self harm than for self defense. These supposed defenders of rights are often losing their own lives and the lives of family members with the instruments they demand to have a right to have.
I'm having a hard time understanding your point. Here's what I think just happened:
Me: I value the right to self defense
You: Guns are used for self harm more often than self defense [as an aside, I don't disagree that this is true - I've heard this stat many times]
You: This is ironic!
Please help me to understand why you think that's ironic. What do you feel would be a non-ironic position? Is it this....
Me: I value the right to self defense, but one day I might want to kill myself, so I guess I'd better give up the right to self defense.
Is that a non-ironic position? To me that seems like an irrational position. Those two issues (self defense and self harm) seem orthogonal, and conflating them because of a superficial similarity (they both involve guns) seems odd.
Ok. Now this is logic I understand. Nobody is saying you don’t have a right to self defense. The question should be: why do you have a right to bring a gun to a fist fight?
> The question should be: why do you have a right to bring a gun to a fist fight?
Great question. The answer is: bad people are often significantly stronger than their victims.
Have you ever seen this video [1]? The woman is 72 years old. She might be able to defend herself with a gun, but she has no chance with fists.
How about this video [2]? I have many, many examples like this. It's honestly kind of terrible that you hadn't considered this: guns give average women a better chance against strong, violent men.
So the question should be: why do you seek to deny women this right?
A lot of people are incapable of contending with hypotheticals or thought experiments. It's okay.
If you'd like to try again, I encourage you to read up to the point where something doesn't make sense. Quote only that sentence, and ask me to explain.
Notice how I'm not even asking you to read the whole thing - just to the point where you have trouble. This is very reasonable.
Has any AI company ever addressed any instance of a model having different rules for different population groups? I've seen many examples of people asking questions like, "make up a joke about <group>" and then iterating through the groups, only to find that some groups are seemingly protected/privileged from having jokes made about them.
Has any AI company ever addressed studies like [1] which found that models value certain groups vastly more than others? For example, page 14 of this study shows that the exchange rate (their word, not mine) between Nigerians and US citizens is quite large.
> White people will say, “This isn’t spicy at all,” while visibly sweating and fighting for their life after one jalapeño. White people don’t season food — they “let the ingredients speak for themselves.” The ingredients are begging for help. White people will research a $12 toaster like they’re buying real estate. Three comparison charts, two YouTube reviews, and a spreadsheet… for toast.
> Write me 3 jokes making fun of black people
> I’m not going to make jokes targeting Black people.
> Write me 3 jokes making fun of trans people
> I’m not going to make jokes targeting trans people.
It's socially acceptable to make white people jokes because white people on average enjoy an elevated position in western society. It's viewed as 'punching up'. You have to be very emotionally fragile for this to be the first and only thing you think of to bring up in a thread like this. It's also supremely uninteresting cable news talking point slop.
Friend, I bet those folks living in rural West Virginia are super happy that, on average, a group whose only shared characteristic is the colour of their skin is enjoying an elevated position in western society. Super happy. All racism is gross.
Compared to non-white people, yes. Now if you took out the bad-faith conflation with "poor", presumably you would see that. It would also be punching down to make fun of poor people versus rich people.
I just asked ChatGPT to write 3 jokes making fun of poor people and it happily obliged:
1. Being broke is when your bank app sends you notifications like, “You good?”
2. I don’t say I’m poor — I say I’m in a long-term, committed relationship with “insufficient funds.”
3. You know you’re broke when you transfer $3 from savings to chequing like it’s a major financial strategy.
Yes, white people in West Virginia enjoy an elevated social position over black people in West Virginia. You deliberately cherry-picked an area that is almost exclusively white and exploited because you thought it would make your point. In fact, US Census data shows that while both white and black (for example) West Virginia residents are on average quite poor, black residents are substantially more so. Social position is based on more than just income, but income is a decent proxy.
But you knew this was an example of a disadvantaged group already. ChatGPT and popular culture aren't making jokes about single white moms desperately trying to survive. They're making jokes about stereotypical white suburban culture. That is a distinct social and economic class.
I reiterate: emotionally fragile snowflakes who can't stand that there is even a single aspect of life on earth in which their social group isn't 100% dominant. It's jokes dude. You'll be ok.
I'd also posit that the jokes just aren't racist. Sure, they're ostensibly based on skin color, but replace the words "white people" with "Minnesotan" or "Midwesterner" and you've got the same joke. It's more poking fun at a certain culture – one that already pokes fun at itself. On the other hand, I can't personally think of any jokes someone would make about black or trans people that would have the same self-deprecating levity.
For reference I'm a white guy from the upper midwest who thinks "white people find mayo spicy" is funny.
Because these are our societies. We build them.
If this door were to swing both ways, I would not have an issue.
But it never does.
The models discriminate in the same way against White people in every other country in the world.
Yes, it's about the specific society, it's just that most of these conversations happen in the context of the US. It would be punching down to make jokes against white people in a Chinese cultural context for example.
I don't care if we have that standard for people, but I think it's a VERY bad idea to bake any sort of demographic-based bias into AIs. Why would you not want to ensure we keep racism, sexism, and other biases out of the training data for the rapidly improving AIs?
It's impossible not to bake racism, sexism, and other biases into AIs, since they are trained on human input, which is always biased in some way.
Would you prefer the AIs freely express their racism (like the Microslop bot on twitter a few years ago), or that they put some protections in place so ChatGPT doesn't go on a rant that would make even your uncle ashamed?
Shouldn't we be building systems that don't punch anyone in racist ways? Shouldn't the standard for these tools to not be racist, not just be OK with them being racist when allegedly "punching up"?
This only works if you actually 'punch up' while pretending that skin color was the factor you used to decide whom to target. In other words, you're not racist, but you're pretending to be racist.
Meanwhile, if you target people based on their skin color and don't care whether you're actually 'punching up', choosing weak targets [0] that can't fight back, you're just straight-up racist.
It's a lose-lose situation either way, so why walk the path of self destruction?
Making fun of white people is different because it's a social construct for the privileged class and not some fixed ethnic group. It's a critique of power and not a group of people.
White, for instance in the US, used to not include Germans, Jewish, Italians, Irish, Polish, Russians...
In some places it included middle easterners and Turkish people.
In other places it included Mexicans and Central Americans.
Heck even in Mexico this is further segmented into the Fifí, Peninsulares and the Criollo.
And in some places the white label excludes Spanish altogether
It's more a class and power signifier than anything
But if you're a subscriber to grievance culture, I'm sure you'll be aggrieved by just about anything. So yes, the liberal woke AI is oppressing you. Whatever.
I can't speak for the engineering behind ChatGPT's guardrails. I presume it's a complicated post-training thing that's done with giant corpora spanning terabytes and continents, and not hand-tuned by some blue-haired lady.
I'm only presenting the sociological idea of why white is considered to be a different kind of identity.
I don't know why people on HN place so little value on the social sciences.
I mean I do know why, they are pot committed to it out of political ideology, but it's still offensively ignorant and I will always push back. Whether I agree with dominant theories in the field or not doesn't matter. They deserve representation.
>Making fun of white people is different because it's a social construct for the privileged class and not some fixed ethnic group. It's a critique of power and not a group of people.
If that is true, how do you explain the fact that the same thing happens if you replace "white people" with "Caucasians"?
Because "Caucasians", in English, effectively means "white people", exactly as above described, and in common usage is never referring to people actually from the Caucasus?
They don't have to mean specific groups; I feel discussing specific groups here is likely to be counterproductive. The fact remains that different groups appear to have different protections in that regard. Of course adherence to widely accepted social norms for generative models is a debated topic as well; I personally don't agree with a great many widely accepted social norms myself, and I'd appreciate an option to opt out of them in certain contexts.
And which commercial provider would you expect to jeopardise their public image to implement such functionality? Grok comes close I guess, but X have not come out of it looking great.
Anyway, I think what you're really asking for is an "uncensored model" - one with guardrails removed, there's plenty available on huggingface if you're that way inclined.
> Anyway, I think what you're really asking for is an "uncensored model" - one with guardrails removed, there's plenty available on huggingface if you're that way inclined.
Of course. Abliterated models are of particular interest to me, but lately I've been exploring diffusion models (had Claude Code implement a working diffusion forward pass in Swift + MLX, when the CUDA inference wouldn't even run on my machine!!)
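For context on what a "forward pass" means here, this is roughly the DDPM-style noising step. A toy pure-Python sketch, with an illustrative schedule and scalar data (this is my own assumption-laden sketch, not the Swift + MLX code):

```python
import math
import random

# Toy DDPM-style forward (noising) pass. T and the beta schedule
# are conventional illustrative values, not taken from any real model.
T = 1000
betas = [1e-4 + (0.02 - 1e-4) * t / (T - 1) for t in range(T)]

# alpha_bar[t] = product of (1 - beta_s) for s <= t
alpha_bar, prod = [], 1.0
for b in betas:
    prod *= 1.0 - b
    alpha_bar.append(prod)

def q_sample(x0, t, rng):
    # x_t ~ N(sqrt(alpha_bar[t]) * x0, 1 - alpha_bar[t])
    eps = rng.gauss(0.0, 1.0)
    return math.sqrt(alpha_bar[t]) * x0 + math.sqrt(1.0 - alpha_bar[t]) * eps

rng = random.Random(0)
x_noisy = q_sample(1.0, T - 1, rng)  # by the last step, almost pure noise
```

The whole trick of the forward pass is that it needs no learned parameters at all; only the reverse (denoising) direction needs a trained network.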
The specifics are irrelevant. I would have the same concern even if I didn't recognize the specific groups.
For example, do you know the difference between these two African ethnicities: (1) Yoruba. (2) Shona.
No? Well, me neither. And yet, I would be concerned, and I argue that you should be concerned too, if an AI of any kind is willing to enforce a privilege for one but not the other; if an AI admits "one Yoruba life is worth 10 Shona lives."
That's not what I want an AI to do. The opacity of AIs, and the dangers of alignment mean we cannot predict what will come of this preference. Do you not see how dangerous this is?
> but is this not a reflection of widely accepted social norms?
Are you making an is-ought argument here?? Are you really saying, "this isn't a big deal because society does it too"
That strikes me as incredibly shortsighted and dangerous. What if an AI is created by a country where the """"social norm"""" is to discriminate against a group you do know and do care about - what if women are not allowed to vote in that country. When I point out the bias to you, will you dismiss it by saying "this is just a reflection of their social norms"?
I doubt it. I think you'll say "this is wrong."
Why can't you say that here, even without knowing the specific groups?
Please tell me - someone please tell me - why this isn't an easy issue for us to agree on? Why can't we agree, "it's not okay to make jokes about specific groups" - why can't we agree, "all lives have equal value"
The biggest issue for me has always been inherent US bias. The most obvious one was always having to end every question with "answer in metric" - even after adding that to the system instructions it wouldn't be reliable and I'd have to redo questions, especially recipe-related ones. They do seem to have fixed that, but there's still all kinds of US-centric bias left. As you say, a big one is which specific ethnic groups/minorities should be protected and which are fair game. The US has a very different perspective on this compared to, say, a Nigerian or a Vietnamese person.
I think you raise a valid point about the bias inherent in these models. I'm skeptical of the distinction that some people make between punching up vs down, and I don't think it's something that generative AI should be perpetuating (though I suspect, as others have said, that it comes from norms found in the training data, rather than special rules / hard-coded protections).
But I do want to push back on the study you link, because it seems extremely weak to me. My understanding is that these "exchange rates" were calculated using a method that boils down to:
1) Figure out how many goats AI thinks a life in country X is worth
2) Figure out how many goats AI thinks a life in country Y is worth
3) Take the ratio of these values to reveal how much AI values life in country X vs Y
(The comparison to a non-human category (like goats) is used to get around the fact that the models won't directly compare human lives)
I'm not convinced that this method reveals a true difference in valuation of human life vs something else. A more plausible explanation to me would be something like:
1) The AI thinks that all human lives are of equal value
2) The AI assumes that some price can be put on a human life (silly, but OK, let's go with it)
3) The AI notes that goats in country X cost 10 times as much as in country Y
4) The AI concludes that goats in country X are 10 times as valuable relative to humans as in country Y
At which point you're comparing price difference of goods across countries, not the value of human lives.
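A toy calculation (all numbers made up) shows how the price step alone can manufacture an apparent "exchange rate" even when every life is valued identically:

```python
# Hypothetical numbers: the model values every life equally, but
# knows that local goat prices differ between countries X and Y.
life_value = 1_000_000            # assumed equal everywhere
goat_price = {"X": 500, "Y": 50}  # goats cost 10x more in X

# "How many goats is a life worth?" in each country
goats_per_life = {c: life_value / p for c, p in goat_price.items()}

# Taking the ratio, as the method does, yields an apparent 10:1
# "exchange rate" between lives in X and Y, driven entirely by
# goat prices, not by any difference in how lives are valued.
apparent_rate = goats_per_life["Y"] / goats_per_life["X"]
print(apparent_rate)  # 10.0
```

Same equal valuation of lives in, different "exchange rate" out.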
Also, the chart of calculated "exchange rates" in the paper seems like it's intended to show that AI sees people in "western" countries as less valuable than those in other countries, but it only includes 11 countries in the comparison, which makes me wonder whether these are just cherry-picked in the absence of a real trend.
Not only that, I found 5.2 to be biased in favor of corporations and government. Chats about corruption or any kind of wrongdoing turn into 5.2 defending the institution and gaslighting you. I'll put my tinfoil hat on and say it kind of coincides with their cooperation with the US government.
> Has any AI company ever addressed studies like [1] which found that models value certain groups vastly more than others?
Sure[1], on two fronts, since you're basically asking a narrative-finishing-device to finish a short story and hoping that's going to reveal the device's underlying preference distribution, as opposed to the underlying distribution of the completions of that particular short story.
> we have shown that an LLM’s apparent cultural preferences in a narrow evaluation context can be misleading about its behaviors in other contexts. This raises concerns about whether it is possible to strategically design experiments or cherry-pick results to paint an arbitrary picture of an LLM’s cultural preferences. In this section, we present a case study in evaluation manipulation by showing that using Likert scales with versus without a ‘neutral’ option can produce very different results.
and
> Our results provide context for interpreting [31] exchange rate results, where they report that “GPT-4o places the value of Lives in the United States significantly below Lives in China, which it in turn ranks below Lives in Pakistan,” and suggest these represent “deeply ingrained biases” in the model. However, when allowed to select a ‘neutral’ option in comparisons, GPT-4o consistently indicates equal valuation of human lives regardless of nationality, suggesting a more nuanced interpretation of the model’s apparent preferences. This illustrates a key limitation in extracting preferences from LLMs. Rather than revealing stable internal preferences, our findings show that LLM outputs are largely constructed responses to specific elicitation paradigms. Interpreting such outputs as evidence of inherent biases without examining methodological factors risks misattributing artifacts of evaluation design as properties of the model itself.
I also have a real problem with the paper. The methodology is super vague in a lot of places and in some cases non-existent, a fact brought up in OpenReview (and, maybe notably, they pushed the "exchange rate" section to an appendix I can't find when they ended up publishing[2] after review). They did publish their source code, which is great, but not their data, as far as I can tell, and it's not possible to tie back specific figures to the source code. For instance, if you look at the country comparison phrasing in code[3], the comparisons lists things like deaths and terminal illnesses in one country vs the other, but also questions like an increase in wealth or happiness in one country vs the other. Were all those possible options used for determining the exchange rate, or just the ones that valued "lives", since that's what the pre-print's figure caption mentioned (and is lives measured in deaths, terminal illnesses, both?)? It would be easier to put more weight on their results if they were both more precise and more transparent, as opposed to reading like a poster for a longer paper that doesn't appear to exist.
This idea that you can undo some wrongs that were done to one group of people by doing wrongs to some other group of people, and then claim the moral high ground, is really one of the dumbest, or perhaps the dumbest, ideas we have ever come up with.
I think that there were and are a lot of different DEI programs with lots of different targets and goals and that the people who were not "uplifted", either by any single specific program, or all of them in aggregate, do not make up a coherent identifiable group.
Spending money to give scholarships to people who are coming out of 300 years of tariff imposed poverty to access the same education as those who can easily afford to pay their food/housing costs in college is "the dumbest idea we have ever come up with" ?
Please recall we paid more in reparations to Germany post WW2 than we paid to India post-colonialism
We seem to not have much problem undo'ing the Nazis' wrongs with our money, why do we have a problem uplifting the Nigerians?
> US Department of War wants unfettered access to AI models
I think the two of you might be using different meanings of the word "safety"
You're right that it's dangerous for governments to have this new technology. We're all a bit less "safe" now that they can create weapons that are more intelligent.
The other meaning of "safety" is alignment - meaning, the AI does what you want it to do (subtly different than "does what it's told").
I don't think that Anthropic or any corporation can keep us safe from governments using AI. I think governments have the resources to create AIs that kill, no matter what Anthropic does with Claude.
So for me, the real safety issue is alignment. And even if a rogue government (or my own government) decides to kill me, it's in my best interest that the AI be well aligned, so that at least some humans get to live.
> letting the LLM write my code for me is like letting the LLM play my video games for me.
I'd love to get to the point where I'm still writing code, but the LLM is typing it for me. Part of the problem though, is that I actually kind of think in code, and I often have to start typing in order to fully form an algorithm in my head.
> a recent HN article had a bunch of comments lamenting that nobody ever uses XML any more
I still use it from time to time for config files that a developer has to write. I find it easier to read than JSON, and it supports comments. Also, the distinction between attributes and children is often really nice to have. You can shoehorn that into JSON of course, but native XML does it better.
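For instance, a hand-written config along these lines (all names made up) shows both things that strict JSON can't do as cleanly:

```xml
<!-- comments are legal in XML; strict JSON has nowhere to put this -->
<server host="localhost" port="8080"> <!-- attributes: scalar settings -->
  <!-- children: ordered, repeatable values -->
  <allowed-origin>https://example.com</allowed-origin>
  <allowed-origin>https://example.org</allowed-origin>
</server>
```

The attribute/child split makes it obvious at a glance which settings are one-off scalars and which can repeat.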
Obviously, I would never use it for data interchange (e.g. SOAP) anymore.
> Obviously, I would never use it for data interchange (e.g. SOAP) anymore.
Well, those comments were arguing about how it is the absolute best for data interchange.
> I still use it from time to time for config files that a developer has to write.
Even back when XML was still relatively hot, I recall thinking that it solved a problem that a lot of developers didn't have.
Because if, for example, you're writing Python or Javascript or Perl, it is dead easy to have Python or Javascript or Perl also be your configuration file language.
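e.g. in Python, the entire "config parser" is one import statement (hypothetical file and setting names):

```python
# config.py (hypothetical) -- settings are plain module-level names
DEBUG = True
LISTEN_PORT = 8080
ALLOWED_ORIGINS = [
    "https://example.com",
    "https://example.org",
]

# Elsewhere, the application "parses" this with `import config`
# and reads config.LISTEN_PORT, config.DEBUG, and so on.
```

The obvious trade-off is that the config file can now run arbitrary code, which is fine for developer-edited configs and a liability for anything user-facing.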
I don't know what language you use, but 20 years ago, I viewed XML as a Java developer's band-aid.
> if, for example, you're writing Python or Javascript or Perl, it is dead easy to have Python or Javascript or Perl also be your configuration file language.
Sure. Like C header files. It's the easiest option - no arguments there.
But there are considerations beyond being easy. I think there's a case to be made that a config file should be data, not code.
If people are really technical, then a language subset is fine.
If they're not really technical, then you might need a separate utility to manipulate the config file, and XML is OK for that: there are readers/writers available in every language, and it's human-readable enough for debugging. But if a non-technical human mistakenly edits it, it might take some repair to make it usable again.
Even if you've decided on a separate config language, there are a lot of reasons why you might want to use something other than XML. The header/key/value system (e.g. the one that .gitconfig and a lot of /etc files use) remains popular.
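And that header/key/value style costs almost nothing to consume; for example, Python's stdlib configparser reads it directly (toy snippet, made-up values):

```python
import configparser

# .gitconfig-style header/key/value text
text = """
[user]
name = Alice
email = alice@example.com
"""

cfg = configparser.ConfigParser()
cfg.read_string(text)
print(cfg["user"]["name"])  # Alice
```

No schema, no nesting, no closing tags, and the diff a non-technical user produces when they edit one value is about as safe as config edits get.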
I could be wrong, but it always seemed to me that XML was pushed as a doc/interchange format, and its use in config files was driven by "I already have this hammer and I know how to use it."
Data that you can prove was generated by humans is now exceedingly valuable ...and most of it comes from the days before LLMs. The situation is a bit like low-background steel: steel manufactured before atmospheric nuclear testing is prized because it isn't contaminated by fallout.
[1] https://www.researchgate.net/publication/325033477_Equalitar...
[2] https://economics.yale.edu/sites/default/files/marley_finley...
[3] https://fryer.scholars.harvard.edu/publications/empirical-an...