ReedyBear's Blog

My thoughts on LLMs

I was reluctant to write this post because AI is already talked about a lot on Bear, and I've seen a complaint about us being sort of an echo chamber that needs other topics to talk about. That's a fair complaint, but fuck it: I blog, at least in part, to help me process, and if someone complains about it, that's okay.

I've been accused of "black and white" thinking by friends. I take some offense to this, because it is dismissive of my views. It's probably not entirely wrong, but it is dismissive.

I think it is morally wrong to use LLMs. I think many of the individual uses in-and-of themselves are not immoral, but I think any use of LLMs is immoral because it supports the further development of these LLMs, and the LLMs are deeply unethical.

First, stolen works

To build an LLM, one requires massive amounts of training data. That is - all of the books, art, movies & shows, YouTube videos, blog posts, websites, software, and basically every single thing humans have ever made. You may not care if your blog posts or book or art were consumed by LLMs without your consent. But other people do. We have longstanding systems to prevent theft of creative works.

These laws have been entirely ignored when training LLMs. There have been some lawsuits about this, but it doesn't really change anything. It happened and the AI companies will keep trucking along. These works were stolen not for individual consumption like when you pirate a movie to watch at home, but so a machine could be built that will be used to replace human creativity, so that the few Capitalists (owners of these machines) can make exorbitant profits.

And then when you ask an AI for information, instead of you visiting a news website and reading journalism, you get a summary of that (stolen) journalism from an AI. When your Google search result shows you an AI answer, you're not clicking through to websites, so they're not getting your clicks; Google collects all the ad revenue. Google is stealing both creative works and money from these websites through this process.

(I am actually PRO piracy, especially when it comes to works that have recouped their costs. But theft for corporate profits and LLM use is not the same as theft for personal use.)

Second, Earth's resources

LLMs require a huge amount of processing power in order to be trained and in order to continue running. Many companies are building their own AI models and building data centers nationwide in the U.S. and probably globally. China is building data centers underwater to help keep them cool.

The raw materials to build data centers are already significant - building materials, GPUs, RAM, etc. The purchasing of computer parts for data centers has caused significant price increases for consumers buying computer parts. This part of it is the least of my gripes.

Data Centers require a significant amount of electricity, generate a lot of heat, and require a lot of cooling. This exacerbates climate change. The electricity is somewhat solvable by the development of more solar and wind energy, as well as the building of new nuclear energy. But in the short term, at least one data center has been caught using an illegal source of energy (methane-based I think?) that produces significant air pollution and health problems for nearby populations.

Further, what's actually been happening is that everybody's energy bills are increasing. There is too much demand for electricity, the grid can barely keep up, and so everybody's energy cost goes up. Data Centers don't pay for all of the increased demand. They get subsidized by everybody who has power in their homes. I suspect this increased-cost aspect will level out in a few years as energy capacity increases.

Next is the water. They use water for cooling, which depletes local resources. This is already an issue in farming where large agricultural producers over-use water, and smaller farmers and residents are hurt. We should not be taking people's water or polluting the air or exacerbating climate change, and using AI signals to these companies that they should continue doing these things. It signals to lawmakers that the people want it.

Third, jobs

Republicans in the U.S. have been largely pro-data center, saying they create jobs. They lambast Democrats for opposing data centers and ignore all of the downsides. But the job creation is mostly a lie. A lot of the jobs related to data centers are in the initial construction. A lot of the people who participate in that construction come in from other communities.

The influx of outside people puts a strain on local economies, drives up housing prices, and makes people's communities harder to live in, economically speaking. These construction jobs are short-term. Once the data centers are built, they are not great sources of jobs, yet they continue to pollute the local environment, drive up energy costs, and use tons of water.

And then there's the job losses and low quality coming from these LLMs. Many tech companies have laid off great numbers of programmers and are boasting about the amount of code they write using AI. Oftentimes these same companies have seen increased bugs and security vulnerabilities in their code. Lawyers have been caught using AI to write legal briefs in which imaginary legal cases are cited.

Game developers are using AI to generate 3D assets, taking jobs from artists. Large corporations are using AI to generate commercials, taking jobs from artists. Corpos are also generating highly targeted ad campaigns, using a bunch of AI-generated alternatives to target different racial groups. Again, this takes jobs, but it also has the added dystopian aspect of giving large companies far more control over the population by manipulating us with infinitely malleable AI-generated ad campaigns.

I don't hate the idea of a world where we all work less and get a universal basic income or transition into a moneyless society or whatever. But (a lack of) AI isn't the reason we haven't done these things. Our global society has become exponentially more productive over the last 100 years, and yet our goal remains "full employment" or damn near it. The roadblock is social-political, not technological. Though, I do admit, if jobs are taken by AI, this could grow the social-political capital toward a post-work society.

Fourth, Information Problems

Deepfakes are the most obvious. The ability to generate videos and images that seem entirely real, as a means to spread political propaganda and increase a government's fascistic control over its population. This is happening now. Misinformation isn't new, but the AI deepfake nightmare has escalated things. Further, it's unsettling as a regular person just watching videos and having a whole new layer of "is this real?" Like, that already existed to some extent, but it's a whole nother level now. (also, Grok is generating child porn and this problem will not go away)

They lie, but you can't tell. LLMs communicate with a confident tone. You ask them a question, they give you an answer, and they sound authoritative. Apple had previously rolled out notification summaries and rolled back the feature because it was giving false information and misrepresentations of the news and of communications with friends and family.

It is incredibly important to understand that LLMs do not know anything; that isn't how they work. When you generate an image, nobody thinks the AI "remembers" this scene and is showing you what it "remembers". You understand the image is generated, fake, a figment of "imagination". Well, text is the same way. It does not remember (stolen) books and encyclopedias and web pages. What it is doing is generating text. It's kind of like autocomplete, except farrr more advanced. This is not a recitation of knowledge, but a generation of words that seem correct for the given context based on complex underlying mathematical models.
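If you want a feel for the "advanced autocomplete" idea, here's a toy sketch. Real LLMs are vastly more complex (neural networks over billions of parameters, not a little table of word pairs), and the word counts below are made up for illustration, but the spirit is the same: each next word is picked from statistics about what tends to follow, with no stored facts anywhere.

```python
import random

random.seed(0)

# Made-up word-pair statistics, standing in for what a model
# "learns" from training text. There are no facts stored here,
# only counts of which word tended to follow which.
follows = {
    "the": {"sky": 3, "capital": 2, "cat": 1},
    "capital": {"of": 5},
    "of": {"france": 2, "spain": 1},
    "france": {"is": 4},
    "is": {"paris": 3, "nice": 1},
}

def generate(word, length=5):
    """Repeatedly pick a likely next word; stop when we run out of stats."""
    out = [word]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        words = list(options)
        weights = [options[w] for w in words]
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

print(generate("the"))
```

When this thing happens to emit "the capital of france is paris", it isn't reciting geography. It's rolling weighted dice, and the dice are loaded toward sequences that appeared in the training text. That's the sense in which an LLM "sounds right" without knowing anything.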

Of course, LLMs do get a lot right, with regard to information. But if you use them, you should understand that they're not operating from knowledge in the way you or I might.

I'm also utterly creeped out by the consumer-facing AI tools for images. The ability to change colors or remove people from photos or whatever else. It makes it incredibly easy for an (emotionally) abusive partner to lie to you about what happened. I don't want fake memories. I don't want my ex removed from the family photo after we break up. I don't want the stranger removed from the background. I don't want to be lied to about what my life was. But it is incredibly easy for anybody to forge a false reality through edited images now.

Fifth, on being human

All of this stolen data, pollution, increased energy prices, depletion of resources, misinformation, and job destruction is purportedly to make our lives easier, but what it's really doing is replacing humanity.

I care about art - drawings, photos, videos, movies, poems (okay I'm not into poems), novels, etc - because people made it. I'm not interested in machine-generated art. The whole point of living is to be human and do human shit and connect with each other. If there is no outlet for human creativity (because all the stuff we're consuming is AI-generated), then there is no fucking point in being a human, in being alive. Of course, you can still draw and make silly little videos even if the AI has taken over Hollywood. But I just have no interest in the computer-generated "culture".

Next is communication. (FB) Messenger started prompting me recently to "summarize with AI" the last 3 or 4 messages from my best friend. In comments on FB, it tries really hard to get me to use phrases the AI has suggested for me to say.

If my friends were summarizing my messages with AI and sending me AI-written messages, I'd be really hurt. It's just a pretend humanity at that point. Actually processing MY words and coming up with YOUR words is human, and is a purpose of friendship and human connection. If you're using AI for this, that connection is fake.

We also have resumes being written by AI and job applications being reviewed by AI, and it's all just garbage and inhuman and cold and dystopian.

A lot of the ads for AI products have also been extremely anti-human and promoted basically being a piece of shit. One I remember was using AI to summarize documents you were supposed to read for a meeting, and faking your way through it, because you slacked off and didn't do your work. Another, much stupider one, is a dad with his kid racing a snail and a slug. The dad asks his phone's AI to predict which will win. Just fucking hang out with your kid, bro.

And a small note about personal growth and learning - when you use an AI to "know" things for you, you do not learn and do not grow as a person. The shortcut you take to get the output you desire also skips the human aspect of putting in effort and learning and growing.

Sixth, Utility

This is the part where I glaze AI a little bit.

AI has been advertised to watch you work out and give you tips and a workout plan, and to photograph your plumbing and tell you how to fix a problem. These are both very useful things. They're also anti-human and rely on stolen information and are prone to giving you false information. But still, some potential usefulness.

Summarizing large documents - Let's say you want highlights from a school board meeting or a 200-page bill from the U.S. Congress. Summaries could be useful for regular people. But again, it is prone to error, and there are already journalists who do these things, so AI summaries both lower reliability and destroy jobs. But still, potentially useful.

Health - This may be more of a U.S. problem, but it can be really hard to get good treatment for whatever condition you may or may not have. Being able to "discuss" your illness with an AI and get information from it has the potential to be extremely useful. If you're someone struggling with a health condition and doctors have not been helpful, it's hard for me to hold it against you for using AI to get help. Again, it is prone to misinformation, and this brings in a whole new slew of privacy concerns. But I admit the utility. If you're doing this, please talk to your doctor about what the AI tells you, and don't do anything risky based on the AI's advice.

I think it also has the potential to be useful in law, especially for laypeople. But you get the point by now. Useful, unreliable, kills jobs, removes humanity, but possibly useful.

Some mundane tasks are made much simpler by AI. I complained earlier about photo editing. WELL. We've had digital tools for extensive photo editing for several years now. Experts had the ability to remove people from photos, add things to photos, create amazing CGI, and all kinds of stuff. Simplifying this creation process can make it easier for people to turn their creative ideas into products that can be shared with others. There is definitely utility to that. I still hate it.

I could go on about utility. I'll stop here.

Conclusions

I am an AI hater. I have some very practical concerns about AI in regard to Earth's resources and the elimination of jobs and information problems. I also have deep ethical concerns about the stolen works and violation of consent, as well as the erasure of humanity.

There are many counterpoints to many of the things I've griped about here. There are also some specific uses I actually hope to see - medical breakthroughs, for example.

I also think that it can be extremely useful for programming (though I refuse to use it), and there's some question of, like: Why should I be working SO HARD to write software when I don't have to? This same question applies to many creative fields, journalism in some cases, and the legal field. Partially, I write code because I actually enjoy programming. But there are parts I don't like, and it might be nice to outsource those. I won't, though.

This post has two purposes. First is to advocate for people to stop using AI and to summarize the reasons why. Second is to help me process my thoughts and feelings with regard to LLMs.

I believe that it is wrong to use AI for any purpose, not because the specific thing you're doing with it is immoral (though sometimes it is), but because using it supports a broader system that is highly unethical. Even using it for "good" things still supports all of the "bad" things.

Oh and I forgot to talk about AI's use in military operations. Whatever.

But I also understand the benefit for some people may be significant, like if you use AI to help you with your personal health.

My individual refusal to use AI isn't stopping anything. Your individual refusal won't either. But data centers are built in local communities, and one day this community may be yours. When it's coming to your town or your state, you should be ready to oppose it in a meaningful way, and you should do so without hypocrisy if you can.

I do the right thing, first and foremost, because it is right and I want to be the kind of person who does the right thing. I also care about the broader impact on the world. My actions alone aren't enough to fix things. But collective action is, and we should all see ourselves as part of this collective.

I don't use Spotify because they pay artists half as much as other platforms, because they paid Joe Rogan $100 million to platform nazis, because their CEO contributes money to AI-based warfare. My choice to use Deezer isn't fixing the problems with Spotify. But it does mean I'm not contributing to the hellscape that is Spotify, and it means I'm contributing more money to artists than I would on Spotify. Plus, it was a super easy switch that has almost no impact on my life.

If everyone tried to do the right thing, and was willing to educate themselves (or participate in a community that advises them) on issues of the world, we would not have the problems we do today.

But it's also not that simple, I know. I pay taxes when I shop at stores. Those taxes help fund foreign wars and the genocide in Gaza and the ICE Nazis in our streets. I drive a pickup truck that uses gas. I watch hella YouTube videos, generating ad revenue for Google, a company that participates in warfare, builds LLMs, and violates everyone's privacy.

I am not innocent. I also contribute to the hellscape. I am not a big fan of purity tests. But I am a big fan of doing the right thing when you can. But perhaps it is unfair for me to decide that "there's not a viable alternative to YouTube" justifies my usage of it, and then suggest that you are unjustified in using AI. I think it's more likely that my use of YouTube is unjustified and the moral, ethical thing would be to stop using it. Maybe I'll think more deeply on this in the next few months.

At the very least, I ask that you consider the information I've shared today, and reflect upon your participation in this system. I don't share any of this to shame or judge you. I share this to inform you and to advocate for a better world and to ask you to participate in the better world. If you choose to use AI, it is my duty to give you grace, respect your choice, accept you, and let you be.

Take your time. Even if you're willing to hate AI with me, you don't need to make that decision today. Sleep on it.


Like I said about my post on Animal Agriculture:

I'm coming to think the root of all evil is not money nor power, but complacency and compartmentalization.


Please read my followup post, "Grammar Nazi", where I challenge the moral prescriptivism I display in this post.


Edit: I FORGOT TO MENTION - I watch Neural Viz. This is a YouTube channel which uses generative AI to create videos. It seems the person behind these videos is doing creative work, even while depending on AI to generate the product. This (Neural Viz) is my one personal exception for using and consuming generative AI. I also agree, in a rational sense, with my argument above that this is immoral. And I'm deeply uncomfortable about this. And almost unwilling to look at it honestly. I feel somewhat justified too. I'm not going to stop watching/supporting Neural Viz. Make of that what you will. It definitely makes me somewhat of a hypocrite. I'd still decommission all the AI-related data centers TODAY though, even if it meant no more Neural Viz.

#best #featured