this post was submitted on 06 Feb 2025
103 points (95.6% liked)

196

2553 readers
1984 users here now

Community Rules

You must post before you leave

Be nice. Assume others have good intent (within reason).

Block or ignore posts, comments, and users that irritate you in some way rather than engaging. Report if they are actually breaking community rules.

Use content warnings and/or mark as NSFW when appropriate. Most posts with content warnings likely need to be marked NSFW.

Most 196 posts are memes, shitposts, cute images, or even just recent things that happened, etc. There is no real theme, but try to avoid posts that are very inflammatory, offensive, very low quality, or very "off topic".

Bigotry is not allowed; this includes (but is not limited to): Homophobia, Transphobia, Racism, Sexism, Ableism, Classism, or discrimination based on things like Ethnicity, Nationality, Language, or Religion.

Avoid shilling for corporations, posting advertisements, or promoting exploitation of workers.

Proselytization, support, or defense of authoritarianism is not welcome. This includes but is not limited to: imperialism, nationalism, genocide denial, ethnic or racial supremacy, fascism, Nazism, Marxism-Leninism, Maoism, etc.

Avoid AI generated content.

Avoid misinformation.

Avoid incomprehensible posts.

No threats or personal attacks.

No spam.

Moderator Guidelines

  • Don’t be mean to users. Be gentle or neutral.
  • Most moderator actions which have a modlog message should include your username.
  • When in doubt about whether or not a user is problematic, send them a DM.
  • Don’t waste time debating/arguing with problematic users.
  • Assume the best, but don’t tolerate sealioning/just asking questions/concern trolling.
  • Ask another mod to take over cases you struggle with, if you get tired, or when things get personal.
  • Ask the other mods for advice when things get complicated.
  • Share everything you do in the mod matrix, both so that several mods aren't unknowingly handling the same issue and so that you can get feedback on what you intend to do.
  • Don't rush mod actions. If a case doesn't need to be handled right away, consider taking a short break before getting to it. This is to say, cool down and make room for feedback.
  • Don’t perform too much moderation in the comments, except if you want a verdict to be public or to ask people to dial a convo down/stop. Single comment warnings are okay.
  • Send users concise DMs about verdicts that concern them, such as bans, except in cases where it is clear we don’t want them at all, such as obvious transphobes. There is, of course, no need to notify someone that they haven’t been banned.
  • Explain to a user why their behavior is problematic and how it is distressing others rather than engage with whatever they are saying. Ask them to avoid this in the future and send them packing if they do not comply.
  • First warn users, then temp ban them, then finally perma ban them when they break the rules or act inappropriately. Skip steps if necessary.
  • Use neutral statements like “this statement can be considered transphobic” rather than “you are being transphobic”.
  • No large decisions or actions without community input (polls or meta posts, for example).
  • Large internal decisions (such as ousting a mod) might require a vote, needing more than 50% of the votes to pass. Also consider asking the community for feedback.
  • Remember you are a voluntary moderator. You don’t get paid. Take a break when you need one. Perhaps ask another moderator to step in if necessary.

founded 1 month ago
Image description (contains clarifications on background elements): Lots of different, seemingly random images in the background, including some fries, Mr. Krabs, a girl in overalls hugging a stuffed tiger, a Mark Zuckerberg "big brother is watching" poster, two images of Fluttershy (a pony from My Little Pony), one of them reading "u only kno my swag, not my lore", a picture of Parkzer from the streamer DougDoug, and a slider gameplay element from the rhythm game osu!. The background is made light so that the text can be easily read. The text reads:

i wanna know if we are on the same page about ai.
if u disagree with any of this or want to add something,
please leave a comment!
smol info:
- LM = Language Model (ChatGPT, Llama, Gemini, Mistral, ...)
- VLM = Vision Language Model (Qwen VL, GPT4o mini, Claude 3.5, ...)
- larger model = more expensive to train and run
smol info end
- training processes of current AI systems are often
clearly unethical and very bad for the environment :(
- companies are really bad at selling AI to us and
giving it a good purpose for average-joe-usage
- medical ai (e.g. protein folding) is almost only positive
- ai for disabled people is also almost only positive
- the idea of some AI machine taking our jobs is scary
- "AI agents" are scary. large companies are training
them specifically to replace human workers
- LMs > image generation and music generation
- using small LMs for repetitive, boring tasks like
classification feels okay
- using the largest, most environmentally taxing models
for everything is bad. Using a mixture of smaller models
can often be enough
- people with bad intentions using AI systems results
in bad outcomes
- ai companies train their models however they see fit.
if an LM "disagrees" with you, that's the training's fault
- running LMs locally feels more okay, since they need
less energy and you can control their behaviour
I personally think more positively about LMs, but almost
only negatively about image and audio models.
Are we on the same page? Or am I an evil AI tech sis?

IMAGE DESCRIPTION END


i hope this doesn't cause too much hate. i just wanna know what u people and creatures think <3

[–] [email protected] 10 points 1 month ago

I used to think image generation was cool back when it was still in the "generating 64x64 pictures of cats" stage. I still think it's really cool, but I do struggle to see it being a net positive for society. So far it seems to have replaced the use of royalty-free stock images from Google more than it has replaced actual artists, but this could definitely change in the future.

There are some nicer applications of image generation too, like DLSS upscaling or frame generation, but I can't think of much else honestly.

[–] [email protected] 1 points 1 month ago

I agree 👍

[–] [email protected] 9 points 1 month ago

I think we should avoid simplifying it to VLMs, LMs, Medical AI and AI for disabled people.

For instance, most automatic text-capture AIs (Optical Character Recognition, or OCR) are powered by the same machine learning algorithms. Many of the finer-capability robot systems also utilize machine learning (Boston Dynamics, for instance). There's also the ability to identify objects within footage, as well as to spot faces and reference them against a large database in order to find the person with said face.

All these are Machine Learning AI systems.

I think it would also be prudent to cease using the term 'AI' when what we actually are discussing is machine learning, which is a much finer subset. Simply saying 'AI' diminishes the term's actual broader meaning and removes the deeper nuance the conversation deserves.

Here are some terms to use instead

  • Machine Learning = AI systems which increase their capability through automated iterative refinement.
  • Evolutionary Learning = a type of machine learning where many instances of randomly changed AI models (called a 'generation') are run simultaneously, and the most effective is/are used as a baseline for the next 'generation'
  • Neural Network = a type of machine learning system which utilizes very simple nodes called 'neurons' for processing. These are often used for image processing, LMs, and OCR.
  • Convolutional Neural Network (CNN) = a neural network whose architecture layers neuron 'filters' over each other for powerful data-processing capabilities.

This is not exhaustive, but hopefully it will help in talking about this topic in a more definite and nuanced fashion. Here is also a document related to the different types of neural networks.
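(As a concrete illustration of those last two definitions, here is a minimal sketch of a convolutional neural network in PyTorch; the layer sizes and the digit-classification framing are arbitrary choices for the example, not anything from the comment above:)

```python
import torch
import torch.nn as nn

# A tiny convolutional neural network: stacked convolution 'filters'
# followed by a small linear classifier head.
class TinyCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),   # first filter layer
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 28x28 -> 14x14
            nn.Conv2d(8, 16, kernel_size=3, padding=1),  # second filter layer
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 14x14 -> 7x7
        )
        self.classifier = nn.Linear(16 * 7 * 7, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

# One forward pass on a fake batch of 28x28 grayscale images (think OCR digits).
model = TinyCNN()
logits = model(torch.randn(4, 1, 28, 28))
print(logits.shape)  # torch.Size([4, 10])
```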

[–] [email protected] 11 points 1 month ago (1 children)

Mr. Krabs would use unethical LLMs, very accurate

[–] [email protected] 5 points 1 month ago

true, he would totally replace his workers with robots, and then complain about hallucinated recipes.

[–] [email protected] 3 points 1 month ago* (last edited 1 month ago) (1 children)

There is an overarching issue with most of the extant models being highly unethical in where they got their data, effectively having made plagiarism machines.

It is not ok to steal the content of millions of small independent creators to create slop that drowns them out. Most of them were already offering their work for free. And I am talking about LMs here, writing is a skill.

Say whatever you want about big companies being bad for abusing IP laws, but this is not about the laws, not even about paying people for their work. This is about crediting people when they do work, acknowledging that the work they did had value, and letting people know where they can find more.

Also, I don’t really buy the “it’s good for disabled people” argument; that feels like using disabled people as a shield against criticism, and I’ve yet to see it brought up in good faith.

[–] [email protected] 1 points 1 month ago

A human can read examples of good articles to learn how to write a good article, but an AI can't?

It seems kinda arbitrary. I don't think you can say anything objective about whether AI is plagiarism or not besides the most literal definition in the law (which is impossible, as the law itself is made arbitrary through the idea of fair use).

[–] [email protected] 11 points 1 month ago

There are so many different things that are called AI, the term AI doesn't have any meaning whatsoever. Generally it seems to mean anything that includes machine learning somewhere in the process, but it's largely a marketing term.

Stealing art is wrong. Using ridiculous amounts of power to generate text is ridiculous. Building a text model that will very confidently produce misinformation is pretty dumb.

There are things that are called AI that are fine, but most aren't.

[–] [email protected] 11 points 1 month ago (1 children)

I'll just repeat what I've said before, since this seems like a good spot for this conversation.

I'm an idiot with no marketable skills. I want to write, I want to draw, I want to do a lot of things, but I'm bad at all of them. GPT-like AI sounds like a good way for someone like me to get my vision out of my brain and into the real world.

My current project is a wiki of lore for a fictional setting, for a series of books that I will never actually write. My ideal workflow involves me explaining a subject as best I can to the ai (an alien technology or a kingdom's political landscape, or drama between gods, or whatever), telling the ai to ask me questions about the subject at hand to make me write more stuff, repeat a few times, then have the ai summarize the conversation back to me. I can then refer to that summary as I write an article on the subject. Or, me being lazy, I can just copy-pasta the summary and that's the article.

As an aside, I really like chatgpt 4o for lore exploration, but I'd prefer to run an ai on my own hardware. Sadly, I do not understand github and my brain glazes over every time I look at that damn site.
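(For what it's worth, running a model locally doesn't have to involve GitHub at all these days. Here is a minimal sketch of that question-then-summarize loop, assuming Ollama is installed with a model like llama3.2 already pulled and the `ollama` Python package available; the prompts are placeholders:)

```python
# Local lore-brainstorming loop against a model served by Ollama.
import ollama

history = [{
    "role": "system",
    "content": "You are a worldbuilding assistant. Ask probing questions "
               "about the user's fictional setting, then summarize on request.",
}]

def chat(user_text: str) -> str:
    history.append({"role": "user", "content": user_text})
    reply = ollama.chat(model="llama3.2", messages=history)
    content = reply["message"]["content"]
    history.append({"role": "assistant", "content": content})
    return content

print(chat("Subject: drama between my setting's gods. Ask me three questions."))
print(chat("Now summarize our conversation as a draft wiki article."))
```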

It is way too easy for me to just let the ai do the work for me. I've noticed that when I try to write something without ai help, it's worse now than it was a few years ago. generative ai is a useful tool, but it should be part of a larger workflow, it should not be the entire workflow.

If I was wealthy, I could just hire or commission some artists and writers to do the things. From my point of view, it's the same as having the ai do the things, except it's slower and real humans benefit from it. I'm not wealthy though, hell, I struggle to pay rent.

The technology is great, the business surrounding it is horrible. I'm not sure what my point is.

[–] [email protected] 0 points 1 month ago

I'm sorry, but did you ever consider just trying? To write a story you have to work on it and get better.

GPT or LLMs can't write a story for you, and if you somehow wrangle one into writing a story without losing its thread, then is it even your story?

look, it's not going to be a good story if you don't write it yourself. There's a reason why companies want to push it: they don't want writers.

I'm sure you can write something, but it sounds like you have issues which you need to deal with before you can delve into this. I'm not saying it's easy, but it's worth it.

Also read books. Read books to become a better writer.

PS. If you make an LLM write it, you'll come across issues copyrighting it, at least last I heard.

[–] [email protected] 4 points 1 month ago* (last edited 1 month ago) (1 children)

Genuinely, the only problem I see with the development of LLMs and AI in general is that said development has a massive tumor on its back called Corporate Interest. That's pretty much the one and only cause for absolutely every destructive, shady, or downright immoral aspect tied to these things nowadays...

As tools in and of themselves, yes! LLMs have an immense potential not of replacing people, but of helping people get stuff done faster, which in turn would give us a lot of extra time to polish the everloving spit out of the stuff we make!

LLM/AI research should be 100% non-profit and democratised, with well-established guidelines and full transparency, as I see it. This is a huge step in our development as a species, and Altman-likes are not the people who should be in charge of it.

Edit: as for VLMs, I kinda see them as a fad, to be honest. It still irks me when anyone adds "art" to anything artificially generated at the moment, but I get the feeling people will tire of the novelty once generated imagery ceases to satisfy the need for genuine art.

[–] [email protected] 5 points 1 month ago (1 children)

oh, nonon, VLMs only accept text and images as input. they don't produce images. they just have image inputs as an option.

what you are referring to are "image generators", or "diffusion networks". unfortunately, many news outlets already only use AI images for their stories. i find this pretty sad, cuz i liked that they made a human put together some panel for the news! but not anymore... now it's a mixture of stock footage and AI image crap... big sad ;(

yes, i am negative to image gen models.

alsoalso yes, communism go, non-profits are cool, and i wish what you said became true
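(To make that input/output distinction concrete, a rough sketch using Hugging Face libraries; the model names are illustrative picks, not anything endorsed in the comments above:)

```python
# A VLM takes an image (plus text) as INPUT and produces text as OUTPUT.
from transformers import pipeline

captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")
print(captioner("photo.jpg")[0]["generated_text"])   # text comes out (assumes photo.jpg exists)

# A diffusion network goes the other way: text in, image out.
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained("stabilityai/sd-turbo")
image = pipe("a pony reading a book").images[0]      # an image comes out
image.save("generated.png")
```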

[–] [email protected] 2 points 1 month ago (1 children)

Oooh, thank you for the clarification and I apologise for the confusion!

We really are losing a lot of our personality as a species by using generated imagery, yes... It's, unfortunately, been a general trend over the last couple of decades in pretty much all things, architecture especially imho (referring to "average" buildings, not the ones specifically designed to be crazy, which are cool, but few and far between...)

[–] [email protected] 2 points 1 month ago

yes.... older cities look so much more interesting! where u can see the wooden beams and such! for some reason building big blocks is cool now tho... I guess it's good for storage, but surely people find this super boring.

[–] [email protected] 3 points 1 month ago (1 children)

Smorty!!!!
Thank you for this conversation

[–] [email protected] 2 points 1 month ago (1 children)

i don't think i understand your comment...

or maybe that's the point?

or maybe ur making a funi joke about u being an AI assistant?
If so:
haha lol that's so hilarious

yea i like LMs kinda a smol bit and like experimenting with em a lot, cuz it's kinda fun to test their capabilities and such

if not: pls explain <3

[–] [email protected] 4 points 1 month ago (1 children)

if not: pls explain <3

response output --verbose:
Line 1: Smorty!!!
Explanation: You brighten my day every time I see you doing your thing. Line 1 expresses this joy.
Line 2: Thank you for this conversation
Explanation: I am glad to see peoples' replies to your post. Line 2 thanks you for starting this discussion.

[–] [email protected] 3 points 1 month ago

really??? i didn't kno i make u comf when i post a thing!! ~ i'm very happi about that!!! <3

also, i'm surprised that u still like the fact that i made this convo spring up. many peeps are very one-sided about this, and i recognize that i am more pro-ai than con-ai. i wanted to hear peeps's thoughts about it, so i jus infodump in an image with fluttershy in it, and now we are here!

i would think that u wouldn't like this kind of very adult topic about ai stuffs but apparently u are oki with me asking very serious things on here...

i hope u have a comf day and that u sleep well and that u eat something nice!!!! <3

[–] [email protected] 6 points 1 month ago (2 children)

This list is missing: AI generated images are not art.

[–] [email protected] 1 points 1 month ago

I disagree, but I can respect your opinion.

[–] [email protected] 4 points 1 month ago

i also think that way, but it's also true that generated images are being used all over the web already, so people generally don't seem to care.

[–] [email protected] 7 points 1 month ago (1 children)

A lot of those points boil down to the same thing: "what if the AI is wrong?"

If it's something that you'll need to check manually anyway, or where a mistake is not a big deal, that's probably fine. But if it's something where a mistake can affect someone's well-being, that is bad.

Reusing an example from the pic:

  • Predicting 3D structures of proteins, as in the example? OK! Worst case, the researchers will notice that the predicted structure does not match the real one.
  • Predicting if you have some medical problem? Not OK. A false negative can cost a life.

That's of course for the usage. The creation of those systems is another can of worms, and it involves other ethical concerns.

[–] [email protected] 3 points 1 month ago (1 children)

of course using ai stuffs for medical usage is going to have to be monitored by a human with some knowledge. we can't just let it make all the decisions... quite yet.

in many cases, ai models are already better than expert humans in the field. recognizing cancer is the obvious example, where the pattern recognition works perfectly. or protein folding, where humans are at about 60% accuracy, while Google's AlphaFold is at 94% or so.

clearly humans need to oversee an AI's output, but we are getting to a point where maybe humans make the wrong decision and deny an AI's correct generation. so: no additional lives are lost, but many more could be saved

[–] [email protected] 3 points 1 month ago* (last edited 1 month ago)

I mostly agree with you, I think that we're disagreeing on details. And you're being far, far more level-headed than most people who discuss this topic, who pretend that AI is either e-God or Satanic bytes. (So no, you aren't an evil AI tech sis. Nor a Luddite.)

That said:

For clinical usage, just monitoring it isn't enough - because when people know that there's some automated system to catch their mistakes, or that they're just catching the mistakes of that system, they get sloppier. You need really, really good accuracy.

Like, 95% accuracy might look like a lot, right? But if it involves life or death, it means a death for each 20 cases, which is rather high. In the meantime, if AlphaFold got it wrong 60% of the time instead of just 6%, it wouldn't be a big deal.
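(A quick back-of-the-envelope check of what those rates mean at volume; the case count is made up purely for illustration:)

```python
# Small error rates stop looking small once multiplied by case volume.
cases = 10_000

for task, accuracy in [("clinical screening", 0.95), ("protein structures", 0.94)]:
    errors = round(cases * (1 - accuracy))
    print(f"{task}: {accuracy:.0%} accurate -> ~{errors} errors per {cases:,} cases")

# clinical screening: 95% accurate -> ~500 errors per 10,000 cases
# That is 1 in 20: catastrophic if each one is a missed diagnosis,
# tolerable if each one is just a predicted structure that fails
# later verification.
```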

Also, note that we're both talking about "AI" as if it was a single thing. Under the hood it's a bunch of completely different things; pattern recognition AI, predictive AI, generative AI, they work so differently from each other that we'd need huge walls of text to decide how good or bad each of them is.

[–] [email protected] 23 points 1 month ago* (last edited 1 month ago) (1 children)

I wish people stopped treating these fucking things as a knowledge source, let alone a reliable one. By definition they cannot distinguish facts, only spit out statistically correct-sounding text.

Are they of help to your particular task? Cool, hope the model you're using hasn't been trained on stolen art, or doesn't rely on traumatizing workers in the global south (who are paid pennies btw) to function.

Also, y'know, don't throw gasoline on an already burning planet if possible. You might think you need to use a GPT for a particular task or funny meme, but chances are you actually don't.

That's about it for me I think.

edit: when i say "you" in this post i don't mean actually you OP, i mean in general. sorry if this seems rambly im sleep deprived as fuckj woooooo

[–] [email protected] 5 points 1 month ago

peeps who use these models for facts are obv not aware of what the models are doing. they don't know that these models are just guessing facts.

also yes, big sad about peeps in the south being paid very poorly.

can totally see your point, thank you for commenting! <3

[–] [email protected] 3 points 1 month ago (1 children)

I think generative AI is mainly a tool of deception and tyranny. The use cases for fraud, dehumanization and oppression are plentiful. I think Iris Meredith does a good job of highlighting the threat at hand. I don’t really care about the tech in theory: what matters right now is who builds it and how it is being deployed onto the world.

[–] [email protected] 3 points 1 month ago

oof this is brutal. but a good analysis.

at the end of the day, no matter what good uses people might have for this tech, it's hard to reconcile the fact that it's also being used by the worst possible people, with the worst possible intentions, in the worst possible ways.

[–] [email protected] 3 points 1 month ago

i'm personally not too fond of llms, because they are being pushed everywhere, even when they don't make sense and they need to be absolutely massive to be of any use, meaning you need a data center.

i'm also hesitant to use the term "ai" at all since it says nothing and encompasses way too much.

i like using image generators for my own amusement and to "fix" the stuff i make in image editors. i never run any online models for this, i bought extra hardware specifically to experiment. and i live in a city powered basically entirely by hydro power so i'm pretty sure i'm personally carbon neutral. otherwise i wouldn't do it.

the main things that bother me is partially the scale of operations, partially the philosophy of the people driving this. i've said it before but open ai seem to want to become e/acc tech priests. they release nothing about their models, they hide them away and insinuate that we normal hoomans are unworthy of the information and that we wouldn't understand it anyway. which is why deepseek caused such a market shake, it cracked the pedestal underneath open ai.

as for the training process, i'm torn. on the one hand it's shitty to scrape people's work without consent, and i hope open ai gets their shit smacked out of them by copyright law. on the other hand i did the math on the final models, specifically on stable diffusion 1.0: it used the LAION 5B scientific dataset of tagged images, which has five billion ish data points as the name suggests. stable diffusion 1.0 is something like 4GB. this means there's on average less than eight bits in the model per image and description combination. given that the images it trained on were 512x512 on average, that gives a shocking 0.00003 bits per pixel. and stable diffusion 1.5 has more than double the training data but is the same size. at that scale there is nothing of the original image in there.
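(That arithmetic, spelled out; the sizes are the approximate figures quoted above:)

```python
# Rough check: how much information per training image can possibly
# survive inside a ~4 GB Stable Diffusion 1.0 checkpoint?
model_bits = 4 * 8e9          # ~4 GB of weights, in bits
images = 5e9                  # ~5 billion image/caption pairs (LAION-5B)
pixels = 512 * 512            # average training resolution

bits_per_image = model_bits / images
bits_per_pixel = bits_per_image / pixels

print(f"{bits_per_image:.1f} bits per image")    # ~6.4
print(f"{bits_per_pixel:.6f} bits per pixel")    # ~0.000024
```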

the environmental effect is obviously bad, but the copying argument? i'm less certain. that doesn't invalidate the people who are worried it will take jobs, because it will. mostly through managers not understanding how their businesses work and firing talented artists to replace them with what are basically noise machines.

[–] [email protected] 0 points 1 month ago (2 children)

I don't see how AI is inherently bad for the environment. I know they use a lot of energy, but if the energy comes from renewable sources, like solar or hydroelectric, then it shouldn't be a problem, right?

[–] [email protected] 2 points 1 month ago

The problem is that we only have a finite amount of energy. If all of our clean energy output is going toward AI then yeah it's clean but it means we have to use other less clean sources of energy for things that are objectively more important than AI - powering homes, food production, hospitals, etc.

Even "clean" energy still has downsides to the environment also like noise pollution (impacts local wildlife), taking up large amounts of space (deforestation), using up large amounts of water for cooling, or having emissions that aren't greenhouse gases, etc. Ultimately we're still using unfathomably large amounts of energy to train and use a corporate chatbot trained on all our personal data, and that energy use still has consequences even if it's "clean"

[–] [email protected] 2 points 1 month ago (1 children)

i kinda agree. currently many places still use oil for energy generation, so that kinda makes sense.

but if powered by cool solar panels and cool wind turbine things, that would be way better. then it would only be down to the production of GPUs and the housing.

[–] [email protected] 2 points 1 month ago* (last edited 1 month ago)

Also cooling! Right now each interaction from each person using chatGPT uses roughly a bottle's worth of water per 100 words generated (according to a research study in 2023). This was with GPT-4 so it may be slightly more or slightly less now, but probably more considering their models have actually gotten more expensive for them to host (more energy used -> more heat produced -> more cooling needed).

Now consider how that scales with the amount of people using ChatGPT every day. Even if energy is clean everything else about AI isn't.
