It's so wild how ChatGPT and this "style" of AI literally didn't exist two years ago yet we're all expected to believe it's this essential, indispensable, irreplaceable tool that people can't live without, and actually you're the meanie for suggesting people do something the exact same way they would have in 2022 instead of using the environmental-disaster spam machine
TechTakes
Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.
This is not debate club. Unless it’s amusing debate.
For actually-good tech, you want our NotAwfulTech community
This fractal sucks because it’s the voices of the underprivileged that need to be amplified. Using gen AI will smother them entirely.
This... sucks.
NaNoWriMo should know more than anyone that AI can't supplement bad writing. It's not ableist to condemn it, it's recognition that writing is a skill that needs to be honed. I know AO3 doesn't condemn AI either, but that kind of just aligns with their 'anything goes' posting rules.
I was actually kind of excited to attempt NaNoWriMo this year, but now I'm not too sure :(
Free advice from a stranger on the Internet: Don't let the assholes ruin your fun! If you want to try writing 50,000 words for the sake of having written 50,000 words, go for it. And I mean that quite sincerely!
that's kind of what I think I'm gonna do anyway! I'm already a casual writer but I'd really love to turn it into something more. and actually forcing myself to get 50k words out in a specific time frame might be a fun step. I've already GOT complete stories rolling around in my head, it's just the problem of getting them out on paper :p
I can at least understand the guys who are using the AI text conveyor belt to make a cheap buck. Do the hustle, get your bag, whatever. We live in a capitalist hellscape and if that's how you choose to survive, then fuck you, but I get it.
I don't understand these guys who think it's actively good that people don't write their own words. It's just a level of misanthropy that doesn't make sense for how inflated their egos are.
People who hire writers don't write their own words. You can say that human connection is a crucial part of the writing process, but I just honestly don't think that's true for the vast majority of things we write. But also, eventually AI will be indistinguishable from, if not better than, a human writer.
When we hit AGI, if we can continue to keep open source models, it will truly take the power of the rich and put it in the hands of the common person. The reason the rich are so powerful is they can pay other people to do things. Most people only have the power to do what they can physically do in the world, but the rich can multiply that effort by however many people they can afford.
When we hit AGI, if we can continue to keep open source models, it will truly take the power of the rich and put it in the hands of the common person.
Setting aside the “and then a miracle occurs” bit, this basically seems to be “rich people get to have servants and slaves… what if we democratised that?”. Maybe AGI will invent a new kind of ethics for us.
But the rich can multiply that effort by however many people they can afford.
If the hardware to train and run what currently passes for AI was cheap and trivially replicable, Jensen Huang wouldn’t be out there signing boobs.
when my dick grows wings, it will truly therefore be a magical flying unicorn pony
Do you not think AGI is possible?
A Guy in India is not only possible, but the secret sauce behind so many AI companies!
Howdy, alt-account here. You wouldn't have banned me and THEN replied to this thread to get the last word, would you?
You're saying he's decided to go on a banning spree? Fair enough. If my views don't fit the community y'all are building, I get it. But to reply with a rebuttal after the fact is a little cringe. Let the downvotes and other members speak for themselves.
no, you troglodyte
it isn't a banning spree. this isn't some "oh whoops you got caught in the net" shit.
it's just a pretty fucking direct way to portray that your shitty posts and the shitty viewpoint that drives them just ain't welcome here, bub
and the fact that you fucking walked back in with an alt, outright calling that you did it, and then try to "politely" debatelord it
I mean, 3 points. I'll score that 3 points. brave.
but the chance of it working? buh-bye now.
Totally understand that y'all don't want my viewpoint here, that's fair. No debate on that. I understand I wasn't mistakenly caught in some net.
A Good Inkwell is definitely possible, I even possess one!
I feel like this has to be built on a lack of appreciation for words as a facilitator of human connection. By finding means of expression and being understood we manage to link our brains together on a conceptual level. By building these skills communally we expand the possible bandwidth of connection and even the range and fidelity of our own thoughts.
This has to be motivated by a view of words as Authoritative Things that sit on shelves and bestseller lists and are authored by Smart And Successful People.
Exactly, it's a natural conclusion of accepting commodification as the One True Path. Words don't mean anything if they don't make a profit, and clearly you're a bozo who can't Make It (and the bar of Making It is always rising because Number Go Up), so you should join the borg and let my buddy Claude speak for you.
By the way, thank you Terry Pratchett for teaching me the use of Meaningful Capitalisation.
Doesn't even mention the one use case I have a moderate amount of respect for, automatically generating image descriptions for blind people.
And even those should always be labeled, since AI is categorically inferior to intentional communication.
They seem focused on the use case "I don't have the ability to communicate with intention, but I want to pretend I do."
They added those at my work and they are terrible. A picture of the company CEO standing in front of a screen with the text on it announcing a major milestone? "man in front of a screen" Could get more information from the image filename.
AI and ML (and I'm not just talking about LLMs, but about those techniques in general) have many actual uses, often when the need is "you have to make a decision quickly, and there's a high tolerance for errors or imprecision".
Your example illustrates this perfectly: it's not as good as a human-written caption, it can lack context, or be wrong. But it's better than the alternative of having nothing.
But it’s better than the alternative of having nothing.
I’d take nothing over trillions of dollars dedicated to igniting the atmosphere for an incorrectly captioned video
Oh yeah I'm not arguing with you on that. AI has become synonymous with LLM, and doing the most generic models possible, which means syphoning (well stealing actually) stupid amounts of data, and wasting a quantity of energy second only to cryptocurrencies.
Simpler models that are specialized in one domain instead do not cost as much, and are more reliable. Hell, spam filters have been partially based on some ML for years.
But all of that is irrelevant at the moment, because AI/ML is not treated as one possible solution among other solutions that are not based on ML. Currently it's something that must be pushed as much as possible because it's a bubble that attracts investors, and I'm so looking forward to it bursting.
I don't accept a wrong caption is better than not being captioned. I'm concerned that when you say "High tolerance for error", that really means you think it's something unimportant.
No, what I'm saying is that if I had vision issues and had to use a screen reader on my computer, and I had to choose between
- the person who did that website didn't think about accessibility, so sucks to be you, you're not gonna know what's on those pictures
- there's no alt, but your screen reader tries to describe the picture, you know it's not perfect, but at least you probably know it's not a dog.
I'd take the latter. Obviously the true solution would be to make sure everyone thinks about accessibility, but come on... Even here it's not always the case and the fediverse is the place where I've seen the most focus on accessibility.
Another domain I'd see is preprocessing (a human will do the actual work) to make some tasks a bit easier or quicker and less repetitive.
There is a wealth of reasons why individuals can't "see" the issues in their writing without help.
If you can't see the issues in your own writing, you're exactly who is most vulnerable to AI's "syntactically valid but complete nonsense" output.
I don't entirely agree, though.
That WAS the point of NaNoWriMo in the beginning. I went there because I wanted feedback, and feedback from people who cared (no offense to my friends, but they weren't interested in my writing and that's totes cool).
I think it is a valid core desire to want constructive feedback on your work, and to acknowledge that you are not a complete perspective, even on yourself. Whether the AI can or does provide that is questionable, but the starting place, "I want /something/ accessible to be a rubber ducky" is valid.
My main concern here is, obviously, that it feels like NaNoWriMo is taking the easy way out here for the $$$ and likely its Silicon Valley connections. Wouldn't it be nice if NaNoWriMo said something like, "Whatever technology tools exist today or tomorrow, we stand for writers' essential role in the process, and against the unethical labor implications of indiscriminate, non-consensual machine learning as the basis for any process."
I can entertain the classism argument if they reframe it as a choice, where the alternative is expanding the scope of what is currently considered plagiarism to include the degrees of ghost-authorship privilege buys, since their argument hinges on the assumption that it is acceptable.
The ableism argument is the one I’ve grappled with the most from the standpoint of disability advocacy. Usually we first must ask whether the achievement in question is the proper measurement. In this case it is quite simply creative origin, which might be difficult to deconstruct further without reaching for the terribly abstract. Next comes the more complicated task of determining the threshold beyond which a simple modifier, like a sports handicap, is simply no longer sufficient, i.e. whether such differing abilities merit a separate category with unique standards. In this case, they provide several examples of cohorts with great enough support requirements that AI assistance might be the only option available for participation. Such differing ability would, I think, suggest the formation of a new category with differing standards as a beneficial compromise.
The issue of systemic unfairness is a larger one, I think, than the matter of AI's use can address. When we are looking for ways to mitigate systemic unfairness, usually it's preferred to relieve each disadvantage directly and surgically by accounting for the cumulative impedance and ongoing support necessary to give them a fighting chance. What is not preferred is to actually fight their battles for them, however, and that happens to be what the latest LLMs are capable of: robust human-like authorship with minimal prompting.
Ultimately, I think the real solution to the issue of AI in the liberal arts will be to adapt our notion of what an essentially human achievement entails, given the capacity of current technology. For example, we no longer consider mathematical computation an essential human achievement, but rather the more abstract instrumentation of it. Similarly, handwriting is no longer a skill emphasized for any purpose other than personal note-taking, as with off-hand recall of vocabulary definitions and historical dates. What we will de-emphasize in response to this technology is yet to be seen, but I suspect it will not be creative originality itself.
Given the context is that NaNoWriMo just took on a new AI-based sponsor who they're promoting hard to their users, there isn't really much justification to bend this far backwards to concoct an excuse for them.
Oh, I was actually disagreeing from an educator perspective, I just entertained some of their arguments in case they were serious.
Edit: apparently the whole post was bad faith drivel. I didn’t know anything about the site until now. Will delete comment.
As an act of protest, in October, I'll be posting a mediocre idea for some work of art, but not make it into an actual work. Why should I even put them into an AI regurgitator when techbros think the idea is the most important part in the arts?
A note for the unawares that Nanowrimo also tried to cover up a scandal when one of their mods was found to be referring minors to an ABDL fetish site. To my knowledge Nanowrimo never tried to own up to it, never even admitted anything was wrong until the FBI got involved, and still blocks any discussion of the situation. https://speak-out.carrd.co/
Reportedly they're now shilling AI hard on their Facebook (I don't have Facebook to check). I consider it 100% likely that, from this year on, everyone who uploads their 50k words to the organisation to prove completion will have their work promptly fed to the hungry algorithms.
At least one writer in the board has already resigned over the AI blog post https://xcancel.com/djolder/status/1830464713110540326
I swear if I hear "being against AI is ableist" one more time I'm gonna lose my shit. Disabled artists have existed for as long as art itself, and the only ableism here is AI-brained fuckwits using disabled people as an escape goat by suggesting they are unable to create things from their own effort and need spicy autocomplete to do so.
*scapegoat
I think autocorrect boned you there.
And I agree whole-heartedly.
The escape goat is the goat that is released by pressing the ESC key. It solves the problem of a frozen computer by eating the computer.
The escape GOAT is the protagonist of the elusive samurai (逃げ上手の若君), Hōjō Tokiyuki
Also a pretty good puzzle platformer computer game where you play as a goat trying to escape a labyrinthine prison.
I can see this as a method for a starting point. I'd consider that analogous to getting inspiration from reading a book. And while I understand the sentiment behind this, it's not really that simple. This argument makes a lot more sense when you ignore the fact that it's stealing from other people. And not just the works of famous authors and writers, but from everyone. If AI made up everything from thin air and didn't need to steal from everyone to make it work I would 100% be on board. If being against stealing other people's work and passing it off as your own is ableist and classist, by that argument so is being against things like identity theft or stealing someone's credit card.
THE ONLY REWARD IS HAVING WRITTEN SOMETHING
THAT'S IT THAT'S THE WHOLE POINT