this post was submitted on 05 May 2025
117 points (100.0% liked)

Main, home of the dope ass bear. A hexbear.net commainity.

I'm sorry, so fucking angry. Students with sources that don't exist. Students with sources that exist but then the quotation doesn't exist.

I'm so fucking mad, because it's extra work for me (that I'm sure as hell not getting compensated for), and it also entirely defeats the purpose of the fucking class (it's writing/research, so like, engaging in a discipline and looking at what's been written before on your topic, etc.)

Kill me please. Comrades, I'm so tired. I just want to teach writing. I want to give students a way to exercise agency in the world -- to both see bad arguments and make good ones. They don't care. I'm so tired.

BTW, I took time to look up some of these sources my student used, couldn't find the quotes they quote, so told them the paper is an "A" if they can show me every quotation and failing otherwise. Does this seem like a fair policy (my thought is -- no matter the method, fabrication of evidence is justification for failing work)?

foucault-madness agony-shivering allende-rhetoric

28 comments
[–] [email protected] 29 points 2 weeks ago (2 children)

Using AI to write papers for a writing class is like using speech-to-text for a touch typing course. You're bypassing the exercises that would actually provide the value you're paying for.

[–] [email protected] 19 points 2 weeks ago (2 children)

I was in conversation with a friend who works in tech, and we were talking about something we wanted to find some science on. So I found a paper on it and started to read, but before I finished he replied by sending me a ChatGPT summary of the paper. And I could already tell it wasn't correct from reading it myself even a little. What I really wanted to say to him was that I'd rather think for myself, tyvm.

If students are all doing this now, nobody will think anything through themselves anymore or form an actual deep understanding of anything. Not really. Anyone who has done real reading knows how it shapes us and can develop into something deeper. It's the f'n "innovation" these same tech types go on and on about that dies with this tech.

[–] [email protected] 26 points 2 weeks ago

BTW, I took time to look up some of these sources my student used, couldn't find the quotes they quote, so told them the paper is an "A" if they can show me every quotation and failing otherwise. Does this seem like a fair policy (my thought is -- no matter the method, fabrication of evidence is justification for failing work)?

Most class syllabuses I've seen put LLM use in the same category as plagiarism. That's an automatic failure on the assignment and sometimes failure of the class.

[–] [email protected] 32 points 2 weeks ago

(it's writing/research, so like, engaging in a discipline and looking at what's been written before on your topic, etc.)

BTW, I took time to look up some of these sources my student used, couldn't find the quotes they quote, so told them the paper is an "A" if they can show me every quotation and failing otherwise. Does this seem like a fair policy (my thought is -- no matter the method, fabrication of evidence is justification for failing work)?

I think they will learn an important life lesson: that if they're going to cheat, then they have to, at a minimum, be sure that they are at least "getting the right answer". The tide of AI dystopia is unstoppable, but you can at least teach them that they can't just completely shut their brains off to the extent that they are just presenting completely fabricated research and factual claims.

[–] [email protected] 63 points 2 weeks ago (1 children)

Trash Future repeatedly makes the point that AI chat bots are the inverse of the printing press. The printing press created a way for information to be reliably stored, retrieved, and exchanged. It created a sort of ecosystem where ideas (including competing ideas) could circulate in society.

Chat bots do the opposite. They basically destroy the reliable transmission of information and ideas. Instead of creating reliable records of human thought (models, stories, theories, etc.), it's a black box which randomly messes with averages. It's so fucking harmful.

[–] [email protected] 23 points 2 weeks ago* (last edited 2 weeks ago) (6 children)

This makes no sense because it gives the general problem of epistemology a year zero date of November 30th, 2022.

People were lying, distorting, and destroying prior to the invention of the printing press. For example, one of the most obvious is the Donation of Constantine, which the Catholic Church used to extort European kings starting in the 8th century.

The printing press actually made things worse. For example, the Gospel of Barnabas is thought to have proliferated so widely because the forger printed fabricated copies of the Gelasian Decree.

Creating reliable records of "human thought" doesn't matter because the problem isn't what people think, it's what is actually true. This isn't even the first system that greatly obscures historical thought for the benefit of a select few. If you were a peasant in the 1500s, your ChatGPT was the conspiracy between your local lord and your local pastor to keep you compliant. The German peasants literally fought a war over it.

There is no place in academia in which an LLM would be a reliable store of information, because it's a statistical compilation, not a deterministic primary, secondary, or tertiary source. Trash Future as always is tilting at windmills erected by capitalist propaganda.

[–] [email protected] 20 points 2 weeks ago (1 children)

I had to do this in like 2008 for a humanities course about web 2.0, because my paper was about forum types and how different designs had different outcomes, etc. I literally could find almost no relevant stuff to quote so eventually I just fully bullshitted lol.

[–] [email protected] 21 points 2 weeks ago

Yeah that's definitely not an excuse for this paper. Also, my move in that position was always to just find a source broader than my topic and apply it like a lens. Works pretty well.

[–] [email protected] 53 points 2 weeks ago (1 children)

cheating in education in general, AI or not, is more caused by the financial and systemic repercussions of failing. When these students fail a class, it's often another few thousand dollars they don't have down the drain, and if they fail too many classes it locks them out of higher education entirely

failure is one of the biggest drivers of true learning, and the educational system directly discourages it

[–] [email protected] 38 points 2 weeks ago

Oh I get that -- the financial reality is there for sure, and I recognize they have other classes, etc. Don't get me wrong, I know who the "true" villain is.

Doesn't mean I can't be mad at these AI companies for unleashing this on us. It actively makes teaching the skills to understand writing harder since students can get close to "good" writing with these machines, but the writing it produces crumbles under the slightest scrutiny. We're actively harming thought and understanding with them.

[–] [email protected] 12 points 2 weeks ago (1 children)

Is this for a creative writing course? Because wtf

[–] [email protected] 16 points 2 weeks ago

Writing and research. So not "creative" - it's like, citing your sources in an argument, researching social issues, etc.

[–] [email protected] 23 points 2 weeks ago* (last edited 2 weeks ago) (2 children)

if two students are using the exact same (unneeded) variable in a script, did they use a similar prompt, or did they talk to each other saruman-orb

fucking hate this shit, something that could be done in like 20 lines of code is like 200. I don't particularly have to care, cause i'm not teaching programming, but jesus christ

[–] [email protected] 52 points 2 weeks ago

Kids can't even cheat properly anymore, because of woke

[–] [email protected] 51 points 2 weeks ago (4 children)

I tried using AI to help find sources for my partner's thesis. It's a niche topic on body phenomenology and existentialism in pregnancy and birth. Instead, it cited Heidegger books that don't even exist. A colleague recommended it, but honestly, you would have to be insane to rely on this.

[–] [email protected] 16 points 2 weeks ago

LLMs in general cannot handle finding quotes, because they can't distinguish quoting real text from regurgitating ideas in a slightly different format.

[–] [email protected] 33 points 2 weeks ago (2 children)

I get so annoyed when people tell me to ask an AI something. It has no knowledge and no capacity for reason. The only thing it can do is produce an output that an inexpert human could potentially accept as true because the underlying statistics favour sequences of characters that, when converted to text and read by a human, appear to have a confident tone. People talk about AI hallucinating wrong answers and that's giving it too much credit; either everything it outputs is a hallucination that's accepted more often than not, or nothing it outputs is a hallucination because it's not conscious and can't hallucinate, it's just printing sequential characters.

[–] [email protected] 17 points 2 weeks ago

It's advanced autocorrect. Calling it AI is an insult to Skynet.

[–] [email protected] 21 points 2 weeks ago

it's not all that far from the "post what your autocorrect completes this sentence as" thing
the llms are considerably more sophisticated sure, but what they do is fundamentally the same
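The "advanced autocorrect" comparison can be made concrete with a toy next-word predictor. This is a hypothetical sketch for illustration only: real LLMs use neural networks over learned token representations rather than raw bigram counts, but the loop of "emit the statistically likeliest continuation of what came before" has the same shape.

```python
from collections import defaultdict, Counter

def train_bigrams(corpus):
    """Count which word follows which in the training text."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def complete(counts, prompt, n=5):
    """Greedily append the most frequent next word, n times."""
    words = prompt.split()
    for _ in range(n):
        followers = counts.get(words[-1])
        if not followers:
            break  # never saw this word mid-sentence; stop
        words.append(followers.most_common(1)[0][0])
    return " ".join(words)

model = train_bigrams("the cat sat on the mat and the cat ate the fish")
print(complete(model, "the", n=3))
```

Nothing in that loop knows or checks whether its output is true; it only knows what tended to come next in the training data. Scale the corpus up by billions of words and swap the counts for a neural network, and you get the family of systems LLMs belong to.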

[–] [email protected] 34 points 2 weeks ago* (last edited 2 weeks ago)

The more specific the information, the more it lies

[–] [email protected] 31 points 2 weeks ago

Seems a fair policy. I like to imagine that if you stress this policy up front, students might actually check and verify all their own sources (and thus actually do their own research, even with the AI stuff)

[–] [email protected] 23 points 2 weeks ago (2 children)

It seems 100% fair to me. Using AI will be a big part of the future, but if your class is about a particular set of skills that don't involve asking Computer-Daddy to do your homework for you then good on you for trying to ensure it.

[–] [email protected] 24 points 2 weeks ago* (last edited 2 weeks ago) (2 children)

Using AI will be a big part of the future,

Yet absolutely NONE of the people pushing for this future educate people about the limitations of LLM chatbots. In fact, they deliberately mislead the public. I think about the doctor scene in Idiocracy a lot these days

[–] [email protected] 35 points 2 weeks ago (1 children)

told them the paper is an "A" if they can show me every quotation and failing otherwise. Does this seem like a fair policy (my thought is -- no matter the method, fabrication of evidence is justification for failing work)?

If the policy for plagiarism at your school is an F on the assignment, that seems fair to me. Asking LLMs to do your work is plagiarism.

[–] [email protected] 29 points 2 weeks ago* (last edited 2 weeks ago) (8 children)

I mean, I could go to that, but I figure as a writer, to fabricate quotations and evidence is fundamentally failing work.

I'm trying to give the student the chance to save themselves too. If they just cited that (for instance) the quotation about "all great historical figures appear twice" was from The German Ideology instead of 18th Brumaire that's not a problem -- the quotation exists, it's simply the student being sloppy at documentation.

However, to claim that someone stated something they didn't -- that's just fundamentally failing work (it would be like going online and saying Mao said that "power grows out of the hands of the peasantry" instead of "power grows out of the barrel of a gun").

I should note - my class has a policy that students can use AI as long as they clear it with me. However, they're responsible for their work, and I won't accept work with fake quotes. That's dogshit writing.

[–] [email protected] 14 points 2 weeks ago

Seems generous tbh. If I submitted work with incorrect citations I would lose marks, and I would have to accept it, because that's fair enough

[–] [email protected] 16 points 2 weeks ago (1 children)

to fabricate quotations and evidence is fundamentally failing work.

it would be writing fiction, if they weren't using an LLM
