Using AI to write papers for a writing class is like using speech to text for a touch typing course. You're bypassing the exercises that will actually provide the value you're paying for.
I was in conversation with a friend who works in tech, and we were talking about something we wanted to find some science on. So I found a paper on it and started to read, but before I got through it he replied by sending me a ChatGPT summary of the paper. Even from the little I'd read myself, I could already tell it wasn't correct. What I really wanted to say to him was that I'd rather think for myself, tyvm.
If students are all doing this now, nobody will think anything through themselves anymore or form an actual deep understanding of anything. Not really. Anyone who has done real reading knows how it shapes us and how that can develop into something deeper. It's the f'n "innovation" these same tech types go on and on about that dies with this tech.
BTW, I took the time to look up some of the sources my student used, couldn't find the quotes they cited, so told them the paper is an "A" if they can show me every quotation and failing otherwise. Does this seem like a fair policy (my thought is -- no matter the method, fabrication of evidence is justification for failing work)?
Most class syllabuses I've seen put LLM use in the same category as plagiarism. That's an automatic failure on the assignment and sometimes failure of the class.
(it's writing/research, so like, engaging in a discipline and looking at what's been written before on your topic, etc.)
Does this seem like a fair policy (my thought is -- no matter the method, fabrication of evidence is justification for failing work)?
I think they will learn an important life lesson: that if they're going to cheat, then they at least have to be sure they're "getting the right answer". The tide of AI dystopia is unstoppable, but you can at least teach them that they can't just completely shut their brains off to the extent of presenting completely fabricated research and factual claims.
Trash Future repeatedly makes the point that AI chat bots are the inverse of the printing press. The printing press created a way for information to be reliably stored, retrieved, and exchanged. It created a sort of ecosystem where ideas (including competing ideas) could circulate in society.
Chat bots do the opposite. They basically destroy the reliable transmission of information and ideas. Instead of creating reliable records of human thought (models, stories, theories, etc.), it's a black box that randomly messes with averages. It's so fucking harmful.
This makes no sense because it gives the general problem of epistemology a year zero date of November 30th, 2022, the day ChatGPT launched.
People were lying, distorting, and destroying information well before the invention of the printing press. One of the most obvious examples is the Donation of Constantine, which the Catholic Church used to extort European kings starting in the 8th century.
The printing press actually made things worse. The Gospel of Barnabas, for example, is thought to have proliferated so widely because the forger printed fabricated copies of the Gelasian Decree.
Creating reliable records of "human thought" doesn't matter, because the problem isn't what people think, it's what the actual truth is. This isn't even the first system that greatly obscures historical thought for the benefit of a select few. If you were a peasant in the 1500s, your ChatGPT was the conspiracy between your local lord and your local pastor to keep you compliant. The German peasants literally fought a war over it.
There is no place in academia in which an LLM would be a reliable store of information, because it's a statistical compilation, not a deterministic primary, secondary, or tertiary source. Trash Future, as always, is tilting at windmills erected by capitalist propaganda.
I had to do this in like 2008 for a humanities course about Web 2.0, because my paper was about forum types and how different designs had different outcomes, etc. I literally could find almost no relevant stuff to quote, so eventually I just fully bullshitted lol.
Yeah that's definitely not an excuse for this paper. Also, my move in that position was always to just find a source broader than my topic and apply it like a lens. Works pretty well.
cheating in education in general, AI or not, is driven more by the financial and systemic repercussions of failing. When these students fail a class, it's often another few thousand dollars they don't have down the drain, and if they fail too many classes it locks them out of higher education entirely
failure is one of the biggest drivers of true learning, and the educational system directly discourages it
Oh I get that -- the financial reality is there for sure, and I recognize they have other classes, etc. Don't get me wrong, I know who the "true" villain is.
Doesn't mean I can't be mad at these AI companies for unleashing this on us. It actively makes teaching the skills to understand writing harder, since students can get close to "good" writing with these machines, but the writing they produce crumbles under the slightest scrutiny. We're actively harming thought and understanding with them.
Is this for a creative writing course? Because wtf
Writing and research. So not "creative" - it's like, citing your sources in an argument, researching social issues, etc.
if two students are using the exact same (unneeded) variable in a script, did they use a similar prompt or did they talk to each other?
fucking hate this shit, something that could be done in like 20 lines of code comes back as like 200. I don't particularly have to care, 'cause I'm not teaching programming, but jesus christ
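To give a sense of what I mean, here's a made-up toy example (not from an actual student script) of the kind of padding with unneeded intermediate variables that shows up, next to what the task actually needs:

```python
# Hypothetical illustration: a trivial "sum the even numbers" task.

# The padded version, full of unneeded intermediate variables:
def sum_even_numbers_verbose(numbers_list):
    result_accumulator = 0                            # could just be `total`
    for current_number in numbers_list:
        is_even_flag = (current_number % 2 == 0)      # unneeded flag variable
        if is_even_flag:
            value_to_add = current_number             # unneeded copy
            result_accumulator = result_accumulator + value_to_add
    final_result = result_accumulator                 # unneeded rename
    return final_result

# Versus what the assignment actually calls for:
def sum_even_numbers(numbers):
    return sum(n for n in numbers if n % 2 == 0)

print(sum_even_numbers_verbose([1, 2, 3, 4]))  # 6
print(sum_even_numbers([1, 2, 3, 4]))          # 6
```

Both give the same answer; one just buries a one-liner under scaffolding, which is exactly how 20 lines turn into 200.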
Kids can't even cheat properly anymore, because of woke
I tried using AI to help find sources for my partner's thesis. It's a niche topic on body phenomenology and existentialism in pregnancy and birth. Instead of finding sources, it cited Heidegger books that don't even exist. A colleague recommended it, but honestly, you would have to be insane to rely on this.
LLMs in general cannot handle finding quotes because they can't discern between a real quotation and a regurgitation of the idea in a slightly different format.
I get so annoyed when people tell me to ask an AI something. It has no knowledge and no capacity for reason. The only thing it can do is produce an output that an inexpert human could potentially accept as true, because the underlying statistics favour sequences of characters that, when converted to text and read by a human, appear to have a confident tone. People talk about AI hallucinating wrong answers, and that's giving it too much credit; either everything it outputs is a hallucination that's accepted more often than not, or nothing it outputs is a hallucination because it's not conscious and can't hallucinate; it's just printing sequential characters.
It's advanced autocorrect. Calling it AI is an insult to Skynet.
it's not all that far from the "post what your autocorrect completes this sentence as" thing
the LLMs are considerably more sophisticated, sure, but what they do is fundamentally the same
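for anyone who wants the gist in code form, here's a toy, made-up sketch (nothing like a real model, just the basic move): pick the next word by weighted chance from a table of what usually follows, exactly like phone autocomplete -- scale the table up enormously and you have the core of what an LLM does

```python
# Toy next-word sampler: a hypothetical, hard-coded frequency table standing in
# for autocomplete (and, in vastly scaled-up form, for an LLM).
import random

# Made-up counts of which word tends to follow which.
next_word_counts = {
    "the": {"cat": 3, "dog": 2, "printing": 1},
    "cat": {"sat": 4, "ran": 1},
    "printing": {"press": 5},
}

def complete(start, length=4):
    words = [start]
    for _ in range(length):
        options = next_word_counts.get(words[-1])
        if not options:
            break
        # Sample the next word in proportion to how often it follows the last one.
        choices, weights = zip(*options.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(complete("the"))  # e.g. "the cat sat" -- plausible-sounding, with no model of truth
```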
The more specific the information, the more it lies.
Seems a fair policy. I like to imagine that if you stress this policy up front, students might actually check and verify all their own sources (and thus actually do their own research, even with the AI stuff)
It seems 100% fair to me. Using AI will be a big part of the future, but if your class is about a particular set of skills that don't involve asking Computer-Daddy to do your homework for you then good on you for trying to ensure it.
Using AI will be a big part of the future,
Yet absolutely NONE of the people pushing for this future educate people about the limitations of LLM chatbots. In fact, they deliberately mislead the public. I think about the doctor scene in Idiocracy a lot these days
told them the paper is an "A" if they can show me every quotation and failing otherwise. Does this seem like a fair policy (my thought is -- no matter the method, fabrication of evidence is justification for failing work)?
If the policy for plagiarism at your school is an F on the assignment, that seems fair to me. Asking LLMs to do your work is plagiarism.
I mean, I could go that route, but I figure, as a writer, to fabricate quotations and evidence is fundamentally failing work.
I'm trying to give the student the chance to save themselves too. If they just cited (for instance) the quotation about "all great historical figures appear twice" as being from The German Ideology instead of the 18th Brumaire, that's not a problem -- the quotation exists, it's simply the student being sloppy at documentation.
However, to claim that someone stated something they didn't -- that's just fundamentally failing work (it would be like going online and saying Mao said "power grows out of the hands of the peasantry" instead of "power grows out of the barrel of a gun").
I should note - my class has a policy that students can use AI as long as they clear it with me. However, they're responsible for their work, and I won't accept work with fake quotes. That's dogshit writing.
Seems generous tbh. If I submitted work with incorrect citations I would lose marks, and I would have to accept it, because that's fair enough
to fabricate quotations and evidence is fundamentally failing work.
it would be writing fiction, if they weren't using an LLM