Disapproving of automated plagiarism is classist ableism, actually: Nanowrimo
(nanowrimo.zendesk.com)
I could entertain the classism argument if it were reframed as a choice, where the alternative is expanding the scope of what currently counts as plagiarism to include the degrees of ghost-authorship that privilege can buy — since their argument hinges on the assumption that such ghost-authorship is acceptable.
The ableism argument is the one I've grappled with most from the standpoint of disability advocacy. Usually we must first ask whether the achievement in question is the proper measurement. In this case it is quite simply creative origin, which is difficult to deconstruct further without reaching for the terribly abstract. Next comes the more complicated task of determining the threshold beyond which a simple modifier, like a sports handicap, is no longer sufficient — that is, whether such differing abilities merit a separate category with unique standards. Here, they provide several examples of cohorts whose support requirements are great enough that AI assistance might be the only available route to participation. Such differing ability would, I think, make the formation of a new category with its own standards a beneficial compromise.
The issue of systemic unfairness is, I think, larger than the question of AI use can address. When we look for ways to mitigate systemic unfairness, the preferred approach is to relieve each disadvantage directly and surgically, accounting for the cumulative impedance and providing the ongoing support necessary to give people a fighting chance. What is not preferred is fighting their battles for them — and that happens to be exactly what the latest LLMs are capable of: robust, human-like authorship with minimal prompting.
Ultimately, I think the real solution to the issue of AI in the liberal arts will be to adapt our notion of what an essentially human achievement entails, given the capacity of current technology. For example, we no longer consider mathematical computation an essential human achievement, but rather the more abstract instrumentation of it. Similarly, handwriting is no longer a skill emphasized for any purpose beyond personal note-taking, much like off-hand recall of vocabulary definitions and historical dates. What we will de-emphasize in response to this technology remains to be seen, but I suspect it will not be creative originality itself.
Given that the context is NaNoWriMo having just taken on a new AI-based sponsor, whom they're promoting hard to their users, there isn't much justification for bending this far backwards to concoct an excuse for them.
Oh, I was actually disagreeing from an educator's perspective; I just entertained some of their arguments in case they were serious.
Edit: apparently the whole post was bad faith drivel. I didn’t know anything about the site until now. Will delete comment.