I literally just can't wrap my AuDHD brain around professional formatting. I'll probably use AI to take the paper I wrote while ignoring archaic and pointless rules about formatting and force it into APA or whatever. Feels fine to me, but I'm not going to have it write the actual paper or anything.
Microblog Memes
AFAIK those only help the instructor with grading, since they put all the essays they need to review on a (more or less) even playing field. I've never seen any real use for them in the professional world outside of scholarly/scientific journals.
My opinion is that they tend to stifle creativity of expression and the evolution of our respective languages.
don't worry, you can become president instead
How people think I use AI "Please write my essay and cite your sources."
How I use it
"please make the autistic word slop that I wrote already into something readable for the neurotypical folk, use simple words, make it tonally neutral. stop using em dashes, headers, and lists, and don't mess with the quotes"
Cries in "The Doctor" from Voyager.
The Doctor would absolutely agree. He was intended to be a short-term assistant when a doctor wasn't available, and he was personally affronted when he discovered that he wouldn't be replaced by a human in any reasonable amount of time.
No child left behind already stripped it from public education...
Because there were zero incentives for a school performing well, and serious repercussions if a school failed multiple years, the worst schools had to focus only on what was on the annual test. The only thing that mattered was that year's scores, so that was the only thing that got taught.
If a kid got it early, they could be largely ignored so the school could focus on the worst performers.
It was teaching to the lowest common denominator, and now people are shocked the kids who spent 12 years in that system don't know the things we stopped teaching 20+ years ago.
Quick edit:
Standardized testing is valuable. For lots of rural kids, getting 99th percentiles was how they learned they were actually smart, not just smart for their tiny schools.
The issue with "no child left behind" was the implementation and demand for swift responses to institutional problems that had been developing for decades. It's the only time moderates and Republicans agreed to do something fast, and it was obviously something that shouldn't be rushed.
One of the worst parts about that policy was that some states had both a "meets standards" and an "exceeds standards" result, and the high school graduation test was offered five times, starting in sophomore year.
So you would have students getting "meets standards" in sophomore year and blowing off the test in later attempts because they had already passed. You would then have school administrators punishing students for doing this, since their metrics included the number of students who got "exceeds standards".
Idk, I think we're back to "it depends on how you use it". Once upon a time, the same was said of the internet in general, because people could just go online and copy and paste shit and share answers and stuff, but the Internet can also just be a really great educational resource in general. I think that using LLMs in non load-bearing "trust but verify" type roles (study buddies, brainstorming, very high level information searching) is actually really useful. One of my favorite uses of ChatGPT is when I have a concept so loose that I don't even know the right question to Google, I can just kind of chat with the LLM and potentially refine a narrower, more google-able subject.
Something I think you neglect in this comment is that yes, you're using LLMs in a responsible way. However, this doesn't translate well to school. The objective of homework isn't just to reproduce the correct answer. It isn't even to reproduce the steps to the correct answer. It's for you to learn the steps to the correct answer (and possibly the correct answer itself), and the reproduction of those steps is a "proof" to your teacher/professor that you put in the effort to do so. This way you have the foundation to learn other things as they come up in life.
For instance, if I'm in a class learning to read latitude and longitude, the teacher can give me an assignment to find 64° 8′ 55.03″ N, 21° 56′ 8.99″ W on the map and write where it is. If I want, I can just copy-paste that into OpenStreetMap right now and see what horrors await, but to actually learn, I need to manually track down where that is on the map. Because I learned to use latitude and longitude as a kid, I can verify what the computer is telling me, and I can imagine in my head roughly where that coordinate is without a map in front of me.
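As an aside, the conversion the map site does under the hood is simple arithmetic: degrees, plus minutes over 60, plus seconds over 3600, negated for the S and W hemispheres. A minimal Python sketch (the helper name is my own invention, not anyone's actual API):

```python
def dms_to_decimal(degrees, minutes, seconds, hemisphere):
    """Convert degrees/minutes/seconds to signed decimal degrees."""
    value = degrees + minutes / 60 + seconds / 3600
    # South and West are negative by convention
    return -value if hemisphere in ("S", "W") else value

lat = dms_to_decimal(64, 8, 55.03, "N")
lon = dms_to_decimal(21, 56, 8.99, "W")
print(f"{lat:.5f}, {lon:.5f}")  # 64.14862, -21.93583
```

Paste that decimal pair into OpenStreetMap's search box and you get the same spot as the DMS form; being able to do the conversion by hand is exactly the verification step the comment is describing.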
Learning without cheating lets you develop a good understanding of what you: 1) need to memorize, 2) don't need to memorize because you can reproduce it from other things you know, and 3) should just rely on an outside reference work for whenever you need it.
There's nuance to this, of course. Say, for example, that you cheat to find an answer because you just don't understand the problem, but afterward, you set aside the time to figure out how that answer came about so you can reproduce it yourself. That's still, in my opinion, a robust way to learn. But that kind of learning also requires very strict discipline.
Your example at the end is pretty much the only way I use it to learn. Even then, it's not the best at getting the right answer. The best thing you can do is ask it how to handle a problem you know the answer to, then learn the process of getting to that answer. Finally, you can try a different problem and see if your answer matches with the LLM. Ideally, you can verify the LLM's answer.
So, I'd point back to my comment and say that the problem really lies with how it's being used. For example, everyone's been in a position where the professor or textbook doesn't seem to do a good job explaining a concept. Sometimes, an LLM can be helpful in rephrasing or breaking down concepts; a good example is that I've used ChatGPT to explain the very low level how of how greenhouse gasses trap heat and raise global mean temperatures to climate skeptics I know without just dumping academic studies in their lap.
And just as back then, the problem is not with people using something to actually learn and deepen their understanding. It is with people blatantly cheating and knowing nothing because they don’t even read the thing they’re copying down.
To add to this, how you evaluate the students matters as well. If the evaluation can be too easily bypassed by making ChatGPT do it, I would suggest changing the evaluation method.
Imo a good method, although demanding for the tutor, is oral examination (maybe in combination with a written part). It allows you to verify that the student knows the stuff and understood the material. This worked well in my studies (a science degree), not so sure if it works for all degrees?
I might add that a lot of the college experience (particularly pre-med and early med school) is less about education than a kind of academic hazing. Students are saddled with enormous amounts of debt and crushing volumes of work, and are put into pools where only X% of the class can move forward on any terms (because the higher-tier classes don't have the academic staff/resources to train a full freshman class of aspiring doctors).
When you put a large group of people in a high stakes, high work, high competition environment, some number of people are going to be inclined to cut corners. Weeding out people who "cheat" seems premature if you haven't addressed the large incentives to cheat, first.
Except I find that the value of college isn't just the formal education, but as an ordeal to overcome which causes growth in more than just knowledge.
No. There will always be incentives to cheat, but incentives are no excuse for academic dishonesty. There is no justification.
trust but verify
The thing is that an LLM is a professional bullshitter. It is literally trained to produce text that can fool an ordinary person into thinking it was produced by a human. The facts come second.
I don’t trust LLMs for anything based on facts or complex reasoning. I’m a lawyer and any time I try asking an LLM a legal question, I get an answer ranging from “technically wrong/incomplete, but I can see how you got there” to “absolute fabrication.”
I actually think the best current use for LLMs is for itinerary planning and organizing thoughts. They’re pretty good at creating coherent, logical schedules based on sets of simple criteria as well as making communications more succinct (although still not perfect).
Yeah, I know. I use it for work in tech. If I encounter a novel (to me) problem and I don't even know where to start with how to attack it, the LLM can sometimes save me hours of googling: I just describe my problem and what I want to do in a chat format and ask if there's a commonly accepted approach or library for handling it. Sure, it sometimes hallucinates a library, but that's why I go verify and read the docs myself instead of just blindly copying and pasting.
That last step of verifying is often skipped, and it's getting HARDER to do.
The hallucinations spread like wildfire on the internet. It doesn't matter what's true, just what gets clicks and encourages more apparent "citations". An even worse fertilizer of false citations is the desire of power-hungry bastards to push false narratives.
AI rabbit holes are getting too deep to verify. It really is important to keep digital hallucinations out of the academic loop, especially for things with life-and-death consequences like medical school
Even more concerning, their dependence on AI will carry over into their professional lives, effectively training our software replacements.