Students punished for using grammar editing/suggesting software

Most notably I've seen a lot of (annoying) ads for Grammarly, although I'm sure there are other tools out there. The rationale is that parts aren't actually written by the student, or that it's plagiarism.

https://www.fox5atlanta.com/news/gr...ent-academic-probation-plagiarism-allegations

Back when I was writing college essays/papers/etc. the most I might have had available was a spell or grammar checker, and at the time they didn't autocorrect. My understanding about Grammarly is that it started off as a grammar checker, but eventually morphed into more features including suggestions and rewriting.
 
Our school board uses Copyleaks. Our son had an English report flagged as "most or all AI generated". My wife literally sat beside him for over 10 hours while he wrote the paper. Needless to say, a meeting took place to clarify the concerns.

Copyleaks is the third AI detection tool the school board has tried (and failed with, from our experience) for checking papers for AI content. As an example of how poorly these things work, Copyleaks rates the Declaration of Independence as 90% AI generated.
 
This is a catch-22 situation that I predicted a few years back. Tools, when new, confound existing policies; technology often gets ahead of society.

I remember when it was taboo to use a calculator in class; all my math tests had to have the answers fully written out, showing all work. (yeah; I'm old).
Years later, basic handheld calculators were the norm in every classroom.
It didn't take long for those same calculators to become very advanced (HP12C, HP15C, etc) for complex scientific and financial work. And they were scorned by academia, until they were finally accepted.
Now, any basic smart-phone is WAY more capable, because not only does it have very advanced calculators, but also internet access.

Now, we're told we must accept that AI is here to stay, and we are to believe it's the best thing since sliced bread, and yet this student is punished for using the technology that, once she's in a job somewhere, she's going to be expected to use to advance the pace of her work ....


Ya can't fix stupid.
 
Grammarly might not be AI but I'm of the mindset that all of it is making us dumber. Less thinking on our part is not better thinking, it's just rewarding us for doing less & thinking less. How is encouraging us to think less better for us?

Now, we're told we must accept that AI is here to stay, and we are to believe it's the best thing since sliced bread, and yet this student is punished for using the technology that, once she's in a job somewhere, she's going to be expected to use to advance the pace of her work ....
You do have a point there. If this is where industry goes... usually companies want schools to teach kids the basics. If they want new hires to be using AI then the schools have to teach AI (or allow it).

But I still think it's all to our detriment. At some point you have to learn how to do it once, then afterwards you can tell when it just does not look right--intuition is a valuable skill. But you get that from doing it the long hard way.
 
My kids had to use "graphing" calculators for their high school classes. They were required for the tests to work the problems.

My son just graduated from Physician Assistant school. For one of the final papers, the professor told the class they HAD to use an AI tool like ChatGPT to write it. The school has some professors who are trying to utilize it and teach it. Of course, another professor went to the Dean saying the kids should be failed because they used ChatGPT. The story I got was a bunch of back and forth between the professors along the lines of: stay the F out of my class and papers; my students did as instructed, so fail your own kids and don't help them advance and learn, but leave mine alone. My son graduated and passed his PANCE accordingly.
 
It’s just another learning tool for young adults to help them learn good grammar. The actual content, subject and proposition of the paper should still be the focus, and this cannot be assisted by this grammar program. It’s not much different (albeit a lot easier) than the dictionary and thesaurus I used in HS, college, and law school.
 
Grammarly might not be AI but I'm of the mindset that all of it is making us dumber. Less thinking on our part is not better thinking, it's just rewarding us for doing less & thinking less. How is encouraging us to think less better for us?

But I still think it's all to our detriment. At some point you have to learn how to do it once, then afterwards you can tell when it just does not look right--intuition is a valuable skill. But you get that from doing it the long hard way.


I agree; letting machines think for us in complex ways will just dumb down our society.
Not unlike all these "technologies" which are supposed to make folks safer drivers, but all it really does is encourage lazy behaviors behind the wheel.

Like anything else in life ... too much of a good thing becomes a bad thing.

But in regard to the storyline, I feel a bit sad for the girl. The grand irony is that the academic folks had to use AI to confirm their suspicion of AI. "Do as you're told, not as we do ..."
At some point, there's going to be an assertion that no content will be human any longer. Since AI can think/produce WAY faster than us, then won't it eventually be able to "create" faster than we can? And therefore anything that can be said/done/written will be done before we get there.

I, for one, shall NEVER use AI for creative content. I cherish original thoughts and efforts. Machines should assist me, not become me.
 
Public education is producing an enormous number of illiterates and innumerates. Nobody even calls K-12 grammar school anymore. AI will accelerate the decline. Mark Twain gave us this sage advice:

"I have never let my schooling interfere with my education."
 
It’s just another learning tool for young adults to help them learn good grammar. The actual content, subject and proposition of the paper should still be the focus, and this cannot be assisted by this grammar program. It’s not much different (albeit a lot easier) than the dictionary and thesaurus I used in HS, college, and law school.

Their advertising claims that providing ideas and even rewrites are part of the software's features.
 
I will remind everyone that BITOG policy PROHIBITS the posting of any AI generated content; none is allowed whatsoever.

Two posts have been taken down already.
Any more and this thread will be locked.

It is acceptable to discuss AI. It is NOT acceptable to post AI generated content.
See our Standard of Conduct policy.
 
This is a catch-22 situation that I predicted a few years back. Tools, when new, confound existing policies; technology often gets ahead of society.

I remember when it was taboo to use a calculator in class; all my math tests had to have the answers fully written out, showing all work. (yeah; I'm old).
Years later, basic handheld calculators were the norm in every classroom.
It didn't take long for those same calculators to become very advanced (HP12C, HP15C, etc) for complex scientific and financial work. And they were scorned by academia, until they were finally accepted.
Now, any basic smart-phone is WAY more capable, because not only does it have very advanced calculators, but also internet access.

Now, we're told we must accept that AI is here to stay, and we are to believe it's the best thing since sliced bread, and yet this student is punished for using the technology that, once she's in a job somewhere, she's going to be expected to use to advance the pace of her work ....


Ya can't fix stupid.
I feel your pain, as some would say. I remember electronics tech at West Valley Community College. The big question was the use of calculators...

I am not sure it is about "stupid"; it is that the wheels of progress turn slowly. I have come to embrace tech. While anything different or new goes against human nature, change is the only constant. It's not like I have a choice.

Today we have AI Agents for the "unknown unknowns".
 
I've already had to delete a few posts regarding PC taunts, etc.
Knock it off; stick to the topic.
Why bring these topics to Bob's site? Is there not a platform for grammar? I certainly don't come here to scroll through 20 posts to find 1 about automotive.
 
When I took engineering, calculators had just barely arrived on the scene and were too expensive for students (for example $300 when a very good starting salary for an engineer was $825/month). We used slide rules when we calculated anything.

The objective in exams was to demonstrate how to solve a problem, and (especially for many higher classes) not to come to a discrete answer. I remember one final exam (in 4th year thermodynamics) with 4 problems in which I got to no answers and scored a strong A.

In real life there would be plenty of time to do the actual calculations.
 
When I took engineering, calculators had just barely arrived on the scene and were too expensive for students (for example $300 when a very good starting salary for an engineer was $825/month). We used slide rules when we calculated anything.

The objective in exams was to demonstrate how to solve a problem, and (especially for many higher classes) not to come to a discrete answer. I remember one final exam (in 4th year thermodynamics) with 4 problems in which I got to no answers and scored a strong A.

In real life there would be plenty of time to do the actual calculations.

I was never allowed to use a calculator during tests in any kind of math class. Ever. But for science or engineering classes, always. High school or college.

I do remember a 7th grade class where we learned how to use a calculator, although most of us already knew. Our classroom had a supply of simple Casio calculators. We also had slide rules, even though they were already outdated. I think we had a small amount of instruction on how to use them.
 
This is a catch-22 situation that I predicted a few years back. Tools, when new, confound existing policies; technology often gets ahead of society.

I remember when it was taboo to use a calculator in class; all my math tests had to have the answers fully written out, showing all work. (yeah; I'm old).
Years later, basic handheld calculators were the norm in every classroom.
It didn't take long for those same calculators to become very advanced (HP12C, HP15C, etc) for complex scientific and financial work. And they were scorned by academia, until they were finally accepted.
Now, any basic smart-phone is WAY more capable, because not only does it have very advanced calculators, but also internet access.

Now, we're told we must accept that AI is here to stay, and we are to believe it's the best thing since sliced bread, and yet this student is punished for using the technology that, once she's in a job somewhere, she's going to be expected to use to advance the pace of her work ....


Ya can't fix stupid.
I had a similar discussion (which I won/convinced more than one professor, BTW 😎) with some of my masters’ classes. In the electrical/electronics/digital classes, there can obviously be dozens of potential formulas on the test.

In a few classes I convinced professors to allow a single sheet of paper, front and back, with only formulas, which had to be spot-checked beforehand to ensure no cheating. In this case, the argument was that since all work had to be shown for credit anyway, having the formula was no guarantee you knew how to use it properly from start to finish, or where to apply it, for that matter.

In another class, I convinced a prof to allow the whole textbook, on the grounds that in the real world that we would have the textbook available as reference, and combined with timed testing, there wasn’t time to “read” the text during the test. But if you knew the material and just needed a reminder here and there to answer some questions, you may have enough to eke out a good grade. There again, grading was merciless due to the concessions, but several people in the class later thanked me for pushing the issue.

The biggest issue with AI that I’ve seen in my direct experience is GIGO, and any bias by whoever is “teaching” will quickly color the results. I know the LLM models are slightly different, but I’d imagine plagiarism filters that don’t produce false positives are probably among the hardest and most expensive to build. One could plagiarize while using synonyms and it may not be caught; or one could write original content but, due to a limited amount of specific jargon or terminology for a topic, trigger a false positive.

The biggest thing I’ve learned about designing & implementing AI in “pass/fail” applications is that false positives are much more harmful to overall acceptance of a model than a missed anomaly. If you want to think of it in historical social context, it’s a paraphrasing of “it’s better to let a thousand criminals go free than to convict an innocent man.”

AI models that affect the public, regardless of application, that are not thoroughly vetted by multiple, disinterested third parties are poisoning the well and will result in significant pushback from the affected parties. As they should!
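The false-positive point above can be made concrete with a quick back-of-the-envelope calculation. This is a minimal sketch with purely illustrative numbers (not vendor figures for Copyleaks or any other detector): when most submitted work is human-written, even a small false-positive rate accuses a large number of honest students.

```python
def detector_outcomes(num_papers, ai_fraction, true_positive_rate, false_positive_rate):
    """Return (correctly_flagged, wrongly_flagged) counts for a hypothetical AI detector."""
    ai_papers = num_papers * ai_fraction
    human_papers = num_papers - ai_papers
    correctly_flagged = ai_papers * true_positive_rate      # real AI text caught
    wrongly_flagged = human_papers * false_positive_rate    # honest students accused
    return correctly_flagged, wrongly_flagged

# Assumed scenario: 1,000 papers, 10% AI-written, a detector that catches
# 90% of AI text but misfires on just 5% of human text.
caught, falsely_accused = detector_outcomes(1000, 0.10, 0.90, 0.05)
print(caught, falsely_accused)  # 90.0 caught vs. 45.0 honest students flagged
```

In other words, for every two cheaters caught under these assumed rates, one innocent student gets flagged, which is exactly why false positives poison trust in the tool.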
 