This is a catch-22 situation that I predicted a few years back. Tools, when new, confound existing policies; technology often gets ahead of society.
I remember when it was taboo to use a calculator in class; all my math tests had to have the answers fully written out, showing all work. (Yeah, I'm old.)
Years later, basic handheld calculators were the norm in every classroom.
It didn't take long for those same calculators to become very advanced (HP12C, HP15C, etc.) for complex scientific and financial work. And they too were scorned by academia until they were finally accepted.
Now, any basic smartphone is WAY more capable: not only does it have very advanced calculator apps, it also has internet access.
Now we're told we must accept that AI is here to stay, and we're to believe it's the best thing since sliced bread, and yet this student is punished for using the very technology that, once she's in a job somewhere, she'll be expected to use to advance the pace of her work...
Ya can't fix stupid.
I had a similar discussion (which I won, convincing more than one professor, BTW) with some of my master's classes. In the electrical/electronics/digital classes, there can obviously be dozens of potential formulas on the test.
In a few classes I convinced professors to allow a single sheet of paper, front and back, with only formulas, which had to be spot-checked beforehand to ensure no cheating. In this case, the argument was that since all work had to be shown for credit anyway, having the formula was no guarantee you knew how to use it properly from start to finish, or where to apply it, for that matter.
In another class, I convinced a prof to allow the whole textbook, on the grounds that in the real world we would have the textbook available as a reference, and, with timed testing, there wasn't time to "read" the text during the test. But if you knew the material and just needed a reminder here and there to answer some questions, you might have enough to eke out a good grade. There again, grading was merciless because of the concessions, but several people in the class later thanked me for pushing the issue.
The biggest issue with AI that I've seen in my direct experience is GIGO (garbage in, garbage out): any bias from whoever is "teaching" it will quickly color the results. I know the LLM models are slightly different, but I'd imagine plagiarism filters that don't produce false positives are among the hardest and most expensive things to build. Someone could plagiarize while swapping in synonyms and not get caught; someone else could write original content that, because a topic only has so much specific jargon or terminology, gets flagged as a false positive.
The biggest thing I've learned about designing and implementing AI in "pass/fail" applications is that false positives are much more harmful to overall acceptance of a model than a missed anomaly. If you want to think of it in a historical social context, it's a paraphrase of "it's better to let a thousand criminals go free than to convict an innocent man."
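To put rough numbers on that tradeoff, here's a quick Python sketch with made-up similarity scores (nothing from any real plagiarism filter): the stricter you are about avoiding false positives, the more actual offenders slip through, and vice versa.

```python
# Hypothetical (score, actually_plagiarized) pairs -- purely illustrative data.
samples = [
    (0.92, True), (0.81, True), (0.64, True), (0.58, True),      # real offenders
    (0.71, False), (0.55, False), (0.40, False), (0.33, False),  # original work
    (0.25, False), (0.18, False),
]

def confusion_at(threshold, data):
    """Count false positives (innocent work flagged) and misses (offenders cleared)."""
    false_pos = sum(1 for score, guilty in data if score >= threshold and not guilty)
    missed    = sum(1 for score, guilty in data if score < threshold and guilty)
    return false_pos, missed

# Sweep the decision threshold: pushing it up to protect innocent students
# lets more actual plagiarism go unflagged.
for threshold in (0.3, 0.5, 0.7, 0.9):
    fp, miss = confusion_at(threshold, samples)
    print(f"threshold {threshold:.1f}: {fp} false positives, {miss} missed")
```

With this toy data, a 0.3 threshold flags four innocent students and misses nobody, while a 0.9 threshold flags nobody innocent but lets three offenders through; a real filter faces the same choice, just with far messier scores.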
AI models that affect the public, regardless of application, and that are not thoroughly vetted by multiple, disinterested third parties are poisoning the well and will result in significant pushback from the affected parties. As they should!