by mittermayr
2 subcomments
- I teach a tech class to marketing students, and it definitely works very well. They are allowed to use ChatGPT and other tools, with one caveat: you remain responsible for the output. I hide white-text prompt injections in specs or longer task instructions (usually in PDFs, works well enough there with copy and paste), and sometimes place a phrase near the end of the text that prompts the LLM to append something like, "I submit this assignment without checking its output, and I accept point deductions as agreed."
I used to do this for a laugh and not deduct points, next year, I showed them this before class as an introduction to working with AI and kind of as a warning, I'll deduct points, expecting nobody falling for it, then they fell for it over and over again. Well.
by subscribed
1 subcomments
- Good. I wouldn't like cheaters to compete with honest students on the job market.
In my kid's school (American high school equivalent) being caught on using LLM in papers is a failed subject. Students must pass all the subjects to finish the school. Some of these subjects won't be taught the next year so effectively they lose year, two, three....
by lukewarm707
3 subcomments
- i wonder why the labs don't put a small model for detecting prompt injection in front of the main llm.
it's 20b at most and it can work quite well.
for now you can proxy http through llama guard. 'luxury' security if you can build and pay.
is there an architectural limitation?
by jeremie_strand
0 subcomment
- [dead]
- [dead]
by ididisjcjsj
0 subcomment
- [flagged]
by LandenLove
1 subcomments
- Prompt injecting homework assignments is a funny idea, but doesn't seem very productive.
Either the teacher needs to adjust how they are teaching new concepts or the student needs to ask themself why they are attending college in the first place.