I am no longer chairing defenses where students use generative AI (columbia.edu)
48 points by compiler-guy 5 months ago | hide | past | favorite | 9 comments


I don't want to read stuff that AI has expanded upon.

If you turned 3 brief bullet points into a long paragraph with ChatGPT, you have effectively used a chat bot to waste others' time and effort.

If you are doing so, just put the prompt at the top, and then leave the flowery nonsense afterwards to make yourself feel better.

Similarly, if you vibe coded something, just put the prompt at the top of the file in a comment, and the rest can be ignored. It's fine to force computers to read it, but there aren't enough lifetimes for humans to bother.


Same goes for documentation and podcasts. If you don't have the time to write it, it must not be all that important.


Most scientific "products" are not bound books or PDFs. The software, datasets, proofs, algorithms, etc. contained within them (or in the dreaded "replication materials") are usually more valuable than the text that scientists write around them. We use the text merely to communicate the contribution and its value to other humans. I suspect AI will outperform the best humans at this communication task very soon. Is the purpose of a dissertation really to demonstrate that a human has the capacity to write effective prose? There are many scientists who are brilliant but terrible at this task. Do we really wish to shun them and their ideas?

In the future, the best science will be produced by those that wield AI tools most effectively. Academics need to figure out how to assess scientific work within this context. This is not a good solution.


I feel like debates about AI and plagiarism are quite surface-level definitional debates. They miss an important discussion of why we believe (old-fashioned) plagiarism is bad. Answers about when using AI is acceptable should be based on that reasoning.


I can see why profs don’t want to read stuff the student may not even have read.

The request for the student to provide a memo showing samples written with A.I. and without seems a bit silly though.

Much like plagiarism, this seems like a hopeless battle. It doesn't help that genAI inherently favours the generating side more than the reviewing side.


I understand the point and the issue, but I wonder if the author will be as strict with a colleague.


This educator makes a valid argument and is entitled to their restrictions, but I have an idea for an alternative approach. Allow unfettered use of AI, but raise the bar for grading. Tell the students to knock themselves out and use any tool they want, but you will now expect their papers to read like polished, professional-level prose.


This person is talking about PhD theses and master's-level writing intended for publication, not graded papers.


Great read.



