
A direct derivative of a single work is easy to prove by looking at model activations, but input/output similarity is much easier to turn into outrage points. Examining the true internal function would show that no use of the original is required to "distribute" derivative-seeming content, which is rather confusing and is effectively the defense. At these levels, a derivative of a derivative is indistinguishable to the human eye anyway.

Soon people will realize that you can no longer assume that when two pieces of text are similar, it is because of direct plagiarism.



No, you don't only look at the end result when determining whether a work is derivative of another. The process by which the work was produced has implications for whether it is a derivative or not.

For one, if you can show that you didn't use the original copyrighted work, then your work is not a derivative, no matter how similar the end results are.

And if the original work was involved, how it was used and what processes were applied are also relevant.

That's why OpenAI employees who did the scraping first-hand are valuable witnesses to those who are suing OpenAI.

Legal processes proceed in a way that is often counter-intuitive to technologists. IMHO you'd gain a better perspective if you actually tried to understand it rather than confidently assuming that what you already know from tech-land applies to law.


Few things frustrate me more than so many developers’ compulsion to baselessly assume that their incredible dev ultrabrain affords them this pan-topic expertise deep enough to dismiss other fields’ experts based on a few a priori thought experiments.



