There's zero overlap between banning social media for kids and banning news from Rupert.
P.S. that sovereignty issue is not likely to be acted on because there are always a lot of people who prefer foreign influence to domestic opposition! Just ask the Roman Empire.
I get that. But I think the AI-deriders are a bit nuts sometimes because while I’m not running around crying about AGI… it’s really damn nice to change the arguments of a function and have it just go everywhere and adjust every invocation of that function to work properly. Something that might take me 10-30 minutes is now seconds and it’s not outside of its reliability spectrum.
We build headsets that let you control your computer directly with your mind. Initially I expect we can get increased bandwidth / efficiency on common tasks (including coding) - but I think it gets really exciting when people start designing new software / interaction paradigms with this in mind.
If you invest $100B and get back $40B in sales, you're investing $60B of money and $40B of your products. This is simple stuff. The question is whether or not it is a good investment. Probably not.
The key difference between plagiarism and building on someone's work is whether you say, "this is based on code by linsey at github.com/socialnorms" or "here, let me write that for you."
but as mlinsey suggests, what if it's influenced in small, indirect ways by 1000 different people, kind of like the way every 'original' idea from trained professionals is? There's a spectrum, and it's inaccurate to claim that Claude's responses are comparable to adapting one individual's work for another use case - that's not how LLMs operate on open-ended tasks, although they can be instructed to do that and produce reasonable-looking output.
Programmers are not expected to add an addendum to every file listing all the books, articles, and conversations they've had that have influenced the particular code solution. LLMs are trained on far more sources that influence their code suggestions, but it seems like we actually want a higher standard of attribution because they (arguably) are incapable of original thought.
It's not uncommon, in a well-written code base, to see documentation on different functions or algorithms with where they came from.
This isn't just giving credit; it's valuable documentation.
If you're later looking at this function and find a bug or want to modify it, the original source might not have the bug, might have already fixed it, or might have additional functionality that is useful when you copy it to a third location that wasn't necessary in the first copy.
If the problem you ask it to solve has only one or a few examples, or if there are many cases of people copy pasting the solution, LLMs can and will produce code that would be called plagiarism if a human did it.
Do you have a source for that being the key difference? Where did you learn your words? I don’t see the names of your teachers cited here. The English language has existed a while, so why aren’t you giving a citation every time you use a word that already exists in a lexicon somewhere? We have a name for people who don’t coin their own words for everything and rip off the words that others painstakingly evolved over millennia of history. Find your own graphemes.
What a profoundly bad faith argument. We all understand that singular words are public domain, they belong to everyone. Yet when you arrange them in a specific pattern, of which there are infinite possibilities, you create something unique. When someone copies that arrangement wholesale and claims they were the first, that’s what we refer to as plagiarism.
It’s not a bad faith argument. It’s an attempt to shake thinking that is profoundly stuck by taking that thinking to an absurd extreme. Until that’s done, quite a few people aren’t able to see past the assumptions they don’t know they’re making. And by quite a few people I mean everyone, at different times. A strong appreciation for the absurd will keep a person’s thinking much sharper.
>> The key difference between plagiarism and building on someone's work is whether you say, "this is based on code by linsey at github.com/socialnorms" or "here, let me write that for you."
> [i want to] shake thinking that is profoundly stuck [because they] aren’t able to see past the assumptions they don’t know they’re making
what is profoundly stuck, and what are the assumptions?
Copyright isn't some axiom, but to quote wikipedia: "Copyright laws allow products of creative human activities, such as literary and artistic production, to be preferentially exploited and thus incentivized."
It's a tool to incentivize human creative expression.
Thus it's entirely sensible to consider and treat the output from computers and humans differently.
Especially when you consider the large differences between computers and humans, such as how trivial it is to create perfect duplicates of what a computer has been trained to produce.
It’s tiresome to see unexamined assumptions and self-contradictions tossed out by a community that can, and often does, do much better. Some light absurdism often goes further, and it makes clear that I’m not just trying to set up a strawman, since I’ve already gone and made a parody of my own point.