> I dislike the GPL, and the stigmatization of proprietary software for any reason, and the notion that it's inherently evil and unethical.
A flaw in OP's assertion is that I feel that most of what we have today wouldn't exist if it weren't for the GPL licensing.
Edit: Despite RMS's unique bent on things, he hasn't been wholly wrong about the nature of free/nonfree software. If the Linux kernel, or the GNU software suite hadn't been GPL'd in the first place - we wouldn't be using it today. Instead, it would have died out or we'd be using some proprietary fork. I think, in the long term, GPL software benefits everyone because we can fork and share modifications. It fosters a mindset and community that's beneficial to a hacker culture (as in tinkerer, not as in black).
> A flaw in OP's assertion is that I feel that most of what we have today wouldn't exist if it weren't for the GPL licensing.
That is pure RMS rhetoric. The narrative that free software exists because of the GPL is a compelling one (and sounds plausible), but it's not based in reality. It was quite common at the end of the 70's and the beginning of the 80's (especially after the creation of the C language) to distribute software in source form instead of as binary packages. Some early examples of this that still exist today are the BSD utilities (granted, now without the Berkeley clause), which undermines the notion that the GPL was determinant in preserving OSS as an ecosystem.
The "trigger story" about not having access to a printer driver source code is also often simplified. The printer was a prototype of a commercial product, and he was offered the source under an NDA. More details on this in https://www.oreilly.com/openbook/freedom/ch01.html
> If the Linux kernel, or the GNU software suite hadn't been GPL'd in the first place
While there is no arguing that the GPL is at least in part responsible for the success of Linux, it wasn't GPL until version 0.99. The major reason you use Linux today is the "unix wars" (https://en.wikipedia.org/wiki/Unix_wars) - the lack of a free i386 unix system is what drove Linus to write one in the first place. If AT&T had been less belligerent at the time, you'd probably be running some form of BSD :)
You mean macOS?
> The relevant parts of macOS for this discussion are actually open source (kernel & most userland utilities), albeit not updated often;
> Playstation 3/4?
Does the fact that product XYZ is built on BSD reduce or affect the continuity of the open source project? No, it doesn't. And that is the beauty of BSD. Some companies build closed source products on it, some companies actually give back contributions (such as Netflix). It's fine either way.
> Proprietary, non-free form of BSD, yes.
Well, Microsoft used BSD code to some extent in previous versions of Windows (afaik the TCP stack, Services for UNIX, the ftp client, etc). On the other hand, I do use Android-based devices - which are, for practical purposes, mostly a generic non-free form of Linux. In fact, not only is the source code (when you can even get it easily) almost useless in practice, but often the boot code and drivers are proprietary signed blobs - without which the device is just a fancy brick. I'd also argue there is little value brought into the Linux project just because it is a core Android component. How is this scenario different from a "non-free form of BSD"?
> Does the fact that product XYZ is built on BSD reduce or affect the continuity of the open source project? No, it doesn't. And that is the beauty of BSD. Some companies build closed source products on it, some companies actually give back contributions (such as Netflix). It's fine either way.
The irony of the GPL is that it does not require giving back. The only requirement is to provide source code when you do binary distribution. There is no provision about giving back to the upstream developers.
Forced code dumps are not giving back, and projects like Linux do not consider them to be a form of giving back. Sending useful patches and responding to maintainer feedback would be giving back. The GPL is no better at obtaining that than a 2-clause BSD license.
> You mean macOS? Or Playstation 3/4? Proprietary, non-free form of BSD, yes.
As someone who has been dedicated to OSS since I started writing software, and who also wrote some proprietary code for the health care industry (not open sourced mainly because there was no point: it would have been a code dump, useless to anyone, since hardware patents were involved), I have this feeling:
I would rather good code be reused in proprietary products, because someone is going to put that code into systems where failures kill people, and it is better that they reuse good code to prevent deaths. It is ridiculous to embrace a licensing philosophy in which people dying because of your decision to deny good code to those writing proprietary software is acceptable. The OSS versus proprietary debate should not be conducted in a manner that causes fatalities.
You use unfree Linux too... for example, Google. What people seem to forget: under the GPL you don't need to give your code back if you DON'T redistribute it.
And that's why Google forbids the use of AGPL software internally.
It would be neat if GPL advocates only complained about proprietary software, but some of them are hellbent against other open source software. They bash BSD-style licensing and treat anything not under what are regarded as GPL-compatible OSS licenses even worse.
I have done a significant amount of work in OpenZFS, and the hostility from certain GPL proponents has been ridiculous. Meanwhile, the way that Linux's kernel module interface interacts with the GPL means there is no problem using ZFS with Linux, since ports from other systems in the form of kernel modules are not derived works. This has passed legal review by actual lawyers, one of whom is a co-author of the GPLv3:
The view that the GPL is what enabled OSS to develop is also ridiculous. The original BSD license, for example, predates the GPL. If anything, OSS developed in spite of the GPL, since controversies surrounding the GPL have been the single most counterproductive aspect of the OSS community. It is a wonder developers have any time to develop software given all of the headaches caused by GPL supremacists.
If the GPL had never existed, the community would have avoided GPL supremacists and we would all be happily using BSD/Apache/Mozilla licensed software.
The GPL consumed all the (OSS developer) oxygen in the room in a number of useful spaces (gcc, for example), where it was far too expensive for far too long to start a non-GPL’d replacement.
This is still the case for things like GPG, where we have zero integration with mainstream e-mail clients, and zero penetration into the “normie” consumer market.
The GPL creates an inescapable sub-critical mass. You can almost never grow beyond the “hacker” market of other GPL software developers, because the people who cannot code — but can spend money — have no reasonable way to pay for development.
The GPL is a cancer on open source that has hobbled OSS (and the general progress of the industry!) at almost every juncture.
Imagine if BSD and Linux developers could cross-pollinate code, for example. How much more vibrant would OSS be for the interaction of those different branches of the UNIX tree?
GPL OSS that has succeeded has done so despite the GPL, not because of it. The GPL is viral, coercive, and captures and co-opts the value of OSS to further the aims of the GPL, rather than the OSS development itself.
The code quality of GCC leaves much to be desired. Their mailing list has some figures from Coverity: in 2017 the defect rate was 1.67 defects per thousand lines of code for GCC's 2.5 million lines, versus 0.62 defects per thousand lines for LLVM's 5.1 million lines:
At present, GCC is at 1.60 defects and LLVM is at 0.76. LLVM was at 0.32 in July, but it jumped to 0.76 in August around the time it would have branched to start LLVM 16 development and has stayed there since then.
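To put those 2017 rates in absolute terms, here is some back-of-the-envelope shell arithmetic on the figures quoted above (nothing beyond multiplying rate by size):

```sh
# defects ≈ (defects per KLOC) × KLOC
echo "1.67 * 2500" | bc   # GCC:  ~4175 total defects in 2.5 MLOC
echo "0.62 * 5100" | bc   # LLVM: ~3162 total defects in 5.1 MLOC
```

So by Coverity's own numbers, GCC carried more total defects in roughly half as much code.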
GCC has a policy of hiding its Coverity data, contrary to the spirit of open source, so I have not been able to see its historical data. I could fork it on GitHub and submit my own builds to Coverity to see what the historical data likely was, but I do not see much reason to do that beyond knowing the current state is a mess.
Anyway, I felt like expanding on your GCC remark, since it is a pile of bad code that is definitely holding back OSS and the industry.
> Imagine if BSD and Linux developers could cross-pollinate code
Nothing in the GPL prevents you from making your own derivative. Anyone can take that derivative and make their own. What it prevents is forking and closing the source of the software. We wouldn't have code to cross-pollinate at this stage if it were not for the GPL.
> Of course we’d have code to cross-pollinate; an entire ecosystem of liberally licensed code exists.
> The GPL absolutely prevents you from making your own derivative unless you adopt the same coercive, viral license.
And what incentive do I have to allow everyone to create closed derivatives of my code and use them to take freedom away from their users (most likely including me)?
If they love closed code so much, they are free to hire as many developers as they like to write it for them. But that costs money, while taking someone else's code is cheap. I don't want to help people who want to enrich themselves on someone else's work without giving anything back.
It was literally decades before LLVM and Clang became a reality, and even longer before they could supplant the GNU toolchain for the most popular architectures.
GCC’s dominance held back new languages, and language front-end development, for decades.
That's a 13 year gap, 16 if you count first release rather than the start of work.
That is not, to quote you, "literally decades". It is significantly less than 2 decades.
Secondly, there was lots of language development between 1987 and 2000 -- for convenience, we can say the 1990s.
That is the era when Python ('91), Ruby ('95), Java ('96) and JavaScript ('96), among many others, first appeared. Perl is contemporaneous with GCC.
It is not true to say that GCC stifled development. What is true is that this is the era when a lot of interpreted and VM/bytecode/runtime-based languages originated, including both the JVM and .NET CLR.
The focus of FOSS language development was arguably elsewhere, maybe _because_ GCC evolved to be a serious competent professional tool in that timeframe.
Additionally, there was a lot of R&D in compiled languages in that decade or so, including Delphi, VB6, the first native C++ compiler (from Zortech), and so on.
It took time for FOSS code to catch up and surpass proprietary compilers, yes, but with one good compiler, there was less pressure to do so, and so people worked on scripting languages instead.
It is a buggy compiler and it is a wonder that it works as well as it does. It is no surprise that LLVM is better.
As for people working on scripting languages instead, it is more that they were forced to work on scripting languages because the GCC codebase was designed to be incomprehensible to all but a select few. This is obvious if you try reading GCC. If you have not, I highly recommend that you pick some random files and begin reading. I can read random parts of the Linux kernel and generally follow along what is being done. I can do the same with LLVM to some extent, which I know since I have patched it locally in the past. I cannot do that with GCC. It is a mess. :/
I personally have no particular feelings either way about GCC; I don't write code any more, and when I did, I barely used compilers or xNix to do it.
All I am doing is responding to @catiopatio's comment: that apparently it was good _enough_ to inhibit competitors for a decade and a half.
The economics of FOSS are not yet well understood (and FWIW, I think @catiopatio is totally wrong about the GPL). There were lots and lots of compilers in the proprietary space, and aggressive competition was the norm.
The headhunting of Anders Hejlsberg from Borland to Microsoft shifted the entire competitive market of programming languages for _decades_, but FOSS people don't tend to notice such things.
(He developed Delphi. Without him, Borland floundered and died. With him, MS grafted a compiler onto VB, making a vastly successful product, and then abandoned it, and came up with .NET and C# instead, muddying the waters in ways that still shape the industry.)
I don't get the argument for the separation between /usr/bin and /usr/local that FreeBSD makes: why should there be a separation of concerns between the core system and the rest of the packages?
In Linux distributions everything is installed by the package manager, and while packages may come from different repositories (for example, in Arch Linux you have core and extra), the packages themselves are exactly the same, and the package manager maintains a database of all installed files. Do you want to remove all the "non-core" stuff? You can easily do that with a command.
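For instance, on Arch Linux something like this would do it (a rough sketch using pacman's standard query flags; run it in bash, and review the list before removing anything):

```sh
# List installed packages that are NOT in the core repository
comm -23 <(pacman -Qq | sort) <(pacman -Slq core | sort)

# Remove dependencies that nothing requires anymore (orphans)
pacman -Rns $(pacman -Qdtq)
```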
Meanwhile /usr/local is reserved for stuff that is not installed by the distro's package manager and thus is not registered in its database. This includes, for example, binaries and libraries installed by other package managers such as pip or npm, or things you install with `make install`, which are not upgraded automatically.
It generates a lot of confusion: for example, some files are in /etc, some others are in /usr/local/etc, some files are in both locations and you have to know which is the correct one. And who decides what is core and what is extra anyway? There is no clear, distinct definition. For example, do you define as core only the software necessary to boot? Do you include system utilities? Compilers? For which language? It's a mess!
Sounds like OP's assertion is that with 3rd-party packages installing into /usr/bin, things are mingled to an extent that upgrades (even with a package manager) cannot be cleanly completed, and (perhaps?) disk partitioning can't be done effectively (e.g. in the case where your 3P app is an absolute disk hog).
I personally accept the second claim more than the first. UNIX used to have stronger standards here (e.g. /opt, /opt/etc, /usr/local, and so on) but I'm not sure Linux ever followed them. And disk partitioning seems to have gone the way of the dodo these days.
If everything is managed by the package manager, there is a database of the files that are installed, and assuming the package manager works correctly it shouldn't create any problems. I use installations made 10 years ago and they are perfectly fine!
It's different for software that is not installed by the package manager, which on Linux typically goes into /usr/local (for stuff that is compiled locally, or installed by another package manager such as pip or npm) or into /opt (for proprietary applications distributed as binaries).
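A quick way to see that distinction in practice (a sketch on an Arch-style system; `mytool` is just a hypothetical locally-installed binary):

```sh
# A file installed by the package manager is tracked in its database
pacman -Qo /usr/bin/grep
# -> /usr/bin/grep is owned by grep x.y-z

# A file dropped in by `make install` or pip is invisible to it
pacman -Qo /usr/local/bin/mytool
# -> error: No package owns /usr/local/bin/mytool
```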
So, in FreeBSD, the distinction is between base and ports/pkg, plus whatever else you install directly from the outside world (if anything). In theory some vendor packages might install to /opt, but it's been a while since I saw that actually happen.
>It generates a lot of confusion: for example, some files are in /etc, some others are in /usr/local/etc, some files are in both locations and you have to know which is the correct one.
On FreeBSD, this is "simple": if it comes in the base install, configuration files should be in /etc; if you had to install it, configuration files should be in /usr/local/etc. Now, with regard to /etc/rc.conf, there are a lot of options for where to put additional configuration, and the combined configuration controls rc scripts run from both /usr/local and /opt. So that part is not totally clear.
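As a sketch of how that plays out (the service names here are only examples):

```sh
# /etc/rc.conf - one file can enable services from both worlds
sshd_enable="YES"    # sshd ships in base; its rc script is /etc/rc.d/sshd
nginx_enable="YES"   # nginx comes from ports; its rc script is /usr/local/etc/rc.d/nginx
```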
> And who decides that is core and what is extra anyway?
Well, the FreeBSD team does.
> There is no clear, distinct definition. For example, do you define as core only the software necessary to boot? Do you include system utilities? Compilers? For which language? It's a mess!
I'm not sure there's a clear definition written down anywhere, or frankly whether a definition could be made that is specific enough to be accurate while being general enough to adapt as times change. In a nutshell, IMHO, FreeBSD base is enough software to be a somewhat useful system that can replicate itself, compile itself, and expand to fit the needs of the user. Not very many users will be satisfied with just base, but if your needs are small, maybe.
Practically, FreeBSD base includes basic connectivity tools (ppp, a dhcp/bootp client, a bootp server, a tftp client/server, an ftp client/server, an http client (fetch), an ssh client/server, an nfs client/server), a mail system, an editor, three shells, basic system utilities (cal, grep, diff, etc.), at least one pager (I don't think it always included less, but it does now), awk, a C/C++ compiler, BSD make, a caching recursive resolver (unbound now; it used to be BIND), tools to configure the kernel, and tools to do updates, install packages/ports, and so on.
No perl, no python, no bash, no gui, no nothing. If you want any of that stuff, you've got to install it yourself.
The benefit of this separation is, again IMHO, that the base system is small and compact and doesn't need to change very often. The basics you need to operate the computer are all there, and they don't interfere (much) with the applications you probably actually want to install. With some exceptions, if your main application fits in the domain of something in base, you are likely to want a more feature-filled option from elsewhere anyway: if you run an ftp server, you probably want more features; if you run a bootp server, you probably actually want a dhcp server; some people like sendmail, but a lot of other people prefer postfix (and sendmail in base might be replaced with something much simpler sometime soon, so if you want sendmail, you'll probably want it from outside base as well). If you want a webserver, that will come from outside of base too, and you'll probably want a language runtime to go with it.
But if you need or want to upgrade the kernel and its accompanying software, you can upgrade base without disturbing your applications in /usr/local (depending on versions and how you upgrade, you may need to install a compat package to keep applications compiled for older versions of FreeBSD running, and you may want to rebuild them for the new version at some point).
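In practice that looks something like the sketch below (these are the standard tools; the compat package name is just an example, pick the one matching the release you came from):

```sh
# Upgrade base (kernel + world) without touching anything in /usr/local
freebsd-update fetch
freebsd-update install

# Third-party packages are upgraded separately, on their own schedule
pkg upgrade

# Optional: keep binaries built for an older release working after a major jump
pkg install misc/compat12x   # example compat package
```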
To me all of this is a mess that makes administering a system more complex.
Personally, I use Arch Linux, and all I have to do to update everything on my system is launch `pacman -Syu`. I can install extra software from the AUR, but that ends up managed by the same package manager and thus installed into /usr/bin.
For work I use mainly Debian-based systems, that is, Debian or Ubuntu; a similar thing happens there: you update everything with `apt upgrade` and don't worry about the rest.
I'm also of the philosophy that it is always better to update to the latest version available; that is, I don't fear upgrading my system (though on critical systems I install an OS such as Debian stable or Ubuntu LTS that only provides bug fixes, not new features).
While it's true that with the FreeBSD approach you can update one piece of software without updating the core of the operating system, why would you want to? Updates usually fix security problems; you typically want all your software at the latest version, unless we're talking about some specific edge cases.
To me it resembles the distinction Linux once had between /bin, /sbin, /usr/bin and /usr/sbin: in modern distributions (such as Arch Linux, Fedora/Red Hat, etc.) everything is a symlink to /usr/bin because this distinction doesn't make much sense anymore. Only a few distros such as Debian keep it, as far as I know (probably only for legacy reasons).
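You can see that merge directly on such a system (output abridged; this is roughly what a current Arch install shows):

```sh
$ ls -ld /bin /sbin /usr/sbin
lrwxrwxrwx ... /bin -> usr/bin
lrwxrwxrwx ... /sbin -> usr/bin
lrwxrwxrwx ... /usr/sbin -> bin
```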
> No perl, no python, no bash, no gui, no nothing. If you want any of that stuff, you've got to install it yourself.
This is a really important point, and an argument in favor of BSD's base system model. Watching various Linux distros try and try for years to remove Perl and Python 2 from their base installs, only to fail because all of their system management tools are written in a mishmash of the two, speaks volumes about the usefulness of confining all your core system bits to one scripting language.
Over the past four years, I have come to avoid using Linux for anything not needing Docker or not run as a supported image in the cloud.
The divergent distributions and design philosophies are nice when they produce positive change (systemd is just fine as an init). But more often than not, they produce four different, equally bad ways to configure an ethernet device, still no real third-party installation mechanism, and deprecation of tools because someone at Red Hat needed an epic.
>There was a time when I was younger when I really liked the GPL (v2), but as I got older I realized that even though I am a strong supporter of Open Source (...), and I strive to use only Open Source components, I've always been perfectly fine with some applications that are proprietary.
Whenever I hear someone start to reason with "as I got older", I interpret it as: I've gone back to the fridge more than a dozen times by now so I've lowered my standards far enough to not care anymore.
It's funny, as I've gone the other direction. In particular, watching all the companies that have gone from permissive to source-available licensing, citing their need for a reward while ignoring that they themselves were beneficiaries of permissive licenses, has soured me on licensing any of my own stuff permissively, and it has left me considering the GPL or AGPL specifically so such companies would have to drop any of my libraries to make such a move.
Generally I agree, although in the case of the GPL there's a lot to criticize, especially about the whole GNU project, in my opinion. It's hard to identify with it these days, while at the same time seeing how many projects with less restrictive licenses flourish. Also, working at a day job with GPL software isn't exactly inspiring.
I think that the GPL philosophy is correct: if you modify open source software, you have to release the work you did back to the community. Without it, Linux wouldn't be what it is today; rather, there would be a ton of proprietary Linux variations, or at least a ton of proprietary Linux kernel modules, that the user could not modify or inspect.
As for FreeBSD, the reason it is not as successful as Linux is... the fact that it is not GPL-licensed. Multiple companies (including Apple for macOS, Sony for the PlayStation OS, and many others) use FreeBSD in their products, but since the license allows it, they do not release their improvements to the public. Meaning the project evolves very slowly, or doesn't evolve at all.
Doesn't the fact that Sony and Apple put FreeBSD derivatives into more end users' hands than Linux ever reached (depending on how you view Android) count as being successful?
Sony and Apple don't benefit from each other's changes, and neither do users of FreeBSD.
I like that you brought up Android, because the experience in shipping kernels to billions of devices has led to numerous upstream improvements, as well as an ecosystem of devices that support other Linux-based operating systems. Sony and Apple don't enable this by forking FreeBSD.
It used to be even more comprehensive, which enabled projects like OpenDarwin to exist, but yes upstream can benefit to at least some degree.
And this happens even though there is no copyleft forcing them. Remember that if you retain your own fork, you now also own the maintenance, including aligning your proprietary changes with incoming upstream stuff.
Well, if you count Linux devices... you probably have 10 or more in your home. Just count the number of embedded devices that have Linux inside: routers, smart TVs, set-top boxes, NAS boxes, IP security cameras, VOIP phones... hell, these days you probably have Linux even in washing machines!
Neither the culture of some particular working group nor the well-known inherent challenges of GPL licensing itself - many of which other licensing models certainly share - are legitimate arguments against the method itself.
That line of reasoning is like supporting fossil fuels because they work, are well understood, employ a lot of people, and run the global economy, while the future of green energy is still uncertain, a lot of the tech is not even invented, and no infrastructure is established.
"I dislike the GPL, and the stigmatization of proprietary software for any reason, and the notion that it's inherently evil and unethical."
Wow, that is a wildly extreme characterization of the GPL and the people who support it. The GPL uses laws to restrict how one's work product is distributed and used, just like proprietary software does. WTF?
> With proprietary software, there is always some entity, the developer or “owner” of the program, that controls the program—and through it, exercises power over its users. A nonfree program is a yoke, an instrument of unjust power.
[2]:
> To release a nonfree program is always ethically tainted
I've read enough passages like these from GNU in the past to say that your quote is not a mischaracterisation.
Yes. Even though RMS is the founder of the movement, I think it is fair to say most people who contribute GPL code have a more pragmatic view. Linus Torvalds springs to mind, and he is the 'owner' of the particular OS the article's author was writing about. I would argue the author has a similarly silly attitude, namely that the GPL is so bad he won't even taint his machine with it.
I think it's valid not to want to use software if you think that using it will contribute to a philosophy you don't like. And it is true that many people choose the GPL because they didn't bother to do any research and think it's just a safe default, but enough people do choose it because they believe that proprietary software is evil or morally wrong in some way.
Personally I'd like to see BSD become more of a serious contender for the user-friendly FOSS desktop, able to use proprietary blobs of code without legal factors to consider.