A WASM sandbox doesn't do much to guarantee the soundness of your program. It can hose its memory all it wants; it just can only do so within the confines of the sandbox.
Using a sandbox also limits what you can do with a system. With stuff like SECCOMP you have to methodically define policies for all of its interactions, like you're dealing with two systems. It's very bureaucratic, and the reason we do it is that we don't trust our programs to behave.
With Fil-C you get a different approach. The language and runtime offer a stronger assurance that your program can only behave, so you can trust it more to have unfettered access to the actual system. You also have the choice of using Fil-C with a sandbox like SECCOMP as described in the blog post, since your Fil-C binaries are just normal executables that can access powerful Linux APIs like prctl. It took Linux twenty years to invent that interface, so you'll probably have to wait ten years to get something comparable from WASI.
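To make the bureaucracy concrete, here's a minimal sketch of the kind of allow-list policy a SECCOMP sandbox demands, using libseccomp plus the prctl no-new-privs knob. The syscall set is purely illustrative, not anyone's real policy; the point is that every interaction has to be enumerated up front:

```c
/* Hedged sketch of an allow-list seccomp policy via libseccomp.
 * The syscall set below is illustrative only; a real program needs
 * every syscall it will ever make listed here. Link with -lseccomp. */
#include <seccomp.h>
#include <sys/prctl.h>
#include <unistd.h>

int main(void) {
    /* The prctl interface mentioned above: never gain privileges again. */
    prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0);

    /* Default action: kill the process on any syscall not explicitly allowed. */
    scmp_filter_ctx ctx = seccomp_init(SCMP_ACT_KILL);
    if (ctx == NULL)
        return 1;

    /* The bureaucratic part: enumerate every allowed kernel interaction. */
    seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(read), 0);
    seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(write), 0);
    seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(exit_group), 0);

    seccomp_load(ctx);      /* the policy is now in force for this process */
    seccomp_release(ctx);

    const char msg[] = "write is still allowed\n";
    write(STDOUT_FILENO, msg, sizeof msg - 1);
    return 0;               /* exits via exit_group; anything not listed is fatal */
}
```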
> It can hose your memory all it wants, it can just only do so within the confines of the sandbox.
True, although as I understand it the WASI component model at least allows multiple fine-grained sandboxes, so it's somewhere in-between per-object capabilities and one big sandbox for your entire program. I haven't actually used it yet so I might be wrong about that.
> so you'll probably have to wait ten years to get something comparable from WASI
I think for many WASI use cases the capability control would be done by the host program itself, so you don't need OS-level support for it. E.g. with Wasmtime I do the capability granting in the host when I build the WASI context, along the lines of the sketch below.
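A rough, assumption-laden sketch with the Wasmtime C API (names follow wasm.h/wasi.h/wasmtime.h; exact signatures vary by release, and the directory-preopen call in particular has changed between versions, so it's left as a comment):

```c
/* Hedged sketch: the host, not the OS, decides what the guest may touch.
 * Based on the Wasmtime C API; exact signatures vary by release. */
#include <wasm.h>
#include <wasmtime.h>

int main(void) {
    wasm_engine_t *engine = wasm_engine_new();
    wasmtime_store_t *store = wasmtime_store_new(engine, NULL, NULL);
    wasmtime_context_t *context = wasmtime_store_context(store);

    /* The guest gets only what the host grants: stdio here, nothing else.
     * No filesystem, no network, beyond what the host wires up. */
    wasi_config_t *wasi = wasi_config_new();
    wasi_config_inherit_stdout(wasi);
    wasi_config_inherit_stderr(wasi);
    /* To grant one directory you'd preopen it here, e.g. something like
     * wasi_config_preopen_dir(wasi, "./data", "/data", ...); the exact
     * argument list depends on the Wasmtime version. */

    wasmtime_error_t *err = wasmtime_context_set_wasi(context, wasi);
    if (err != NULL) {
        wasmtime_error_delete(err);
        return 1;
    }

    /* ...load, link, and run the guest module via wasmtime_linker_* here... */

    wasmtime_store_delete(store);
    wasm_engine_delete(engine);
    return 0;
}
```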
WASI is basically CORBA, DCOM, and PDO for newer generations.
Or, if you prefer the bytecode-based evolution of them, RMI and .NET Remoting.
I don't see it going that far.
The WebAssembly development experience in the browser mostly still sucks, especially the debugging part, and on the server it is yet another bytecode.
Finally, there is hardly any benefit over OS processes, talking over JSON-RPC (aka how REST gets mostly used), GraphQL, gRPC, or plain traditional OS IPC.
WASM sacrifices guest security & performance in order to provide mediocre host sandboxing, though. It might be a useful tradeoff sometimes, but proper process-based sandboxing is so much stronger and lets the guest also have full security & performance.
How is process-based sandboxing stronger? Also the performance penalty is not only due to sandboxing (I doubt it's even mostly due to it). Likely more significant is the portability.
Because the guarantees themselves are stronger. Process isolation is something we have decades of experience with; it goes wrong every now and then, but those are rare instances, whereas what amounts to application-level isolation is much weaker in terms of both the guarantees it provides and the maturity level of the code. That suggests that if you base your isolation scheme on processes rather than 'just' sandboxing you will come out ahead, and even with all other things the same you'd have one more layer in your stack of swiss cheese slices. A VM would offer another layer of protection on top of that, one with yet stronger guarantees.
Process-based sandboxing has hardware support and thus stronger defenses against things like Spectre. So far, every CPU on the market has elected to address Spectre only as it relates to crossing ring or process boundaries; nobody has added hardware Spectre defenses for in-process sandboxing. Also, process-based sandboxing allows the guest to keep a full suite of security protections like ASLR. If you are doing sandboxing for defense in depth, reducing the security of what's inside the guest in turn reduces the security of your entire chain.
And I didn't say the performance penalty was because of sandboxing (although in the case of WASM there is a cost, since it's doing software enforcement of things that otherwise come for free in hardware), just that WASM has a performance cost compared to native. If you are using WASM just for sandboxing, you still pay a performance cost for portability you didn't need.
No, because the host can present the same interface on every platform. I do think that WASI is waaay too much "let's just copy POSIX and screw other platforms", but it does work pretty well even on Windows.
You've named half of the weasel security technologies of the last three decades. The nice thing about SECCOMP BPF is it's so difficult to use that you get the comfort of knowing that only a very enlightened person could have written your security policy. Hell is being subjected to restrictions defined by people with less imagination than you.
Wishful thinking: Any possible chance that means you might make a Fil-C APE hybrid? It would neatly address the fact that Fil-C already needs all of its dependencies to also use Fil-C.
It'd probably be such a small amount of money that it'd cost me more to cash the cheque. Lawyers are the only people who get rich over that kind of thing. Don't share any information with the robot that you don't feel comfortable with them using to make their service better. If you want to be fully in control of your interactions with AI then use llamafile which is 100% local. That's the healthy thing to do. Everything else is just rent-seeking and the fact that so many people are doing it is threatening much more important goals than money like transcendence.
This is a much lighter take than mine which is that our behaviors being input into this system will eventually be used to subjugate and control future generations. I like it
I think App Engine was really ahead of its time in showing how simple cloud deployments can be. It had a similar ease of use to setting up a YouTube account. For that reason, a lot of people thought of it as a toy, which was kind of unfair because companies like Niantic were able to build global products on it. So a lot of Google Cloud afterward ended up being designed to be more "normal", like how Amazon is. Now people are seeing what normal gets them, so maybe it's going to be time for the Google way of doing things to finally shine. (Disclaimer: I'm a Google employee)
C and Perl CGI, and later PHP, would like a word ...
(CGI is a webserver starting a binary in response to a web request, essentially piping the request into stdin and sending stdout back to the client; in some ways it's brilliantly simple and very unix-y)
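For the flavor of it, here's roughly what a whole CGI "app" looked like in C (a toy sketch, not any particular historical program):

```c
/* Minimal CGI program in C: the web server sets request metadata in
 * environment variables, pipes any request body to stdin, and sends
 * whatever we print on stdout (headers, blank line, body) to the client. */
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    const char *query = getenv("QUERY_STRING");   /* part of the CGI spec */

    printf("Content-Type: text/plain\r\n\r\n");
    printf("Hello from CGI!\n");
    if (query != NULL && query[0] != '\0')
        printf("Query string: %s\n", query);
    return 0;
}
```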
Oh in terms of coding, definitely other simple options existed. But you still had to go through the work of provisioning a server, deploying your code to it, and then getting a public IP address and a domain name before you could access it. I meant in terms of going from "0 to Cloud", GAE was ahead of its time.
Most ISPs let you host CGI and later PHP files on your account, to encourage people to make websites. A guestbook, for example. For a while this was really popular.
App Engine caused a huge innovator's dilemma for Google when it came to cloud.
It only addressed a set of use cases around web application development. Folks would ask "why should we build a low-margin cloud?" and I'd tell people: "if I can't import numpy, App Engine is useless to me". Eventually, the useful bits of AE were extracted into other services with APIs (Datastore is one example), but it wasn't until Google built a full GCP that they started to see real cloud growth (revenue).
I totally second this. AppEngine was way ahead of its time. I used it for all my projects back then. The only reason I stopped using it for clients was that even internal Google employees had no idea when the plug might be pulled on AppEngine. AppEngine supported Elixir far earlier than anyone else (Elixir is my main go-to stack).
Ah, that's a shame they went that way. What I tell people when they're first getting into GCP is that it's gonna be a pain to set up what you want. But the offerings are great, and once it's working, it'll just work.
It's always been the case with local infrastructure that if you run it yourself, you have to secure it yourself. It's not a vulnerability for local software to do what I tell it to do. Maybe I want to ask an LLM to try to hack into the things on my local network, to make sure nothing is vulnerable. The real vulnerability would be if the LLM does things I didn't ask it to do, like delete my production database. So it always irks me when security work is approached with the viewpoint that I'm the one who's untrustworthy and needs to be controlled rather than the machine. The whole point of tools throughout history has been to give people more power.
Folks have been ringing the alarm bell for a decade: https://www.nongnu.org/lzip/xz_inadequate.html. xz is insane because it appears to be one of the most legitimately dangerous compression formats, with the potential to gigafry your data, yet it is exclusively used by literal turbonormies who unironically want to, like, "shave off a few kilobytes" and basically get oneshotted by it.
Possibly for any number of reasons. A sole maintainer with a bit too little capacity to keep up the development. A central role as a dependency for crucial packages in a couple of key distros.
What would be the connection between the backdoor (or indeed any supply chain security) and any design details of the xz file format? How would the backdoor have been avoided if the archive format were different?
Turbonormies, as you say, tend to use gzip, not xz. Which is sad, because gzip is just as bad for archiving. A few bytes changed and your entire file is lost (in a .tar.gz it means everything is lost).
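If you want to see that fragility for yourself, here's a tiny hedged demo using zlib's one-shot API as a stand-in for the gzip container (same underlying DEFLATE stream; link with -lz). Flip one byte and decompression typically rejects the whole thing:

```c
/* Hedged demo: one flipped byte in a DEFLATE-compressed stream.
 * Uses zlib's one-shot compress()/uncompress() (zlib container rather
 * than gzip, but the stream fragility is the same). Link with -lz. */
#include <stdio.h>
#include <zlib.h>

int main(void) {
    const Bytef src[] = "the quick brown fox jumps over the lazy dog, again and again";
    Bytef packed[256], out[256];
    uLongf packedLen = sizeof packed, outLen = sizeof out;

    compress(packed, &packedLen, src, sizeof src);

    packed[packedLen / 2] ^= 0xFF;   /* corrupt a single byte mid-stream */

    int rc = uncompress(out, &outLen, packed, packedLen);

    /* Typically Z_DATA_ERROR (-3): everything from the damaged byte onward
     * is undecodable, and the trailing checksum rejects the rest anyway. */
    printf("uncompress returned %d (%s)\n", rc, rc == Z_OK ? "ok" : "corrupt");
    return 0;
}
```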
Frankly, tarballs are an embarrassing relic, and it's not the turbonormies that insist they're still fit for purpose. They don't know any better, they'll do what people like you tell them to do.
There's that word "we" again. So you're the crazy gringo who always picks my pocket?
Deflation is only bad for people who hold a lot of debt. For people who are cash positive, deflation means you're richer, you're being paid more to do the same job, etc. all while maintaining your freedom. Deflation actually being good is the central gamble behind bitcoin's design. If more people understood that then they'd probably stop using it for such frivolous purposes. Not everyone is privileged enough to even hold debt, so it's really an exclusionary system. And what do the people who the system trusts to have debt (e.g. private equity firms) do with it? They do leveraged buyouts to rip out the heart and soul of responsible American companies. The only thing inflation is good for is keeping folks running on the hamster wheel and bankrolling entitlements.
> Not everyone is privileged enough to even hold debt, so it's really an exclusionary system
This seems backwards: I think the most salient debt in the average American consumer's life is student loans, car loans, credit card debts, mortgages, etc. These aren't hallmarks of privilege; not having any of them would be the hallmark.
(You might be right about corporate debt, I don't know. But I do think "deflation is only bad for people who hold a lot of debt" does a disservice in suggesting that that isn't a lot of ordinary people.)
I’m pretty sure I’m saying the exact opposite of that. The point was that debt doesn’t map cleanly onto privilege at all.
(Notably, the credit industry has moved on to schemes like BNPL that target individuals who would otherwise be protected from predatory credit by consumer protection laws. Those people are exploited, and unambiguously benefit - albeit not much - from an inflationary rather than deflationary environment.)
I think you missed their point: you are referring to the American middle class, but really, there are people much poorer who couldn't even go to university, or get out of an apartment rental in a lousy neighbourhood, or own a crappy $500 car, or...
Short-term deflation can be beneficial for some, but sustained deflation leads to a downward spiral that harms all. A lower cost of goods is great until it causes reduced aggregate investment and demand, which in turn can lead to wage reductions and layoffs. It's also harder to navigate out of a deflationary environment. Deflation isn't desirable in the long run.
The upthread poster is wrong about deflation (at least about who it is bad/good for; it's not clear at all to me what they think it is). But you are also wrong about what it is.
> Deflation is prices are going up and wages are stagnant or falling.
No, deflation is a decrease in the general price level, not an increase (which is inflation), no matter what else happens at the same time.
You just (approximately) described stagflation (the combination of inflation and economic stagnation/recession), which, like deflation, is generally bad, but is a very different flavor of bad.
It's a foolish idea to short gold on the eve of a currency crisis. Gold went up 1812% the last time this happened. You'll be paying through the nose if you do it with $GLD since it's hard to borrow. You'll get IV crushed if you do it with put options. The smart way to profit off gold's fall from grace is by selling futures each time it hits a new high and then closing your position quickly after the inevitable ~50 point pullback. Markets can be timid. They sometimes price in new information slowly and reluctantly.