This, though I think other posters have pointed to a web app/site that’s backed by SQLite. It can be a perfectly reasonable approach, I think, as the application is the web server and it likely accesses SQLite on the same machine.
After 2 years in production with a small (but write-heavy) web service... it's a mixed bag. It definitely does the job, but not having a DB server has drawbacks as well as benefits. The biggest is the lack of caching of the file/DB in RAM. As a result I have to do my own read caching, which is fine in Rust using the moka caching library, but it's still something you have to do yourself that would otherwise come for free with Postgres.
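For the curious, the read-through pattern is roughly this - a sketch assuming `rusqlite` for the SQLite access; the `users` table, key type and TTL are invented for illustration, not from my actual code:

```rust
use std::sync::Arc;
use std::time::Duration;

use moka::sync::Cache;
use rusqlite::Connection;

fn main() -> rusqlite::Result<()> {
    // In-process read cache: bounded size, entries expire after 5 minutes.
    let cache: Cache<i64, Arc<String>> = Cache::builder()
        .max_capacity(10_000)
        .time_to_live(Duration::from_secs(300))
        .build();

    let conn = Connection::open("app.db")?;

    let user_id: i64 = 42;
    // get_with runs the closure only on a cache miss; concurrent callers
    // asking for the same key wait on the single in-flight load.
    let name = cache.get_with(user_id, || {
        let n: String = conn
            .query_row("SELECT name FROM users WHERE id = ?1", [user_id], |row| {
                row.get(0)
            })
            .unwrap_or_default();
        Arc::new(n)
    });
    println!("{name}");
    Ok(())
}
```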
This of course also makes it impossible to share the cache between instances; doing so would require employing redis/memcached, at which point it would be better to just use Postgres.
It has been OK so far, but I will definitely have to migrate to Postgres at some point, sooner rather than later.
How would caching on the DB layer help with your web service?
In my experience, caching makes the most sense at the CDN layer, which caches not only the DB requests but also the result of the rendering and everything else. So most requests don't even hit your server, and those that do need fresh data anyhow.
As I said, my app is write-heavy. There are several separate processes that constantly write to the database, but often, before writing, they need to read in order to decide what/where to write. Currently each of them needs its own read cache so as not to clog the database.
The "web service" is only the user facing part which bears the least load. Read caching is useful there too as users look at statistics, so calculating them once every 5-10 minutes and caching them is needed, as that requires scanning the whole database.
A CDN is something I don't even have; it's not needed for the number of users I have.
If I were using Postgres, these writer processes and the web service would share the same read cache for free (coming from Postgres itself). The difference wouldn't be huge if I migrated right now, but by now I already have the custom caching.
It depends very much on the language/server you are using. In Rust, IMO GraphQL is still the best, easiest and fastest way to have my Rust types propagated to the frontend(s) while making sure that I have strict and maintainable contracts throughout the whole system. This is achieved via the `async_graphql` crate, which lets you define/generate the GraphQL schema in code by implementing the field handlers.
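A minimal sketch of what that looks like (the `User` type and resolver here are invented for illustration, not from a real project):

```rust
use async_graphql::{EmptyMutation, EmptySubscription, Object, Schema, SimpleObject};

// A plain Rust type exposed as a GraphQL object.
#[derive(SimpleObject)]
struct User {
    id: i32,
    name: String,
}

struct Query;

#[Object]
impl Query {
    // Each method becomes a field on the Query type;
    // the GraphQL schema is derived from the Rust code.
    async fn user(&self, id: i32) -> User {
        User { id, name: format!("user-{id}") }
    }
}

fn main() {
    let schema = Schema::build(Query, EmptyMutation, EmptySubscription).finish();
    // Export the SDL for the clients instead of writing it by hand.
    println!("{}", schema.sdl());
}
```

The SDL exported by `schema.sdl()` can then be fed to the frontend's codegen, so the contract lives in exactly one place.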
If you are using something which requires you to write the GraphQL schema manually and then adapt both the server and the client... it's a completely different experience, and not a pleasant one at all.
The answer to the final question in the article is: "Mostly, because the government forbids us from solving this problem". Start a company? That will be 2k for incorporation and 3k per year for auditing/bookkeeping. You want to build something physical in Europe? What a bummer, come here and pay the exorbitant carbon tax. You still think you are very smart? You will pay a 70% effective tax rate if you want to hire someone else to help you.
I have a Rust web project written ~5 years ago. Zero updates until a month ago, and also zero problems. A month ago I upgrade the Dockerfile to the latest Rust: zero problems compiling. Could have left it to run like this for 5 more years, probably, but I decided to experiment...
I issue a `cargo update` to upgrade the leaf dependencies and do the automatic minor version updates. Then `cargo check`. Some new warnings from `clippy`, but it still compiles and runs without problems. Could have deployed it for 5 more years, but I decided to experiment more...
I upgrade some of the libraries to new major versions - I am experienced and I know which ones will upgrade without problems. They do upgrade without problems. Could have deployed for 5 more years, but I decided to walk the extra mile...
I upgrade the more problematic ones, especially `actix_web`, the web framework, which had a massive rewrite and a huge new release with an almost completely different API surface... It's a bit difficult to understand the changes, especially in some parts of the old code written for the old version (which I no longer remember), but in an hour I'm done. Afterwards `cargo outdated` reports no outdated libraries. I deploy for the next 5 years. Zero problems since then.
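For reference, the post-rewrite `actix_web` (4.x) idiom looks roughly like this - a sketch of the current API surface, not the project's actual code:

```rust
use actix_web::{get, App, HttpResponse, HttpServer, Responder};

// Route handlers are async functions annotated with method/path macros.
#[get("/health")]
async fn health() -> impl Responder {
    HttpResponse::Ok().body("ok")
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    // The App factory closure is cloned per worker thread.
    HttpServer::new(|| App::new().service(health))
        .bind(("127.0.0.1", 8080))?
        .run()
        .await
}
```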
Well, it's not decades yet, but I can imagine similar effort to maintain it over the next decade.
I migrated from OCaml to Rust around 2020 and haven't looked back. Rust is quite a lot less elegant and has some unpleasant deficiencies (lambdas, closures, currying)... and I end up having to close one eye sometimes and clone some large data structure to make my life easier. But regardless, its huge ecosystem and great tooling allow me to build things comparatively so easily that OCaml has no chance. As a bonus, the end result is seriously faster - I know because I rewrote one of my projects and for some time had feature parity between the OCaml and Rust versions.
Nevertheless, I have fond memories of OCaml and a great amount of respect for the language design. I haven't checked on it since; I probably should. I hope some of the problems have been solved.
Your comment makes me think the kind of people who favor OCaml over Rust wouldn't necessarily value a huge ecosystem or the most advanced tooling. They're the kind who value the elegance aspect above almost all else, and prefer to build things from the ground up, using no more than a handful of libraries and a very basic build procedure.
Yeah, I was that kind of person. Then I wrote a real tool that does real work in OCaml... and discovered that I am no longer such a person, and went to Rust.
Just the straight/naive rewrite was ~3 times faster on my benchmark (which was running the program on the real dataset), and then I went down the rabbit hole, optimized further, and ended up ~5 times faster. Then I slapped Rayon on top and got another ~2-3x depending on the number of cores and disk speed (the problem wasn't embarrassingly parallel, but it still got a nice speedup).
Of course, all of this was mostly unneeded; I just wanted to find out what I was getting myself into, and I was very happy with the result. My move to Rust was mostly not because of speed, though I still needed a fast language (for which OCaml qualifies). This was also before the days of multicore OCaml, so nowadays it would matter even less.
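The Rayon step really is about as small as it sounds - a sketch of the pattern, with `process` standing in for the real per-record work:

```rust
use rayon::prelude::*;

// Hypothetical per-record processing step; stands in for the real work.
fn process(record: &str) -> usize {
    record.len()
}

fn main() {
    let dataset: Vec<String> = (0..1_000_000).map(|i| format!("record-{i}")).collect();

    // Swapping iter() for par_iter() is often the whole change
    // when the per-record work is CPU-bound.
    let total: usize = dataset.par_iter().map(|r| process(r)).sum();
    println!("{total}");
}
```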
How much of that do you think comes from reduced allocations/indirections? Now I really want to try out OxCaml and see if I can approximate this speedup by picking the low-hanging fruit.
Were you using the ocamlopt compiler? By default, OCaml runs in a bytecode VM, but few people figure that out because it isn't screaming its name all the time like a Pokémon (looking at you, JVM/CLR). OCaml can also be compiled to machine code, with significant performance improvements.
Having the power to deny others the ability to mine blocks does not mean you can obtain the tokens from their wallets; miners can't sign transactions on users' behalf. You can rewrite all of history, but then no exchange will accept your version of it to let you exchange the tokens for fiat, and it will almost certainly crash the price of XMR substantially. Later, people will be able to fork/restore the original version. The technological side of the blockchain is only part of the consensus/trust/market/popularity; people are the other part, and people will not pay the attacker for their successful attack.
The attacker doesn't need to steal tokens. They just need to short the token while they sufficiently disrupt the network to drive down the price. They get the money and your tokens become worthless.
I was completely wrong about the cost. XMR mining rewards amount to only $150k/day.
At the height of the attack, Qubic (the company) paid people up to $3 in QUBIC for every $1 of XMR they mined through QUBIC, and they achieved around 33% of XMR's hashrate which was sufficient to mine the majority of blocks for a few hours.
If they were forced to buy back all those QUBICs they paid out, this might have cost them ~$100k/day. But thanks to the media attention, it's likely that they didn't need to buy anything back and were actually able to emit more than they otherwise could have.
XMR needs to adapt -- switch to PoS, or ASIC-based PoW, or a hybrid of both.
Controlling 51% of XMR's hashrate costs ~$30M per day; you'd have to short a huge amount of XMR to make that worthwhile. Who would be the counterparty, and how would you do that anonymously?
The attack itself is unprofitable; the "profit" for Qubic is the publicity they get (or at least that's what they're betting on).
Monero has a theoretical market cap of $4.7B USD and daily volumes >$100M USD. I wouldn't recommend taking that short position in one go, but spread over a few days and a few exchanges I wouldn't see a problem acquiring a very large short position in the token.
Local as in "desktop application on the local machine" where you are the sole user.