Yeah absolutely, and they shouldn't be taxed extra specifically for using a new technology. If people need a UBI they should be paid it off the back of all taxes (which should rise if automation is successful), not a specific automation tax. Saving jobs sounds good and it's an easy win but you end up with a stagnant economy where people are paid sinecures to do make-work, which is doubly harmful since the company has to pay extra for the employee, who is also deprived of being able to do some other job that would be useful to the economy.
They already have, significantly, around 25-35% in developed economies. The issue is that people often look at revenue, seeing company X earning $100 billion annually, and assume they should pay $20 billion in taxes. However, most AI companies today are not profitable and spend up to 100% more than their revenue on R&D and product development. I doubt they will turn a profit anytime soon, probably not for at least a decade.
> They already have, significantly, around 25-35% in developed economies
The thing is, companies and even self-employed individuals of a certain wealth level know how to "(ab)use" it, from illegal but trivial and hard-to-detect tax evasion to financing a personal lifestyle by having the company pay for certain luxuries (cars, computers, furniture, etc.).
If you have the wealth to have a dedicated office, that dedicated office can be your man cave, so long as you justify it with having all sorts of amenities for customers. And good luck to whoever checks taxes trying to find out how exactly things are used or not used.
All of that usually means that companies, company owners, and high-ranking managers get away with not paying taxes on a lot of things that everyone else pays for, simply because everyone else has no say within these companies.
And all of that is before you go to the tax advisor.
I am sorry, but if you do hard honest work the chances of you getting rich are beyond slim. Even worse when you do something that actually benefits society.
You tax where you can, not where you should. Corporations trivially hide profits. What they couldn't hide well was labor. You know what else they can't hide? Their power bill. It even works for companies that eternally operate "at a loss" (which also parallels taxing labor).
I recently did performance testing of Tigerbeetle for a financial transactions company. The key thing to understand about Tigerbeetle's speed is that it achieves very high throughput by batching transactions.
----
In our testing:
For batch transactions, Tigerbeetle delivered truly impressive speeds: ~250,000 writes/sec.
For processing transactions one-by-one individually, we found a large slowdown: ~105 writes/sec.
This is much slower than PostgreSQL, which handled row updates at ~5,495/sec. (However, in practice PostgreSQL's row-update rate will be way lower in real-world OLTP workloads due to hot fee accounts and aggregate accounts for sub-accounts.)
One way to keep those faster speeds in Tigerbeetle for real-time workloads is microbatching incoming real-time transactions to Tigerbeetle at an interval of every second or lower, to take advantage of Tigerbeetle's blazing fast batch processing speeds. Nonetheless, this remains an important caveat to understand about its speed.
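For illustration, here is a rough Go sketch of that microbatching idea (assuming the standard time import; Transfer and flush are hypothetical placeholders, not Tigerbeetle's actual API):

// Transfer is a hypothetical stand-in for a ledger transfer.
type Transfer struct {
    ID     uint64
    Amount uint64
}

// microbatch accumulates incoming real-time transfers and flushes them
// whenever the buffer fills or the interval elapses, so individual
// transactions still ride on the fast batched write path.
func microbatch(in <-chan Transfer, flush func([]Transfer), maxBatch int, interval time.Duration) {
    batch := make([]Transfer, 0, maxBatch)
    ticker := time.NewTicker(interval)
    defer ticker.Stop()
    for {
        select {
        case t, ok := <-in:
            if !ok {
                if len(batch) > 0 {
                    flush(batch) // drain whatever is left on shutdown
                }
                return
            }
            batch = append(batch, t)
            if len(batch) == maxBatch {
                flush(batch) // flush is synchronous here, so reusing the buffer below is safe
                batch = batch[:0]
            }
        case <-ticker.C:
            if len(batch) > 0 {
                flush(batch)
                batch = batch[:0]
            }
        }
    }
}

Here flush would wrap a CreateTransfers call on a shared client. With maxBatch near the client's batch limit and an interval of, say, 100ms, busy periods produce large batches while quiet periods still flush promptly.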
> One way to keep those faster speeds in Tigerbeetle for real-time workloads is microbatching incoming real-time transactions to Tigerbeetle at an interval of every second or lower, to take advantage of Tigerbeetle's blazing fast batch processing speeds.
We don’t recommend artificially holding transfers just for batching purposes.
René actually had to implement a batching worker API to work around a limitation in Python's FastAPI, which handled requests per process, and he's been very clear in suggesting that it would be better reimplemented in Go.
Unlike most connection-oriented database clients, the TigerBeetle client doesn’t use a connection pool, because there’s no concept of a “connection” in TigerBeetle’s VSR protocol.
This means that, although you can create multiple client instances, in practice fewer is better. You should have a single long-lived client instance per process, shared across tasks, coroutines, or threads (think of a web server handling many concurrent requests).
In such a scenario, the client can efficiently pack multiple events into the same request, while your application logic focuses solely on business-event-oriented chains of transfers. Typically, each business event involves only a handful of transfers, which isn't a problem of underutilization, as they'll be submitted together with other concurrent events as soon as possible.
However, if you’re dealing with a non-concurrent workload, for example, a batch process that bills thousands of customers for their monthly invoices, then you can simply submit all transfers at once.
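As a minimal sketch of that single-shared-client pattern in Go, assuming the tigerbeetle-go client (the import paths and NewClient signature have varied across client versions, and decodeTransfer is a hypothetical request parser):

import (
    "log"
    "net/http"

    tb "github.com/tigerbeetle/tigerbeetle-go"
    "github.com/tigerbeetle/tigerbeetle-go/pkg/types"
)

// One long-lived client per process, created at startup and shared by
// every request handler.
var client tb.Client

// decodeTransfer is a hypothetical stand-in for parsing one transfer
// out of an incoming request.
func decodeTransfer(r *http.Request) types.Transfer {
    return types.Transfer{}
}

func main() {
    var err error
    // Recent client versions take a cluster ID and replica addresses.
    client, err = tb.NewClient(types.ToUint128(0), []string{"3000"})
    if err != nil {
        log.Fatal(err)
    }
    defer client.Close()

    http.HandleFunc("/transfer", func(w http.ResponseWriter, r *http.Request) {
        // Concurrent handlers all submit through the same client,
        // which coalesces their small submissions into shared batches.
        results, err := client.CreateTransfers([]types.Transfer{decodeTransfer(r)})
        if err != nil || len(results) > 0 { // non-empty results indicate failed transfers
            http.Error(w, "transfer failed", http.StatusInternalServerError)
            return
        }
        w.WriteHeader(http.StatusCreated)
    })
    log.Fatal(http.ListenAndServe(":8080", nil))
}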
> For processing transactions one-by-one individually
If you're artificially restricting the load going into TigerBeetle by sending transactions one at a time, then I think predictable latency (and not TPS) would be a better metric.
For example, TB's multi-region/multi-AZ fault-tolerance will work around gray failure (fail slow of hardware, as opposed to fail stop) in your network links or SSDs. You're also getting significantly stronger durability guarantees with TB [0][1].
It sounds like you were benchmarking on EBS? We recommend NVMe. We have customers running extremely tight 1 second SLAs, seeing microsecond latencies, even for one at a time workloads. Before TB, they were bottlenecking on PG. After TB, they saturated their central bank limit.
I would also be curious to know what scale you tested at. We test TB to literally 100 billion transactions. It's going to be incredibly hard to replicate that with PG's storage engine. PG is a great string DBMS, but it's simply not optimized for integers the way TB is. Granted, your scale likely won't require it, but if you're comparing TPS then you should at least compare sustained scale.
There's also the safety factor to consider of trying to reimplement TB's debit/credit primitives over PG, rolling it yourself. For example, did you change PG's defaults away from Read Committed to Serializable and enable checksums in your benchmarks? (PG's checksums, even if you enable them, are still not going to protect you from misdirected I/O like the recent XFS bug.) Even the business logic is deceptively hard: there are thousands of lines of complicated state machine code, and we've invested literally millions into testing and audits.
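To give a sense of just the isolation piece, here's a sketch using Go's database/sql against PG (assuming an open *sql.DB handle db and a ctx; the retry loop that serialization failures require is omitted). Checksums, by contrast, can only be enabled when the cluster is initialized, via initdb --data-checksums:

// PostgreSQL defaults to Read Committed; a hand-rolled ledger needs
// Serializable so concurrent debits/credits can't interleave unsafely.
tx, err := db.BeginTx(ctx, &sql.TxOptions{Isolation: sql.LevelSerializable})
if err != nil {
    log.Fatal(err)
}
// ... balance checks and debit/credit UPDATEs go here ...
if err := tx.Commit(); err != nil {
    // Serialization failures (SQLSTATE 40001) surface here, and the
    // whole transaction must be retried, which costs throughput.
    log.Fatal(err)
}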
Finally, it's important that your architecture as a whole, the gateways around TB, treats concurrency as first class and isn't "one at a time", or TigerBeetle is probably not going to be your bottleneck.
We didn't observe any automatic batching when testing Tigerbeetle with their Go client. I think we initiated a new Go client for every new transaction when benchmarking, which is typically how one uses such a client in app code. This ties into our other complaint: it handles so little that you'll have to roll a lot of custom logic around it to batch real-time transactions quickly.
I'm a bit worried you think instantiating a new client for every request is common practice. If you did that to Postgres or MySQL clients, you would also have degradation in performance.
PHP created mysqli and PDO to deal with this, specifically because of the known issue that recreating client connections per request is expensive.
We shared the code with the Tigerbeetle team (who were very nice and responsive btw), and they didn't raise any issues with the script we wrote of their Tigerbeetle client. They did have many comments about the real-world performance of PostgreSQL in comparison, which is fair.
Thanks for the code and clarification. I'm surprised the TB team didn't pick it up, but your individual-transfer test is a pretty poor representation. All you are testing there is how many batches you can complete per second, giving the client no time to actually batch the transfers. This is because when you call createTransfer in Go, the call blocks synchronously.
For example, it is as if you created an HTTP server that only allows one concurrent request. Or having a queue where only 1 worker will ever do work. Is that your workload? Because I'm not sure I know of many workloads that are completely sync with only 1 worker.
To get a better representation for individual_transfers, I would use a waitgroup
var wg sync.WaitGroup
var mu sync.Mutex
completedCount := 0

for i := 0; i < len(transfers); i++ {
    wg.Add(1)
    go func(transfer Transfer) {
        defer wg.Done()

        // Each goroutine submits a single transfer; the client
        // coalesces concurrent submissions into shared batches.
        res, err := client.CreateTransfers([]Transfer{transfer})
        if err != nil {
            log.Printf("Error submitting transfer: %s", err)
            return
        }
        for _, result := range res {
            if result.Result != 0 {
                log.Printf("Error creating transfer %d: %s", result.Index, result.Result)
            }
        }

        mu.Lock()
        completedCount++
        if completedCount%100 == 0 {
            fmt.Printf("%d\n", completedCount)
        }
        mu.Unlock()
    }(transfers[i])
}

wg.Wait()
fmt.Printf("All %d transfers completed\n", len(transfers))
This will actually allow the client to batch the requests internally and be more representative of the workloads you would get. Note, the above is not the same as doing the batching manually yourself. You could call createTransfer concurrently on the same client from multiple call sites, and it would still auto-batch them.
Yeah, it was back in February in your community Slack; I did receive a fairly thorough response from you and others about it. At the time, though, there were no technical critiques of the Go benchmarking code, just of how our PostgreSQL comparison would fall short in real OLTP workloads (which is fair).
I don’t think we reviewed your Go benchmarking code at the time—and that there were no technical critiques probably should not have been taken as explicit sign off.
IIRC we were more concerned about the deeper conceptual misunderstanding, that one could "roll your own" TB over PG with safety/performance parity, and that this would somehow be better than just using open source TB, hence the discussion focused on that.
Interesting, I thought I had heard that this is automatically done, but I guess it's only through concurrent tasks/threads. It is still necessary to batch in application code.
But nonetheless, it seems weird to test it with singular queries, because Tigerbeetle's whole point is shoving 8,189 items into the DB as fast as possible. So if you populate that buffer with only one item you're throwing away all that space and efficiency.
We certainly are losing that efficiency, but this is typically how real-time transactions work. You write real-time endpoints that send off transactions as they come in. Needing to roll anything more than that introduces major complexity.
We concluded where Tigerbeetle really shines is if you're a large entity like a central bank or corporation sending massive transaction files between entities. Tigerbeetle is amazing for moving large numbers of batch transactions at once.
We found other quirks with Tigerbeetle that made it difficult as a drop-in replacement for handling transactions in PostgreSQL. E.g. Tigerbeetle's primary ID key isn't UUIDv7 or ULID; it's a custom ID they engineered for performance. The max metadata you can save on a transaction is a 128-bit unsigned integer in the user_data_128 field. While this lets them achieve lightning-fast batch-processing benchmarks, the database allows for the saving of so little metadata you risk getting bottlenecked by all the attributes you'll need to wrap around the transaction in PostgreSQL to make it work in a real application.
> you risk getting bottlenecked by all the attributes you'll need to wrap around the transaction in PostgreSQL to make it work in a real application.
The performance killer is contention, not writing any associated KV data—KV stores scale well!
But you do need to preserve a clean separation of concerns in your architecture. Strings in your general-purpose DBMS as "system of reference" (control plane). Integers in your transaction processing DBMS as "system of record" (data plane).
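As a rough sketch of that split (the names here are illustrative, though user_data_128 is TigerBeetle's actual transfer field):

// Control plane (PostgreSQL): the strings, your system of reference.
//   CREATE TABLE payments (
//     id          UUID PRIMARY KEY, -- the same 128 bits as the TB transfer ID
//     reference   TEXT,
//     description TEXT
//   );

// Data plane (TigerBeetle): integers only, your system of record.
type LedgerTransfer struct {
    ID              [16]byte // mirrors payments.id
    DebitAccountID  [16]byte
    CreditAccountID [16]byte
    Amount          uint64
    UserData128     [16]byte // back-reference into the control-plane row
}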
Dominik Tornow wrote a great blog post on how to get this right (and let us know if our team can accelerate you on this!):
> We didn't observe any automatic batching when testing Tigerbeetle with their Go client.
This is not accurate. All of TigerBeetle's clients auto-batch under the hood, which you can verify from the docs [0] and the source [1], provided your application has at least some concurrency.
> I think we initiated a new Go client for every new transaction when benchmarking
The docs are careful to warn that you shouldn't be throwing away your client like this after each request:
> The TigerBeetle client should be shared across threads (or tasks, depending on your paradigm), since it automatically groups together batches of small sizes into one request. Since TigerBeetle clients can have at most one in-flight request, the client accumulates smaller batches together while waiting for a reply to the last request.
Again, I would double check that your architecture is not accidentally serializing everything. You should be running multiple gateways and they should each be able to handle concurrent user requests. The gold standard to aim for here is a stateless layer of API servers around TigerBeetle, and then you should be able to push pretty good load.
We didn't rule out using Tigerbeetle, but the drop in non-batch performance was disappointing and a reason we haven't prioritised switching our transaction ledger from PostgreSQL to Tigerbeetle.
There was also poor Ruby support for Tigerbeetle at the time, but that has improved recently and there is now a (3rd party) Ruby client: https://github.com/antstorm/tigerbeetle-ruby/
I think the drop in non-batch performance was more a function of the PoC than of TB. Would love to see what our team could do for you here! Feel free to reach out to peter@tigerbeetle.com
It's the reverse of what you're describing, but a similar mechanism in Canada is the notwithstanding clause.
If the Supreme Court of Canada rules a law unconstitutional, the government in power can overrule their ruling by using the notwithstanding clause. However, the notwithstanding clause override to keep the law in effect only lasts for five years. Subsequent legislatures have to keep renewing the override or the Supreme Court's ruling of unconstitutionality takes effect again.
I miss the days when Facebook events worked well for getting people to attend a party.
Now, nobody is on Facebook so those event invitations get missed and you need to hustle much harder with individual chat messages to get people to attend.
In my social circles Partiful feels like it's becoming a good replacement for the golden era of Facebook Events. At first you had to invite people manually by sending them a Partiful link, but now they have their own internal invite system where you can invite your "mutuals" (people you've partied with) directly on the platform. It's become the clear standard for house parties in my sphere. Not quite as good as Facebook events used to be though.
Oh man this definitely makes me wax nostalgic for that golden era ... it was 2013-2016 for me. I would throw an annual holiday party w/ my roommate in SF every year and I recall being able to just go down the list of my FB friends and click "invite, invite, invite" and everyone I cared about would show up and we all had a wonderful time. Sigh.
This doesn't fit at all with how governance and politics works in reality. Rapid changes to society or a crisis can suddenly make deeply unpopular ideas very popular.
Is us-east-1 really just as unstable as the other regions? My impression was that Amazon deployed changes to us-east-1 first, so it's the most unstable region.
I've heard this so many times and not seen it contradicted so I started saying it myself. Even my last Ops team wanted to run some things in us-east-1 to get prior warning before they broke us-west-1.
But there are some people on Reddit who think we are all wrong but won't say anything more. So... whatever.
Nothing in the outage history really stands out as "this is the first time we tried this and oops" except for us-east-1.
It's always possible for things to succeed at a smaller scale and fail at full scale, but again none of them really stand out as that to me. Or at least, not any in the last ten years. I'm allowing that anything older than that is on the far side of substantial process changes and isn't representative anymore.
I do know from previous discussions that some companies are in us-east-1 because of business partnerships with other inhabitants, and if one moves out the costs and latency go up. So they are all stuck in this boat together.
Still, it would make a bit of sense, if you can find a place in your code where crossing regions hurts less, to move some of your services to a different region.
While your business partners will understand that you’re down while they’re down, will your customers? You called yesterday to say their order was ready, and now they can’t pick it up?
Back in 2018, it wasn't widely known that Ozempic would become a blockbuster weight loss drug. So it was possible they made the business decision based on its more limited use as a diabetes drug.
Is Linux gaming on Steam actually competitive in performance and availability with what you'd get on Windows? I'm looking into building a gaming computer and am surprised to hear I could roll with Linux for it.
Essentially, the only games that don't work nowadays are the ones that intentionally break it by adding Linux-incompatible anti-cheat. This is common among the big multiplayer AAA games (think Fortnite).
Riot Games did this on purpose too. League worked perfectly fine on Linux for years until they decided that kernel-level spying on users was absolutely necessary to play a MOBA. For some reason my one friend thinks I'll run Windows just for one game.
I'd sooner get a console, personally. The only legitimate use case I have for a console (Nintendo notwithstanding) is to sandbox invasive anticheat in multiplayer games. I don't really have a ton of free time or a friend group into multiplayer video games, so it's not happening for me. Smart console makers would lean into this.
Yup, I've also gone with a console for all my gaming needs, and keep my computer as just a productivity machine. As a result I don't need nearly as beefy a machine and don't have to grind my teeth in bitterness using Windows.
> ones that intentionally break it by adding Linux-incompatible anti-cheat.
That's an interesting way to phrase it. It's like you're implying the company intentionally did not want to run it on anything but Windows (aka software is incompatible with non-Windows OSes) rather than trying to implement an effective anti-cheat (arguable) that works for their customers.
Pre-Wine, would you have argued that a software vendor is intentionally preventing their software from running on any non-Windows OS?
Or was it just that their audience wasn't on said non-Windows OS?
> That's an interesting way to phrase it. It's like you're implying the company intentionally did not want to run it on anything but Windows (aka software is incompatible with non-Windows OSes) rather than trying to implement an effective anti-cheat (arguable) that works for their customers.
Not OP, but this is true depending on the game. For instance, when Rockstar added BattlEye to GTA V Online, they broke Linux support and blatantly lied about Linux not supporting BattlEye. That's just not true: they only needed to enable that option, but instead they straight up claimed BattlEye doesn't support Linux.
> BattlEye on Proton integration has reached a point where all a developer needs to do is reach out BattlEye to enable it for their title. No additional work is required by the developer besides that communication.
So all Rockstar had to do was reach out to BattlEye to enable it, but they couldn't be bothered to do so. Their anti-Linux stance here is pretty obvious.
Rockstar aside, there are other studios/publishers that have been openly hostile to Linux, like Epic for instance - Tim Sweeney has made scathing remarks about Linux, so it's clear where he/Epic stands on that front.
I've been using Bazzite for about 8 months now, and I have a dual-boot Windows drive. I haven't used the Windows drive once. Windows was my daily driver for three decades.
Performance-wise, there's no degradation. I can run games at 4K or bonkers FPS just like I did on Windows, no input lag, etc.
Bazzite also has a very active Discord for support with issues. I highly recommend it.
> Bazzite originally was developed for the Steam Deck targeting users who used their Steam Deck as their primary PC. Bazzite is a collection of custom Fedora Atomic Desktop images built with Universal Blue's tooling (with the power of OCI) as opposed to using an Arch Linux base with A/B updates utilizing RAUC. The main advantages of Bazzite versus SteamOS is receiving system packages in updates at a much faster rate and a choice of an alternative desktop environment.
It is a Linux distribution that aims to compete with Valve's SteamOS Linux distribution supplied with the Steam Deck (which itself is based on Arch Linux). Like SteamOS, it can be used on a regular desktop PC as well... but they are mainly aiming to run on the Steam Deck:
> The purpose of Bazzite is to be Fedora Linux, but provide a great gaming experience out of the box while also being an alternative operating system for the Steam Deck and other handheld devices.
Effectively they have taken Fedora Linux, and added to it the same sort of setup and programs that you get out-of-the-box with SteamOS as well.
For the most part, it is not the people offering Bazzite who are doing the hard job of providing security updates, etc.; they are hoping that being based on Fedora will provide that assurance. They merely supply and configure some extras on top (e.g. the Steam client software).
What I meant is not "I can't find what it is", but that the landing page of Bazzite says this:
"The next generation of Linux gaming - Bazzite makes gaming and everyday use smoother and simpler across desktop PCs, handhelds, tablets, and home theater PCs.
Play your favorite games - Bazzite is designed for Linux newcomers and enthusiasts alike with Steam pre-installed, HDR & VRR support, improved CPU schedulers for responsive gameplay, and numerous community-developed tools and tweaks to streamline your gaming and streaming experience."
"Linux distribution" should appear in the first 5 words after the 1st title. Right now it's not even in the 2nd paragraph.
If this is the clarity of the landing page, I suspect the documentation is equally user-hostile/inaccessible, which is why 2025 is still not the year of the Linux desktop... the Linux world still has an abundance of great developers and a terrible lack of HCI/UX expertise.
Basically the only games that don't work are those with anticheat that intentionally borks Linux. Check https://www.protondb.com/ for any game you're interested in to see if it'll run or not.
>Anything that has a kernel level anti check (Valorant) will always be a resounding No.
Please stop repeating this long-outdated information. The two most widely used kernel anti-cheat providers, Easy Anti-Cheat and BattlEye, support Linux via a user-space component, which needs to be enabled by the developer and has been in many games.
Tools like BattlEye and EAC are not just one tool that gives a binary answer; they detect a huge range of heuristics about the device and how easy it is to interfere with its memory.
While they have been ported to Linux, an awful lot of those bits of telemetry simply don't give the desired answer, or even any answer at all, because that is very hard to do when there aren't proprietary drivers signed down to the hardware root of trust by a third party (and the average Linux user on HN certainly wouldn't want there to be!).
It's really not a matter of "enabled by the developer", it's entirely dependent on what your threat model is.
None of this is relevant to the original point that "kernel anti-cheats don't work": the two most widely used ACs do work despite being kernel level.
>It's really not a matter of "enabled by the developer", it's entirely dependent on what your threat model is.
I should have added "sometimes". It worked fine that way with most games (I have the same CPU), but Cyberpunk 2077 in particular really doesn't like that configuration.
Depends on what you like to play. Some games are heavily encumbered with either copy protection like Denuvo or anti-cheat, and those either don't support Linux or flat out try to sniff out Linux and refuse to run on anything but Windows. Otherwise it's great; you can check ProtonDB and WineHQ for reports of compatibility.