If this sounds like you, I highly recommend reading "The Problem of the Puer Aeternus".
You can definitely skip a lot of the tedious bits where the author essentially copy-pastes other books for analysis. Still, it describes a very common pattern: people hold themselves back because taking the unambitious, rather pedestrian next step forward requires facing preconceived notions about oneself, e.g. "I should've done this long ago", etc.
Also interesting in this context is the PyTorch Developer Podcast [1] by the same author. Very comforting to learn about PyTorch internals while doing the dishes.
After living since forever in what they call an "outlaw state", I can assure you, Mr OP, that all of your concerns are extremely valid, and, indeed, you don't need an open war to start having REAL problems with your digital life.
Something as simple as a few bans or sanctions is more than enough to stop you, for example, from using your credit cards to pay for everything you take for granted, be it Netflix, iCloud or even a $0.99 app. And that's just the tip of the iceberg.
My advice on this? Be a pragmatic fuck. For example:
#1 Switch to open source or at least free alternatives for everything you can.
#2 If you see it coming, temporarily suspend your "good citizen" behaviour and pirate what can't be solved with #1 (assume the risks and learn to deal with them).
#3 Stay as local - your house local, not your country/union local - as possible, avoid "the cloud" if possible or create your own if it makes sense for you, a homelab or whatever.
#4 Stay as anonymous and low profile as you can; even if you support your government in a conflict situation, they can turn on you in a blink, for the right or wrong reasons.
#5 Familiarise yourself with things like Tor, VPNs and anything else that can help you bypass censorship and access blocked sites and services.
#6 Consider having additional ISPs as failover options, something as different as possible from your main ISP.
#7 Buy stuff to protect your electrical equipment: regulators, protectors, UPS and so on.
I'm not saying you have to do it all; take only what makes sense for you. But keep an open mind, and consider that permanent peace and stability could be your lifelong situation if you are lucky - but maybe not. Says the guy who had it.
Trust me on this: I have become something of a cheap prepper, and not because of a war; it's far from that in here. Just a few disagreements between governments, a few nationwide bans/prohibitions/sanctions, or a little bit of government incompetence/stupidity is all it takes to start having problems with international payments, problems accessing certain websites and services, problems with your Internet access, and even power outages. It doesn't take much to end up with a shitty digital life.
Tandem Health | Software Engineers | On-site in Stockholm, Sweden | Full time
We’re using generative AI to automate medical documentation so that clinicians can spend more time with patients. Founded 8 months ago, we’re already live with dozens of clinics all over Sweden, growing rapidly (onboarded >10 new clinics during the last month), and are now gearing up for expansion to new markets.
We’re a small team with an audacious mission - health care represents 10% of GDP, 30% of which is spent on administration, and 43% of that can be automated. The impact will be huge. We believe that with our team’s strong understanding of the healthcare space in Europe and a fantastic engineering team, we will be the team that does it.
And until we can measure impact in terms of % of GDP, it’s already incredibly fulfilling to go to a care centre and meet clinicians that love our product. The impact is very real!
We’re currently looking for generalist engineers. Experience with health-tech, frontend, infrastructure, security, LLMs or ASR is an especially strong plus.
If you think this sounds exciting, please reach out to me at john.moberg@tandemhealth.se and let’s explore a fit!
Related: Why philosophers should care about computational complexity by Scott Aaronson [1].
If you have even a faint interest in philosophy and have taken algorithms 101, you will find something mind-blowing in this paper. My favorite part is about how the “Chinese room” problem takes on a totally different character depending on your assumptions about the type of machinery behind the black box.
I've definitely had heartache here, and they merit the criticism, but the value is real.
We need a lot of pluggability to support different vendor LLMs and BYO LLMs in Louie.ai, so having langchain has been nice for helping us code to interfaces vs vendor lock-in. It definitely has growing pains - ex: sync & multithreading are important for us, so we are generally coding around langchain while that smooths out. Likewise, we ended up building much of our conversational and multitool capabilities as custom libraries vs using theirs, for similar quality reasons. We can't use any of the codegen capabilities because they are massive security holes, so we're doing our own work there too.
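For what "coding to interfaces vs vendor lock-in" looks like in practice, here's a minimal sketch. All names here (ChatModel, EchoModel, summarize) are illustrative, not Louie.ai's or langchain's actual API:

```python
# Hypothetical sketch: application code depends on a small interface,
# so any vendor LLM (or a BYO model) can be swapped in behind it.
from typing import Protocol

class ChatModel(Protocol):
    def complete(self, prompt: str) -> str: ...

class EchoModel:
    """Stand-in 'vendor' implementation, useful for tests."""
    def complete(self, prompt: str) -> str:
        return f"echo: {prompt}"

def summarize(model: ChatModel, text: str) -> str:
    # Only the interface is referenced here, never a vendor SDK.
    return model.complete(f"Summarize: {text}")

print(summarize(EchoModel(), "hello"))
```

Swapping vendors then means writing one adapter class per vendor, while the rest of the codebase stays untouched.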
If anyone is into that kind of work (backend, AI & web infra, ...), definitely hiring for core platform & cool customer projects here: louie.ai / Graphistry.com/careers
In case it's new to anyone, I recommend going through the jonesforth "literate program" - even if you don't understand assembly. You can try porting it to a language you know; it gives a good understanding of how Forth works.
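To give a taste of what such a port involves, here is a toy Forth-style inner interpreter: a data stack plus a dictionary of words. Jonesforth builds the same machinery in x86 assembly; this is only a sketch of the idea, not a port of it:

```python
# Minimal Forth-like evaluator: tokens are either defined "words"
# (looked up in a dictionary and executed) or integer literals (pushed).
def run(program, stack=None):
    stack = [] if stack is None else stack
    words = {
        "+":    lambda s: s.append(s.pop() + s.pop()),
        "*":    lambda s: s.append(s.pop() * s.pop()),
        "dup":  lambda s: s.append(s[-1]),
        "drop": lambda s: s.pop(),
        "swap": lambda s: s.extend([s.pop(), s.pop()]),
    }
    for token in program.split():
        if token in words:
            words[token](stack)       # execute a defined word
        else:
            stack.append(int(token))  # push a literal
    return stack

print(run("2 3 + dup *"))  # (2+3) squared -> [25]
```

Real Forth adds the outer interpreter, `:`/`;` for defining new words, and the return stack, which is where jonesforth's commentary really shines.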
Your intuition here is basically what's formalized by Piketty in "Capital in the 21st Century": that the rate of return on capital exceeds the rate of economic growth (r > g).
What's interesting is that while intuitively this makes sense (for the reasons you gave), the implication (that the rich get richer and inequality grows) is the opposite of what was hypothesized by Kuznets (https://en.wikipedia.org/wiki/Kuznets_curve); the latter hypothesis, to my limited understanding (as a non-economist), is fairly mainstream, and has influenced plenty of politics and public policy.
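The r > g mechanism is just compounding at different rates; a toy calculation (with made-up numbers) shows how quickly the gap opens:

```python
# Toy illustration of r > g: if returns on capital outpace income
# growth, the wealth-to-income ratio rises without bound.
# The 5%/2% figures are illustrative, not Piketty's estimates.
r, g = 0.05, 0.02            # return on capital vs economic growth
capital, income = 100.0, 100.0
for year in range(50):
    capital *= 1 + r
    income *= 1 + g
ratio = capital / income      # starts at 1.0, ends above 4
print(round(ratio, 2))
```

Over 50 years the ratio more than quadruples, which is the "rich get richer" dynamic in its simplest form.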
* As developers, we're used to thinking of services in terms of streaming TCP connections and RPCs. You send a request on a connection and get a response back on the same connection. However, distributed consensus algorithms (or at least their authors) like to think and write in terms of messages and message passing and the classic Actor pattern. For example, it's not uncommon for a consensus client to send a message to a leader but then get the ACK back from another server, a subsequently elected leader. That's at odds with the networking protocol we're used to. It's not always easy to shoehorn a consensus protocol onto a system that already has a TCP oriented design. Embrace message passing and multi-path routing.
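A minimal sketch of that message-passing shape, with all names hypothetical: the client keys pending requests by op number rather than by connection, so a reply from a different replica than the one it wrote to still resolves the request:

```python
# Sketch: pending requests are tracked by op number, not by socket, so
# an ACK from a *different* server (e.g. a newly elected leader) is
# still accepted. Names (Client, on_message) are illustrative.
class Client:
    def __init__(self):
        self.pending = {}  # op_number -> request payload

    def send(self, op_number, payload, leader):
        self.pending[op_number] = payload
        # transport.send(leader, payload)  # fire-and-forget message

    def on_message(self, msg):
        # Match on op number, not on where the request went out.
        if msg["type"] == "ack" and msg["op"] in self.pending:
            del self.pending[msg["op"]]
            return True
        return False

c = Client()
c.send(7, "write x=1", leader="replica-0")
# The ACK arrives from replica-2, the new leader, and is still accepted:
assert c.on_message({"type": "ack", "op": 7, "from": "replica-2"})
```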
* We're familiar with Jepsen. The network fault model is front of mind (dropped/delayed/replayed/corrupted messages, partitions, asymmetrical network topologies and performance). We're far less wary of the storage fault model: latent sector errors (EIO), silent bit rot, misdirected writes (writes written by firmware to the wrong sector), corrupt file system metadata (wrong journal file size, disappearing critical files), kernel page cache coherency issues (marking dirty pages clean after an fsync EIO), confusing journal corruption for a torn write after power failure.
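One classic defense against misdirected writes in that list is to mix the intended sector index into the checksum, so a block whose payload is intact but which landed at the wrong offset still fails verification. A sketch (CRC32 stands in for whatever stronger checksum a real system would use):

```python
# Positional checksums: seal each block with a checksum over
# (sector_index || data), so a misdirected write is caught on read.
import zlib

def seal(sector_index: int, data: bytes) -> bytes:
    crc = zlib.crc32(sector_index.to_bytes(8, "little") + data)
    return crc.to_bytes(4, "little") + data

def verify(sector_index: int, block: bytes) -> bool:
    crc, data = int.from_bytes(block[:4], "little"), block[4:]
    return crc == zlib.crc32(sector_index.to_bytes(8, "little") + data)

block = seal(42, b"payload")
assert verify(42, block)      # read back from the right sector: ok
assert not verify(43, block)  # same bytes at the wrong sector: rejected
```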
* We underestimate the sheer bulk of the code we need to write to implement all the components of a practical consensus protocol correctly (a consensus replica to run the protocol at each node, a write ahead journal for storage, a message bus for in-process or remote messaging, a state machine for service up calls). The consensus protocol invariants are tough but limited, but the amount of code required to be written for all these components is brutal and there are so many pitfalls along the way. For example, when you read from your write ahead journal at startup and you find a checksum mismatch, do you assume this is because of a torn write after power failure as ZooKeeper and LogCabin do? What if it was actually just bit rot halfway through your log? How would you change your write ahead journal to disentangle these?
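The torn-write-vs-bit-rot question at the end has a structural answer: a torn write can only corrupt the tail of the log, so a bad checksum followed by more valid entries cannot be a torn write. A sketch of that recovery-time distinction (entry format and names are made up):

```python
# Journal recovery sketch: classify checksum failures by position.
# Only a contiguous run of bad entries at the tail is consistent with
# a torn write; a bad entry with valid entries after it is bit rot,
# and truncating there would silently discard committed data.
import zlib

def scan_journal(entries):
    """entries: list of (crc, payload) pairs as read from disk."""
    bad = [i for i, (crc, data) in enumerate(entries)
           if crc != zlib.crc32(data)]
    if not bad:
        return "clean"
    if bad == list(range(bad[0], len(entries))):
        return "torn tail"   # safe to truncate and continue
    return "bit rot"         # must repair from peers, not truncate

ok = lambda d: (zlib.crc32(d), d)
assert scan_journal([ok(b"a"), ok(b"b")]) == "clean"
assert scan_journal([ok(b"a"), (0, b"b")]) == "torn tail"
assert scan_journal([(0, b"a"), ok(b"b")]) == "bit rot"
```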
* We tend to think of the correctness of any given consensus function as binary, and fail to appreciate the broad spectrum of safety requirements required for specific components of the consensus algorithm. In other words, we don't always take fully to heart that some consensus messages are more critical than others. For example, we might resend an ACK to the leader if we detect (via op number) that we've already logged the prepare for that op number. However, most implementations I've seen neglect to assert and double-check that we really do have exactly what the leader is asking us to persist before we ACK. It's a simple verification check to compare checksums before skipping the journal write and acking the duplicate prepare and yet we don't.
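The verification check described above is only a few lines; a sketch, with the journal reduced to a checksum map and all names hypothetical:

```python
# Before ACKing a duplicate prepare, assert that what we logged is
# byte-identical to what the leader is asking us to persist, rather
# than trusting the op number alone.
import zlib

journal = {}  # op_number -> checksum of the logged entry

def on_prepare(op, payload):
    crc = zlib.crc32(payload)
    if op in journal:
        # Duplicate: verify, don't assume. A mismatch means divergence.
        assert journal[op] == crc, f"op {op}: logged entry differs!"
        return "ack (duplicate, verified)"
    journal[op] = crc  # first time: log it, then ack
    return "ack"

assert on_prepare(1, b"x=1") == "ack"
assert on_prepare(1, b"x=1") == "ack (duplicate, verified)"
```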
* Another example, when we count messages from peers to establish quorum during leader election, we might count these messages without applying all the assertions we can think of on them. For example, are we asserting that all the messages we're counting are actually for the same leader election term? Or did we simply assume that we reset the array of messages being counted during the appropriate state transition sometime back in the past? The former is a much stronger guarantee, because it keeps you from double-counting stale leader election messages from past election phases, especially if these were successive (e.g. multiple rounds of elections because of split votes with no successful outcome). We should rather assume that the array we store these messages in, and that we're counting, could contain anything, and then assert that it contains exactly what we expect.
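The counting discipline above can be sketched like this, with field names made up: every vote is validated at counting time, instead of trusting that the vote array was reset during some earlier state transition:

```python
# Assert, don't assume: stale votes from earlier election rounds must
# never be counted toward this term's quorum, even if the array should
# have been reset. Field names are illustrative.
def count_quorum(votes, current_term, cluster_size):
    voters = set()
    for v in votes:
        assert v["term"] == current_term, "stale vote from a past term"
        voters.add(v["replica"])         # de-dupe votes per replica
    return len(voters) > cluster_size // 2

votes = [{"term": 5, "replica": 0}, {"term": 5, "replica": 2}]
assert count_quorum(votes, current_term=5, cluster_size=3)
```

Note the `set`: de-duplicating by replica identity also guards against counting the same peer twice after a message replay.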
* Our intuition around fault tolerance might suggest that local storage faults cannot propagate to destroy global consensus. Yet they do (https://www.youtube.com/watch?v=fDY6Wi0GcPs). We need to be really careful how we repair local faults so that we do so correctly in the context of the global consensus protocol.
* Finally, I think what also really helps is to have a completely deterministic consensus protocol Replica abstraction that you initialize with an abstract Message Bus, Journal and State Machine instance. This Replica instance can send messages to in-process or remote Replica instances, and has on_message() handlers for the various protocol messages that either change state and/or send messages but can never fail (i.e. no error union return type) because that amplifies the dimensionality of the code paths. For timeouts, don't use the system clock because it's not deterministic. Instead, use a Timeout abstraction that you step through by calling tick() on the Replica. With these components in place, you can build an automated random fuzzing test to simulate your distributed network and local storage fault models and test invariants along the way, outputting a deterministic seed to reproduce any random failures easily.
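The overall shape of that deterministic setup can be sketched in a few lines; everything below (class names, the fault model, the 0.5 drop rate) is illustrative:

```python
# Deterministic-simulation skeleton: replicas advance only through
# on_message() and tick(), "time" is a counter rather than the system
# clock, and all randomness flows from one seeded RNG, so any failing
# run replays exactly from its seed.
import random

class Replica:
    def __init__(self, ident):
        self.ident, self.ticks, self.log = ident, 0, []

    def tick(self):
        self.ticks += 1          # deterministic clock: no wall time

    def on_message(self, msg):
        self.log.append(msg)     # handlers mutate state; never fail

def simulate(seed, steps=100):
    rng = random.Random(seed)    # the single source of randomness
    replicas = [Replica(i) for i in range(3)]
    for _ in range(steps):
        r = rng.choice(replicas)
        if rng.random() < 0.5:   # randomly drop messages: fault model
            r.on_message({"op": rng.randrange(10)})
        r.tick()
    return [len(r.log) for r in replicas]

# Same seed, same world: runs are bit-for-bit reproducible.
assert simulate(seed=42) == simulate(seed=42)
```

A real simulator would also randomize delivery order, inject storage faults, and check protocol invariants after every step, but the seed-replays-the-world property is the part that makes those failures debuggable.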