I've been using Go more or less in every full-time job I've had since pre-1.0. It's simple for people on the team to pick up the basics, it generally chugs along (I'm rarely worried about updating to the latest version of Go), it has most useful things built in, and it compiles fast. Concurrency is tricky, but if you spend some time with it, it's nice to express data flow in Go. The type system is most of the time very convenient, if sometimes a bit verbose. Just all-around a trusty tool in the belt.
But I can't help but agree with a lot of points in this article. Go was designed by some old-school folks who maybe stuck a bit too hard to their principles, losing sight of practical conveniences. That said, it's a _feeling_ I have, and maybe Go would be much worse if it had solved all these quirks. To be fair, I've seen more willingness to fix quirks in the last few years; at some point I didn't think we'd ever see generics, or custom iterators, etc.
The points about RAM and portability seem mostly like personal grievances though. If those were better, that would be nice, of course. But the GC in Go is very unlikely to cause issues in most programs even at very large scale, and it's not that hard to debug. And Go runs on most platforms anyone could ever wish to ship their software on.
But yeah the whole error / nil situation still bothers me. I find myself wishing for Result[Ok, Err] and Optional[T] quite often.
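Something along these lines is expressible with generics today, though it never feels idiomatic without language support; a rough sketch (names made up, not a proposal):

```go
package sketch

// Option is a rough sketch of an optional value using generics.
type Option[T any] struct {
	val T
	ok  bool
}

func Some[T any](v T) Option[T] { return Option[T]{val: v, ok: true} }
func None[T any]() Option[T]    { return Option[T]{} }

// Get returns the value and whether it was present.
func (o Option[T]) Get() (T, bool) { return o.val, o.ok }

// Result is a rough sketch of Result[Ok, Err].
type Result[T any, E error] struct {
	val T
	err E
	ok  bool
}

func Ok[T any, E error](v T) Result[T, E]  { return Result[T, E]{val: v, ok: true} }
func Err[T any, E error](e E) Result[T, E] { return Result[T, E]{err: e} }

// Unwrap collapses back into Go's usual (value, error) shape.
func (r Result[T, E]) Unwrap() (T, error) {
	if r.ok {
		return r.val, nil
	}
	return r.val, r.err
}
```

A big part of why this stays clunky: type inference can't fill in E at an Ok(...) call site, since nothing in the arguments constrains it, so you end up spelling out both type parameters everywhere.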
S3 certainly saves a lot of hassle, but in certain use cases, it really is prohibitively expensive.
Has anyone tried self-hosted alternatives like MinIO or SeaweedFS? Or taken even more radical approaches?
How do you balance between stability, maintenance overhead, and cost savings?
> Will `uv` inspect my local GPU spec and decide what the best set of packages would be to pull from Pyx?
We actually support this basic idea today, even without pyx. You can run (e.g.) `uv pip install --torch-backend=auto torch` to automatically install a version of PyTorch based on your machine's GPU from the PyTorch index.
pyx takes that idea and pushes it further. Instead of "just" supporting PyTorch, the registry has a curated index for each supported hardware accelerator, and we populate that index with pre-built artifacts across a wide range of packages, versions, Python versions, PyTorch versions, etc., all with consistent and coherent metadata.
So there are two parts to it: (1) when you point to pyx, it becomes much easier to get the right, pre-built, mutually compatible versions of these things (and faster to install them); and (2) the uv client can point you to the "right" pyx index automatically (that part works regardless of whether you're using pyx, it's just more limited).
> Since this is a private, paid-for registry aimed at corporate clients, will there be an option to expose those registries externally as a public instance, but paid for by the company? That is, can I as a vendor pay for a Pyx registry for my own set of packages, and then provide that registry as an entrypoint for my customers?
We don't support this yet but it's come up a few times with users. If you're interested in it concretely feel free to email me (charlie@).
Does anyone have a comparison between this and OpenAI Codex? I find OpenAI's thing really good actually (vastly better workflow than Windsurf). Maybe I am missing out, however.
Similarly to the sibling I also use both. I let mise manage my uv version (and other tools) and let uv handle Python + PyPI Packages for me. Works great!
This is a nice setup. It's got tmux and fzf and rg and zoxide and clean-looking nvim. I'd recommend atuin, starship, bat, glow, duf, dogdns, viddy, gum/sesh, dust, btop, et al. if you don't have them; there's a long tail. The Awesome Terminal XYZ lists on GitHub have them all.
atuin is make-or-break; it's a bigger deal than zoxide, and being a coder without zoxide is like being an athlete wearing shoes for a different sport.
asciinema is a better way to do terminal videos.
It's weird that this is weird now: having your tools wired in used to be called "being a programmer". VSCode and Zed and Cursor and shit are useful additions to the toolbox, you gotta know that stuff by heart now too and you have to know which LLM to use for what, but these things are the new minimum, they aren't a replacement for anything. Even with Claude Code running hot at 4am when the PID controller is wide open, sometimes it's going to trash your tree (and if it doesn't, you've got it on too short a leash to be faster than gptel), and without magit? gl.
If you think you're faster than OP with stock Cursor? Get them to make a video of how to use an LLM with chops.
Erlang's semantics are deeply intertwined with the unique things the Erlang runtime does.
For example, the Erlang abstract machine does straight-line non-preemptable atomic execution within bytecode basic-blocks, with reduction-checking for yield exactly/only at stack-frame manipulation points (i.e. call/ret/tail-call.)
Those points are guaranteed to occur after O(1) reductions, because of an ISA design that contains no unbounded local looping primitives — i.e. no way to encode relative jumps with negative offsets. (Note that this design requirement — and not any functional-programming ideal — is why Erlang uses tail-calls for looping. It has to; there's no other way to do loops given the ISA constraints!)
This atomicity of bytecode basic-blocks is what guarantees that actors can be hard-killed without corrupting the abstract-machine scheduler they run on (they die at their next yield-point, with the scheduler in a coherent state). It's a fundamental difference between Erlang scheduling and JVM scheduling.
The JVM doesn't have this atomicity, and so you can't hard-kill a Java thread without corrupting the JVM. Instead, you can only softly "interrupt" threads — sending them a special "please die" signal they have to explicitly check for. This means that JVM languages can't support anything like Erlang's process links — i.e. JVM concurrency frameworks can't propagate failure downwards through a supervision hierarchy in a way that actually releases resources from long-running CPU-bound sub-tasks. This in turn means you can't reliably bound resource usage under high-concurrency scenarios, which means that, essentially, all the things that people get excited about adding to Java with Akka, Loom, etc. don't actually do much to help the use-cases they attempt to address.
This last is personal experience, by the way. My company develops backend server software in both Erlang (Elixir) and Java. We actually tried Loom as a way of fixing some of the robustness-under-concurrency problems with the JVM; but the problems are much more fundamental than just adding features like virtual threads can resolve.
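To make the "please die" signal concrete, here's roughly what cooperative cancellation looks like in Go, which shares this limitation with the JVM (a sketch for illustration, not our code):

```go
// Sketch of cooperative cancellation: the worker only stops when it
// explicitly checks the cancellation signal. A tight CPU-bound loop that
// skips the check simply cannot be stopped from the outside.
package main

import (
	"context"
	"fmt"
	"time"
)

func worker(ctx context.Context) {
	for i := 0; ; i++ {
		select {
		case <-ctx.Done(): // the explicit "please die" check
			fmt.Println("worker exiting after", i, "iterations")
			return
		default:
		}
		_ = i * i // stand-in for CPU-bound work
	}
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	go worker(ctx)
	time.Sleep(10 * time.Millisecond)
	cancel() // only effective because worker polls ctx.Done()
	time.Sleep(10 * time.Millisecond)
}
```

Erlang doesn't need that explicit check, because the scheduler can only ever stop an actor at a yield point, where the runtime's state is guaranteed to be coherent.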
It's "tier 5", I've had an account since the 3.0 days so I can't be positive I'm not grandfathered in, but, my understanding is as long as you have a non-trivial amount of spend for a few months you'll have that access.
(fwiw, for anyone curious how to implement it: it's the 'moderation' parameter in the JSON request you'll send. I missed it for a few hours because it wasn't in DALL-E 3.)
> I can feel my usage of Google search taking a nosedive already.
Conveniently, Gemini is the best frontier model for everything else; they're very interested, and well positioned (if not already there), to be the best at deep research too. Let's check back in 3-6 months.
People always say that, but I don't see why I would want that. I like the idea of a browser engine in Rust, and I actually hope it might eventually replace Chrome in 10 years, because it's better. A browser engine is not an opinionated thing, or shouldn't be anyway, so why would I want "alternatives" for it? I would rather have one engine and several good browsers, which are ultimately opinionated. Meanwhile, we do have more than one solid engine, and, uh, let's say 0.8 good browsers.
I like when repos say "not implemented yet" or "to-do" or "working on" and the last commit was years ago. Makes me feel better about not going back to the to-dos I drop throughout my code. (Not meaning to throw shade on this author, just finding it comforting.)
Well, I had to use one when I was at school (I'm 53). I do note that the headmaster of my prep school (British - aged 10-13) advocated those smart new fibre tipped things.
He (the head) was formerly a WWII Artillery officer - a Major - who lost an eye in action in Africa.
He (and the rest of the staff) also taught us lot how to use cutlery etc., but still he insisted that a fibre-tipped pen was the future. As it turns out, Biros (ball-point pens) replaced most ink-related writing, not fibre tips.
I doubt you use a "sharpie" in favour of an ink pen.
I do enjoy using an edged pen and calligraphy in general but mostly can't be arsed these days. It might become a lost art but I'm not too sure we will have lost too much. It is simply an art form and art is art or arse.
I actually spent quite a while in grad school thinking about the last parsec problem, and although I'm not in the field anymore I still think about it from time to time. (My thesis was on gravitational dynamics.)
My perception of the field (which is now about a decade out of date, so take it with a grain of salt), is that there is quite a bit of skepticism about invoking exotic physics to solve the last parsec problem. Galaxies are generally pretty messy places, and the centers of galaxies are especially messy, so it's hard to know if you've correctly modeled all the relevant physics. A lot of astronomers aren't convinced that there really is a last parsec problem.
The main "standard" approach to solve the last parsec problem is from scattering stars (which the article mentions). Basically, every now and then stars from the galaxy wander close to the orbit of the black hole binary and then get slingshotted out of the system. This removes energy from the orbit, and causes the black hole binary to shrink. The problem with this approach if you do a naive calculation is that the stars have to come from a particular set of directions, called the "loss cone" in the jargon. And since the orbits of stars in galaxies are probably fairly static, once a star gets kicked out of the loss cone, it doesn't come back. So over time the loss cone empties and the black hole orbit stops shrinking. The question is, does the orbit shrink far enough before the loss cone empties, and the answer to this question has generally been "no."
The way around this is to question how static the orbits of stars in galaxies really are. One of the more important papers on the topic found that if an elliptical galaxy is sufficiently triaxial (that is, sufficiently non-spherical), then interactions between stars in the galaxy can repopulate the loss cone and cause the orbit to keep shrinking. But as I vaguely recall, not everyone was convinced by that result.
I personally have had some ideas that galactic tides might contribute, especially right after the merger, before all the orbits have had time to thermally relax. But I'm not in the field anymore and haven't really had time to model this idea properly and see if it would work.
By using JSON mode, GPT-4(o) has been able to do this reliably for months (100k+ calls).
We use GPT-4o to build dynamic UI+code[0], and almost all of our calls are using JSON mode. Previously it mostly worked, but we had to do some massaging on our end (backtick removal, etc.).
With that said, this will be great for GPT-4o-mini, as it often struggles/forgets to format things as we ask.
Note: we haven't had the same success rate with function calling compared to pure JSON mode, as function calling seems to add a level of indirection that can reduce the quality of the LLM's output. YMMV.
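For anyone who hasn't tried it, JSON mode is just one request parameter on the chat completions call; a minimal sketch in Go with raw HTTP (prompt and error handling made up for illustration):

```go
// Minimal sketch of enabling JSON mode on the chat completions endpoint.
// Plain JSON mode is response_format {"type": "json_object"} and the prompt
// must mention JSON; the newer structured-outputs feature instead uses
// {"type": "json_schema"} with an attached schema.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"os"
)

func main() {
	payload, _ := json.Marshal(map[string]any{
		"model": "gpt-4o",
		"messages": []map[string]string{
			{"role": "system", "content": "Reply with a single JSON object only."},
			{"role": "user", "content": "List three primary colors as {\"colors\": [...]}."},
		},
		"response_format": map[string]string{"type": "json_object"},
	})

	req, _ := http.NewRequest("POST", "https://api.openai.com/v1/chat/completions", bytes.NewReader(payload))
	req.Header.Set("Authorization", "Bearer "+os.Getenv("OPENAI_API_KEY"))
	req.Header.Set("Content-Type", "application/json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var out struct {
		Choices []struct {
			Message struct {
				Content string `json:"content"`
			} `json:"message"`
		} `json:"choices"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		panic(err)
	}
	fmt.Println(out.Choices[0].Message.Content) // a parseable JSON string
}
```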
CPU temperatures can swing from 40C to 90C and back in a matter of seconds as loads come and go. Modern fan control algorithms have delays and smoothing for this reason.
If you had a steady state load so stable that you could tune around it, setting a fan curve is rather easy and PID is overkill. For normal use where you’re going between idle with the occasional spike and back, trying to PID target a specific temperature doesn’t really give you anything useful and could introduce unnecessary delays in cooling if tuned incorrectly.
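For context, this is roughly all a temperature-targeting PID fan loop does; a minimal sketch with made-up gains and hypothetical readTemp/setFanDuty hooks:

```go
// Sketch of a PID loop targeting a CPU temperature setpoint. The gains,
// setpoint, and sensor/actuator hooks are all made up for illustration.
package main

import (
	"fmt"
	"time"
)

const (
	setpoint = 70.0 // target CPU temperature, °C
	kp       = 4.0  // proportional gain
	ki       = 0.5  // integral gain
	kd       = 1.0  // derivative gain
	dt       = 1.0  // seconds between control updates
)

func clamp(x, lo, hi float64) float64 {
	if x < lo {
		return lo
	}
	if x > hi {
		return hi
	}
	return x
}

func pidLoop(readTemp func() float64, setFanDuty func(pct float64)) {
	var integral, prevErr float64
	ticker := time.NewTicker(time.Duration(dt * float64(time.Second)))
	defer ticker.Stop()
	for range ticker.C {
		err := readTemp() - setpoint                // positive when running hot
		integral = clamp(integral+err*dt, -50, 50)  // crude anti-windup
		deriv := (err - prevErr) / dt
		prevErr = err
		setFanDuty(clamp(kp*err+ki*integral+kd*deriv, 0, 100))
	}
}

func main() {
	// Stub sensor/actuator; a real controller would read hwmon and write PWM.
	temp := 55.0
	pidLoop(
		func() float64 { return temp },
		func(pct float64) { fmt.Println("fan duty %:", pct) },
	)
}
```

If the gains or the update interval are badly chosen, this loop lags behind the 40C-to-90C spikes described above, which is exactly the "unnecessary delays in cooling if tuned incorrectly" concern.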
I suspect the conversations went something like this:
Film industry lawyer: Tell us everyone who pirated our movies.
Frontier lawyer: We've told you everyone we know of.
Film industry lawyer: What about those people on reddit bragging about getting away with it?
Frontier lawyer: We don't have any way of knowing if those are actual customers, let alone if they're telling the truth. If you have a specific IP you want us to look into, we'd be happy to help.
(later)
Film industry lawyer: Give us the IPs of these users who made comments bragging about getting away with pirating.
The other anomaly is that Jensen talks all the time with ICs doing the work. I was only a couple of months into working at the company before I got to have a face to face discussion with him about a project I was working on. I have seen many mid-level engineers (IC4-IC5) give him deep dives in these group meetings. It can be very stressful being under Jensen's microscope, but it dramatically reduces the "let's show pretty slides to the CEO to show him everything is good" BS. I was previously at a startup 1/100th of this size where the CEO was far less connected to rank and file engineers, so it has been a really nice change.
So are they actually increasing, or are we just figuring them out better, living longer, and keeping more kids alive? I've got a spinal arthritis condition that didn't start until I was in my mid-30s; it's not a new condition, but the thinking around it has changed substantially.
Edit: the consultant I saw said I probably wouldn't have been diagnosed 20 years ago when she started. I had trouble convincing my doctor it could be this condition. I'd never have known about it if I hadn't mentioned it to my mum, who told me my sister (who I don't have contact with) has it quite severely, and that it took her 10 years to get a diagnosis and access to helpful drugs.
MRSK is like something pulled straight from my dreams. I'm a longtime Capistrano user who loves Docker for reproducible environments but still hasn't mastered using it in prod, because I'm a dinosaur who just wants servers and a load balancer.
I'm gearing up to deploy a Next.js project in the next few months, and while I appreciate the claims of Function this and Edge that, I feel infinitely more comfortable with a couple of small server instances, especially early on when my focus should be testing my product, not optimizing my cloud infrastructure for scale that might never be needed. I'm excited to give MRSK a shot and feel grateful to live in a time when there are so many good choices.
"Any sufficiently complicated concurrent program in another language contains an ad hoc informally-specified bug-ridden slow implementation of half of Erlang." - Virding's Law
> Most of the people I saw get successful were people doing like... extra stuff outside of the classes. They were taking what they learned and applying it.
Is this a knock on Lambda School, though?
You can go to MIT but if you never apply your learnings you won't be a good engineer.
From my recent experience developing a cryptographic library with WebAssembly for Nodejs and the browser [1], I have to say that once someone needs memory allocation, passing typed arrays from JS to WASM (I did not manage to make the opposite direction work), etc., it quickly becomes obvious that there is a lack of documentation and a level of build-system fragmentation that only hurts community growth, IMO. If I had been less motivated to finish the undertaking, I would have just given up and gone with libsodium-wrappers or tweetnacljs.
I started with clang targeting wasm32-unknown-unknown-wasm as my build setup, but that just did not work with malloc/free unless I targeted WASI, and if I targeted WASI I could not run the module in the browser except with a polyfill that was hard to set up with a C/TS stack. I ended up with emscripten because it imports the module with all the right helper functions, but there I was getting memory errors in debug mode but not in production. I needed to pass the Uint8Arrays from JS to WASM in a very specific way (via HEAP8), otherwise the pointers did not work properly, and I could not find this in the documentation. I only found out from a Stack Overflow comment somewhere, after two weeks of brain melting (why would Uint8Array(memory.buffer, offset, len).byteOffset not work?).
After the project compiled successfully and the JS was giving correct results, I decided to compile with the -s SINGLE_FILE flag to make the package as portable as possible, but this increased the size significantly because it encodes the WASM bytes as base64, which is then decoded back into a WASM module from JS. A package manager for a compiled language that outputs cross-environment JS and solves these problems automagically would be, IMO again, a game changer for the ecosystem. I believe this is what AssemblyScript tries to achieve, but I honestly could not make it work for my project after experimenting with it for one or two days.
I get that a lot of the problems come from the incompatibility of browser and Nodejs APIs and the different agendas of the various stakeholders, but I would very much like to see these differences reconciled so that we can have a good developer experience for cross-platform WASM modules. That would lead to more high-performance components for JS, a programming language that affects so many people.
There's a trend that writing is for readers, rather than dictated by company PR. If you are writing for readers, then follow the usual rules for proper nouns where the word is a not-really-initialism which was never meant to be read as its individual letters. PAM™ was always Pam, not P.A.M.
The building and launching approach makes intuitive sense, but it runs against the feudal nature of large tech companies.
This point is perhaps clearer if we take Google as an example instead. Mid-level engineers attach themselves to a project or initiate it, then guide it past a certain point. Once promotion is secured, the project is dropped by its initiators. Staffed with lower-status engineers who were enrolled without the prospect of promotion, the project begins a gradual rot and the Google graveyard deepens.
The key concept here is that the projects are launched and fueled at the start explicitly for the chance of promotion. In the strategy you describe, with Facebook higher-ups mandating the top-down creation of various projects as part of the global strategy of the company, it would become risky for mid-level engineers to ever get involved. They would not have their names connected with the success of the projects as higher-ups would have imprinted themselves on it beforehand, but would be connected to its failure, whereas in the Google scene the 'moonshot' culture is well-established and limits this sort of blame. Due to massive opportunity costs, Facebook employees would be better off trying to get promoted through the traditional channels, or to quit and pursue the promising ideas themselves and hope to get bought out eventually.