Honestly, I feel like Julia might well beat Mojo (or whatever else) to the punch, sooner or later. It has facilities and supporting infrastructure for a lot of the scientific and data-handling tasks surrounding ML, if not for compiling and dispatching kernels (where XLA reigns supreme over anything in the CUDA ecosystem!). For example, Bayesian probabilistic programming with Turing.jl is virtually unmatched in Python. It's been a while since I looked at Lux.jl for XLA integration, but I reckon it could be incredibly useful. And as long as LLMs, and RLVR training of them, continue to improve, we may eventually be able to translate loads of existing PyTorch code.
But the first thing that gave me pause about Julia? They sort of pivoted to saying "we're general purpose," but the whole index-starting-at-one thing really belies that -- these days, that's pretty much the province of specialty languages.
This is 100% my concern with Julia. Until recently at least, there were a ton of fundamental correctness issues. They may have been fixed since, but it certainly does not inspire trust.
You're not supposed to admit it, but I never cared for Dijkstra's arguments on the matter. The same goes for his GOTO tirade, although that one has been somewhat distorted by time. Pascal uses 1-based indexing, as do Fortran, R, and Mathematica. If anything, it seems there's a longer tradition of 1-based indexing in scientific computing. In this view, I must agree insofar as I don't think the Julia people are serious about their "general purpose" stance whatsoever. But hey, these are merely idiosyncrasies. People say multiple dispatch is the shit, but it's just one piece of the puzzle with Julia. How they managed to engineer a solid foundation with semantics like that, without unnecessarily sacrificing performance -- I don't think they get enough credit for that from programming-language folks.
> I never cared for Dijkstra's arguments on the matter.
It resonates with me, as does little-endian. There are evergreen arguments about both of these, but they stand out to me as two of the few places where I appear to be in sync with the majority.
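For anyone who hasn't read it, Dijkstra's case (the EWD831 note) boils down to half-open intervals [a, b) composing cleanly with zero-based indices. A minimal Python sketch of the two properties he leans on:

```python
# Dijkstra's argument (EWD831), sketched: with 0-based indices and
# half-open ranges [a, b), the length is simply b - a, and adjacent
# ranges concatenate with no gaps, overlaps, or +1/-1 corrections.

xs = list(range(10))  # indices 0..9

# Length of a slice falls directly out of its bounds.
a, b = 3, 7
assert len(xs[a:b]) == b - a

# Splitting at any midpoint m recombines exactly: [a, m) + [m, b) == [a, b).
m = 5
assert xs[a:m] + xs[m:b] == xs[a:b]

# The whole sequence is [0, n): an index names "how many elements
# precede it", which is the property Dijkstra favoured.
assert xs[0:len(xs)] == xs
```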
> Pascal is using 1-ord, Fortran, R, Mathematica. If anything, it seems there's a longer tradition of 1-ord in scientific computing.
Actually, Pascal let you define the starting index yourself, but, historically, COBOL (business) and FORTRAN started at one, and these days it's Matlab (engineering), as well as R, Mathematica, and Julia. But, really, my point was that Julia is hyped to take over from Python, and if that's the focus, then you should think long and hard about which features of Python to change -- and, well, obviously that wasn't really the direction they came from.
> semantics like that, without unnecessarily sacrificing performance—I don't think they get enough credit for that from programming guys.
It's good work, but at the same time a bit overblown, IMO. For example, the creators claim homoiconicity, but that stretches the definition of the word to the point where Python would count as homoiconic as well, and it obviously isn't.
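For what it's worth, that comparison can be made concrete: Python can also parse, rewrite, and re-execute its own code as data via the standard `ast` module, yet nobody calls it homoiconic, because source code isn't a native data literal of the language. A hedged sketch:

```python
import ast

# Python can treat its own source as a data structure: parse it,
# transform the tree, and compile it back. This is metaprogramming,
# but the program's syntax is not itself a first-class data literal,
# which is why Python is not considered homoiconic.

tree = ast.parse("1 + 2 * 3", mode="eval")

class DoubleConstants(ast.NodeTransformer):
    """Rewrite every numeric literal n into 2*n."""
    def visit_Constant(self, node):
        if isinstance(node.value, (int, float)):
            return ast.Constant(value=node.value * 2)
        return node

new_tree = ast.fix_missing_locations(DoubleConstants().visit(tree))
print(ast.unparse(new_tree))                    # -> 2 + 4 * 6
print(eval(compile(new_tree, "<ast>", "eval")))  # -> 26
```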
As far as the multiple dispatch goes, that's actually not that difficult of a puzzle piece when you are compiling ahead of time with unboxed objects.
Honestly, I'm not familiar with it. I had only played with RxInfer, mostly to try the so-called "message-passing" paradigm. My grasp of probability is really lacking; in fact, I picked up prediction markets and Julia to get better at it.
If you don't mind me asking, what's the deal with NumPyro -- why did you choose to work with it?