Although it's possible for someone with type 1 to have an A1C below 6%, it's very difficult. I've known a few people like that, and they are all super users. It also depends somewhat on the lab running the A1C test, personal biology (A1C isn't affected only by blood glucose levels), etc. 6-6.5% is superb control! The parent should be very proud. 6.5-7% is still very good; I haven't looked at the distribution of A1Cs for T1D recently, but that would be much better than the median, which I think is above 8%.
Especially with kids, it's difficult since you don't control how much they decide to eat, which makes pre-bolusing meals challenging (part of why reducing carbs tends to be helpful is that it reduces the need to pre-bolus and makes it less risky, since you need less up-front meal insulin).
I didn't mean to say it's not superb control for someone with T1D, only that there are likely still some negative health consequences at 6-7%, and that exercise after carbohydrates is one mechanism of potentially getting some additional marginal improvement.
This is good advice for pre-diabetics and type 2 diabetics, but in type 1 diabetes exercise after a meal often makes things worse. It makes insulin dosing less predictable.
You can die on the order of hours to days, not of high blood sugar per se, but of the low insulin causing the diabetic ketoacidosis the parent comment mentions.
It would be odd for a faulty sensor to turn an otherwise bad day into DKA and death, though. The sensor would need to be wildly off for hours and the user would have to not notice. Insulin delivery would need to be paused or greatly reduced for many hours. There are additional therapies like SGLT-2 inhibitors that could make this more likely, but they usually aren't used with T1D precisely because they break the normally very strong correlation between inadequate insulin levels (leading to DKA) and high blood glucose.
Even though I can’t think of an easy way for a false low(s) to turn into lethal DKA, that doesn’t mean it didn’t happen. Abbott sells a lot of CGMs. It could have been a contributing factor to several deaths even if the fault would almost always not be a significant issue.
My experience has been there is no correlation between skill at teaching and skill at research; maybe the two are even anti-correlated. To some extent, this is an artifact of the selection process for professors, but I think it's partly because there's a real tradeoff between spending effort on research vs teaching.
In some cases, an excellent researcher even writes cogent papers but is absolutely abysmal at lecturing and in-person teaching.
Peers are very important, but from talking to others, it's harder to know where you will get good peers than you would think. Even 1st tier universities will have majors dominated by students whose primary interest is in maximum grades with minimum work and where cheating is rampant. You've got to either get lucky (I did) or put in some leg work to find smart students who are actually interested in learning and doing things right.
I think how much rote memorization is encouraged or required is strongly dependent on the field. Pre-med students will sometimes memorize their way through calculus; a professor I knew once described it as "grimly impressive".
Teaching is a skill like any other. While I don't think the two are anti-correlated, you're going to find good teachers and bad teachers, no matter how good they may be at their other duties.
And I would gather you find more bad teachers than good, but that's true of many spaces from IT to sports.
Making invalid states unrepresentable may be a great idea or terrible idea depending on what you are doing. My experience is all in scientific simulation, data analysis, and embedded software in medical devices.
For scientific simulations, I almost always want invalid state to immediately result in a program crash. Invalid state is usually due to a bug. And invalid state is often the type of bug which may invalidate any conclusions you'd want to draw from the simulation.
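To make that concrete, here is a minimal sketch of the fail-fast style I mean (the names and the toy update rule are invented, not from any real codebase):

    import numpy as np

    def compute_derivative(state: np.ndarray) -> np.ndarray:
        # Stand-in for the real physics; simple exponential decay here.
        return -state

    def advance_state(state: np.ndarray, dt: float) -> np.ndarray:
        new_state = state + dt * compute_derivative(state)
        if not np.all(np.isfinite(new_state)):
            # Invalid state almost always means a bug or an unsuitable scheme;
            # any conclusions drawn past this point would be suspect, so die loudly.
            raise RuntimeError("non-finite values in simulation state")
        return new_state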
For data analysis, things are looser. I'll split data into data that's successfully cleaned to where invalid state is unrepresentable, and dirty data that I then inspect manually to see whether I'm wrong about what is "invalid" or I'm missing a cleaning step.
I don't write embedded software (although I've written control algorithms to be deployed on it and have been involved in testing that the design and implementation are equivalent), but while you can't exactly make every invalid state unrepresentable, you definitely don't punch giant holes in your state machine. A good design has clean state machines, never has an uncovered case, and should pretty much only reach a failure state due to outside physical events or hardware failure. Even then, if possible, the software should provide information that lets someone intervene to fix certain physical issues. I've seen device RMAs where the root cause was a failed FPU; when your software detects the sort of error that might be a hardware failure, sometimes the best you can do is bail out very carefully. But you want these unknown failures to be a once-per-thousands-or-millions-of-device-years event.
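The flavor is roughly this (a toy sketch with made-up states and events, not real device code):

    from enum import Enum, auto

    class State(Enum):
        IDLE = auto()
        DELIVERING = auto()
        FAULT = auto()

    class Event(Enum):
        START = auto()
        DONE = auto()
        HW_ERROR = auto()

    # Every (state, event) pair is either listed here or falls through to FAULT;
    # there is no "shouldn't happen" path that silently does nothing.
    TRANSITIONS = {
        (State.IDLE, Event.START): State.DELIVERING,
        (State.DELIVERING, Event.DONE): State.IDLE,
    }

    def step(state: State, event: Event) -> State:
        if event is Event.HW_ERROR:
            # Suspected hardware failure: bail out carefully to a safe state.
            return State.FAULT
        return TRANSITIONS.get((state, event), State.FAULT)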
Sean is writing mostly about distributed systems, where it sounds like it's not a big deal if certain things are wrong or there's not a single well-defined problem being solved. That's very different from the domains I'm used to, so the correct engineering in that situation may more often be to allow invalid state. (EDIT: and it also seems very relevant that there may be multiple live systems updated independently, so you can't just force-upgrade everything at once. You have to handle software incompatibilities more gracefully.)
> _impossible_ for your program to transition into an invalid state at runtime
Not the case for scientific computing/HPC. Often HPC codebases will use numerical schemes which are mathematically proven to 'blow up' (produce infs/nans) under certain conditions even with a perfect implementation - see for instance the CFL condition [1].
The solution to that is typically changing to a numerical scheme more suited to your problem or tweaking the current scheme's parameters (temporal step size, mesh, formula coefficients...). It is not trivial to find the correct settings before starting. Situations like a job that runs fine for 2 days and then suddenly blows up are not particularly rare.
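For anyone unfamiliar with [1], the 1D advection version is just a bound on u*dt/dx, and in practice you end up writing runtime checks along these lines (an illustrative sketch only):

    import numpy as np

    def cfl_number(u_max: float, dt: float, dx: float) -> float:
        # Courant number for 1D advection; explicit schemes typically need this below ~1.
        return u_max * dt / dx

    def field_is_sane(field: np.ndarray) -> bool:
        # A blow-up shows up as infs/NaNs; checking after each step is cheap and lets
        # you checkpoint and stop instead of burning two more days of compute.
        return bool(np.all(np.isfinite(field)))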
In those situations, the functions that contain a singularity should return an Either monad, and in order to bring the resulting data back into the bloodstream of the program, you have to deal with that potential singularity. Unfortunately, scientific computing seems stuck in the age of the dinosaurs in its tooling choices, and much of the advancement in type systems over the last 40 years is nowhere to be seen. I always found that curious, given its adjacency to academia.
Why? You're just going to unwrap that monad and promptly crash. It's unrecoverable; it doesn't need to go back into "the bloodstream of the program." No amount of static typing will turn this runtime error into a compile-time error, so it doesn't really matter how you express it with types.
> You're just going to unwrap that monad and promptly crash. It's unrecoverable
You handle the error gracefully. It's not "unrecoverable" in the sense of an incorrect memory read that arises from a logic error in the program. It's an anticipatable behavior in a well-defined system of computations and should be treated as such. Simply crashing is extremely sloppy programming in this case; it's not a formal equivalent to discarding an unusable input.
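A rough sketch of what I mean, in plain Python for readability (the idea maps onto proper sum types in languages that have them; the update rule is a stand-in):

    from dataclasses import dataclass
    from typing import Union

    import numpy as np

    @dataclass
    class BlowUp:
        step: int
        reason: str

    def run_steps(field: np.ndarray, n_steps: int, dt: float) -> Union[np.ndarray, BlowUp]:
        for i in range(n_steps):
            field = field + dt * (-field)  # stand-in for the real update
            if not np.all(np.isfinite(field)):
                return BlowUp(step=i, reason="non-finite values in field")
        return field

    result = run_steps(np.ones(16), n_steps=1000, dt=0.01)
    if isinstance(result, BlowUp):
        # Checkpoint, log, shrink dt, resubmit the job... anything but an opaque crash.
        print(f"blew up at step {result.step}: {result.reason}")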
> No amount of static typing will turn this runtime error into a compile-time error
On the contrary. Anything capable of statically checking dependent types can turn any runtime issue into a compile-time issue. Up to and including requiring proof that a function's domain is respected according to all paths that call it as a facet of the type system, and this domain can be inferred by the operations within the function itself.
> Does such a proof exist in this context? Or are we writing fanfiction about the problem domain now?
"Often HPC codebases will use numerical schemes which are mathematically proven to 'blow up' (produce infs/nans) under certain conditions even with a perfect implementation"
Yes? Function domains are trivially enforceable through the type system when you have dependent types. Even the n-dimensional case of the CFL condition is a simple constraint you can express over a type.
Have you ever actually done any work with dependent types? I'm not sure why you would think something so basic as enforcing a function domain (which isn't the same thing as a problem domain, by the way) would be "fanfiction" otherwise. I highly recommend spending a few months actually working with them, there are plenty of good languages floating around these days.
> The correct behaviour in that context is to terminate the program
At worst it's to leave the thread of execution, which is distinct from crashing; crashing is what you asserted above and what my core point revolves around.
> Function domains are trivially enforceable through the type system when you have dependent types.
> It is not trivial to find what the correct settings will be before starting. Encountering situations like a job which runs fine for 2 days and then suddenly blows up is not particularly rare.
Somehow I doubt that the "not trivial" problem of finding correct settings before starting suddenly becomes "trivial" when you throw dependent types at it.
> (which isn't the same thing as a problem domain)
Yeah bud, I'm aware. I meant what I said. You're supposing that it's trivial to determine what the domain of the function in question is, when the original post explicitly said otherwise. This is a falsehood about the problem domain.
> At worst it's to leave the thread of execution
Leave the thread and then do what? The stated solution to the problem, according to the original post, is to restart the program with new, manually-tweaked parameters, or to straight-up modify the code:
> The solution to that is typically changing to a numerical scheme more suited for your problem or tweaking the current scheme's parameters
I don't think the article is referring to that sort of issue, which sounds fundamental to the task at hand (calculations, etc.). To me it's about making the code flexible with regard to future changes/requirements/adaptations/etc. I guess you could consider Y2K an example of this issue, because the problem with 6-digit date codes wasn't their practicality at handling dates in the '80s/'90s, but dates that "spanned" beyond 991231, i.e. 000101.
I knew some who were bad at math. Asian immigrant math test scores are roughly 0.5-1 standard deviation higher than white Americans'. That's noticeable when comparing groups of people, but it still leaves a lot of Asian immigrants who are not good at math.
There is no royal road. If all your kids are biologically yours, you and all your family are good at math, and you marry someone from a similar family, you can stack the deck maybe 95/5 in favor of your kid being good at math? But that option is already off the table if you lack that talent. And there are other things you should probably prioritize first!
Also, being far enough from Europe that a huge amount of talent decided the U.S. was a better bet for getting away from the Nazis. And then taking in a large number of former Nazi scientists post-war as well.
The article mentions but underrates the fact that post-war the British shot themselves in the foot economically.
As far as I'm aware, the article is kind of wrong that there wasn't a successful British computing industry post-war, or at least it's not obvious that its eventual failure has much to do with differences in basic research structure. There was a successful British computing industry at first, and it failed a few decades later.
Fair point! That's a great technical success; I didn't realize Arm was British.
If the main failure of British companies is that they don't have U.S. company market caps, it seems more off base to blame this on government science funding policy instead of something else. In almost every part of the economy, U.S. companies are going to be larger.
My understanding is that the British "Arm" is just a patent holder now. I don't think they actually make anything. Companies outside the UK are the ones that actually make the chips licensed from the Arm designs.
At work, I find type hints useful as basically enforced documentation and as a weak sort of test, but few type systems offer decent basic support for the sort of things you would need to do type driven programming in scientific/numerical work. Things like making sure matrices have compatible dimensions, handling units, and constraining the range of a numerical variable would be a solid minimum.
I've read that F# has units, Ada and Pascal have ranges as types (my understanding is these are runtime enforced mostly), Rust will land const generics that might be useful for matrix type stuff some time soon. Does any language support all 3 of these things well together? Do you basically need fully dependent types for this?
Obviously, with discipline you can work to enforce all these things at runtime, but I'd like it if there was a language that made all 3 of these things straightforward.
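For reference, this is the sort of thing I end up hand-rolling at runtime today (an illustrative sketch, not a real library):

    from dataclasses import dataclass

    import numpy as np

    @dataclass(frozen=True)
    class Metres:
        value: float

        def __post_init__(self):
            if not (0.0 <= self.value <= 1.0e6):  # range constraint, checked at runtime
                raise ValueError("length out of range")

        def __add__(self, other: "Metres") -> "Metres":
            return Metres(self.value + other.value)  # unit-safe: Metres + Metres only

    def matmul_checked(a: np.ndarray, b: np.ndarray) -> np.ndarray:
        # Shape compatibility, also only enforced at runtime.
        if a.shape[1] != b.shape[0]:
            raise ValueError(f"incompatible shapes {a.shape} and {b.shape}")
        return a @ b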
I suspect C++ still comes the closest to what you’re asking for today, at least among mainstream programming languages.
Matrix dimensions are certainly doable, for example, because templates representing mathematical types like matrices and vectors can be parametrised by integers defining their dimension(s) as well as the type of an individual element.
You can also use template wizardry to write libraries like mp-units¹ or units² that provide explicit representations for numerical values with units. You can even get fancy with user-defined literals so you can write things like 0.5_m and have a suitably-typed value created (though that particular trick does get less useful once you need arbitrary compound units like kg·m·s⁻²).
Both of those are fairly well-defined problems, and the available solutions do provide a good degree of static checking at compile time.
IMHO, the range question is the trickiest one of your three examples, because in real mathematical code there are so many different things you might want to constrain. You could define a parametrised type representing open or closed ranges of integers between X and Y easily enough, but how far down the rabbit hole do you go? Fractional values with attached precision/error metadata? The 572 specific varieties of matrix that get defined in a linear algebra textbook, and which variety you get back when you compute a product of any two of them?
I'd be happy with just ranges on floats being quick and easy to specify, even if the checking is at runtime (which it seems like it almost has to be). I can imagine how to attach precision/error metadata when I need it with custom types, as long as operator overloading is supported. Similarly for specialized matrices, normal user-defined types and operator overloading get tolerably far. Although I can understand how different languages may be better or worse at it: multiple dispatch might be more convenient than single dispatch, operator overloading is way more convenient than not having it, etc.
A lot of my frustration is that the ergonomics of these things tend not to be great even when they are available. Or the different pieces (units, shape checking, ranges) don't compose together easily because they end up as 3 separate libraries or something.
Crystal certainly supports that kind of typing, and being able to restrict bounds based on dynamic elements recently landed in GCC, making it simple in plain C as well.
That's a hard one because it depends on what sort of details you let into types, and maybe even on the specific type T. Not saying what I'm asking for is easy! Units and shape would be preserved in all cases I can think of. But with subranges, (x - x) may have a super-type of x... or if the type system is very clever, the type of (x - x) may be narrowed to a single value :p
And then there's a subtlety where units might be preserved, but x may be "absolute" whereas (x - x) is relative, and you can do operations with relative units that you can't with absolute units and vice versa. Like the difference between x being a position on a map and delta_x being movement from a position. You can subtract two positions on a map in a standard mathematical sense but not add them.
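That absolute/relative split is basically the affine point vs. vector distinction; a tiny sketch of how I'd encode it with made-up types:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Displacement:
        dx: float

        def __add__(self, other: "Displacement") -> "Displacement":
            return Displacement(self.dx + other.dx)  # relative + relative is fine

    @dataclass(frozen=True)
    class Position:
        x: float

        def __sub__(self, other: "Position") -> Displacement:
            return Displacement(self.x - other.x)  # absolute - absolute = relative

        def __add__(self, other: Displacement) -> "Position":
            # Deliberately no Position + Position: that operation isn't defined.
            if not isinstance(other, Displacement):
                raise TypeError("can only add a Displacement to a Position")
            return Position(self.x + other.dx)  # absolute + relative = absolute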
I use both DuckDB and SQLite at work. SQLite is better when the database gets lots of little writes, when the database is going to be stored long term as the primary source of truth, when you don't need to do lots of complicated analytical queries etc. Very useful for storing both real and simulated data long term.
DuckDB is much nicer in terms of types, built-in functions, and syntax extensions for analytical queries, and it also happens to be faster for big analytical queries, although most of our data is small enough that it doesn't make a big difference over SQLite (it's still a big improvement over using Pandas even for small data, though). DuckDB only just released versions with backwards-compatible data storage, so we don't yet use it as a source of truth. DuckDB has really improved in general over the last couple of years and finally hit 1.0 three months ago, so depending on what version you tried out on tiny data, it may be better now. It's also possible to use DuckDB to read and write SQLite databases if you're concerned about interop and long-term storage, although I haven't done that myself, so I don't know what the rough edges are.
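For anyone who hasn't tried it, a lot of the ergonomic win is just being able to write plain SQL directly over a DataFrame; a trivial made-up example:

    import duckdb
    import pandas as pd

    df = pd.DataFrame({"label": ["a", "a", "b"], "value": [1.0, 2.0, 3.0]})

    # DuckDB can query an in-scope DataFrame by name and hand back a DataFrame.
    means = duckdb.sql(
        "SELECT label, avg(value) AS mean_value FROM df GROUP BY label"
    ).df()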
Agreed. I work in medical device engineering, and >50% of our time relates to simulation in some way. A big part of our responsibilities is designing, implementing, or reimplementing models of various subsystems relevant to our devices so we can do preliminary estimates of safety or efficacy. Analyzing real world outcomes is also important, although I haven't been at the company long enough yet for that to catch up with simulation in terms of how much time we spend on it.
I'd say the closest description to us in the article is the practical research team. We have fairly clear business goals we are fulfilling with our work.
Jupyter notebooks can be executed roughly like scripts by papermill. You can also save a .py version of the notebook without outputs using jupytext. We use these packages together where I work to basically auto-generate the start of notebooks for exploratory work after certain batch jobs. For dashboards only used by a small number of users, we’ve found voila occasionally useful. Voila turns notebooks into dashboards or web apps basically.
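For a flavor of the papermill side of that workflow (the paths and parameter names are invented for illustration):

    import papermill as pm

    # Execute a parameterized notebook headlessly, writing the filled-in copy alongside it.
    pm.execute_notebook(
        "templates/batch_report.ipynb",
        "runs/batch_report_2024_06_01.ipynb",
        parameters={"run_id": "2024_06_01", "input_dir": "data/batch_runs"},
    )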
You generally shouldn’t put code you want to reuse in notebooks, but we haven’t found this to be much of a problem. And >3/4 of the people who’ve worked on our team don’t have software engineering backgrounds. If you set clear expectations, you can avoid the really bad tar pits even if most of your co-workers are (non-software) engineers or scientists 0-3 years out of college.