I'm missing the nuance or perhaps the difference between the first scenario where sending inaccurate time was worse than sending no time, versus the present where they are sending inaccurate time. Sorry if it's obvious.
The 5us inaccuracy is basically irrelevant to NTP users, from the second update to the Internet Time Service mailing list[1]:
To put a deviation of a few microseconds in context, the NIST time scale usually performs about five thousand times better than this at the nanosecond scale by composing a special statistical average of many clocks. Such precision is important for scientific applications, telecommunications, critical infrastructure, and integrity monitoring of positioning systems. But this precision is not achievable with time transfer over the public Internet; uncertainties on the order of 1 millisecond (one thousandth of one second) are more typical due to asymmetry and fluctuations in packet delay.
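The millisecond-scale floor mentioned above comes from NTP's core assumption that network delay is symmetric. A minimal sketch of the standard four-timestamp calculation (timestamp values are illustrative, not from any real exchange):

```typescript
// NTP's clock-offset estimate from the four timestamps of one exchange:
//   t1 = client send, t2 = server receive, t3 = server send, t4 = client receive
// The offset formula assumes the outbound and return paths take equal time;
// any asymmetry between them shows up directly as apparent clock error.
function ntpOffset(t1: number, t2: number, t3: number, t4: number): number {
  return ((t2 - t1) + (t3 - t4)) / 2;
}

function ntpDelay(t1: number, t2: number, t3: number, t4: number): number {
  return (t4 - t1) - (t3 - t2);
}

// Illustrative numbers (seconds): the client clock is actually perfect, but
// the outbound path takes 30 ms while the return path takes only 10 ms.
const t1 = 0.000, t2 = 0.030, t3 = 0.031, t4 = 0.041;
console.log(ntpOffset(t1, t2, t3, t4)); // +0.010 s of spurious offset
console.log(ntpDelay(t1, t2, t3, t4)); // 0.040 s round-trip delay
```

With a symmetric 20 ms each way the offset would come out to zero: the protocol cannot distinguish a real clock error from half the path asymmetry, which is why public-internet uncertainties sit near a millisecond no matter how good the server's clock is.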
> Such precision is important for scientific applications, telecommunications, critical infrastructure, and integrity monitoring of positioning systems. But this precision is not achievable with time transfer over the public Internet
How do those other applications obtain the precise value they need without encountering the Internet issue?
> If those other applications use their own local GPS clocks, what is the significance of NIST (and the 5μs inaccuracy) in their scenario?
Verification and traceability are one reason: it's all very well to claim you're within ±x seconds, but your logs may have to show how close you are to the 'legal reality' that is the official NIST time.
NIST may also send out time via 'private fibre' for certain purposes:
That is not correct at all. How did you arrive at that conclusion?
GPS has its own independent timescale called GPS Time. GPS Time is generated and maintained by atomic clocks onboard the GPS satellites (cesium and rubidium).
It has its own timescale, but that still traces back to NIST.
In particular, the atomic clocks on board the GPS satellites are not sufficient to maintain a time standard because of relativistic variations and Doppler effects, both of which can be corrected, but only if the exact orbit is known to within exceedingly tight tolerances. Those orbital elements are created by reference to NIST. Essentially, the satellite motions are computed using inverse GPS, and then we use normal GPS based on those values.
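For a sense of scale on those relativistic variations, here is the textbook back-of-the-envelope arithmetic for GPS clock rates (nominal circular-orbit values assumed; eccentricity and the geoid's rotation term are ignored):

```typescript
// Back-of-the-envelope relativistic rate corrections for a GPS satellite clock.
const c = 2.998e8;        // speed of light, m/s
const GM = 3.986e14;      // Earth's gravitational parameter, m^3/s^2
const rEarth = 6.371e6;   // mean Earth radius, m
const rOrbit = 2.6561e7;  // GPS orbital radius, m
const vOrbit = Math.sqrt(GM / rOrbit); // circular orbital speed, ~3.87 km/s
const secPerDay = 86400;

// Special relativity: the moving clock runs slow by v^2 / (2 c^2).
const srShiftPerDay = -(vOrbit ** 2) / (2 * c ** 2) * secPerDay; // ~ -7 us/day

// General relativity: the clock sits higher in Earth's potential and runs fast.
const grShiftPerDay = (GM / rEarth - GM / rOrbit) / c ** 2 * secPerDay; // ~ +46 us/day

const netMicrosPerDay = (srShiftPerDay + grShiftPerDay) * 1e6; // ~ +38 us/day
console.log(netMicrosPerDay.toFixed(1));
```

Uncorrected, that ~38 µs/day of net drift would translate (at the speed of light) into roughly 11 km/day of accumulating ranging error, which is why the satellite clocks are deliberately steered.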
I think GP might’ve been referring to the part of Jeff’s post that references GPS, which I think may be a slight misunderstanding of the NIST email (saying “people using NIST + GPS for time transfer failed over to other sites” rather than “GPS failed over to another site”).
The GPS satellite clocks are steered to the US Naval Observatory’s UTC as opposed to NIST’s, and GPS fails over to the USNO’s Alternate Master Clock [0] in Colorado.
Yes, either Rb, Cs, or H standards depending on which GNSS system you're using.
For the most critical applications, you can license a system like Fugro AtomiChron that provides enhanced GNSS timing down to the level of a few nanoseconds. There are a couple of products that do similar things, all based on providing better ephemerides than your receiver can obtain from the satellites themselves.
A lot of organizations also colocate timing equipment near the actual clocks, and then have 'dark fiber' between their equipment and the main clock signals.
Then they disperse and use the time as needed.
According to jrronimo, they even had one place splice fiber direct between machines because couplers were causing problems! [1]
If I put my machine near the main clock signal, I have one clock signal to read from. The comment above was asking about how to average across many different clocks, presumably all in different places around the globe? Unless there's one physical location with all of the clocks you're averaging, you're close to one and far from all the others, so how is it done without the internet?
Can you do PTP over the internet? I have only seen it in internal environments. GPS is probably the best solution for external users to get time signals with sub-µs uncertainties.
It's a good question, and I wondered the same. I don't know, but I'd postulate:
As it stands at the minute, the clocks are a mere 5 microseconds out and will slowly get better over time. This isn't even in the error measurement range and so they know it's not going to have a major effect on anything.
When the event started and they lost power and access to the site, they also lost their management access to the clocks. At this point they don't know how wrong the clocks are, or how much more wrong they're going to get.
If someone restores power to the campus, the clocks are going to be online (all the switches and routers connecting them to the internet suddenly boot up), before they've had a chance to get admin control back. If something happened when they were offline and the clocks drifted significantly, then when they came online half the world might decide to believe them and suddenly step change to follow them. This could cause absolute havoc.
Potentially safer to scram something than have it come back online in an unknown state, especially if (lots of) other things are going to react to it.
In the last NIST post, someone linked to The Time Rift of 2100: How We lost the Future --- and Gained the Past. It's a short story that highlights some of the dangers of fractured time in a world that uses high precision timing to let things talk to each other: https://tech.slashdot.org/comments.pl?sid=7132077&cid=493082...
> […] where sending inaccurate time was worse than sending no time […]
When you ask a question, it is sometimes better to get no answer, and know you have gotten no answer, than to get a wrong answer. If you know that a 'bad' situation has arisen, you can start contingency measures to deal with it.
If you have a fire alarm: would you rather have it fail in such a way that it gives no answer, or fail in a way where it says "things are okay" even if it doesn't know?
I can appreciate the steady syntactic sugar that C# has been introducing these past years; it never feels like an abrupt departure from the language's overall consistency. I often think that this is what Java could have been if it hadn't been mangled by Oracle's influence. Unfortunately, as much as I like Java, it's pretty obvious that different parts have been designed by disjointed committee members each focused on just one thing.
This started long before Oracle; the favouring of verbose, ritualistic boilerplate was set back at Sun. James Gosling was staunchly against operator overloading, properties, and value types (almost out of spite at Microsoft's success in providing these in C#), and the language and runtime still struggle with the aftermath today and forever will. It's unfortunate that the original inventor, while a brilliant programmer himself, thought so little of others that such features were excluded because other programmers might mess up their use.
That's assuming everyone learns the same way, which isn't true. Watching a streamer beat a dark souls boss won't automatically make you competent at the game. Reading through gobs of code generated for you without knowing why various things were needed won't help either. A middle approach could be to get the LLM to guide you through the steps.
It's not just assuming that everyone learns the same way. It's assuming that everyone learns the way that all of the research literature on learning claims does not work.
Learning requires active recall/synthesis. Looking at solved examples instead of working them yourself does not suffice in math, physics, chemistry, or CS, but somehow it is supposed to work in this situation?
Not property rights: regulation. Advertising is limited to regulated areas; graffiti is not. The comparison is well-intentioned, or meant to be thought-provoking, and has some validity, but the two aren't the same thing.
The distinction breaks down in places where the "regulated areas" are "wherever a private property chooses to put an ad". Which is more or less the case in large parts of the US.
I agree, privacy still means a lot. It's a term that's been co-opted by the large tech companies, which operate with impunity, but it still has a meaning that cannot change.
The post also misunderstands privacy
> Privacy is when they promise to protect your data.
Privacy is about you controlling your data. Promises are simply social contracts.
Same here. Modern angular is pretty nice to work with.
Yes, it has a "learning curve", but so does everything (even React).
Also Angular is now about twenty thousand times simpler than it was in the past as you can use Signals for reactivity, and basically ignore Observables for 95% of things.
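For anyone who hasn't seen signal-based reactivity, here is a toy model of the pattern. This is not Angular's actual implementation (real signals cache results and track dependencies), just the calling convention: a signal is read by calling it like a function, and computed values derive from whatever signals they read.

```typescript
// Toy sketch of signal-based reactivity -- not Angular's implementation,
// just the core idea that computed values re-derive from the signals they read.
type Signal<T> = { (): T; set: (v: T) => void };

function signal<T>(initial: T): Signal<T> {
  let value = initial;
  const read = (() => value) as Signal<T>;
  read.set = (v: T) => { value = v; };
  return read;
}

// A computed is a read-only derivation; a real implementation memoizes and
// tracks dependencies, but the usage looks identical.
function computed<T>(fn: () => T): () => T {
  return fn;
}

const count = signal(1);
const doubled = computed(() => count() * 2);
console.log(doubled()); // 2
count.set(5);
console.log(doubled()); // 10
```

In real Angular these come from `@angular/core` (`signal`, `computed`, `effect`), but day-to-day usage reads much the same: call the signal like a function in your template or code, with no subscriptions to manage or tear down.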
Angular also removes a lot of the negatives outlined on the page: no npm, no node_modules, no ecosystem fatigue, no debates on state management, etc. Everything you need is included in one dependency you can load from a CDN.
I never liked that in React you are mingling the presentation and business logic in one tsx file (separation of concerns? React ignores that lesson for some reason). Htmx feels even worse in this respect, because now you also have HTML snippets and templates in your backend code! Nightmare! Angular lets you keep templates as standalone files rather than mushed into your TypeScript like React (you can inline them into the TypeScript if you want to, but obviously no one does that for anything beyond the most trivial of components).
Given the wealth and productivity creation that they're responsible for enabling across the industry, they deserve to be paid for it. There is no way for them to have achieved this with zero friction.
I totally support companies charging for things which cost money to make but I think the strategy of saying something is free and later reneging is a very risky strategy. You’ll get some license sales after cold-calling people’s bosses or breaking builds but they won’t thank you for it.