
I will always fly Ryanair ahead of other low-cost carriers in Europe because, unlike easyJet for example, they don't overbook. The most painful experience I've had was arriving at an airport with a young family and getting all the way to the easyJet gate, only to be told the flight was overbooked. And unlike the US, where this starts an auction, it's basically tough luck. It should be outright fraud in my opinion.

Interesting, are you at least entitled to EU 261 compensation?

I honestly don't know whether I would be able to keep it together if something like that happened to me and my family. It definitely should be fraud, and compensated VERY HEAVILY if it happens to someone due to a technical glitch or something similar.

Some kind of causal or eventual consistency related invariants perhaps?

I don't understand why the nuclear industry wouldn't pile in to fund research into this area (as a potential way to clean up nuclear waste). Then again, I probably don't understand how this fungus actually works and it's impossible!


As mentioned elsethread, it doesn't actually clean up anything, since it doesn't affect the waste at all, just turns some of the radiation into metabolism in the same way that plants turn solar radiation into metabolism.

Even if it did somehow accelerate the decay, it wouldn't be that useful, since (Chernobyl aside), all the waste from the typical civilian nuclear reactor can fit in a side lot on the site of the reactor complex itself (and often does!). There just isn't that much radioactive waste to clean up!


Yeah, waste has been a red herring that anti-nuclear people like to bring up. Yes, it's nasty stuff, but there isn't that much of it and it can be buried or reprocessed; it's not a real problem.


Low level waste is an expensive pain in the buttocks. I toured a local medical and research reactor back in high school, and they were running out of space to store their discarded PPE and other minimally contaminated waste. You could probably empty most of the barrels on the floor and roll in the contents without any noticeable effect, and yet they still needed to be treated like real waste, just in case.

Not to disagree with you, just to say that even though it's a minor nuisance it nevertheless occupies a lot of mental space because of how annoying it is.


I don’t see a straightforward way this would actually help with the cleanup. A hypothetical microbe that “eats” oil would be useful in an oil spill, as it would chemically break down the oil and harvest its carbon.

The radiotrophic fungus in TFA can’t meaningfully affect the rate at which nuclear decay is happening. What it can do, supposedly, is harvest the energy that the nuclear decay is releasing; normally there’s too much energy for an organism to safely handle.

At the risk of vastly oversimplifying, you can’t plug your phone into high voltage transmission lines. These fungi are using melanin to moderate the extra energy, stepping it down into a range that’s useful (or at least minimally harmful).


clean it up how? by having fungus grow near it?


almost nobody cares about solving actual problems :C


For sure, no one fully understands how a living organism "works", but it could be possible... at least to learn something.


I don't understand when people blame AI for buying up DDR5 DRAM - aren't they mostly interested in HBM? Or is fab space that previously made DDR DRAM being diverted to manufacture more HBM?


Inference - you don't need GPUs for inference. Frontier labs are eking out progress by scaling up inference-time compute. Pre-training scaling has kind of stalled / is giving diminishing returns (for now).


Not a dumb question. The links to mesh networking etc seem interesting. It sounds like the insights from descriptive set theory could yield new hardness/impossibility results in computational complexity, distributed algorithms etc.


are there any current or foreseeable practical applications of those results?

the math of infinity isn't very relevant to the finite programs that humans use. Even today's astronomically large computing systems have size approximately equal to 3 compared to infinity.


Cool paper. Their modeling of the temperature response curve seems a more elegant (albeit non-trivial) solution than burning CPU.


Couldn't you model the effect of temperature on clock drift and try to factor that in dynamically (e.g. using a temperature sensor) instead of burning CPU unnecessarily?


Sure, that's what the chrony closed loop is already doing (the estimated residual frequency is pretty linear with temperature), but no matter how robust your closed loop is, it's strictly better to not have disturbances in the first place.


That's what the chrony tempcomp directive is for. But you would have to figure out the coefficients yourself; it's not automatic.

An advantage of constantly loading at least one core of the CPU might be preventing the deeper power states from kicking in, which should make the RX timestamping latency more stable and improve stability of synchronization of NTP clients.


Chrony does have the ability to do temperature compensation. I've done this and need to do a write-up on it. It's not super easy to keep all the parts working together. Basically you feed chrony a table of temperatures and expected clock frequencies and it subtracts them out.
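For a rough sense of what that correction looks like, here's a minimal Python sketch of the quadratic model the tempcomp directive can apply (compensation in ppm = k0 + (T - T0)*k1 + (T - T0)^2*k2, as I understand the docs); the coefficient values below are made up purely for illustration and would have to be fitted against your own temperature/offset measurements:

    # Hypothetical coefficients; fit these against your own measurements.
    def tempcomp_ppm(temp_c, t0=25.0, k0=0.0, k1=0.010, k2=0.0002):
        dt = temp_c - t0
        return k0 + k1 * dt + k2 * dt * dt

    # Expected frequency correction at a few temperatures.
    for t in (20.0, 25.0, 40.0, 60.0):
        print(f"{t:5.1f} C -> {tempcomp_ppm(t):+.4f} ppm")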


Interesting. It feels like once you have the features defined this is basically dead code elimination. To solve the transitive dependency issue could you not execute a dead code elimination pass once and cache the results?


Yes, I do think it resembles dead-code elimination at a high level. I don't think that doing it after the fact is particularly desirable though, even with the results cached. I went into more detail in my response to a sibling comment, but I think there are actually quite a lot of reasons why someone in practice might still care about the experience of doing a completely clean build rather than an incremental one that can reuse the results from a previous one.

A lot of it might be overfitting from my own personal experience, but I'm really not convinced that fixing this issue purely by adding additional build steps that assume the earlier ones completed successfully will end up with a better experience in the long term; all it takes is one link in the chain to break in order to invalidate all of the later ones. I'd feel much more confident in the toolchain if it were designed so that each link was strengthened as much as possible, instead of extending the chain further to try to mitigate issues with the entire thing.

Adding new links to the chain might improve the happy path, but it also increases the cost of the unhappy path if the chain breaks, and arguably adds more places where that can happen. I'm worried that focusing too much on the happy path has led to an experience where the risk of getting stuck on the unhappy path has gotten too high, precisely because it's been so much easier for us to keep adding links to the chain than to address its structural integrity.


I'm interested to understand how this works from an IP perspective. This guy is still employed by Meta but is actively fundraising for a new competing startup. Presumably he will have negotiated that Meta forfeits all rights to anything related to his new business? Would be interesting to hear of people's experience/advice for doing this. Or are there some legal entitlements he can avail of?


Even if it’s Meta, they don’t want to antagonize LeCun. Also, they all know it’s a small circle of people that create value. I will not be surprised if Meta itself invests in his company and gets a share.


I've always really struggled to understand the purpose of defining the 'semantics' of a programming language and how it differs from syntax. Explanations that involve 'giving a precise mathematical meaning' just seem almost circular to me. As I understand it now it's about saying what the value of a particular language construct should be (e.g. when evaluated), as opposed to whether the construct is allowed/part of the language (syntax). Is that intuition wrong?


I think the problem is that you don't get "syntax". What you think "syntax" is is actually semantics.

"Syntax" just means what strings are valid programs. For example, x=3 should be valid in C, while =+x shouldn't. Note that this doesn't say anything about what x=3 actually means in practice. The fact that there is a variable called x, and that it has a value at some point in time, and that after the execution of x=3 that value becomes 3 are all semantics.


> "Syntax" just means what strings are valid programs.

Strings are not objects in the syntactic domain, only terms are. And parsers operate on tokens, recognizing valid compositions of tokens (at least syntactically). Syntax concerns relations between tokens and thus their form and composition.

Semantics concerns the meanings of terms, where “meaning” is considered from various perspectives, like the denotational or operational (what the OP has in mind w.r.t. evaluation effectively concerns the denotational semantics). While computation is purely syntactic, denotational semantics is not circular, because the correspondence we assign (in our minds) to terms is with models that already possess a semantic content of their own.


There does exist a subtle middle ground between proper syntax and proper semantics, namely well-formedness. Well-formedness is technically a part of syntax rules (i.e. syntactic correctness) but not a part of formal grammars and other similar formalisms, making it harder to classify. For example, XML's opening tag and closing tag should match, like <foo></foo>, but this syntactic rule is not described in the formal context-free grammar. It is possible to write a formal context-sensitive grammar that only accepts well-formed syntax, but that would make the specification unnecessarily complex, hence the introduction of informal rules. Some may still argue that it is actually a kind of semantics, however.


I think you misread what GP thinks syntax is. GP's definition

"whether the construct is allowed/part of the language (syntax)"

is the same as yours

""Syntax" just means what strings are valid programs"


> As I understand it now it's about saying what the value of a particular language construct should be (e.g. when evaluated), as opposed to whether the construct is allowed/part of the language (syntax). Is that intuition wrong?

Your intuition is right. It falls under what is called "Operational Semantics" (https://en.wikipedia.org/wiki/Operational_semantics). There are other ways of looking at it, e.g. "Denotational Semantics", "Axiomatic Semantics", "Algebraic Semantics" etc., which are more mathematical. The submitted book talks about all of them.

For more background, you might want to look at Alan Parkes' book Introduction to Languages, Machines, and Logic.

The basic idea is that Symbolic Logic allows you to express Strings (sentences containing words) constructed from an Alphabet (set of symbols for that language) as "Programs" (which are valid i.e. meaningful strings in that language) which can then be interpreted by an Abstract Machine we call a Computer.


In short: Syntax defines the textual forms that the language allows. Semantics define how each form is interpreted.

Consider a simple calculator language that lets you add positive integers.

The syntax might give grammar rules like:

    expr ::= digits (EOF | ' + ' expr)
    digits ::= ('0' | '1' | ... | '9')+
This grammar admits expressions like 33 and 2 + 88 + 344. That's syntax.

But how the language interprets those expressions is semantics. Continuing our calculator example:

1. Every expr evaluates to an integer value.

2. An expr of the form `<digits> EOF` evaluates to the integer given by interpreting the digits in <digits> as an integer in base 10.

3. If <expr> evaluates to the value x, then an expr of the form `<digits> + <expr>` evaluates to the value y + x, where y is the integer given by interpreting the digits in <digits> as an integer in base 10.

Of course, specifying semantics in human language is tedious and hard to make precise. For this reason, most programming languages give their semantics in a more formalized language. See, for a classic example, the R5RS specs for Scheme:

https://conservatory.scheme.org/schemers/Documents/Standards...
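If it helps to see the split in code, here's a rough Python sketch of the calculator above (the helper names and structure are my own invention): parse only decides whether a string fits the grammar, while evaluate assigns each well-formed expression its integer value according to rules 1-3.

    import re

    # Syntax: decide whether a string matches
    #     expr ::= digits (EOF | ' + ' expr)
    # and, if it does, split it into its digit groups (a crude "AST").
    def parse(text):
        parts = text.split(' + ')
        if not all(re.fullmatch(r'[0-9]+', p) for p in parts):
            raise SyntaxError(f"not a valid expr: {text!r}")
        return parts

    # Semantics: rule 2 maps each digit group to a base-10 integer,
    # and rule 3 says the whole expr evaluates to the sum of its parts.
    def evaluate(ast):
        return sum(int(digits, 10) for digits in ast)

    print(evaluate(parse("2 + 88 + 344")))  # prints 434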


> as opposed to whether the construct is allowed/part of the language

Arguably this is also semantics. Type checking and reporting type errors decides whether a construct is allowed or not, yet belongs squarely in the semantic analysis phase of a language (as opposed to the syntactic analysis phase).

> how it differs from syntax

Consider a language like C, which allows code like this:

    if (condition) {
        doWhenTrue();
    }
And consider a language like Python, which allows code like this:

    if condition:
        doWhenTrue()
The syntax and the semantics are both different here.

The syntax is different: C requires parens around the condition, allows curly braces around the body, and requires `;` at the end of statements. Python allows but does not require parens around the condition, requires a `:`, requires indenting the body, and does not require a `;` at the end of statements.

Also, the semantics are different: in C, `doWhenTrue()` only executes if `condition` either is a non-zero integer, or can be implicitly coerced to a non-zero integer.

In Python, `doWhenTrue` executes if `condition` is "truthy," which is defined as whether calling `condition.__bool__()` returns `True`. Values like `True`, non-zero numbers, non-empty containers, etc. are all truthy, which is far more values than in C.

But you could imagine a dialect of Python that used the exact same syntax from C, but the semantics from Python. e.g., a language where

    if (condition) {
        doWhenTrue();
    }
has the exact same meaning as the Python snippet above: that `doWhenTrue()` executes when `condition` is truthy, according to some internal `__bool__()` method.
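To make the Python half concrete, here's a tiny sketch (the Dial class is invented purely for illustration) of how an object's `__bool__` method decides which branch runs:

    class Dial:
        def __init__(self, level):
            self.level = level

        # Python's truthiness hook: `if obj:` effectively calls bool(obj),
        # which uses obj.__bool__() when it is defined.
        def __bool__(self):
            return self.level > 0

    def do_when_true():
        print("ran the body")

    condition = Dial(3)
    if condition:      # truthy, because Dial.__bool__ returns True
        do_when_true()

    if Dial(0):        # falsy: the body is skipped
        do_when_true()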


In addition to the other comments here, note that in PL circles ‘syntax’ typically denotes _everything_ that happens before translation/execution, importantly including type checking. ‘Semantics’ is then about explaining what happens when the program is run, which can equivalently be described as deciding when two programs are equal, mapping programs to mathematical objects (whose ‘meaning’, or at least equality, is considered to be well understood), specifying a set of transformations the syntax goes through, et cetera.

In pure functional languages saying what value an expression will evaluate to (equivalently, explaining the program as a function of its inputs) is a sufficient explanation of the meaning of a program, and semantics for these languages is roughly considered to be ‘solved’. Open areas of study in semantics tend to be more about doing the same thing for languages that have more complicated effects when run, like imperative state update or non-local control (exceptions, async, concurrency).

There's some overlap in study: typically syntax is trying to reflect semantics in some way, by proving that programs accepted by the syntactic analysis will behave or not behave a certain way when run. E.g. Rust's borrow checker is a syntactic check that the program under scrutiny will not dereference an invalid pointer, even though that's something that is possible under Rust's runtime semantics. Compare to Java, which has no syntactic check for this because dereferencing invalid pointers is simply impossible according to the semantics of the JVM.


Syntax, semantics, and pragmatics all define meaning at different scopes/scales.

Syntax is the smallest scale (words, punctuation, grammar), semantics is sentence or small function level, and pragmatics is paragraph-essay and program level.

For example when training early smaller scale LLMs they noticed that syntax was the first property for the LLM to reproduce reliably. They got proper grammar and punctuation but the sentences made no sense. When they scaled up they got semantics but not pragmatics. The sentences made sense but paragraphs didn't. Eventually the systems could output whole essays that made sense.

Even though you're asking about programming specifically, these concepts are universal to language, and are maybe a bit more intuitively applied to English (or your native language).

I suspect that a computer scientist could give a different mathematical explanation about how these concepts compile into binary or machine code in different ways, and I can't explain that. Generally I think of syntax as being very language specific but semantics and pragmatics can be translated across languages with similar capabilities.


The syntax is the set of rules that define which expressions are valid in the language. Valid as in the token sequence you wrote does not violate the grammar, nothing more than that. It can state that an "if" token must be followed by a "condition clause", which must be followed by a "then" token, etc.

The semantics is the definition of what is supposed to computationally happen when you execute a valid expression. It would state that the code block under the "then" will be executed if the condition attached to the "if" evaluates to true and skipped otherwise.


Your intuition is on the right track. The distinction may become clearer if you consider a classic language implementation design:

1. There's a lexer which breaks source text up into a stream of tokens.

2. A parser which converts a stream of tokens into an abstract syntax tree (AST).

3. An interpreter or compiler that traverses the AST in order to either execute it (interpreter) or transform it into some other form (compiler).

Points 1 & 2 are syntax - the mapping between the textual form of a program and its meaning.

Point 3 is semantics - how a program actually behaves, or as you say, what its terms evaluate to.

Looking at it like this can give a sharp line between syntax and semantics. But when you get deeper into it, it turns out that with some languages at least, you can get from source syntax to something that actually executes - has behavior, i.e. semantics - with nothing but a series of syntactic transformations. From that perspective, you can say that semantics can be defined as a sequence of syntactic transformations.

This doesn't erase the distinction between syntax and semantics, though. The syntax of the source language is the first stage in a chain of transformations, with each stage involving a different (albeit closely related) language with its own syntactic structure.

> Explanations that involve 'giving a precise mathematical meaning' just seem almost circular to me.

Formal semantics covers this, but the syntax/semantics distinction isn't necessarily just formal - it's a useful distinction even in an informal sense.

As for circularity, it's absolutely the case that formal semantics is nothing more than defining one language in terms of another. But the idea is that the target language is supposed to be one with well-defined semantics of its own, which is why "mathematical" comes up so often in this context. Mathematical abstractions such as set theory, lattice theory, lambda calculus and so on can provide a rigorous foundation that's more tractable when it comes to proofs.

That kind of circularity pervades our knowledge in general. Words in the dictionary are given meaning in terms of other words. You can't explain something without explaining it in terms of something else.


Great explanation. Also, if I understand things correctly, type-checking is where things get really interesting. Runtime errors occur in #3. Type checkers identify these errors (to varying degrees) and they show the errors in the compile phase. If we think of parsing and type-checking as a unit together, then type-checking sort of pushes the line of syntax further into semantic territory. The stronger the type-checker, the more you can make semantic errors look like syntax errors.

This is similar to what Chomsky did in "Aspects of the Theory of Syntax" when he tried to show how we can build more thorough systems to evaluate semantics of (human) language, like what kinds of nouns can go with what kinds of verbs. He pushes the line of syntax further into the semantic territory and tries to create more comprehensive sets of rules that better guarantee the generation of syntactically and semantically correct sentences. I think this is perfectly analogous to the type-checking enterprise.


> If we think of parsing and type-checking as a unit together

This certainly leads to a blurring of the distinction, but that's a result of the choice of this as a premise.

Parsing will give you an AST that tells you that a term has a type, say, `Int -> Bool`, which might be represented e.g. as a 3-node tree with the arrow at the root and the input and output types as leaves. But falling back to the conceptual definition of syntax, this tells you nothing about what that tree means.

To add type checking into the picture, we need another stage between 2 and 3, which is where meaning is assigned to types in the AST and the semantics of the types are handled.

You'll often see people saying things about how types are syntactic, but this is a slightly different use of the word: basically, it refers to how types categorize syntactic terms. So types apply to syntax, but their behavior when it comes to actual checking is still semantics - it involves applying logic, inference etc. that go well beyond syntactic analysis in the parsing sense.

Really, it boils down to what you choose as your definitions. If you define syntax to be, essentially, the kind of thing that's modeled by an AST, then there's not really any ambiguity, which makes it a good definition, I think. Semantics is then assigning meaning to the nodes and relationships in such a tree.

Re Chomsky, I think the discovery that PL semantics can be entirely implemented in terms of a series of syntactic transformation is quite relevant to what Chomsky was getting at. But that doesn't negate semantics, it just gives us a more precise model for semantics. In fact, I have a sneaking suspicion that this formally-supported view might help clarify Chomsky's work, but a bit more work would be needed to demonstrate that. :)


Correction:

> Points 1 & 2 are syntax - the mapping between the textual form of a program and its structure.


Syntax is being able to say whether this is a sequence of textual tokens such that we can prove that this is a C program that compiles.

   int c = 042;
   printf("%d", c);
Semantics is what makes it print 34.

