
They are fining his company 213% of yearly Italian revenue. He is not the one escalating.

He took a risk in ignoring a law instead of exiting the market. They did not escalate; they applied the law.

What we need is an international legal framework for the Internet. And that includes compromises on all sides. China, the EU, Russia, the US, and others have very different understandings of what is right. But hey, I think US politics right now is America first and cancelling all international treaties. Sounds like more problems like this are incoming.


Sure, next we should implement global communism and live in peace and harmony forever! That is most definitely feasible.

If Italians have no influence over AGCOM, then who does?

This is good, but they're now charging authors a publishing fee of over $1000 per article (and they say that that is the discounted price). It is unclear whether this is justified. In my experience publishing scientific articles with ACM, all the real work (such as peer review) is done by volunteers. From what I can tell, ACM just hosts the exact PDF + metadata that authors supply. I suspect that in the future, more journals and conferences will switch to an arXiv-overlay model.

I have volunteered in various roles for ACM conferences and thus have some insight into ACM's path towards Open Access over the past years.

Just a few things to consider:

- ACM is not a for-profit publisher like Springer or Elsevier. Any profits made from their/our publishing activities subsidize e.g. outreach activities, travel stipends for developing countries, and potential losses from e.g. conferences.

- In my experience, ACM is one of the very few publishers in computer science where you can generally trust the published papers.

- Keeping a long-term digital library is not just "putting PDFs on a server" but involves a lot of additional costs. The ACM HQ is rather lean IMHO, but there are multiple people involved in developing the Digital Library, handling cases of copyright infringement and plagiarism, supporting volunteers, etc. Also, the ACM DL contains a rising number of video recordings of conference talks, etc. Additionally, there are several contractors to be paid. For example, authors no longer generate their own PDFs but submit the LaTeX/Word manuscripts to a central service (TAPS), developed and operated for ACM by an Indian company, Aptara.

- In the past, subscriptions to the ACM Digital Library were a major, stable source of income for ACM. ACM has to be careful to not get into financial trouble by giving away their crown jewels without generating sufficiently stable alternative income sources.


I don’t see what the benefit of most of that is, and why publication fees are a good way to pay for it. Take video recordings: why should we pay for them to develop a video hosting platform when perfectly good ones exist? In fact, conferences that I am familiar with put the recordings on YouTube, and this works great.

> ACM has to be careful to not get into financial trouble by giving away their crown jewels without generating sufficiently stable alternative income sources.

The attitude that the work of science belongs to these publishers is what grates on me the most. Yes, ACM is not as bad as Elsevier, but this attitude is still fundamentally wrong. They are in the position they are in mostly by historical accident, able to extract rents because it requires a lot of coordination to switch.

Why do I call it rent extraction despite the ACM doing stuff? Suppose the ACM charged separately for using their video platform. Would anyone pay for that?

> For example, authors no longer generate their own PDFs but submit the LaTeX/Word manuscripts to a central service (TAPS), developed and operated for ACM by an Indian company, Aptara

And what good is that? Why should we pay for a separate company to run pdflatex for us? The system exists primarily to check that we’ve put ACM branding in the paper.

Sometimes people also say that the real service is long-term storage of PDFs, but let me preempt that right now. First, there are government-sponsored long-term storage facilities like Zenodo that are likely to outlast ACM. Second, commercial storage paid for indefinitely using an annuity would cost less than $1 in present value for hosting the PDFs of a conference, about 0.0001% of ACM publication fees.
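
As a rough sanity check of that present-value figure, here is a minimal Python sketch. Every input is an assumption on my part (archival-tier storage at about $0.001 per GB-month, one conference's PDFs totalling about 1 GB, a 3% discount rate), not an actual ACM cost:

  # Sketch: present value of hosting one conference's PDFs forever.
  # Every number below is an assumption, not an actual ACM cost.
  storage_per_gb_month = 0.001   # assumed archival-tier price, USD
  proceedings_gb = 1.0           # assumed total size of one conference's PDFs
  discount_rate = 0.03           # assumed annual discount rate

  annual_cost = storage_per_gb_month * proceedings_gb * 12
  present_value = annual_cost / discount_rate   # perpetuity: PV = C / r
  print(round(present_value, 2))                # 0.4, i.e. about 40 cents

Even with far more pessimistic inputs, the result stays several orders of magnitude below a four-figure per-article fee.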


How does the arXiv manage the same feat for one tenth the cost?

It doesn't. arXiv is exclusively a pre-print service. The ACM digital library is for peer-reviewed, published papers. All of the peer-review happens through the ACM, as well as the physical conferences where people present and publish their papers.

The peer review is all done by volunteers of conferences, not ACM.

Yes, and that peer review happens through the ACM. It serves an organizing function. The conferences themselves are also in-person events, and most of the important research papers come out of those conferences.

I'm pretty sure the primary purpose of the $1000 is just to create some small gate to avoid overloading reviewers/ACM. There are probably other mechanisms that could be used - such as having "recommendations" from already approved researchers - I think arXiv has something like that.

That isn't the case. Conferences organize their own website to submit articles for review. Volunteers from the conference pre-filter submitted articles for spam; the rest is handled by the review committee. There is no cost to submit. In fact, the eventual cost is often not even mentioned at that point. When the article is accepted for publication, the conference gives authors a link to an ACM website where the authors upload their PDFs. Only after that will the authors be asked to pay the fee (and if you wanted, you could refuse at that point, which presumably means that the conference will eat the loss, or maybe they'll un-publish your article).

I don't think spam is a huge issue. The conference websites and submission portals are niche and random people don't tend to find them or care enough to go through the trouble.


Not at all; the charge happens at the end of the process, after the article has been reviewed and accepted for publication.

They charge that much because they can.


iPhone 17 Pro Max is balanced with their standard case.


3blue1brown actually shows the usefulness of formalism. The videos are great, but by avoiding formalism, they are at least for me harder to understand than traditional sources. It is true that you need to get over the hump of understanding the formalism first, but that formalism is a very useful tool of thought. Consider algebraic notation with plus and times and so on. That makes things way easier to understand than writing out equations in words (as mathematicians used to do!). It is the same for more advanced formalisms.
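
For a concrete classical example (paraphrased): al-Khwarizmi stated problems in words, roughly "a square and ten roots of the same equal thirty-nine", where modern notation writes

  x^2 + 10x = 39

and the symbolic form is the one you can actually manipulate mechanically.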


For another comparison: this is about 4 years worth of UK App Store net revenue.


What they are buying is support of the French.


With a new French CEO, such a coincidence


What does this predict about LLMs' ability to win gold at the International Mathematical Olympiad?


Same thing it does about their ability to drive cars.


So, nothing.


It's definitely something, but it might not be apparent to those who do not understand the distinction between intensionality and extensionality.


Depends which question you're asking.

Ability to win a gold medal as if they were scored similarly to how humans are scored?

or

Ability to win a gold medal as determined by getting the "correct answer" to all the questions?

These are, subtly, two very different questions. In these kinds of math exams, how you get to the answer matters more than the answer itself. I.e., you could not get high marks through divination. To add some clarity, the latter would be like testing someone's ability to code by only looking at their results on some test functions (oh wait... that's how we evaluate LLMs...). It's a good signal but it is far from a complete answer. It very much matters how the code generates the answer. Certainly you wouldn't accept code if it does a bunch of random computations before divining an answer.

The paper's answer to your question (assuming scored similarly to humans) is "Don’t count on it". Not a definitive "no" but they strongly suspect not.


The type of reasoning used by the OP and the linked paper obviously does not work. The observable reality is that LLMs can do mathematical reasoning. A cursory interaction with state-of-the-art LLMs makes this evident, as does their IMO gold medal, scored the way humans are. You cannot counter observable reality with generic theoretical considerations about Markov chains or pretraining scaling laws or floating point precision. The irony is that LLMs can explain why that type of reasoning is faulty:

> Any discrete-time computation (including backtracking search) becomes Markov if you define the state as the full machine configuration. Thus “Markov ⇒ no reasoning/backtracking” is a non sequitur. Moreover, LLMs can simulate backtracking in their reasoning chains. -- GPT-5
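
To make the quoted point concrete, here is a minimal toy sketch (mine, not from the thread or from GPT-5): a backtracking 4-queens solver written so that each step is a pure function of the full current configuration. The process is a Markov chain over those configurations, yet it plainly backtracks.

  # Toy sketch: backtracking search whose every transition depends only on
  # the full current state, i.e. a Markov chain that still backtracks.
  N = 4  # board size; 4-queens has a solution, so the loop below terminates

  def safe(placement, col):
      # Can a queen go in row len(placement), column col, without attacks?
      row = len(placement)
      return all(c != col and abs(c - col) != row - r
                 for r, c in enumerate(placement))

  def step(state):
      # One transition; it reads nothing but `state` (the Markov property).
      placement, col = state
      if col < N and safe(placement, col):
          return placement + (col,), 0          # place a queen, go to next row
      if col < N:
          return placement, col + 1             # try the next column in this row
      return placement[:-1], placement[-1] + 1  # row exhausted: backtrack

  state = ((), 0)                # empty board, start trying column 0
  while len(state[0]) < N:
      state = step(state)
  print(state[0])                # a valid placement, e.g. (1, 3, 0, 2)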


  > The observable reality is that LLMs can do mathematical reasoning
I still can't get these machines to reliably perform basic subtraction[0]. The result is stochastic, so I can get the right answer, but I have yet to reproduce one where the actual logic is correct[1,2]. Both [1] and [2] make the same mistake, and in [2] you see it just say "fuck it, skip to the answer".

  > You cannot counter observable reality
I'd call [0,1,2] "observable". These types of errors are quite common, so maybe I'm not the one with lying eyes.

[0] https://chatgpt.com/share/68b95bf5-562c-8013-8535-b61a80bada...

[1] https://chatgpt.com/share/68b95c95-808c-8013-b4ae-87a3a5a42b...

[2] https://chatgpt.com/share/68b95cae-0414-8013-aaf0-11acd0edeb...


Why don't you use a state of the art model? Are you scared it will get it right? Or are you just not aware of reasoning models in which case you should get to know the field


Careful there, without a /s people might think you're being serious.


I am being serious, why don't you use a SOTA model?


Sorry, I've just been hearing this response for years now... GPT-5 not SOTA enough for you all now? I remember when people told me to just use 3.5

  - Gemini 2.5 Pro[0], the top model on LLM Arena. This SOTA enough for you? It even hallucinated Python code!

  - Claude Opus 4.1, sharing that chat shares my name, so here's a screenshot[1]. I'll leave that one for you to check. 

  - Grok4 getting the right answer but using bad logic[2]

  - Kimi K2[3]

  - Mistral[4]
I'm sorry, but you can fuck off with your goal post moving. They all do it. Check yourself.

  > I am being serious
Don't lie to yourself, you never were

People like you have been using that copy-paste piss-poor logic since the GPT-3 days. The same exact error existed since those days on all those models just as it does today. You all were highly disingenuous then, and still are now. I know this comment isn't going to change your mind because you never cared about the evidence. You could have checked yourself! So you and your paperclip cult can just fuck off

[0] https://g.co/gemini/share/259b33fb64cc

[1] https://0x0.st/KXWf.png

[2] https://grok.com/s/c2hhcmQtNA%3D%3D_e15bb008-d252-4b4d-8233-...

[3] http://0x0.st/KXWv.png

[4] https://chat.mistral.ai/chat/8e94be15-61f4-4f74-be26-3a4289d...


That's very weird, before I wrote my comment I asked gpt5-thinking (yes, once) and it nailed it. I just assumed the rest would get it as well, gemini-2.5 is shocking (the code!) I hereby give you leave to be a curmudgeon for another year...


Try a few times and it'll happen. I don't think it took me more than 3 tries on any platform.

To convince me it is "reasoning", it needs to get the answer right consistently. Most attempts were actually about getting it to show its results. But pay close attention. GPT got the answer right several times but through incorrect calculations. Go check the "thinking" and see if it does an 11-9=2 calculation somewhere; I saw this in >50% of the attempts. You should be able to reproduce my results in <5 minutes.

Forgive my annoyance, but we've been hearing the same argument you've made for years[0,1,2,3,4]. We're talking about models that have been reported as operating at "PhD Level" since the previous generation. People have constantly been saying "But I get the right answer" or "if you use X model it'll get it right" while missing the entire point. It never mattered if it got the answer right once, it matters that it can do it consistently. It matters how it gets the answer if you want to claim reasoning. There is still no evidence that LLMs can perform even simple math consistently, despite years of such claims[5]

[0] https://news.ycombinator.com/item?id=34113657

[1] https://news.ycombinator.com/item?id=36288834

[2] https://news.ycombinator.com/item?id=36089362

[3] https://news.ycombinator.com/item?id=37825219

[4] https://news.ycombinator.com/item?id=37825059

[5] Don't let your eyes trick you, not all those green squares are 100%... You'll also see many "look X model got it right!" in response to something tested multiple times... https://x.com/yuntiandeng/status/1889704768135905332


Have you tried to get Google AI Studio (nano-banana) to draw a 9-sided polygon? Just that.

https://ibb.co/Qj8hv76h


There should be no need whatsoever to convince your competitors and/or bureaucrats that allowing your new connector to be produced is in their interest. Only one should be convinced: the person buying the device.


If Apple made both USB-C and Lightning variants and let people choose: then sure, let the market decide.

In reality an oligopoly was stuck in a crappy stalemate and people had only compromised options. Carrying two sets of wires everywhere sucked.


We tried that for 40 years. The result is drawers full of chargers.

But clearly there is a price for the standardisation: it makes progress slower. On the other hand, it makes everyone's lives easier. Just as with e.g. electrical outlets in the house, there is a time for exploration and innovation, and there is a time for standardisation. And we are ready for standardisation now; USB-C is good enough.


USB-C is absolutely not good enough. The connectors are often incompatible due to tiny manufacturing tolerances; cables from different manufacturers often fall out of the port after longer-term use or don't make a good connection, so you get flaky charging; and cables and connectors that look the same are actually incompatible, depending on whether they support only USB 2/3/4 or Thunderbolt, whether DisplayPort/HDMI alt mode is supported, etc. This small short-term gain at the cost of locking in USB-C forever was a terrible idea, brought to you by the same hypercompetent group that mandated cookie banners.


Cookie banners were never mandated. They're just a fucking stupid way for website operators to try to circumvent data privacy regulation.

And when it comes to USB-C: sure, it's far from perfect, but it's a great foundation to build upon and improve.


That's the point, the regulation effectively locks in USB-C as it is.


They were mandated by the EU. You don't get to pass crap laws of the form "show a banner or do {vague/impossible/unacceptable thing}" and then complain when 100% of people show a banner. That kind of inane immaturity is why the EU is so far behind and falling further.


Please don't fulminate on HN. You may not owe cookie banners better, but we're trying for a better style of conversation here. Please make an effort to observe the guidelines, which seek to make HN a place for curious conversation, not rage.

https://news.ycombinator.com/newsguidelines.html


> We tried that for 40 years. The result is drawers full of chargers.

Which is fine? The industry eventually converged to just a handful of common standards on its own.

You can’t innovate without being able to experiment, which is only possible if there are actual people using your product. Thinking that a committee of bureaucrats can replace that is silly.


A handful of common standards is useless.

One standard for chargers is the only acceptable outcome and it wouldn't have gotten there without regulation.

What need is there to experiment with chargers? Wire go in, power go through - it's really not that complicated, the only important thing is standardization.


> What need is there to experiment with chargers?

That’s the point, I have no clue. But we might still be stuck with floppy drives with a mindset like that.

Although as a physical connector USB-C is far from perfect. IMHO Lightning seemed nicer in some ways.


> But we might still be stuck with floppy drives with a mindset like that.

That seems like a false equivalency to me. It seems quite obvious that storage media have more potential for development than charging wires.

Wire go in - power go through, is literally all they need to do and USB-C does that pretty well.


No, it's cable go in, power go through, *cable doesn't fall out* and usb-c does that terribly after a few months.

I'm extremely pro standardisation, but the next revision needs to do a lot better.


MagSafe is a superior power connector in every way.


The "bureaucrats" are a proxy for the person buying the device. That's literally the point of representative democracy. The average person doesn't want to make a million decisions on technical standards, so they elect somebody they trust to make them for them.


This visualization is wildly inaccurate. The supposed 1000 pixels are actually 100x100 pixels, which is 10,000, not 1000. Secondly, on many screens they are not actually pixels. For example, on a MacBook Pro you're likely seeing 40,000 physical pixels in actuality.
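
For the arithmetic, a quick sketch; the 2x scale factor is my assumption about a typical Retina MacBook Pro display, not something stated in the visualization:

  css_pixels = 100 * 100              # the grid is 100x100, i.e. 10,000, not 1,000
  scale = 2                           # assumed devicePixelRatio on a Retina MacBook Pro
  device_pixels = css_pixels * scale ** 2
  print(css_pixels, device_pixels)    # 10000 40000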


