Hacker News | jpalomaki's comments

I think the killer is when the platform handles the transaction all the way. Instead of charging per click, platform can then take a cut of the sale.

Some people are complaining the matte finish on the LG ruins part of the experience.

Depends on if you are stuck with the subscription for life, or if there's actually a reasonable way to unsubscribe.

You're never free to unsubscribe because you become accustomed to the tools, and use the file formats, etc. (That's why I don't do subscription, ever.)

Agents are not yet very good at figuring out how things look on the screen.

Or at least in my experience this is where they need most human guidance. They can take screenshots and study those, but I’m not sure how well they can spot when things are a bit off.


Just docker compose and spin up 10 stacks? That should not be too much for a modern laptop. But it would be great if a tool like this could manage the ports (allocate a unique set for each worktree and add them to .env).

For some cases Testcontainers [1] is an option as well. I’m using it for integration tests that need Postgres.

[1] https://testcontainers.com/


That’s what our setup/teardown scripts are for, but we plan on making their generation automatic.


Learned once the hard way that it makes sense to use "flock" to prevent overlapping executions of frequently running jobs. The server started to slow down, my monitoring jobs started piling up, causing the server to slow down even more.

  */5 * * * * flock -n /var/lock/myjob.lock /usr/local/bin/myjob.sh


We can also use a systemd timer, which ensures there is no overlap.

https://avilpage.com/2024/08/guide-systemd-timer-cronjob.htm...
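For reference, a minimal sketch of such a unit pair (file names and paths are illustrative). systemd will not fire the timer again while a previous invocation of the service is still running:

```ini
# /etc/systemd/system/myjob.service
[Unit]
Description=My periodic job

[Service]
Type=oneshot
ExecStart=/usr/local/bin/myjob.sh

# /etc/systemd/system/myjob.timer
[Unit]
Description=Run myjob every 5 minutes

[Timer]
OnCalendar=*:0/5

[Install]
WantedBy=timers.target
```

Enable with `systemctl enable --now myjob.timer`.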


Have you tested how this behaves on eventually consistent cloud storage?


I'm confused, is EBS eventually consistent? I assume it's strongly consistent, as otherwise a lot of other Linux things would break.

If you're thinking about using NFS, why would you want to distribute your locks across other machines?


Why would anyone want a distributed lock?

Sometimes certain containerized processes need to run on a schedule, but maintainers also need a way to run them manually without a scheduled run executing or starting concurrently. A shared FS seems like the ”simplest thing that could possibly work” distribution mechanism for locks intended for that purpose, but unfortunately not all cloud storage volumes are strongly consistent, even for a single user, and it may take several milliseconds for a lock to take hold.


Wouldn't a database give you better consistency guarantees in that case? NFS locking semantics are a lot more complicated than a simple `SELECT ... FOR UPDATE`.


Sure, but that would require a separate database for this one use case. Mixing infra concerns into an app db doesn’t sound kosher, either, and a shared volume is already available.

Seems easier to have a managed lockfile for each process, diligently checking that the lock has actually been acquired. Performance is not a concern anyway, as long as acquire takes just a few ms we’re golden.

FWIW, it’s not NFS.


If a file system implements the lock/unlock functions precisely to the spec, it should be fully consistent for the file or directory being locked. It does not matter whether the file system is local or remote.

In other words, it's not the author's problem; it's the problem of a particular storage system that decides to throw the spec out the window. But even in an eventually consistent file system, the vendor is better off ensuring that the locking semantics are fully consistent per the spec.


It might be a question of where the seniors put their time: coaching juniors or working with AI tools.


We have articles that are very skeptical about whether AI companies will ever make any money.

And then we have others claiming that AI is already having such a significant impact on hiring that the effects are clearly visible in the statistics.


Those two phenomena can be true at the same time.


Those two are not contradictory.

AI companies could never make any money (statement about the future, and about AI companies, and finances). And AI could be having a visible effect on hiring today (statement about now, and about non-AI companies, and about employment).

They don't have to both be true, but they do not inherently contradict each other.


In 2000 a lot of internet companies went under while the internet had a huge impact on business and wider society.


Can you give a concrete example of a programming task GPT fails to solve?

Interested, because I’ve been getting pretty good results with different tasks using Codex.


Try asking it to write some GLSL shaders. Just describe what you want to see, then try to run the shaders it outputs. It can get a UV map or a simple gradient right, but with shaders that are a bit more complex, most of the time the output will not compile or run properly; sometimes it mixes GLSL versions, sometimes it just straight up makes things up that don't work or don't output what you want.


Library/API conflicts are the biggest pain point for me usually, especially breaking changes. RLlib (currently 2.41.0) and Gymnasium (currently 0.29.0+) have led me in circles many times because they tend to be out of sync (for multi-agent environments). My go-to test now is a simple hello-world-type card game like War: competitive multi-agent with RLlib and Gymnasium (PettingZoo tends to cause even more issues).

Claude Sonnet 4.5 was eventually able to figure out a way to resolve it (around 7 fixes), and I let it create an rllib.md with all the fixes and pitfalls; I'm curious whether feeding this file to the next experiment will lead to a one-shot. GPT-5 struggled more, but I haven't tried Codex on this yet, so it's not exactly a fair comparison.

All done with Copilot in agent mode, just prompting, no specs or anything.


I posted this example before, but academic papers on algorithms often have pseudocode yet no actual code.

I thought it would be handy to use AI to turn the paper into code, so a few months ago I tried to use Claude (not GPT, because I only have access to Claude) to recreate C++ code implementing the algorithms in this paper, as practice in LLM use. It didn't go well.

https://users.cs.duke.edu/~reif/paper/chen/graph/graph.pdf


I just tried it with GPT-5.1-Codex. The compression ratio is not amazing, so not sure if it really worked, but at least it ran without errors.

A few ideas how to make it work for you:

1. You gave a link to a PDF, but you did not describe how you provided the content of the PDF to the model. It might only have read the text with something like pdftotext, which for this PDF results in a garbled mess. It is safer to convert the pages to PNG (e.g. with pdftoppm) and let the model read the pages as images. A prompt like "Transcribe these pages as markdown." should be sufficient. If you cannot see what the model did, there is a chance it made things up.

2. You used C++, but Python is much easier to write. You can tell the model to translate the code to C++ once it works in Python.

3. Tell the model to write unit tests to verify that the individual components work as intended.

4. Use Agent Mode and tell the model to print something and to judge whether the output is sensible, so it can debug the code.


Interesting. Thanks for the suggestions.


It completely failed for me at running the code it changed in a Docker container I keep running. Claude did it flawlessly. It absolutely rocks at code reviews, but in comparison it's terrible at generating code.


It really depends on what kind of code. I've found it incredible for frontend dev and for scripts. It falls apart in more complex projects and monorepos.


If OpenAI manages to get the agentic buying going, that could be big. They could tie the ad bidding to the user actually making the purchase, instead of just paying for clicks.

