Hacker News | jarjoura's comments

Aren't toll roads the norm? It was radical in the 1940s and 1950s to create public freeways.

Toll roads do have real consequences: they raise the cost of everything that has to travel over them, and businesses that could exist on one side of a bridge or tolled section will relocate to other areas to avoid the tolls.

I'm not against them, but I don't like them either. I personally think it's a failure of a state managing its roads when the cost ends up spread so disproportionately.


>Aren't toll roads the norm?

No. I won't say they're rare but they're not especially common in the US.


Do you perhaps live in Florida or Oklahoma? They are quite rare in CA, the southwestern states in general, and the upper midwest.

> "For now, drivers pay to access just 6,300 miles of America’s 160,000 or so miles of highway"

It's a shame that my company tied itself to claude-code way too fast. It was like a single week last summer of, "oh what's everyone's favorite? claude? okay, let's go!"

OpenCode has been truly innovating in this space, is actually open source, and would fit naturally into custom corporate LLM proxies. Yet we've now built so many unruly wrappers and tools around claude-code's proprietary binary, just to sandbox it and use it with our proxy, that I fear it's too late to walk back.

Not sure how OpenCode can break through this barrier, but I'm an internal advocate for it. For hobby projects, it's definitely my go-to tool.


As of Dec 2025, Sonnet/Opus and GPT Codex are both trained for this, and most good agent tools (e.g. opencode, claude-code, codex) have prompts to fire off subagents during an exploration (use the word "explore"), so you should be able to research without the extra steps of writing plans and resetting context. I'd save that expense unless you need some huge multi-step verifiable plan implemented.

The biggest gotcha I found is that these LLMs love to assume that code is C/Python, just written in your favorite language of choice. Instead of considering that something should be written encapsulated into an object to maintain state, it will instead write 5 functions, passing the state as parameters between each function. It will also consistently ignore most of the code around it, even when reading it would show what specifically could be reused. So you end up with copy-pasta code, and unstructured copy-pasta at best.
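A minimal sketch of what I mean, with hypothetical names and Python standing in for "your favorite language": the state-threading style these models tend to emit, next to the encapsulated alternative.

```python
# State-threading style the models tend to emit: every function takes
# the whole state dict and hands it on to the next one.
def load_config(state):
    state["retries"] = 3
    return state

def connect(state):
    state["conn"] = f"conn(retries={state['retries']})"
    return state

def fetch(state):
    return f"fetched via {state['conn']}"

threaded_result = fetch(connect(load_config({})))

# Encapsulated alternative: the state lives inside one object instead
# of being passed between free functions.
class Fetcher:
    def __init__(self, retries=3):
        self.conn = f"conn(retries={retries})"

    def fetch(self):
        return f"fetched via {self.conn}"

oop_result = Fetcher().fetch()
```

Both produce the same result; the difference is only where the state lives and how much plumbing each new step requires.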

The other gotcha is that claude usually ignores CLAUDE.md. So for me, I first prompt it to read it and then I prompt it to next explore. Then, with those two rules, it usually does a good job following my request to fix, or add a new feature, or whatever, all within a single context. These recent agents do a much better job of throwing away useless context.

I do think the older models and agents get better results when writing things to a plan document, but I've noticed recent opus and sonnet usually end up just writing the same code to the plan document anyway. That usually ends up confusing itself because it can't connect it to the code around the changes as easily.


>Instead of considering that something should be written encapsulated into an object to maintain state, it will instead write 5 functions, passing the state as parameters between each function.

Sounds very functional, testable, and clean. Sign me up.


I know this is tongue in cheek, but writing functional code in an object oriented language, or even worse just taking a giant procedural trail of tears and spreading it across a few files like a roomba through a pile of dog doo is ... well.. a code smell at best.

I have a user prompt saved called clean code to make a pass through the changes and remove unused, DRY and refactor - literally the high points of uncle bob's Clean Code. It works shockingly well at taking AI code and making it somewhat maintainable.


>I know this is tongue in cheek, but writing functional code in an object oriented language, or even worse just taking a giant procedural trail of tears and spreading it across a few files like a roomba through a pile of dog doo is ... well.. a code smell at best.

After forcing myself over the years to apply various OOP principles in multiple languages, I believe OOP has truly been the worst thing to happen to me personally as an engineer. What you're actually seeing, I believe, is just an "aesthetics" issue, and a purely learned aesthetic at that.


Does its output follow Uncle Bob's "no comments needed" principle?

Not so much tongue in cheek, but a little on the light side, sure.

I'd argue writing functional code in C++ (which is multi-paradigm anyway), or Java, or Typescript is fine!


Care to share the prompt? Sounds useful!

Sure. Please improve it and come back around to let me know.

https://gist.github.com/prostko/5cf33aba05680b722017fdc0937f...


> As of Dec 2025, Sonnet/Opus and GPT Codex are both trained for this, and most good agent tools (e.g. opencode, claude-code, codex) have prompts to fire off subagents during an exploration (use the word "explore"), so you should be able to research without the extra steps of writing plans and resetting context. I'd save that expense unless you need some huge multi-step verifiable plan implemented.

Does the UI show clearly what portion was done by a subagent?


Yes it will, this is almost verbatim (redacted product) claude-code output from my current session:

   ● I'll explore the codebase to understand the current <redacted> architecture, testing patterns, and integration points. This will help me formulate effective strategies for reducing QA burden.

   ● 3 Explore agents finished (ctrl+o to expand)
      ├─ Explore <redacted> architecture · 57 tool uses · 60.0k tokens
      │  ⎿  Done
      ├─ Explore current testing approach · 29 tool uses · 51.7k tokens
      │  ⎿  Done
      └─ Explore API integration patterns · 44 tool uses · 71.7k tokens
         ⎿  Done

During agent execution, it also shows what each sub-agent is up to. In ctrl+o mode it'll show the prompts it passed to each sub-agent.

The UI (terminal) in Claude code will tell you if it has launched a subagent to research a particular file or problem. But it will not be highlighted for you, simply displayed in its record of prompts and actions.

If you use the vscode extension you can click to view the sub-agent prompts and see all tool calls.

If claude ignores your CLAUDE.md, you can force it to read the file via settings, for example by catting it at every session start.
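As a sketch only (the exact hooks schema here is an assumption; check the event name and nesting against the current Claude Code settings docs), a session-start hook in `.claude/settings.json` might look something like:

```json
{
  "hooks": {
    "SessionStart": [
      {
        "hooks": [
          { "type": "command", "command": "cat CLAUDE.md" }
        ]
      }
    ]
  }
}
```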

AI can be an FP absolutist too.

Interesting, for me they almost always assume/write TS.

Anecdotally, the common theme I'm starting to hear more often now is that people who use “AI” at work despise it when it replaces humans outright, but love it when it saves them from mundane, repetitive crap that they have to do.

These companies are not selling the world on a vision where LLMs are a companion tool, instead, they are selling the world on some idea that this is the new AI coworker. That 80/20 rule you're calling out is explained away with words like “junior employee.”


I think it's also important to see that even IF some are selling it as a companion tool, that's only for the meantime. That is, it's your companion now only because they need you working next to it to make it better, so that it can become an "AI employee" once it has been trained on your companionship.


There are hundreds of thousands of software engineers who, given FU amounts of money, would absolutely keep writing software and do it only for the love of it. The companies that hire us usually make us sign promises that we won't work on side projects. Even if there are legal workarounds to that, it's not quite so simple.

Even still, whatever high salaries they do give us just flow right back into the neighborhoods through insane property values and other cost-of-living expenses that negate any gains. So, it’s always just the few of us who can win that lottery and truly break out of the cycle.


> whatever high salaries they do give us just flow right back into the neighborhoods through insane property values and other cost-of-living expenses that negate any gains. So, it’s always just the few of us who can win that lottery and truly break out of the cycle.

You break out of the cycle by selling your HCOL home and moving to LCOL after a few years. That HCOL home will have appreciated fast enough given the original purchase price that the growth alone would easily pay for a comparable home in a LCOL area. This is the story of my village in Texas, where Cali people have been buying literal mansions after moving out of their shitboxes in LA and the Bay Area.


moonlighting is permitted by law in California (companies legally can't prevent you from doing it, iiuc), as long as there's no conflict of interest with your main job...


"no conflict of interest" is basically meaningless if your day job is writing software. These clauses you sign are quite broad in what that scope of conflict could be.

Every company I've worked for has had very explicit rules saying you must get written sign-off from someone at director or VP level for your "side project," open source or not.

You might want to check your company guidelines around this just to make sure you're safe.


Side projects that aren't a conflict of interest when working at Google is rather limiting. Likely less so for small companies.


Not really; in my personal experience, and per my friends, most big companies are pretty lenient about it, except for Apple.


No, they're pretty strict. It just changes what you are allowed to do, with Apple being very restrictive in not letting you do it at all.


As long as you don't use their hardware to do it.


that goes without saying, but it's still not free permission when you use your own stuff.


Sad, because before COVID, no one at Meta cared where you worked as long as you were getting your shit done. There were never any available meeting rooms, and the open floor plans were so loud that people would spread out all over the campus and use single-person VC rooms to communicate.

Basically, everyone trusted everyone.

This is 100% just a soft layoff.


I notice US tech companies have also become really tough on white collar workers in order to suck up to Trump and his country goons.

No more diversity programs, work life balance no longer promoted, that kinda stuff. This fits in with that trend.


Diversity programs do not universally benefit white collar workers.


Not directly but they do create an open and fair working environment for all.

Once you leave room for discrimination and bullying, everyone suffers because it makes company culture harder.

And it's not just about "quotas". That's an extreme-right talking point. Diversity done properly doesn't involve quotas. Those are just a way for companies that don't actually care about it to have an easy 'fix' to get their numbers to look ok but it's not actual diversity.

I'm part of a diversity team myself as a side role. In Europe luckily.


> And it's not just about "quotas". That's an extreme-right talking point. Diversity done properly doesn't involve quotas

At Microsoft, Google, Apple, and Meta, diversity programs were implemented as soft quotas. All this talk about "diversity done properly" is just so much noise when approximately all the largest companies aren't doing it that way.


Soft quotas are not great, but at my work (also a huge multinational, but headquartered in Europe) we just use stats as a guide. Obviously if a country has 30% people of a certain ethnicity and in your employ it's 2%, you're doing something wrong. We use that to measure how effective we are at combating bias and prejudice, what works and what doesn't. I wonder if that's sometimes regarded as a 'soft quota', but it shouldn't be.

We don't fix this with hiring targets. We hit the root cause with training for HR and management (and also some for all employees in the yearly mandatory training package): recognising hidden bias, and challenging people to consider their reactions.

And then we measure performance with stats, rather than just forcing the numbers. That's lazy and only fixes the problem on paper; window dressing. Diversity is about more than the hiring process anyway; a huge part is discrimination in the workplace, often not by managers but by co-workers, so we give management the skills to deal with that.

Maybe in US big tech this is common but those are all pretty immoral companies anyway. See how quickly they pivoted to sucking up to Trump. The world is much bigger than the US and big tech.


>Obviously if a country has 30% people of a certain ethnicity and in your employ it's 2% you're doing something wrong.

Do you apply this to everything? Like say, a sports team?

>The world is much bigger than the US and big tech.

Anything impressive to show for it? Because it really seems like all this focus on diversity is your downfall not your strength.


> Do you apply this to everything? Like say, a sports team?

No, at work where we have tens of thousands of people.

Sport is a voluntary thing, people just join it when they want (I guess, I'm not into sports, not watching nor playing).

> Anything impressive to show for it? Because it really seems like all this focus on diversity is your downfall not your strength.

Yes we have great quality of life. It's not all about money.

In fact I asked to move to a country where wage levels are much lower, to have a better quality of life. Here around the Mediterranean the weather is better, and people enjoy life more and take it slower. There are many more things to do in my free time that I enjoy. When I'm back in Holland I hate it; people are so materialistic, always talking about their new car, how big their TV is lol. I don't even own any car or motor and my TV is tiny, but I'm much happier here.

Also diversity is just a thing we do, we're not all about that. I am because I voluntarily spend part of my work time on it (LGBTIQ in particular). For most people in the company it's a message here or there, one little training per year and maybe a talk from one of us at the town hall meetings which are optional.

There's other similar programs in the company about sustainability and ethics.


> Sport is a voluntary thing, people just join it when they want (I guess, I'm not into sports, not watching nor playing).

Sports are one of the highest paying jobs in the world. Professional athletes in popular sports leagues are in the 0.01%.


[flagged]


They are merit-based, especially with DEI. It just aims to remove the common human bias of trusting people more when they are more like us, through training and by hiding names/photos during CV preselection. So it is only based on merit.


> We're back to sane times thankfully, but it will take a long time to undo the rot caused by DEI.

Don't be counting your chickens too soon. Once the current clown show is voted out, we can hopefully get back to a more reality-based situation.


So that a different clown show can take over, great, cool way to run a country.

People are never going to tolerate the nonsense that followed 2020 again, mark my words.


As clown shows go, the current admin will be hard to beat.


meh, you're just conflating two different things.

Tech companies spent a decade (since 2010) driving towards the belief that the entire world was going to go online and stay there. They also amassed an insane amount of wealth in that time, wealth that is now structurally tied to the stability of the entire financial system.

For whatever reason, investors get bored and want to move money around, so the tech companies that had built healthy, stable businesses needed to keep the dopamine hits coming with new mega announcements. What else is there to build? "Efficiency" is the current corporate white-collar trend, because that's what woos investors. AI is the other new-new thing, but instead of being the next reason to reverse hiring trends, AI itself is built and sold as an employee replacement.

Anyway, there is an entire class of people in the US who feel and believe it can't get any worse, and who are genuinely suffering in ways many of us here on this forum can't even imagine. I definitely think it's unfair to put these two concerns in the same bucket.


It's kind of annoying, right now at least, when an agent can see all the LSP noise and decides to go off on a tangent to address it in the middle of running the very task the LSP is responding to.

For this to work, the LLM has to be trained on the LSP, and the LSP has to know when to hold back reporting changes and when to resume.


Gemini 2.5 and now 3 seem to continue their trend of being horrific in agentic tasks, but almost always impress me with the single first shot request.

Claude Sonnet is way better about following up and making continuous improvements during a long running session.

For some reason Gemini will hard freeze-up on the most random queries, and when it is able to successfully continue past the first call, it only keeps a weird summarized version of its previous run available to itself, even though it's in the payload. It's a weird model.

My take is that it's world-class at one-shotting, and if a task benefits from that, absolutely use it.


I ended up using container service on azure for a small rust project that I built in a docker container and published to GitHub. GitHub actions publishes to the azure service and in the 3 years I have been running it, it's basically been almost entirely free.
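A rough sketch of such a pipeline, where every name (registry path, app name, resource group, secret names) is a placeholder rather than the actual setup, and assuming the Azure service is a Container App reachable via the `az containerapp` CLI:

```yaml
name: deploy
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      # Build the Docker image and push it to GitHub's container registry.
      - uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - uses: docker/build-push-action@v5
        with:
          push: true
          tags: ghcr.io/${{ github.repository }}:latest

      # Authenticate to Azure and point the container app at the new image.
      - uses: azure/login@v1
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}
      - run: |
          az containerapp update \
            --name my-rust-service \
            --resource-group my-rg \
            --image ghcr.io/${{ github.repository }}:latest
```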


I have a similar experience except I use Go+GitHub Actions+Google Cloud Run.


I thought WASM was no_std, since there's no built-in allocator?

Regardless, I'm not sure why a Rust engineer would choose this path. The whole point of writing a service in Rust is that you trade 10x build complexity and developer overhead for a service that can run in a low-memory, low-CPU VM. This seems like the wrong tool for the job.


> Seems like the wrong tools for the job.

Thanks for the confirmation. I was confused as well. I always thought that the real use of WASM is to run exotic native binaries in a browser, for example, running Tesseract (for OCR) in the browser.

