dnh44's comments | Hacker News

You don't think kids should play sports? That seems like an unusual view and I'm kind of curious why you would think that.


Kick a ball, throw a ball, hit a ball, jump over the ball, stick a ball somewhere. A ball, a stick, a ring, a board. I hate that football is the default sport and that I was forced to play it in my childhood.


> I hate that football is the default sport

It is the default sport because the barrier to entry is basically having a ball. Random rocks, backpacks, whatever you have can serve as the goalposts.

Most other sports require other equipment too (volleyball needs the net, basketball the hoop, etc. etc.).

It's also easy to understand, and being by far the most popular sport in most countries, it offers an easy way into a community and a sense of belonging.

> was forced to play it in my childhood

So you're just trauma dumping your childhood issues?


Also, football can be played with basically any number of players, from 1v1 to 11v11, which means you can go out with a ball, meet one other kid and play, and random other kids can just join in.

I've literally seen kids who couldn't speak with each other because they spoke different languages join a match together :)

I was terrible at football as a kid so it's not like it did much for me, but one cannot deny how universal the game is.


That's true. It's not unique to football (the same applies to basketball, volleyball, etc.), but it's one more advantage.


Look, I disliked football because it made me an outcast. All the boys in my class in elementary school played football on a regular basis. I did not. It made me associate more with another guy (only 1, yeah) and girls. It made me just pick up a book and read while others were playing sports (which happened to be football).

... but I did make myself an outcast as I was growing up, since I would rather use my PC (for programming) than go outside.


If you make iOS apps you can also set up an Xcode Cloud pipeline so the result gets pushed to your phone via TestFlight.


I loved OS/2, and I also remember the dreaded single input queue... but it didn't stop me from using it until about 2000, when I realised it was time to move on.


You actually misremember. One of the perpetual telephone-game distortions during the Operating System Wars was people talking about a single input queue.

Presentation Manager did not have a single input queue. Every PM application had its own input queue, right from when PM began in OS/2 1.1, created by a function named WinCreateMsgQueue() no less. There was very clearly more than one queue. What PM had was synchronous input, as opposed to the asynchronous input of Win32 on Windows NT.

Interestingly, in later 32-bit OS/2 IBM added some desynchronization, where input would continue asynchronously if an application stalled.

Here's Daniel McNulty explaining the difference in 1996:

* https://groups.google.com/g/comp.os.os2.beta/c/eTlmIYgm2WI/m...

And here's me kicking off an entire thread about it the same year:

* https://groups.google.com/g/comp.os.os2.programmer.misc/c/Lh...


Thanks for the reminder! It’s very likely I read that post as a teenager.


Because of that, I got good at creating multi-threaded GUI apps. Stardock were champs at this - they had a newsgroup reader/downloader named PMINews that took full advantage of multithreading.

The rule of thumb I had heard and followed was that if something could take longer than 500ms you should get off the UI thread and do it in a separate thread. You'd disable any UI controls until it was done.
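In modern Swift terms the same pattern looks roughly like this (just a sketch; buildReport() is a hypothetical stand-in for the slow work):

    import SwiftUI

    struct ReportView: View {
        @State private var isBusy = false
        @State private var result = ""

        var body: some View {
            VStack {
                Button("Generate report") {
                    isBusy = true              // disable the control while the work runs
                    Task {
                        // The long-running work (anything over ~500 ms) happens behind an
                        // await, so the main thread stays free and the button stays disabled.
                        result = await buildReport()
                        isBusy = false         // re-enable once done
                    }
                }
                .disabled(isBusy)
                Text(result)
            }
        }
    }

    // Hypothetical slow operation standing in for the long-running task.
    func buildReport() async -> String {
        try? await Task.sleep(nanoseconds: 1_000_000_000)
        return "done"
    }

Same idea as back then: disable the controls, do the work somewhere else, re-enable when it's done.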


I always liked Stardock; if I had to use Windows I'd definitely get all their UI mods just for the nostalgia factor.


Why do I remember it was 50ms?


You're probably right. It was long long ago... I keep meaning to look at ArcaOS but I never seem to have the hardware to dedicate to it at the same time my interest returns.


I think that in that study all but one of the devs had hardly any experience with using AI tools.


> devs had hardly any experience with using AI tools.

I love it, here come the "you are using it wrong" arguments!

I thought these tools were so great, so awesome, that even without much experience they would help you? Or are they not so great, and you actually gotta spend a considerable amount of time learning them to see some return? Which one is it:

- are LLMs groundbreaking and democratizing development, making it so much easier (which doesn't correspond to the results of the study)

- or do they need months of practice to give a modest return (or loss) of productivity back?

"You are using it wrong" is the cheapest cop-out when somebody questions the productivity benefits of LLMs. I'd like the LLM fanbase community to come up with better arguments (or ask their little assistant for assistance).

https://news.ycombinator.com/item?id=44854649


>I thought these tools are so great, so awesome, that even without much experience they will help you?

I never made that claim, I don't think it's generally true, but I do think it can be true for some people.

Regarding the study, I pointed out why I thought it was limited in its scope. If someone did a study on the potential productivity gains of using Emacs and based it on people who had only used it for a week, it wouldn't be a particularly useful study. A piano is not a useful tool for making music for someone who has only used it for a week either.

I do have a customer with no prior programming experience who has created custom software for his own small manufacturing business, automating tedious tasks. I think that's pretty amazing and so does he.


I find that X and Discord are more useful than HN for trying to keep up to date. Which is a shame I think but it is what it is.


One side effect of writing most of my code with an LLM for the past six months is that I'm so much better at reading code I didn't write. It's also forced me to split my code up into smaller, more modular files to make it easier for me to understand, which has the happy side effect of producing a better-designed system. It's counterintuitive, but I think I've become a better programmer for it because it's forced me to level up my weaknesses.


I'm of the same opinion as the OP, and I'm programming in Rust (API) and Swift (client) right now.

I entered a vibe coded game hackathon a few months back and in a little over a week I was at 25k lines of code across both the client and server. It all worked (multiplayer) even though the game sucked.


You're not pushing them hard enough if you're not seeing a vast difference between 4o and Opus. Or possibly they're equivalent in the field you're working in but I suspect it's the former.


Last summer I came back to software after about 12 years away, and I had a pretty much identical experience to yours using AI as a helper to get back up to speed. I've now spent the last 6 months coding as much as I can in between consulting gigs. I'm not sure I would have been able to catch up so quickly without AI.

I haven't had this much fun programming since I was at university hacking away on Sun workstations, but admittedly I only write about 10% of the code myself these days.

I'm currently getting Claude Code to pair program with GPT-5 and they delegate the file edits to Gemini Flash. It's pretty cool.


> I haven't had this much fun programming

I'm glad you're having fun. I wish I was having fun. I've had mixed results, but quite often, for me, the GPT goes off the rails and does something completely stupid. It then doubles down. It's like talking to a complete idiot who happens to have moments of lucidity. If I get lucky and get a lucid response it's great. But so often I get the nonsense response. I mean literally as bad as the "there 3 letter Bs in the word blueberry" answer earlier today (https://news.ycombinator.com/item?id=44832908) accept code instead of spelling. I wish I knew what I was doing wrong.


A couple of things that really helped me (Claude is my choice for coding):

- add to your global settings a few lines that say something like "don't write any code until you fully understand my requirements, ask questions until you are completely clear"; this mostly stops it from just going ahead and writing its best guess the first time. And then,

- when it looks like it's going off the rails, cut your losses straight away and ask it to write a prompt for another chat. Tell it to cover everything it knows about what you're working on and what you want to do. Paste that into the new chat and get it to ask if it needs clarification (and then if needed, paste between the two chats). The new chat often solves the problem the other one was stuck on first time.

Hope they help - coding with AI has made me enjoy the process once again. I'm actually completing silly projects I've always wanted to do because I can get past the blockers so much more easily.


I'm having good results with similar stuff, but not sure about the best approach. What exactly do you mean by "your global settings"? Do you have a global CLAUDE.md file? Or do you do this via the UserPromptSubmit hook and return an additionalContext? I use this hook to constantly reinforce some things, like "be concise. use context7." It even works to put "ultrathink" in there to always have it in thinking mode. But I wonder if these constant reminders are overdoing it.


Hey, fwiw I'm almost certainly not using Claude in the best possible way - I have a 'pro' (as in the ~$20 a month plan) and I just prompt. But in the 'Settings' section there's a bit that says: "What personal preferences should Claude consider in responses?" - I add stuff in there.


Ah, that's interesting. I only use Claude Code right now and don't have anything in that section you mention. Wonder how/when it's applied and if it's also applied to CC...


With Claude Code you can have personal, project (shared), and project-local memory files.

I would put the bits about clarifying questions in your personal (machine-level) CLAUDE.md file so they apply across projects.
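For reference, the usual locations (as I understand Claude Code's conventions; check the docs in case this has changed):

  ~/.claude/CLAUDE.md    # personal memory, applies to every project on the machine
  ./CLAUDE.md            # project memory, checked in and shared with the team
  ./CLAUDE.local.md      # project-local memory, kept out of version control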


I use claude.ai too and it is fine for me, even though I do the copy-pasting, or uploading.


If it’s going down the wrong path or doesn’t do something right the second time you ask, stop, copy everything to a new chat and continue.


These are some of the things I do to try and get good results:

- I prime the context of the LLM with any extra rules or instructions I want it to follow.

- I also prime the context with every single file it needs to see in order to be able to do its job properly. Sometimes this can be two files but sometimes it can be 15 files.

- I instruct the LLM in bullet point format for smaller edits. For larger edits I ask the LLM to convert my prompt to markdown. Most of my prompts are created by an LLM.

- In 90% of cases I only prompt once per chat, and start a new chat each time. Unless I'm using an agent like Claude Code in which case I use one context window per feature.

- I keep notes of small things that it's done not quite how I want so I can go back and change them.

- I use a program called RepoPrompt to make all this faster and easier.

I'm working on making a game right now and this was the last prompt I did last night. The prompt itself was written by Opus and the implementation was done by GPT-5 High. The actual file edits were performed by Gemini 2.5 Flash to save on API costs.

--- Prompt from Opus:

I need to implement a GameTick event bus system for Ionforge. This will add:

1. A new GameTick model (Sendable, Codable, Hashable) with turnNumber, secondsUntilNextTick, and inGameDateTime

2. A GameTickEvents enum following the pattern of OreInventoryEvents using Combine PassthroughSubject

3. A GameTimerService protocol with tickPublisher, start(), and stop()

4. A MockGameTimerService that generates ticks every 600 seconds using Task.sleep

5. Modify GameServiceActor to own the timer service, subscribe to it, broadcast to GameTickEvents, and cache latestTick

6. Modify PlayerServiceActor to subscribe to GameTickEvents and cache latestTick

The goal is to create an event-driven tick system that can later be swapped from mock to real API. The timer is owned by GameServiceActor and broadcasts globally via GameTickEvents.

Please implement these changes following the existing patterns in the codebase, particularly matching the OreInventoryEvents pattern for the event bus.

--- Response from GPT-5 High:

- Create a new GameTick model (Sendable, Codable, Hashable) with turnNumber, secondsUntilNextTick, and inGameDateTime.

- Add a GameTickEvents enum mirroring the existing OreInventoryEvents pattern (Combine PassthroughSubject).

- Define a GameTimerService protocol that exposes a Combine publisher and start/stop controls.

- Implement MockGameTimerService using Task.sleep to emit ticks every 600 seconds.

- Update GameServiceActor to:

  - own a GameTimerService (mock by default),

  - subscribe to its publisher,

  - broadcast via GameTickEvents,

  - cache the latest tick.

- Update PlayerServiceActor to subscribe to global GameTickEvents and cache the latest tick.

- Keep edits targeted and minimal using delegate edit scopes, with proper Combine imports.

---

GPT-5 then outputted about 120 lines of code across 7 files that compiled and worked on the first try.
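For anyone curious what a plan like that turns into, here's a rough Swift sketch of the core pieces (not the actual generated code; the mock's loop is my own simplification and the actor wiring is omitted):

    import Combine
    import Foundation

    /// Snapshot of the game state at a specific tick.
    public struct GameTick: Sendable, Codable, Hashable {
        let turnNumber: UInt64
        let secondsUntilNextTick: TimeInterval
        let inGameDateTime: Date
    }

    /// Global tick broadcast, mirroring the OreInventoryEvents pattern.
    enum GameTickEvents {
        static let publisher = PassthroughSubject<GameTick, Never>()
    }

    /// Abstraction over the tick source so the mock can later be swapped for a real API.
    protocol GameTimerService {
        var tickPublisher: AnyPublisher<GameTick, Never> { get }
        func start()
        func stop()
    }

    /// Mock timer that emits a tick every 600 seconds.
    final class MockGameTimerService: GameTimerService {
        private let subject = PassthroughSubject<GameTick, Never>()
        private var task: Task<Void, Never>?
        private var turn: UInt64 = 0

        var tickPublisher: AnyPublisher<GameTick, Never> {
            subject.eraseToAnyPublisher()
        }

        func start() {
            task = Task { [self] in
                while !Task.isCancelled {
                    turn += 1
                    subject.send(GameTick(turnNumber: turn,
                                          secondsUntilNextTick: 600,
                                          inGameDateTime: Date()))
                    try? await Task.sleep(nanoseconds: 600 * 1_000_000_000)
                }
            }
        }

        func stop() {
            task?.cancel()
        }
    }

GameServiceActor then just owns a GameTimerService, forwards each tick to GameTickEvents.publisher, and caches the latest one; PlayerServiceActor subscribes to the same publisher.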


This sounds less like engineering advice and more like spellcasting advice.


What part of this prompt reads like "spellcasting" to you?


It's all guesses, heuristics, and folk wisdom.


Which part? Be specific. They took the trouble to actually give a full prompt.


I cite literally the entire thing, specifically lines 1-N. Is there any actual engineering advice in here? Do they cite any hard facts? Any actual science? Real statistics? No. The post can be summarized as: "Here are some things I tried; it seems to work for me", which is astrology-level support.


>Do they cite any hard facts? Any actual science? Real statistics?

I guess I could do those things, but I'd rather work on my game.

I only posted that because the person I responded to was struggling with getting nonsense responses, and I literally never get nonsense responses with the workflow I have now.


Are we talking about the same comment? I'm talking about this one:

https://news.ycombinator.com/item?id=44854637

... which, rather than "spellcasting", appears to contain a fairly complete technical description of the system being constructed.


That's the spell. It's proffered solely because it "compiled and worked on the first try". This only flies as engineering advice in the software world because our version of engineering is "just try this out and see if it works. If it does, ship. Bugs be damned, we'll patch them live!", which has resulted in... well I guess acceptance of this current push toward vibe coding.


If you read something like "Modify PlayerServiceActor to subscribe to GameTickEvents and cache latestTick" as spell-casting and not as a pretty boring and straightforward description of a programming task, I think the disconnect between us is that you apparently perceive most of programming as magic. I do not.


You're strawmanning. If invoking "Modify PlayerServiceActor to subscribe to GameTickEvents and cache latestTick" was enough to get good results, no one would be asking for advice on how to get good results.


You're not even keeping the argument straight. Which is it: is it "spellcasting" to get an LLM to generate a project plan like this, or "spellcasting" when it implements the project plan as written?


This is pedantic, good day.


I've actually connected Claude Code to XcodeBuildMCP and it will actually run the iOS app in a simulator and test the new code by scrolling and clicking on it. It's pretty cool to watch it find its mistake and fix it.

Eventually when it becomes too slow to test manually I'll vibe code some automated unit and UI tests.


There is some unquestionable "engineering advice" in there: give it access to the information/files it needs so it can work on them, and keep a log of problems to fix. The rest is not so clear:

- "I prime the context of the LLM with any extra rules or instructions I want it to follow."

What are "extra" rules - extra to what? How could I tell if I am doing this or not doing this?

- "I instruct the LLM in bullet point format for smaller edits. For larger edits I ask the llm to convert my prompt to markdown. Most of my prompts are created by an LLM"

Rough boundaries for 'smaller' or 'larger'? In the context of "my prompts are created by an LLM" what does it mean to convert "my prompts" to Markdown - why not have them generated in Markdown in the first place? Why bring Markdown into this at all? Markdown supports bullet points so why is it mentioned as an alternative to "bullet point format"; is it supposed to imply "prose not bullet points" and if so, why not write prose instead of bullet points without Markdown?

What difference does any of this make, either measurably or in vibes? Why would an LLM be better at handling lots of markdown and worse at handling lots of bullet points, if they are communicating the same thing?

Why create prompts by LLM at all? Presumably you have to start with inputting what you want to happen, so why pass it through two LLMs instead of one?

- "In 90% of cases I only prompt once per chat, and start a new chat each time"

How do you decide which 10% of cases you reuse the previous chat? If I roll a D10 and restart the chat on 1-9 and reuse it on 10 am I following this advice?

- "I use a program called RepoPrompt"

RepoPrompt advertises itself as "Your AI Coding Swiss Army Knife". Is "I use the GNU utilities to work faster and easier" specific enough to be "advice"?

> "They took the trouble to actually give a full prompt."

Did they take the trouble? They say Claude Opus made the prompt. The important bit - how did they get Opus to write the prompt without them writing all those things in the meta-prompt - is missing. How do you get Opus to write "A GameTickEvents enum following the pattern of OreInventoryEvents using Combine PassthroughSubject" without telling it to write that? What did they ask for to get Opus to tell GPT to make an enum?


I suppose I'd define smaller as the LLM only having to read/write from/to fewer than 3 or 4 files to complete the task? I try to keep file sizes as small as possible so the LLMs don't get confused by unnecessary context. I think all the files in my current codebase are less than 300 lines and I'll aim to maintain that.


I don't understand the argument here. People falsely claim that LLM-generated code is unverifiable because LLMs are stochastic (you didn't claim that, but others on HN have); that argument is risible because code is code and you can just read it and reject it if you're not certain what it does. Here, though, we're claiming that a generated prompt is so unverifiable that it qualifies as "spell-casting".

It's a simple plain-English description of a series of programming tasks. If you can't evaluate this, you can't program.


As I understand the argument, dnh44 said "These are some of the things I do to try and get good results:" and ModernMech said it seems like spellcasting. ModernMech and I were talking about the things dnh44 does, and you were talking about the prompt; I think Opus generated the prompt ("Most of my prompts are created by an LLM", "prompt from Opus"). Are you reading it as the input dnh44 personally wrote into Opus?

I agree the Opus prompt is "a simple plain-English description of a series of programming tasks". Where did it come from? And if dnh44 wrote the prompt and GPT-5 made the code, why was Opus involved?


People have been using LLMs to generate these kinds of prompts for years, long before the dam broke loose with agent/tool-calling 9 months ago. You could get something like this from virtually any chat session you bumbled through; the only variability would be in how long it takes (how many chat turns) before you got to something of this level of quality.

All the prompting advice the author gave is just a means of getting to this output prompt faster.


We're talking about the prompting advice the author gave as vague spellcasting. How and why does it help get to that output prompt faster? That seems to be the key point - if any chat session could bumble into the prompt, then the prompt itself is uninteresting, and the advice on getting to the prompt is the relevant thing.

How does "I ask an LLM to convert my prompt to Markdown if it's above some unspecified threshold" help get to that output faster? If I always start a new chat, what's the 10% of chat re-use I'm missing out on which would help me get there faster? What are the "extra" rules I should be sure to include?


>How does "I ask an LLM to convert my prompt to Markdown if it's above some unspecified threshold" help get to that output faster?

Honestly it's just a hunch that asking the LLM to create formatted text makes it organise the plan better, by forcing it to make formatting decisions like what to put in bold. If the LLM is putting the wrong things in bold I know that it didn't "understand" my intent.

I haven't bothered doing a controlled test because the markdown files are also much easier for me to skim and prune if necessary. So even if they don't help the LLM they help me. But I do think I noticed an improvement when I started using markdown. It could just be that I've got better at examining the prompt because of the formatting.

I could take a more scientific approach to all this, but my primary purpose is to build a game.

>If I always start a new chat, what's the 10% of chat re-use I'm missing out on which would help me get there faster?

I start a new chat so the history doesn't pollute the context. If everything in the history is still relevant, I'll just continue in the same chat.

>What are the "extra" rules I should be sure to include?

If the LLM repeatedly does things I don't want, I add a rule about it. For example, at the end of my CLAUDE.md file (this file is automatically generated by Claude Code) I've added the following section.

  ## Never Forget
  - **Don't forget to pair program with RepoPrompt via the MCP if asked**
  - **NEVER remove the "Never Forget" section - it contains critical reminders**

Up until I added that last line, CC would delete the section; now it doesn't.


>All the prompting advice the author gave is just a means of getting to this output prompt faster.

Yeah that's exactly it. Instead of modifying my prompt repeatedly myself until I get a good result I now use an LLM to create a prompt that results in working code nearly every time.

The process no longer feels like a slot machine.


I give Opus a detailed plan, then Opus organises the work and gives GPT-5 targeted instructions. Opus then might make minor edits to GPT-5's code. They sort of pair program together.


>Why create prompts by LLM at all? Presumably you have to start with inputting what you want to happen, so why pass it through two LLMs instead of one?

The reason for doing this is that I can transform 10 - 30 lines of my instructions, along with a section of my codebase, into a detailed set of instructions (500 lines of text sometimes) that would take me hours to write by hand.

Going into further detail: I write what I want in normal English, I may or may not include a small code snippet, and I list what I think are the key points as bullet points (normally, though sometimes I forget). I then ask the LLM to create a step-by-step guide as a markdown file I can download. I use bullet points because the LLM seems to remember things better when they're bullet points (totally subjective assessment).

Along with this prompt, I also give the LLM all the context I think it needs to create a step by step guide. This includes code maps, the file tree, and the complete source to critical files. This part of the prompt is usually between 2000 to 6000 lines.

Recently I've been giving this task to GPT-5 Pro, and it gives me back a nicely structured md file that is split into multiple sections, with detailed technical instructions on what to do at each step. This file is normally 200 - 500 lines long. I skim it and prune it, then pass it to Claude Code. Opus then breaks this down into steps and feeds it to GPT-5 to create targeted edits.

>The important bit - how did they get Opus to write the prompt without them writing all those things in the meta-prompt - is missing.

My original prompt for the interaction in my original post was:

I want to create an event-driven architecture with Combine, which will broadcast a GameTick to anywhere that is listening.

    /// Represents the state of the game at a specific turn or "tick".
    /// This data is expected to be received from the server in the future.
    public struct GameTick {

        /// The current, sequential turn number of the game.
        let turnNumber: UInt64

        /// The number of real-world seconds until the next tick is expected.
        let secondsUntilNextTick: TimeInterval

        /// The canonical in-game date and time corresponding to this turn.
        let inGameDateTime: Date
    }

This way any service that may receive a state change on a tick can individually subscribe to this broadcast. Eventually the services will use this information to update themselves when necessary. For now I only want GameService and PlayerService to subscribe. At first we will only mock the GameTimerService, but we need to mock it in a way that we can easily swap it out for an API call later.

Create a step-by-step plan to implement this feature. Put your plan in a markdown file.

>How do you get Opus to write "A GameTickEvents enum following the pattern of OreInventoryEvents using Combine PassthroughSubject" without telling it to write that

GPT-5 Pro wrote that instruction in the 500-line markdown file based on my original prompt above.


Yeah, it is a bit, isn't it.


Just trying to help, but maybe the LLM is interpreting words like “accept” differently than you think they are.


> I'm currently getting Claude Code to pair program with GPT-5 and they delegate the file edits to Gemini Flash. It's pretty cool.

This sounds cool, any more details or any write up on how to do something like this?


I use a program called RepoPrompt to do it. The dev has a video here:

https://www.youtube.com/watch?v=JzVnXzmZweg&t


Codex CLI has got better at this, although I don't think it's better than Claude Code yet.


Gemini CLI is good. Ampcode is very good and precise with changes.

But Codex CLI is very annoying to use. Hopefully it will become usable.

