I'd say on average about 50% faster, but it really depends on the task at hand. On problems that can be isolated pretty well, like a relatively self-contained new feature (for example building a file export in a specific format), it's easily a 10x speed-up.

One thing that generally gets talked about less is exploration of the solution space during manual implementation. I work in a very small company and we build a custom ERP solution. Our development process is very stripped down (a good thing IMO). Oftentimes when we get new requirements we brainstorm and make a rough design. Then I try to implement it, and during that phase new questions and edge cases arise; whenever that happens we adjust the design. In my opinion this is very productive, because the details of the design are worked out when I already know the related code very well, having already gotten down to implementing. This leads to a better-fitting design and implementation. Unfortunately this exploration workflow is incompatible with LLMs if you use them to do the implementation for you, which means you have to put more effort into the design up front. From my experience that means the gain in speed on such tasks is nullified, and it also results in code that fits worse into the rest of the codebase.

That's awesome! I feel similarly; I drew a lot back in the day because, growing up in a small town, I was bored so often. I only did portrait art, and today I struggle because I just don't know what to draw and I'm just not good at doodling. Best of luck to you!

I once wrote a formatter for Power Query that's still in use today. It's a much simpler language and I took a simpler approach. It was a really fun problem to solve.


I tried to use TOML as the config format for the app I'm building at my day job. I ditched it for JSON because its representation is ambiguous, meaning the same object tree has many, many different valid representations in TOML. This makes it really hard to round-trip the config string to an object tree and back to a config string without losing the specific TOML representation chosen by the config author, and no library I've encountered supports this properly. For me this was a very important use case (config migrations). I implemented my own JSON parser that preserves formatting and comments in half a day. Maybe JSON is harder to read and write, but IMO its simplicity is a good feature.
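
For anyone curious, one way to get that kind of format preservation (a simplified TypeScript sketch, not my actual parser) is to attach the whitespace and comments ("trivia") to the token that follows them. Re-serializing the token list then reproduces the input byte for byte, and a migration only rewrites the tokens it actually changes:

    interface Token {
      trivia: string; // whitespace and // or /* */ comments preceding the token
      text: string;   // the token itself: punctuation, string, number or literal
    }

    function tokenize(src: string): Token[] {
      const tokens: Token[] = [];
      let i = 0;
      while (i < src.length) {
        const triviaStart = i;
        // Collect trivia: whitespace and comments (minimal error handling).
        while (i < src.length) {
          if (/\s/.test(src[i])) { i++; }
          else if (src.startsWith("//", i)) { while (i < src.length && src[i] !== "\n") i++; }
          else if (src.startsWith("/*", i)) { const end = src.indexOf("*/", i); i = end < 0 ? src.length : end + 2; }
          else break;
        }
        const trivia = src.slice(triviaStart, i);
        if (i >= src.length) { tokens.push({ trivia, text: "" }); break; }
        // Collect exactly one token.
        const tokenStart = i;
        if (src[i] === '"') {                    // string, skipping escape pairs
          i++;
          while (i < src.length && src[i] !== '"') i += src[i] === "\\" ? 2 : 1;
          i++;
        } else if ("{}[]:,".includes(src[i])) {  // structural punctuation
          i++;
        } else {                                 // number, true, false, null
          while (i < src.length && !/[\s{}\[\]:,"]/.test(src[i])) i++;
        }
        tokens.push({ trivia, text: src.slice(tokenStart, i) });
      }
      return tokens;
    }

    // Concatenating every token's trivia + text reproduces the input exactly,
    // so a migration can rewrite individual tokens and leave the rest untouched.
    const serialize = (tokens: Token[]) => tokens.map(t => t.trivia + t.text).join("");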


I think you meant JSON5 since JSON doesn't have comments.


Douglas Crockford himself recommended using comments if you like, then piping the file through JSMin. Unfortunately the original post on Google+ no longer exists, but it's referenced in an HN thread [0].

    I removed comments from JSON because I saw people were using them to hold parsing directives, a practice which would have
    destroyed interoperability. I know that the lack of comments makes some people sad, but it shouldn’t.

    Suppose you are using JSON to keep configuration files, which you would like to annotate. Go ahead and insert all 
    the comments you like. Then pipe it through JSMin before handing it to your JSON parser.
Which of course doesn't help because you could still just add parsing directives into the comments anyway. But as far as I'm concerned that means the spec implicitly allows comments as long as they're stripped out in transport.

[0] https://news.ycombinator.com/item?id=3912149
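
For what it's worth, the strip-then-parse step is tiny to implement yourself. Here's a sketch (TypeScript, not JSMin itself) that removes // and /* */ comments while leaving string contents alone:

    // Sketch only: remove // and /* */ comments, but copy string literals
    // verbatim so comment-like sequences inside strings are untouched.
    function stripJsonComments(src: string): string {
      let out = "";
      let i = 0;
      while (i < src.length) {
        if (src[i] === '"') {                     // string literal
          out += src[i++];
          while (i < src.length && src[i] !== '"') {
            if (src[i] === "\\") out += src[i++]; // keep escape pairs intact
            out += src[i++];
          }
          if (i < src.length) out += src[i++];    // closing quote
        } else if (src.startsWith("//", i)) {     // line comment
          while (i < src.length && src[i] !== "\n") i++;
        } else if (src.startsWith("/*", i)) {     // block comment
          const end = src.indexOf("*/", i);
          i = end < 0 ? src.length : end + 2;
        } else {
          out += src[i++];
        }
      }
      return out;
    }

    // Annotate the config freely, strip before handing it to the parser.
    const config = JSON.parse(stripJsonComments('{ "port": 8080 /* dev only */ }'));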


It doesn't matter whether he allowed comments initially if the final (current) specification doesn't allow them.


Well, since it's my own parser, I support both inline and line-ending comments. So I guess it's technically JSONC (JSON with comments), but whatever really.

Clarification: when speaking of JSON formatting, I handle two distinct cases that make sense for me: inline objects and inline arrays (where all properties/elements are on the same line), which make configs more readable when the objects/arrays are small.


Really interesting! I'm very much interested in psychedelic graphics. I played around with Shadertoy a little bit; maybe I should give it another go. For anyone interested, I made some cool visuals by interpolating prompts in Stable Diffusion 1.5, like https://m.youtube.com/watch?v=ajfMlJuDswc. I found that the older diffusion models are better for abstract graphics, as the output looks more "raw" and creative.


This is mostly how I indent my code. I don't know why so many people hate it. We have huge screens, and spacing conveys structure, so I use spacing when appropriate, like a ')' on its own line. I work in a *very* small team though, and I write most of the code.


This sounds like a good use case for a service worker. All tabs talk to the service worker, and the worker is the single instance that talks to the backend, so only one connection is needed. Maybe there are some trade-offs to doing SSE in a service worker, I'm not sure.
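
Something along these lines, maybe (a rough sketch, not production code; "/events" is a made-up SSE endpoint). I'm not sure EventSource is available inside service workers everywhere, so this reads the stream with plain fetch and forwards raw chunks; real code would parse the "data:" framing and handle reconnects. One known trade-off: browsers stop idle service workers, which would drop the connection.

    // sw.ts — sketch: the service worker owns the one backend connection and
    // fans events out to every tab it controls. Assumes the TypeScript
    // "WebWorker" lib so ServiceWorkerGlobalScope is available.
    const sw = self as unknown as ServiceWorkerGlobalScope;

    let started = false;

    async function pumpEvents(): Promise<void> {
      // "/events" is hypothetical; raw chunks are forwarded without parsing
      // the SSE framing, and there is no reconnect logic here.
      const res = await fetch("/events", { headers: { accept: "text/event-stream" } });
      const reader = res.body!.getReader();
      const decoder = new TextDecoder();
      for (;;) {
        const { value, done } = await reader.read();
        if (done) break;
        const chunk = decoder.decode(value, { stream: true });
        // Forward to every open tab controlled by this worker.
        const tabs = await sw.clients.matchAll({ type: "window" });
        for (const tab of tabs) tab.postMessage(chunk);
      }
    }

    sw.addEventListener("activate", () => {
      if (!started) {
        started = true;
        pumpEvents();
      }
    });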


BroadcastChannel is a better solution for a couple of reasons. Service workers are best at intercepting network requests and returning items from a cache; doing work outside of that takes some additional effort. They're also a little more difficult to set up. A broadcast channel can be handled in a couple of lines of code, is easily debuggable since it runs on the main thread, and is more suited to the purpose.


Web Locks (https://developer.mozilla.org/en-US/docs/Web/API/Web_Locks_A...) are an even better way to do this than BroadcastChannel.
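
To make that concrete, here's a rough sketch (mine, not from any of these posts) of the lock-plus-channel pattern: whichever tab wins the lock opens the single SSE connection and rebroadcasts; when that tab closes, the lock is released and a waiting tab takes over. The endpoint and names are made up.

    const channel = new BroadcastChannel("sse-events"); // name is arbitrary

    // Shared handler: update the UI with one event, wherever it came from.
    function handleEvent(data: string): void {
      console.log("event:", data);
    }

    // Followers receive rebroadcast events from whichever tab is the leader.
    channel.onmessage = (ev: MessageEvent<string>) => handleEvent(ev.data);

    // Exactly one tab at a time wins this lock. It opens the single SSE
    // connection and rebroadcasts; when it closes, the lock is released and
    // a waiting tab takes over automatically.
    navigator.locks.request("sse-leader", async () => {
      const source = new EventSource("/events"); // hypothetical endpoint
      source.onmessage = (ev) => {
        handleEvent(ev.data);         // BroadcastChannel doesn't echo to the
        channel.postMessage(ev.data); // sender, so the leader handles it too
      };
      await new Promise<never>(() => {}); // hold the lock for the tab's lifetime
    });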


I disagree. You can just use postMessage to communicate with the service worker, so I imagine the code using BroadcastChannel would actually be quite similar. As for debugging, service workers are easily debuggable, though not on the main thread, as you already mentioned.
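
For reference, the page-side postMessage wiring is only a few lines (a sketch; the message shape is made up):

    // Receive whatever the service worker forwarded via client.postMessage().
    navigator.serviceWorker.addEventListener("message", (ev: MessageEvent) => {
      console.log("from service worker:", ev.data);
    });

    // Send a message to the service worker, e.g. to subscribe to a topic.
    navigator.serviceWorker.ready.then((registration) => {
      registration.active?.postMessage({ type: "subscribe" });
    });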


Agreed. Workers were one of my first thoughts, but I think BroadcastChannel delivers with much lower LOE.


I use VS Code (with OmniSharp, not the C# Dev Kit) for C# every day. Did you try it out?


I did not! Rider is extremely satisfying to use.


Is the Dev Kit better than Rider?


I tried the VS Code C# Dev Kit, but it is horribly unstable and has severe bugs that make it unusable for my projects. I'm really happy with VS Code + OmniSharp though. I never used Rider, so I can't really say which is better, but I tend not to like JetBrains IDEs (from experience with IntelliJ and WebStorm).


Didn't they say in their incident report that they have a batched rollout strategy for software updates, but this was a config update, and the update path for configs doesn't have such a mechanism in place?


Ya, so hopefully it's obvious to them that every rollout needs some kind of batching. I get that all devices within one org might need to have the same config, but in that case batch it out to different orgs over 2-3 days.

Maybe put the more critical infrastructure and healthcare orgs at the end of that rollout plan so they're at lower risk. It's not ideal if one sandwich shop in Idaho can't run its reports that day, but that's far better than shutting down the hospital next door. CrowdStrike could even compensate those single-system shops that are on the front line when something goes down.

Again, better to pay a sandwich shop a few thousand dollars for their lost day of sales than get sued by the people in the hospital who couldn't get their meds, x-rays, etc in time.


There's a lot of things the file explorer needs. Git is not one of them.


The millions (or however many) of devs and managers installing TortoiseGit in Windows-first shops would beg to differ.


Maybe this will break TortoiseGit AND your local Git.

Years ago I would have been happy about such a feature; now I'm worried.


They don't implement old requirements. They only implement the new ones. /s

