
> Run postgres on a $5 VPS and have everybody accept it as single-point-of-failure

Oh how times have changed. Yes, maybe run two $5 VPSs behind a load balancer for HA so you can patch without downtime, then put a CDN in front to serve the repository content globally to everyone. Sign the packages cryptographically so you can invite people in your community to run mirrors.
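
If the signing part sounds exotic, it isn't. A minimal sketch with GPG (the metadata filename is just an example):

    # Sign the repository metadata with your project's key
    gpg --armor --detach-sign SHA256SUMS      # writes SHA256SUMS.asc

    # Mirrors and clients verify before trusting anything
    gpg --verify SHA256SUMS.asc SHA256SUMS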

How do people think PyPI, RubyGems, CPAN, Maven Central, or distro Packages work?


> Statically typed programming languages cannot be deployed nor can they run with a type error that happens at runtime.

This is so completely untrue that I'm confused as to why anyone would try to claim it. Type confusion is an entire class of error and CVE that happens in statically typed languages. Java type shenanigans are endless if you want some fun, but at baseline you can cast to arbitrary types at runtime and completely bypass all compile-time checks.

I think the disagreement additionally comes from saying a language like Ruby doesn't actually have any type errors, like how it can be said that GC languages can't have memory leaks, and that this model is stronger than just compile-time checking. Sure, you get a thing called TypeError in Ruby, but because of the language's dynamism that's not an error the way it would be in C. You can just catch it and move on. It doesn't invalidate the program's correctness. Ruby is so safe in its execution model that Syntax Errors don't invalidate the running program's soundness.


> Java type shenanigans are endless if you want some fun, but at baseline you can cast to arbitrary types at runtime and completely bypass all compile-time checks.

For this reason Java is a bad example of a typed language. It gives static typing a bad rep because of its inflexible yet unreliable type system (only basic type inference, no ADTs, many things like the presence of equality not checked at compile time, etc.). Something like OCaml or F# has a much more sound and capable type system.


Like other people replying to you have said, C++ and Java gave types a bad rep by being so error-prone and having weak type systems.

What I am saying is not untrue. It is definitive. Java just has a broken type system and it has warped your view. The article is talking more about type systems from functional programming languages, where type errors are literally impossible.

You should check out Elm. It's one of the few languages (that is not a toy language and is deployed to production) where the type system is so strong that runtime errors are impossible. You cannot crash an Elm program because the type system doesn't allow it. If you used it or Haskell for a while in a non-trivial way it will give you deeper insight into why types matter.

> Ruby is so safe in its execution model that Syntax Errors don't invalidate the running program's soundness.

This isn’t safety. Safety is when the program doesn’t even run or compile with a syntax error. Imagine if programs with syntax errors still tried their best to run… now you have a program with unknown behavior, because who knows what that program did with the syntax error? Did it ignore it? Did it try to correct it? Now imagine that Ruby program controlling a plane. That’s not safe.


There are different levels of static typing.

Infra person here: you will need external monitoring at some point, because checking that your site is up all over the world isn't something you want to do in-house. Not because you couldn't, but because their outages are likely to be uncorrelated with yours (AWS notwithstanding).

You'll have one of these things anyway, and I haven't seen one yet that doesn't let you monitor your cert and send you expiration notices in advance.
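
And if you want a quick sanity check on top of that, something like this (hostname is a placeholder) prints the cert's expiry date:

    openssl s_client -connect example.com:443 -servername example.com </dev/null 2>/dev/null \
      | openssl x509 -noout -enddate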


> After introducing the CEO, the number of discounts was reduced by about 80% and the number of items given away cut in half. Seymour also denied over one hundred requests from Claudius for lenient financial treatment of customers.

> Having said that, our attempt to introduce pressure from above from the CEO wasn’t much help, and might even have been a hindrance. The conclusion here isn’t that businesses don’t need CEOs, of course—it’s just that the CEO needs to be well-calibrated.

> Eventually, we were able to solve some of the CEO’s issues (like its unfortunate proclivity to ramble on about spiritual matters all night long) with more aggressive prompting.

No no, Seymour is absolutely spot on. The questionably drug-induced rants are necessary to the process. This is a work of art.


Yes. This is the Family and Medical Leave Act (FMLA).

> An employer is prohibited from discriminating or retaliating against an employee or prospective employee for having exercised or attempted to exercise any FMLA right.

You can fire someone after they come back but you will need to show receipts. Your employer also doesn't pay you when you take that leave so it would be a strange way to game the system.


pre-commit is a convenience for the developer to gain confidence that pre-flight checks in CI will pass. I've found that trying to make these checks automatic just leads to pain: they interact badly with any non-trivial git feature and don't handle edge cases.

I've been much, much happier just having a little project-specific script I run when I want to do formatting/linting.
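
Roughly what that script looks like for me, sketched for a Python project; the tools here are just examples, swap in whatever your project uses:

    #!/usr/bin/env sh
    # scripts/check.sh - run the same checks CI runs, on demand
    set -e
    ruff format .    # formatter
    ruff check .     # linter
    pytest -q        # tests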


pre-commit is just a bad way to do this. 99.9% of my commits won't pass CI. I don't care. I run `git wip`, an alias for `git commit -am "WIP"`, about every 15 minutes during the day, whenever things are in a running state. I often go back through this history on my branch to undo changes or revisit decisions, especially during refactors and especially when leveraging AI.

When the most work you can lose is about 15 minutes, you stop looking before you leap. Sometimes a hunch pays off and you finish a very large task in a fraction of the time you'd have spent being ploddingly careful. Very often a hunch doesn't pay off and you have to go recover stuff from your git history, which is not hard at all once you build that muscle. The cost/benefit isn't even close: it makes me easily 2x faster when refactoring code or adding a feature to existing code, probably more. It's 'free' for greenfield work, neither helping nor really hurting. At the end the entire branch gets squashed down to one commit anyway, so why would you ever not want free checkpoints all the time?
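
For anyone who wants it, the alias is a one-liner in ~/.gitconfig:

    [alias]
        wip = commit -am WIP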

As I'm saying this, I'm realizing I should just wire up Emacs to call `git add {file_being_saved} && git commit -am "autocheckpoint"` every time I save a file. (I will have to figure out how to check if I'm in the middle of some operation like a merge or rebase to not mess those up.)
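
A rough, untested sketch of the shell half, with the merge/rebase guard (the script name and invocation are made up):

    #!/usr/bin/env sh
    # autocheckpoint.sh <saved-file> - skip if a merge or rebase is in flight
    gitdir=$(git rev-parse --git-dir 2>/dev/null) || exit 0
    if [ -e "$gitdir/MERGE_HEAD" ] || [ -d "$gitdir/rebase-merge" ] || [ -d "$gitdir/rebase-apply" ]; then
        exit 0
    fi
    git add -- "$1" && git commit -qm "autocheckpoint"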

I'm perfectly happy to have CI fail if I forget to run the checks locally, which is rare but does happen. In that case I lose 5 minutes or whatever because I have to go find the branch, fix the CI failure, and re-push it. The flip side is that I rarely lose hours of work, or end up painting myself into a corner because committing is too expensive, slows me down, and I'm subconsciously avoiding it.


If you’re just committing for your own sake, that workflow sounds productive. But I’ve been asked to review PRs with 20+ commits, each with a “wip” or “.” commit message, with the argument: “it’ll be squash merged, so who cares!” I’m sure that works well for the author, but it’s not great for the reviewer. Breaking change sets up into smaller logical chunks really helps with comprehension. I’m not generally a fan of people being cavalier with my time so they can save their own.

For my part, I find the “local history” feature of the JetBrains IDEs gives me automatic checkpoints I can roll back to without needing to involve git. On my Linux machines I layer in ZFS snapshots (Time Machine probably works just as well for Macs). This gives me the confidence to work throughout the day without needing to compulsively commit. These have the added advantage of tracking files I haven’t yet added to the git repo.


There are two halves here. Up until the PR is open, the author should feel free to have 20+ "wip" commits (or in my case, "checkpoint"). However, it is also up to the author to curate those commits before pushing and opening the PR.

So when I open a PR, I'll have a branch with a gajillion useless commits, and then I curate them down to a logical set of commits with appropriate commit messages. Usually this is a single commit, but if I want to highlight some specific pieces as being separable for a reviewer, it'll be multiple commits.
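
Concretely, that curation step is usually just an interactive rebase (the target branch is an example):

    # Rewrite the branch's history against the target before opening the PR
    git rebase -i origin/main
    # ...then mark checkpoint commits as "squash" or "fixup" and reword the survivors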

The key point here is that none of those commits exist until just before I make my final push prior to a PR.


I clean up commits locally as well. But I really only commit when I think I have something working, and then collapse any lint or code-formatting commits from there. Sometimes I need to check out another branch and am too lazy to set up worktrees, so I may create a checkpoint commit and name it in a way that reminds me to do a `git reset HEAD^` and resume working from there.

But, if you're really worried about losing 15 minutes of work, I think we have better tools at our disposal, including those that will clean up after themselves over time. Now that I've been using ZFS with automatic snapshots, I feel hamstrung working on any Linux system just using ext4 without LVM. I'm aware this isn't a common setup, but I wish it were. It's amazing how liberating it is to edit code, update a config file, install a new package, etc. when you know you can roll back the entire system with one simple command (or restore a single file if you need that granularity). And it works for files you haven't yet added to the git repo.
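
The day-to-day of that setup is roughly this (dataset and snapshot names are examples; the .zfs path depends on where the dataset is mounted):

    # Take a snapshot before risky work
    zfs snapshot rpool/home@before-refactor

    # Roll the whole dataset back...
    zfs rollback rpool/home@before-refactor

    # ...or fish a single file out of the read-only snapshot directory
    cp ~/.zfs/snapshot/before-refactor/project/main.py project/main.py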

I guess my point is: I think we have better tools than git for automatic backups and I believe there's a lot of opportunity in developer tooling to help guard against common failure scenarios.


I don't commit as a backup. I commit for other reasons.

Most common is I'm switching branches. Example use case: I'm working locally, and a colleague has a PR open. I like to check out their branch when reviewing as then I can interact with their code in my IDE, try running it in ways they may not have thought of, etc.

Another common reason I switch branches is that sometimes I want to try my code on another machine. Maybe I'm changing laptops. Maybe I want to try the code on a different machine for some reason. Whatever. So I'll push a WIP branch with no intention of it passing any sort of CI/CD just so I can check it out on the other machine.

The throughline here is that these are moments where the current state of my branch is in no way, shape, or form intended as an actual valid state. It's just whatever state my code happened to be in before I needed to save it.


I'm thinking of writing a tool related to the "checkpoint" system when I have some free time. Do you have any advice?

I think you might appreciate https://www.jj-vcs.dev, which makes it a lot easier to split and recombine changes. I often use it for checkpoints, although you wouldn't see that from looking at what I push :).

One nifty feature is that commits don't need messages, and also it'll refuse (by default) to push commits with no message. So your checkpoint commits are really easy to create, and even easier to avoid pushing by mistake.
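
Roughly what that flow looks like (the description text is made up):

    jj new                            # start a fresh, message-less change
    # ...hack away; jj snapshots the working copy on every command...
    jj describe -m "Add retry logic"  # name it only once it's worth sharing
    jj git push                       # refuses changes with no description by default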


Why do you care about the history of a branch? Just look at the diff. Caring about the history of a branch is weird, I think your approach is just not compatible with how people work.

A well-laid-out history of logical changes makes reviewing complicated change sets easier. Rather than one giant wall of changes, you see a series of independent, self-contained changes that can be reviewed on their own.

Having 25 meaningless “wip” commits does not help with that. It’s fine when something is indeed a work in progress. But once it’s ready for review it should be presented as a series of cleaned up changes.

If it is indeed one giant ball of mud, then it should be presented as such. But more often than not, that just shows a lack of discipline on the part of the creator. Variable renames, whitespace changes, and other cosmetic things can be skipped over to focus on the meat of the PR.

From my own experience, people who work in open source and have been on the review side of large PRs understand this the best.

Really the goal is to make things as easy as possible for the reviewer. The simpler the reviews process, the less reviewer time you’re wasting.


> A well-laid-out history of logical changes makes reviewing complicated change sets easier.

I've been on a maintenance team for years and it's been a massive help here too, in our svn repos where squashing isn't possible. Those intermediate commits with good messages are the only context you get years down the line, when the original developers are gone or don't remember the reasons for something, and they've saved us so many times.

I'm fine with manual squashing to clean up those WIP commits, but a blind squash-merge should never be done. It throws away too much for no good reason.

For one quick example, code linting/formatting should always be a separate commit. A couple of times I've seen those introduce bugs, and since they weren't squashed it was trivial to see what should have happened.


I agree: in a job where you have no documentation and no CI, where you're working on something almost as old as you or older with ancient abandoned tools like svn that stopped being relevant 20 years ago, and in a fundamentally dysfunctional company/organization that hasn't bothered to move off dead or dying tools in those 20 years, you just desperately grab at anything you can find to avoid breaking things. But there are far better solutions to all of the problems you are mentioning than trying to make people create little mini feature commits on their way to a feature.

It is not possible to manually document everything down to individual lines of code. You'll drive yourself crazy trying to do so (and good luck getting anyone to look at that massive mess), and that's not even counting how documentation easily falls out of date. Meanwhile, we have "git blame" designed to do exactly that with almost no effort - just make good commit messages while the context is in your head.
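
For example (paths and search strings are placeholders):

    # Who last touched lines 100-140 of this file, and in which commit?
    git blame -L 100,140 src/parser.c

    # Find the commits that added or removed this string
    git log -S 'retry_count' --oneline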

CI also doesn't necessarily help here - you have to have tests for all possible edge cases committed from day one for it to prevent these situations. It may be a month or a year or several years later before you hit one of the weird cases no one thought about.

Calling svn part of the problem is also kind of backwards - it has no bearing on the code quality itself, but I brought it up because it forced good practice: it doesn't allow you to erase context that may be useful later.

Over the time I've been here we've migrated from Bugzilla to Fogbugz to Jira, from an internal wiki to ReadTheDocs to Confluence, and some of these hundreds of repos we manage started in cvs, not svn, and are now slowly being migrated to git. Guess what? The cvs->svn->git migrations are the only ones that didn't lose any data. None of the Bugzilla cases still exist and only a very small number were migrated from FogBugz to Jira. Some of the internal wiki was migrated directly to Confluence (and lost all formatting and internal links in the process), but ReadTheDocs are all gone. Commit messages are really the only thing you can actually rely on.


> Calling svn part of the problem is also kind of backwards - it has no bearing on the code quality itself

Let's just be Bayesian for a minute. If an organization can't figure out how to get off of svn, a technology that has been dead or dying for 15-20 years and basically abandoned in most of tech, then it's probably not going to be nimble in other ways. Probably it's full of people who don't really do any work.

> Some of the internal wiki was migrated directly to Confluence (and lost all formatting and internal links in the process)

Dude, this is what I mean. How did someone manage to mess this up? It's not exactly rocket science to script something that pulls content out of one wiki and shoves it into another. But let's say it's hard to do (it's not). Did they just not even bother to look at what they did? They just figured "meh" and declared victory, and then there were no consequences, nobody bothered to go back and redo it or fix it? Moving stuff between wikis is an intern-skill-level task. This is another example that screams that the people at your work don't do their jobs and don't care about their work, and that this is tolerated or, more likely, not even noticed. Do you work for the government?

> Commit messages are really the only thing you can actually rely on.

I suspect you are exaggerating how reliable your commit messages are, considering.


> A well-laid-out history of logical changes makes reviewing complicated change sets easier. Rather than one giant wall of changes, you see a series of independent, self-contained changes that can be reviewed on their own.

But this would require hand curation? No development proceeds that way, or if it does then I would question whether the person is spending 80% of their day curating PRs unnecessarily.

I think you must be kind of senior and you can get away with just insisting that other people be less efficient and work in a weird way so you can feel more comfortable?


> But this would require hand curation? No development proceeds that way, or if it does then I would question whether the person is spending 80% of their day curating PRs unnecessarily.

If you’re working on something and a piece of it is clearly self contained, you commit it and move on.

> I think you must be kind of senior and you can get away with just insisting that other people be less efficient and work in a weird way so you can feel more comfortable?

You can work however you like. But when it’s time to ask someone else to review your work, the onus is on you to clean it up to simplify review. Otherwise you’re saying your time is more valuable than the reviewer’s.


> But this would require hand curation? No development proceeds that way, or if it does then I would question whether the person is spending 80% of their day curating PRs unnecessarily.

It's not really hand curation if you're deliberate about it from the get-go. It's certainly not eating up 80% of anyone's time.

Structuring code and writing useful commits is a skill to develop, just like writing meaningful tests. As a first step, use `git add -p` instead of `git add .` or `git commit -a`. As an analog, many junior devs will just test everything, even stuff that doesn't make a lot of sense, and then jumble it all together. It takes practice to learn how to structure that stuff better, and it isn't done by writing a ton of tests and then curating them after the fact.
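
If you've never used `git add -p`, the loop looks like this:

    git add -p
    # For each hunk: y = stage it, n = skip it, s = split it smaller, e = edit it by hand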

> I think you must be kind of senior and you can get away with just insisting that other people be less efficient and work in a weird way so you can feel more comfortable?

Your personal productivity should only be one consideration. The long-term health of the project (i.e., maintenance) and the impact on other people's efficiency also must be considered. And efficiency isn't limited to how quickly features ship. Someone who ships fast but makes it much harder to debug issues isn't a top performer. At least, in my experience. I'd imagine it's team, company, and segment-dependent. For OSS projects with many part-time contributors, that history becomes really important because you may not have the future ability to ask someone why they did something a particular way.


Aha, I see the issue here. What you seem to organize into cute little self-contained 'commits', I would put on individual 'branches'.

It is too hard for you to get someone to look at a PR, so you are packing multiple 'related' but not interdependent changes into one PR as individual commits so you can minimize the number of times you have to get someone to hit "approve", which is the limiting resource.

In your situation, I believe your way of working is a rational adaptation, but only insofar as you lack the influence to address the underlying organizational/behavioral dysfunction. We agree on the underlying need to make good messages, but where I merge 4-5 small branches per day, each squashed to one commit, you are saving them all up to get them (unnecessarily) put into a single merge commit.

Just as "Structuring code" is a skill to develop, so is building healthy organizations.


> No development proceeds that way,

I do this. Also I do not spend 80% of my time doing it. It's not hard, nor is it time consuming.


On the contrary, it seems to me that it is your approach which is incompatible with others. I'm not the same person you were replying to, but I want the history of a branch to be coherent, not a hot mess of meaningless commits. I do my best to maintain my branches such that they can be merged without squashing, so the history reflects how the code was actually written.

This is not how code is actually written.

It's how code is written in Google (including their open-source products like AOSP and Chromium), the ffmpeg project, the Linux Kernel, Git, Docker, the Go compiler, Kubernetes, Bitcoin, etc, and it's how things are done at my workplace.

I'm surprised by how confident you are that things simply aren't done this way considering the number of high-profile users of workflows where the commit history is expected to tell a story of how the software evolved over time.


"It's how code is written" then you list like the 6 highest profile, highest investment premier software projects on Earth like that's just normal.

I'm surprised by how confident you are when you can only name projects you've never worked on. I wanted to find a commit of yours to prove my point, but I can't find a line of code you've written.


> Why do you care about the history of a branch?

Presumably, a branch is a logical segment of work. Otherwise, just push directly to master/trunk/HEAD. That's what people did for a long time with CVS, and it arguably worked to some extent. Using merge commits is pretty common and, as such, that branch will get merged into the trunk. Being able to understand that branch in isolation is something I've found helpful in understanding the software as a whole.

> Caring about the history of a branch is weird, I think your approach is just not compatible with how people work.

I get that you disagree with me, but you could be less dismissive about it. Work however you want -- I'm certainly not stopping you. I just don't want your productivity to come at the expense of mine. And I offered up other potential (and, IMHO, superior) solutions from both developer and system tools.

I suppose what type of project you're working on matters. The "treat git like a versioned zip file" approach of squashed merges works reasonably well for SaaS applications because you rarely need to roll anything back. However, I've found a logically structured history indispensable when working on long-lived projects, particularly in open source. It's how I'm able to dig into a 25-year-old OSS tool and be reasonably productive with it.

To the point I think you're making: sure, I care what changed, and I can get that from `diff`. But more often, if I'm looking at SCM history I'm trying to learn why a change was made. Some of that can be inferred by seeing what other changes were made at the same time, but commit messages can provide that context explicitly.

Calling it incompatible with how people work is a pretty bold claim, given that squash-merging loads of mini commits is a pretty recent development. Maybe that's how your team works, and if it works for you, great. But having logically separate commits isn't some niche development practice. Optimizing for writes could be useful for a startup; a lot of real-world software needs to be easy to maintain, and a good SCM history shines there.

All of that is rather orthogonal to the point I was trying to add to the discussion. We have better tools at our disposal than running `git commit` every 15 minutes.


I think you might like https://www.jj-vcs.dev/ — it snapshots before every operation, and can watch the filesystem to snapshot every change.

This is why I appreciate that JetBrains IDEs track a local history automatically. It lets me go back without relying on frequent commits.

Not everyone in my team wires up their pre-commit hook to run the pre-commit tool. I use JJ, so I don't even have a pre-commit hook to wire up. But the tool is useful.

The key thing (that several folks have pointed out) is that CI runs the canonical checks. Using something like pre-commit (the tool) makes it easier to at least loosely standardise running the same checks that CI will run. Having it run from the pre-commit hook fits nicely into many workflows, my own pre-JJ workflow included.


Because the biggest cost at a lot of orgs is staff. Your typical software shop's budget is comical: the salary costs tower over all the others like LeBron James gazing down at ants. The moment you go from productivity gains to staff reduction you start making real money. Any amount of money is worth a machine that can fully replace a human process.

I think the problem is that, as a group, people who care about software quality/craft don't actually produce higher-quality software. You'll get good and garbage software out of the craftsman and pragmatist groups at about equal rates. And folks in the craftsman group tend to have more and stronger opinions, which isn't a good or bad thing except that having too many of them on a team can lead to conflict.

At this point I'm not sure anyone can really say there's a 'point' to passkeys anymore. They're exportable now, and both Google's and Apple's implementations are synced instead of device-bound, putting them at the level of Bitwarden / KeePassXC. Backups and multi-device support have become a critical feature for users, which breaks attestation, so attestation is really just for those weirdos with YubiKeys.

I think we're verrry slowly inching toward shedding all the security-nerd self-indulgences and getting to what I think is the eventual endgame, in which passkeys are just keys: ultimately a fairly user-friendly way of getting people to use a password manager without it feeling like one. All the other features seem like noise.


Slapping on OpenTelemetry actually will solve your problem.

Point #1 isn't true: auto-instrumentation exists and is really good (see the sketch at the end of this comment). When I integrate OTel I add my own auto-instrumentors wherever possible to automatically add lots of context. Which gets into point #2.

Point #2 also isn't true. It can add business context in a hierarchical manner and ship wide events. You shouldn't have to tell every span all the information again, just where it appears naturally the first time.

Point #3 also isn't true, because OTel libs make it really annoying to just write a log message and very strongly push you into a hierarchy of nested context managers.

The author's ideal setup is basically OTel with Honeycomb. You get the querying and everything. And unlike rawdogging wide events, all your traces are connected, can span multiple services, and do the timing for you.
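
For a concrete idea of point #1, here's roughly what "slapping it on" looks like for a Python service (the service name and collector endpoint are placeholders):

    pip install opentelemetry-distro opentelemetry-exporter-otlp
    opentelemetry-bootstrap -a install   # installs instrumentors for the libraries you already use

    OTEL_SERVICE_NAME=checkout \
    OTEL_EXPORTER_OTLP_ENDPOINT=https://collector.example.com:4317 \
      opentelemetry-instrument python app.py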

