As long as what it says is reliable and not made up.

That's true for internet searching. How many times have you gone to SO, seen a confident answer, tried it, and it failed to do what you needed?

Then you write a comment, maybe even figure out the correct solution and fix the answer. If you're lucky, somebody already did. Everybody wins.

That's what LLMs take away. Nothing is given back to the community, nothing is added to shared knowledge, no differing opinions are exchanged. It just steals other people's work from a time when work was still shared and discussed, removes any indication of its source, claims it's a new thing, and gives you no way to contribute back, or even to discuss it, get confronted with different opinions, or discover a better way.

Let's not forget that one of the main reasons why LLMs are useful for coding in the first place is that they scraped SO from the time when people still used it.


I feel like we are just covering whataboutism tropes now.

You can absolutely learn from an LLM. Sometimes documentation sucks and the LLM has learned how to put stuff together from examples found in unusual places, and it works, and shows what the documentation failed to demonstrate.

And with the people above, I agree - sometimes the fun is in the end process, and sometimes it is just filling in the complexity we do not have time or capacity to grasp. I for one just cannot keep up with front end development. It's an insurmountable nightmare of epic proportions. I'm pretty skilled at my back end deep dive data and connecting APIs, however. So - AI to help put together a coherent interface over my connectors, and off we go for my side project. It doesn't need to be SOC2 compliant and OWASP proof, nor does it need ISO27001 compliance testing, because after all this is just for fun, for me.


The IPv4 routing table contains many individual /24 subnets that cannot be summarized, causing bloat in the routing tables.

With IPv6, that can be simplified to just a couple of /32 or /48 prefixes per AS.
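
To make the aggregation concrete, here's a small Python sketch using the ipaddress module (the prefixes are documentation/example ranges picked purely for illustration):

    import ipaddress

    # Four scattered /24s that ended up at the same ISP cannot be merged,
    # so each one needs its own slot in the global IPv4 table.
    v4_routes = [
        ipaddress.ip_network("192.0.2.0/24"),
        ipaddress.ip_network("198.51.100.0/24"),
        ipaddress.ip_network("203.0.113.0/24"),
        ipaddress.ip_network("198.18.5.0/24"),
    ]
    print(len(list(ipaddress.collapse_addresses(v4_routes))))  # still 4 routes

    # With IPv6 the ISP announces one allocation that covers every customer
    # prefix it hands out, so a single table entry is enough.
    isp_allocation = ipaddress.ip_network("2001:db8::/32")
    customer_site = ipaddress.ip_network("2001:db8:abcd::/48")
    print(customer_site.subnet_of(isp_allocation))  # True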


This, because a bunch of random /24s were sold off to different ISPs, because of address scarcity.

With IPv6, it's common to have multiple addresses on an interface.

So one option is to assign yourself a random [RFC 4193](https://datatracker.ietf.org/doc/html/rfc4193) prefix from fc00::/7 that you use for stable local routing, while the ISP-assigned prefix is used for global routing.

Then you don't need to renumber your local network regardless of what your ISP does.
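
If you're curious what picking such a prefix looks like, here's a rough Python sketch. RFC 4193 describes a hash-based algorithm for the 40-bit Global ID; pulling it from a CSPRNG, as done here, is a common shortcut:

    import secrets
    import ipaddress

    # RFC 4193 ULAs live in fc00::/7; in practice you use the locally assigned
    # half (fd00::/8) with a 40-bit random Global ID, which gives you a /48.
    global_id = secrets.token_bytes(5).hex()  # 40 random bits, 10 hex digits

    ula = ipaddress.ip_network(
        f"fd{global_id[0:2]}:{global_id[2:6]}:{global_id[6:10]}::/48"
    )
    print(ula)  # e.g. fd3b:91c2:7a50::/48 (different on every run)
    print(ula.subnet_of(ipaddress.ip_network("fc00::/7")))  # True

From that /48 you can carve out /64s for each local subnet, and those addresses stay stable no matter what prefix your ISP delegates.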


What if I want my devices visible on the public internet? Then I'm tied to my ISP's addresses. Or I have to maintain both addressing schemes.

That's why I mentioned multiple addresses. The public addresses (assigned using SLAAC or DHCPv6) are for global reachability, while you use the local prefix for stable addresses within your network.

If you want stable global addresses, you should request an AS number and prefix, and choose a provider that allows you to announce it with BGP.


> and choose a provider

Lots of people don't have much choice.

Frankly, my IoT washing machine having a public IP address sounds like it'll get shut off when I don't let it online or don't pay my subscription fee.


> Lots of people don't have much choice.

Yeah but it's not like IPv4 is any better at giving you a stable public address.


Fun fact: my washing machine has a public IPv6 address, but egress/ingress connections to the WAN are blocked. Works great.

This is also the case with IPv4.

Instead, you get to hunt the nouns.


It affects many open source projects as well; they just scrape everything repeatedly, with abandon.

First from known networks, then from residential IPs. First with dumb HTTP clients, now with full-blown headless Chrome browsers.


Well, I can parse my nginx logs and don't see that happening, so I'm not convinced. I suppose my websites aren't the most discoverable, but the number of bogus connections sshd rejects is an order of magnitude or three higher than the number of unknown connections I get to my web server. Today I received requests from two whole clients from US data centers, so scrapers must be far more selective than you claim, or they are nowhere near the indie web killer OP purports them to be.
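
For reference, the kind of log check I mean is roughly this Python sketch (it assumes nginx's default "combined" log format; the log path and the user-agent keywords are just placeholders):

    from collections import Counter
    import re

    # One capture for the client IP, one for the user agent.
    line_re = re.compile(
        r'^(\S+) \S+ \S+ \[[^\]]+\] "[^"]*" \d+ \d+ "[^"]*" "([^"]*)"'
    )

    clients = Counter()
    bots = Counter()
    with open("/var/log/nginx/access.log") as log:
        for line in log:
            m = line_re.match(line)
            if not m:
                continue
            ip, user_agent = m.groups()
            clients[ip] += 1
            if any(w in user_agent.lower() for w in ("bot", "crawl", "spider")):
                bots[ip] += 1

    print(f"{len(clients)} distinct clients, {len(bots)} self-identified crawlers")
    print(bots.most_common(10))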

I've worked with a company that has had to invest in scraper traffic mitigation, so I'm not disputing that it happens in high enough volume to be problematic for content aggregators, but as for small independent non-commercial websites I'll stick with my original hypothesis unless I come across contradictory evidence.


Only if that AI does not hallucinate, otherwise it would be useless.


Systems that use Shamir's secret sharing, like OpenBao, require multiple operators to unseal the secrets engine.

GitLab Premium can also require multiple approvals before a merge request can be merged.
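
For the curious, the core of Shamir's scheme fits in a few lines; this is a toy Python sketch of the splitting and recombining math, not how OpenBao actually implements it:

    import secrets

    PRIME = 2**127 - 1  # a Mersenne prime, large enough for a demo secret

    def split(secret, shares, threshold):
        # The secret is the constant term of a random degree (threshold - 1)
        # polynomial; each operator gets one point on the curve.
        coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(threshold - 1)]
        def f(x):
            return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
        return [(x, f(x)) for x in range(1, shares + 1)]

    def combine(points):
        # Lagrange interpolation at x = 0 recovers the constant term.
        secret = 0
        for i, (xi, yi) in enumerate(points):
            num, den = 1, 1
            for j, (xj, _) in enumerate(points):
                if i != j:
                    num = num * -xj % PRIME
                    den = den * (xi - xj) % PRIME
            secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
        return secret

    secret = 123456789
    shares = split(secret, shares=5, threshold=3)
    assert combine(shares[:3]) == secret                     # any 3 operators suffice
    assert combine([shares[0], shares[4], shares[2]]) == secret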


Yeah I’ve used HSMs that have those. But I’m talking about things like running deployments, changing production configurations, or rebooting servers.

Approvals don’t work, and I’ve already said as much. If approvals worked so well we wouldn’t have dual key systems at all.


If a TUI is an option, then `tig blame` is a good interface.

https://github.com/jonas/tig


"git gui blame" is pretty good, too. and git-gui comes with git.


> OpenSSH sshd on musl-based systems is not vulnerable to RCE via CVE-2024-6387 (regreSSHion).

https://fosstodon.org/@musl/112711796005712271


They have a blogpost about it: https://tailscale.com/blog/free-plan

> TL;DR: Tailscale’s free plan is free because we keep our scaling costs low relative to typical SaaS companies. We care about privacy, so unlike some other freemium models, you and your data are not the product. Rather, increased word-of-mouth from free plans sells the more valuable corporate plans. I know, it sounds too good to be true. Let’s see some details.


Thank you for the link.

So it's a choice to weigh between "if something seems too good to be true, it often is" and "the explanations they give make good sense, and it's a way of doing business that some ethical company could choose to take".

We probably won't know in the short-to-medium term, so we'll have to take their word for it.

But I must admit, their products look pretty impressive. I'll have to have a closer look at them.


For what it's worth the scaling costs for their service are quite low. Tailscale connections are almost entirely peer to peer after an initial NAT busting operation. They can afford to do a loss leader like this and the product is actually so good that I've recommended it to a number of places. It's literally the first VPN that I think is worth paying for. I wouldn't have known that if the free tier didn't exist. Using is believing in their case. It's not uncommon to be literally angry at how easy it is to set up/manage/deploy given how much of a trash fire most vpn software is.
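
To sketch what that "NAT busting" amounts to, here's a minimal UDP hole-punching example in Python. To be clear, this is not Tailscale's actual protocol (they use STUN-style endpoint discovery, DERP relays as a fallback, and WireGuard for the tunnel itself); the rendezvous host and message format are invented purely for illustration:

    import socket

    RENDEZVOUS = ("rendezvous.example.com", 3478)  # hypothetical coordination server

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 0))

    # Registering creates a mapping in our own NAT and lets the server record
    # our public ip:port as seen from the outside.
    sock.sendto(b"register", RENDEZVOUS)

    # The server replies with the other peer's public endpoint ("ip:port").
    data, _ = sock.recvfrom(1024)
    host, port = data.decode().rsplit(":", 1)
    peer = (host, int(port))

    # Both peers now send to each other's public endpoint. The first outbound
    # packet opens a hole in the local NAT; once both holes exist, traffic
    # flows directly between the peers and the server leaves the data path.
    for _ in range(5):
        sock.sendto(b"punch", peer)

    msg, addr = sock.recvfrom(1024)
    print("direct path established with", addr)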


> Tailscale connections are almost entirely peer to peer after an initial NAT busting operation.

Ah, interesting, thanks. That would indeed make it a lot less costly. I would need to dive into it to get a better understanding of how their service works.

Would you happen to have some good resources you found useful?


They have tons of great documentation — https://tailscale.com/blog/how-tailscale-works


Well, this addresses the sniffing concern. From the link:

    Note that the private key never, ever leaves its node. This is important because the private key is the only thing that could potentially be used to impersonate that node when negotiating a WireGuard session. As a result, only that node can encrypt packets addressed from itself, or decrypt packets addressed to itself. It’s important to keep that in mind: Tailscale node connections are end-to-end encrypted (a concept called “zero trust networking”).
Thanks!


That actually sounds rather nice, I might try them out because of this.

