dogcomplex's comments | Hacker News

For those watching this stuff, there are two other promising paths using ZK-proofs which might defuse the tradeoff we've been stuck in. Banking apps etc aren't willing to eat the liability of devices that are rooted or running alternate OSes, and Google's been banking on the exclusivity that comes from being both the hardware and the security provider.

Path 1: a ZK-proof attestation certificate marketplace implemented by GrapheneOS (or similar) to prove device safety in a privacy-preserving way, convincingly enough for third-party liability insurance markets to buy in. Banks etc can be indifferent, and wouldn't ignore the market if it got big enough. This would mean we could root any device with aggressive hacking and then apologize for it with ZK-proof certs that prove it's still in good hands - and banking apps wouldn't need to care. No need for the hard chain of custody the Google security model requires.
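
Roughly the shape I have in mind for Path 1 - a hypothetical sketch only; the names and the ZK verifier are placeholders, not any real GrapheneOS, insurer, or bank API:

    # Illustrative structure for the Path 1 flow. verify_zk_proof() stands in
    # for a real zero-knowledge verifier (e.g. a SNARK circuit); nothing here
    # is a real attestation API.
    from dataclasses import dataclass

    @dataclass
    class DeviceProof:
        # ZK proof that "this device's boot chain + OS build satisfy policy P"
        # without revealing which device, build, or root method is in use.
        proof_blob: bytes
        policy_id: str

    @dataclass
    class AttestationCert:
        policy_id: str
        insurer_id: str     # third-party liability market backing the claim
        signature: bytes    # insurer's signature over the verified claim

    def verify_zk_proof(proof: DeviceProof) -> bool:
        raise NotImplementedError("placeholder for a real ZK verifier")

    def issue_cert(proof: DeviceProof, insurer_id: str) -> AttestationCert | None:
        # Marketplace/insurer side: check the proof, then vouch for the device.
        if not verify_zk_proof(proof):
            return None
        return AttestationCert(proof.policy_id, insurer_id, signature=b"...")

    def bank_accepts(cert: AttestationCert | None, trusted: set[str]) -> bool:
        # Bank side: it never sees device internals, only "an insurer I trust
        # is on the hook if this device misbehaves".
        return cert is not None and cert.insurer_id in trusted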

Path 2: Don't even worry too hard about third-party devices or full OSes; we just need to make the option viable enough to shame Google into adopting the same ZK certificate schemes defensively. If they're reading all user data through ZK-proof certs instead of just downloading EVERYTHING, then they're significantly neutered as a Big Brother force and for once we're able to actually trust them. They'd still have app marketplace centrality, but if and when phones are subdivided with ZK-proof security, third-party monitoring of how those decisions get made becomes very public (we'd see the same things Google sees), so we could similarly shame them via alternatives into adopting reasonable default behaviors. Similar to Linux/Windows - Windows woulda been a lot more evil without the alternative next door.

Longer discussion (opinion not sourced from AI though): https://chatgpt.com/share/68ad1084-eb74-8003-8f10-ca324b5ea8...


lol yep, we've never before had codebases hacked together by juniors running major companies in production - nope, never


> Ahh, sweet summer child, if I had a nickel for every time I've heard "just hack something together quickly, that's throwaway code", that ended up being a critical lynchpin of a production system - well, I'd probably have at least like a buck or so.

Because that's true of the first pass on any project, any component, ever. Design is done in iterations. One can and should throw out the original rough lynchpin and replace it with a more robust solution once it becomes evident that it is essential.

If you know that ahead of time and want to make it robust early, the answer is still rarely a single diligent one-shot to perfection - you absolutely should take multiple quick, rough iterations to think through the possibility space before settling on your choice. Even that is quite conducive to LLM coding - and the resulting synthesis after attacking it from multiple angles is usually the strongest of all. You should still go over it all with a fine-toothed comb at the end, and understand exactly why each choice was made, but the AI helps immensely in narrowing down the possibility space.

Not to rag on you though - you were being tongue in cheek - but we're kidding ourselves if we don't accept that like 90% of the code we write is rough throwaway code at first and only a small portion gets polished into critical form. That's just how all design works though.


I would love to work at the places you have been where you are given enough time to throw out the prototype and do it properly. In my almost 20 years of professional experience this has never been the case; prototype and exploratory code has only ever been given minimal polishing time before reaching production and in-use state.


We are all too well aware of the tragedy that is modern software engineering lol. Sadly I too have never seen that situation where I was given enough time to do the requisite multiple passes for proper design...

I have, though, been reprimanded and have tediously spent more time collectively combing over said quick prototype code than was originally provided to work on it, as proof of my incompetence! Does that count?


Hah, my bad - I misread your original comment as saying you usually get the chance to do multiple passes on a prototype to productionize it :)


I'm not sure if I could've said this better


The Supreme Court is eroding the credibility of the institution of law faster than they can make laws. They really want to see how the public reacts to overreach?


Anyone interested in this from a history / semiotics / language-theory perspective should look into the triad concepts of:

Sign (Signum) - The thing which points
Locus - The thing being pointed to
Sense (Sensus) - The effect/sense in the interpreter

Also known as: Representation/Object/Interpretation, Symbol/Referent/Thought, Signal/Data/User, Symbol/State/Update. The same pattern has been independently identified many, many times throughout history, always ending up with the same triplet under a new set of names.

What you're describing above is the "Locus" - the essential object being pointed to, fulfilled by different contracts/LLMs/systems but with the same essential thing always being alluded to. There's an elegant stability to it from a systems design POV. It makes strong sense to build around those as the indexes/keys being pointed towards, with various implementations (Signs) attempting to achieve them. I'm building a similar system atm.
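
A minimal sketch of what I mean by treating Loci as the stable keys and Signs as swappable implementations (hypothetical names, Python just to show the shape):

    # A Locus is the stable "thing being pointed to" (the key); Signs are the
    # competing/cooperating implementations that attempt to realize it; the
    # Sense is whatever the interpreter actually gets back.
    from typing import Any, Callable

    class Locus:
        def __init__(self, key: str, description: str):
            self.key = key                  # stable index the system is built around
            self.description = description  # the essential thing being pointed to

    registry: dict[str, list[Callable[..., Any]]] = {}

    def register_sign(locus: Locus, implementation: Callable[..., Any]) -> None:
        # Many Signs (contracts, LLM prompts, handwritten functions...) can
        # point at the same Locus; the Locus itself never changes identity.
        registry.setdefault(locus.key, []).append(implementation)

    def interpret(locus: Locus, *args: Any) -> Any:
        # The Sense: whatever the first working implementation produces.
        # Falling back across Signs keeps the Locus stable even as
        # implementations get swapped out.
        for sign in registry.get(locus.key, []):
            try:
                return sign(*args)
            except Exception:
                continue
        raise LookupError(f"no Sign currently fulfills Locus {locus.key!r}")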


Thanks for bringing this up. I'm fairly familiar with Peirce's triadic semiotics and Montague's semantics, and they show up in some of my notes. I haven't turned those sketches into anything applied yet, but the design space feels *huge* and quite promising intuitively.


Agreed. This is a very interesting discussion! Thanks for bringing it to light.

Have you read Gödel, Escher, Bach: An Eternal Golden Braid?


Of course! And yes, a Locus appears to be very close in concept to a strange attractor. I am especially interested in the idea of the holographic principle, where each node has its own low-fidelity map of the rest of the (graph?) system and can self-direct its own growth and positioning. Becomes more of a marketplace of meaning, and useful for the fuzzier edges of entity relationships that we're working with now.
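
Very roughly, the shape I'm picturing - all hypothetical, just sketching "each node keeps a low-fidelity map of the rest and positions itself":

    # Each node carries a coarse, lossy summary of the rest of the graph and
    # uses it to decide where to attach itself next. Purely illustrative.
    from dataclasses import dataclass, field

    @dataclass
    class NodeSummary:
        key: str
        tags: frozenset[str]  # low-fidelity stand-in for the node's full content

    @dataclass
    class Node:
        key: str
        tags: frozenset[str]
        neighbors: set[str] = field(default_factory=set)
        # The "hologram": a cheap map of everything else, not full copies.
        world_view: dict[str, NodeSummary] = field(default_factory=dict)

        def gossip(self, other: "Node") -> None:
            # Nodes trade summaries, so each one's picture of the whole graph
            # improves without any global index.
            self.world_view[other.key] = NodeSummary(other.key, other.tags)
            other.world_view[self.key] = NodeSummary(self.key, self.tags)

        def choose_next_link(self) -> str | None:
            # Self-directed positioning: link toward whichever known node
            # shares the most tags (the "marketplace of meaning" part).
            candidates = [s for s in self.world_view.values()
                          if s.key != self.key and s.key not in self.neighbors]
            if not candidates:
                return None
            best = max(candidates, key=lambda s: len(self.tags & s.tags))
            self.neighbors.add(best.key)
            return best.key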


If anything we now need to unlearn the rigidity - being too formal can make the AI overly focused on certain aspects, and is in general poor UX. You can always tell legacy man-made code because it is extremely inflexible and requires the user to know terminology and usage implicitly lest it break, hard.

For once, as developers we are actually using computers the way normal people always wished they worked - the way they tried to use them before being turned away, frustrated. We now need to blend our precise formal approach with these capabilities to make it all actually work the way it always should have.


"Mech suit" is apt. Gonna use that now.

Having plenty of initial discussion and distilling that into requirements documents aimed at modularized components which can all be easily tackled separately is key.


This. Except one should also disabuse themselves of the idea that there will always be a higher quality to the 'hand-made' versions. AI will almost certainly outpace us in every way, including the ability to make something beautiful that looks 'hand-made', even with artificial flaws and illusions of the history and natural rugged beauty of the piece.

The only discernible difference that won't be replicable is a cryptographic signature "Certified 100% Human-Made!" sticker, which will probably become the mark of the niche industry.
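
Mechanically that seal is simple - a certifying body signs a hash of the work plus a provenance claim; the hard part is trusting that the certifier actually watched a human make the thing. A rough sketch, assuming the pyca/cryptography package (the claim format and certifier are made up):

    # "Certified 100% Human-Made" seal: sign a hash of the work plus a claim.
    import hashlib
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey, Ed25519PublicKey,
    )

    def make_seal(certifier_key: Ed25519PrivateKey, work: bytes) -> bytes:
        claim = b"human-made:v1:" + hashlib.sha256(work).digest()
        return certifier_key.sign(claim)

    def check_seal(certifier_pub: Ed25519PublicKey, work: bytes, seal: bytes) -> bool:
        claim = b"human-made:v1:" + hashlib.sha256(work).digest()
        try:
            certifier_pub.verify(seal, claim)
            return True
        except InvalidSignature:
            return False

    # Usage: the crypto is the easy part; the certification process is not.
    key = Ed25519PrivateKey.generate()
    seal = make_seal(key, b"my hand-carved bits")
    assert check_seal(key.public_key(), b"my hand-carved bits", seal)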

A somewhat more accurate analogy would be the custom car market. Beautiful collectible convertibles with fine detailing everywhere, priced thousands of times higher than normal cars, that actually run far worse, basically break apart after a few thousand miles, and are impossible to find parts for. Automated factories certainly could churn them out, but they don't, because they're impractical, poorly-designed status items kept artificially scarce for the very rich to peacock with.

Except AI will probably still produce equivalent impractical stuff anyway, just because production (digital and physical) will eventually be easy enough that resources are negligible, and everyone can have flashy impractical stuff. So again, only that "100% Human!" seal will distinguish, eventually.


If you took a forklift to the gym, you'd come out of the experience not only very good at "lifting weights", but having learned a whole lot more about the nature and physics of weightlifting from a very different angle.

Sure, you should lift them yourself too. But using an AI teaches you a shit-ton more about any field than your own tired brain was going to uncover. It's a very different but powerful educational experience.


> But using an AI teaches you a shit-ton more about any field than your own tired brain was going to uncover.

If you never learn to research, sure. Otherwise, you should be worried about accuracy, up to date information, opinionated takes, and outright lies/misinformation. The tool you use doesn't change these factors.


No, but it increases the speed and ease with which you can check any of those - making a lot of those steps practical when they were a slog before. If people aren't double-checking LLM claims against sources, then they were never on guard for those issues without an LLM either.

Besides, those are incredibly short-term concerns. Recent models are a whole lot more trustworthy and can search for and cite sources accurately.


Does it? You google a query, get results, compare a few alternative results. You ask a prompt and what? Compare outputs to each other? Or just defer back to googling for alternative sources.

Firstly, these prompts tend to be shockingly close in behavior. Secondly, Google tends to rank reputable or self-curated sites which have some accountability. It can be wrong, but you know the big news sites tend to at least defer to interviews to back up facts. Wikipedia has an overly strict process to prevent blatant, sourceless information.

There's room for error, but there's at least more accountability compared to what an LLM goes through.

> Recent models are a whole lot more trustworthy and can search for and cite sources accurately.

Lastly, prompts are still treated as black boxes, which is a whole other issue. For the above reasons I still would simply defer to human curated resources. That's what LLMs are doing anyway without transparency.

People want to give up transparency for speed? It seems completely counter to hacker culture.


We've had a long history of technological improvements being widely distributed to the people. There's no particularly strong reason to believe the latest AI automation won't be too. Look around your desk or your house and just count all the effort-saving devices that have made their way down to you. Look at the price of TVs cratering. Tech that can be recreated easily spreads far and wide. AI can too. Its cost has dropped 1000x in the last 2 years. This stuff will be running on old tech everywhere - and speedier, cheaper new chips, bots, and other hardware are on their way.

Unless there's a new world war or draconian regulation, we're good. It's pretty much locked in.


I'm struggling to think of any technological advancement in the past 20 years that's saved me time. The only real change has been a shift to WFH, but that happened independently of technological change in that era. Even things like screen sharing and remote desktop were possible before that time.

25 years ago, sure: online shopping/banking, email and chat -- these are all things my Blackberry or Nokia could handle. The touchscreen smartphone hasn't really moved the needle much in that regard.


This is the core of my original beef: techno-optimism seems divorced from the exact things I think it's ideologically trying to promote, and instead is just "AI will fix everything".

And I basically agree about that sort of time saving; we got smartphones, which I think were of questionable benefit compared to the invention of computers, the internet, and cell phones in the first place.


Tell me you've never called for a taxi and waited two hours only for it not to show up, without telling me.

Uber has saved me a remarkable amount of time.

More-generally:

https://gwern.net/improvement


I have waited hours while Lyft driver after Lyft driver canceled and the algorithm kept picking new drivers.


Yeah, I actually had an Uber driver show up after like an hour and say, "I'm not taking you there!"


No, I haven't. I live in a city; a cab took 15 minutes to arrive after calling the dispatch service. Uber can take just as long, because the request bounces around different drivers in the area; the time to arrive depends on which driver accepts the offered rate and how far away they are. I've had to request rides more than once because the Uber-set price was rejected by all nearby drivers.

When I lived rurally (in college) cabs had to be booked in advance, that's just common sense.


You sayin you were capable of ordering any product on earth from your couch and having it delivered within 2 days? Or building an interactive video (a modern website) accessible anywhere on earth instantly (all used just to display people's resumes and contact details lol)? Or navigating anywhere within minutes via the optimal pathway, without thinking about it? Or researching and answering any question you have about anything in the world within a minute? Or holding daily conversations with all your friends in group chats despite vast geographical gaps? Or playing games with them in - again - interactive cinematic masterpiece movies accessible anywhere on the planet?

So much time was saved you don't even realize it because most of the above was just practically impossible to do before - and frankly beyond the scope of what any human actually needs. But the scope crept anyway and now they're all normal parts of modern life taken for granted. As for where that time went - capabilities exploded, but any spare time also got eaten by tighter work hours from a more competitive market. That's capitalism for ya baybeeeee


> We've had a long history of technological improvements being widespread distributed to the people. There's not a particularly bleak reason to believe the latest AI automation won't be too.

The difference this time is that, in the past, humans just moved to other activities where they were useful; with super-AI (if it happens), that's not the case anymore.


Progress is not guaranteed.

Humanity has been around in basically the same form for 2 million years (and the same form for probably 200,000 years), yet life for the average person on the planet really started improving circa 1950.


2 million years is a long time. It’s quite a stretch to say life only started improving in the last 50. An asteroid could hit today and nearly all evidence of our existence would be gone in 10k years.


Life expectancy skyrocketed once we discovered and applied hygiene, plus improved sanitation, and also antibiotics. More or less, modern medicine.

The industrial era did a lot of things but it also made cities, famous for being horrible places to live in, even worse. It took the realization of that fact and action against pollution to improve that.

The workers' rights movement also had to spring up to get 40-hour work weeks, 2-day weekends, sick leave, unemployment benefits, pensions, and disability benefits.

Slavery was barely abolished 150 years ago, and it's still present in some places. Ditto for serfdom.

Hunter-gatherers had healthier diets than settled populations did, even thousands of years after the invention of agriculture.

MANY things were better for society but neutral or worse for the average person.

