Consistency is best, with slow, gradual, measured movement towards 'better' where possible, when and where the opportunity strikes.
If you see a massive 50 line if/else/if/else block that can be replaced with a couple calls to std::minmax, in code that you are working on, why not replace it?
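To make that concrete, here's a toy before/after (made-up names; the real 50-line block would just be this pattern repeated across more fields):

    #include <algorithm>
    #include <tuple>

    // Before: a hand-rolled comparison ladder.
    void order_before(int a, int b, int& lo, int& hi) {
        if (a < b) {
            lo = a;
            hi = b;
        } else {
            lo = b;
            hi = a;
        }
    }

    // After: one call does the comparison and hands back both results.
    void order_after(int a, int b, int& lo, int& hi) {
        std::tie(lo, hi) = std::minmax(a, b);
    }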
But don't go trying to rewrite everything at once. Little improvements here or there whenever you touch the code. Look for the 'easy wins' which are obvious based on more modern approaches. Don't re-write already well-written code into a new form if it doesn't benefit anything.
If I use a library, I know it will do the same thing from the same inputs, every time. If I don't understand something about its behavior, then I can look to the documentation. Some are better about this, some are crap. But a good library will continue doing what I want years or decades later.
An LLM can't decide between one sentence and the next what to do.
The library is deterministic, but looking for the library isn't. In the same way that generating code is not deterministic, but the generated code normally is.
I...guess? But once you know of a good library for problem X, you don't need to look for it anymore. I guess if you have a bunch of developers and 0 control over what they do, and they're free to drag in additional dependencies willy-nilly, then yes, that part isn't deterministic? But that's a much bigger problem than anything library-related...
Why do you think? Not because of any output of the company, of course.
But because buying it helps perpetuate the hype and money cycle of the 'AI' trend for a while longer. It may not look like it directly, but a purchase like this keeps Nvidia's stock up in the future, which is all investors care about.
If this is true, is it just the HN community that understands this? Otherwise, wouldn’t it make sense that the market understands this already and doesn’t fall for the hype? It doesn’t pass the smell test for me that it’s that transparent of a play for hype. What am I missing?
AI is real and it's also hyped. There's circular financing and real money involved. Groq has good tech and smart people and Nvidia is also taking a competitor off the board. People who only see one side have a lack of imagination.
You can't fix it, because it's a problem of incentives.
* Businesses want to maximize shareholder value
* Those running websites want to do as much SEO-slop as possible to appear first
* Content creators need to maximize views, which means rage-bait, clickbait, etc.
* Addictive content = more time spent, more ads seen
You can't 'fix' the internet. The internet, like many things, is a tool. Shareholders and individual actors are only interested in maximizing their own gains - and so use this tool to that purpose - regardless of any negative effects on the whole. That's how humans operate in general (with rare exceptions).
You may as well say "Human selfishness and greed sucks, how do we get rid of it?". You can't.
I suppose people could try to make a non-monetizable internet.
Everything would have to be self-hosted.
No ads would be allowed anywhere.
No business would be allowed to build anything, just users.
Some kind of super-admin would have to have the power to perma-ban any website or user that breaks the rules.
But you'd still have the problem of people who aren't directly monetizing things, like influencers. You'd have bots. You'd have subtle ads that don't quite appear to be ads, or users writing fake testimonials.
Still, despite its obvious flaws, it would be cool to see someone try to build such a non-commercial internet someday. I wish them luck.
Bots and influencers are minimal concerns if there's no money to be had - since money is ultimately the driving force behind both.
But trying to make a 'non-monetizable' internet is an oxymoron. If it has the ability to allow people to communicate, then it can be used to sell things.
You can't have an exchange of information without that information potentially containing garbage designed to make someone else money. You can only eliminate spam calls if you decide to get rid of the invention of the telephone.
Heh. True. The saga of r/art these last few weeks is a good lesson in how trying to demonetize things can be difficult.
Still, with intense moderation it is sometimes possible. Wikipedia has a vast amount of information passing through it and has stayed pretty free of monetization - although certainly some companies have written themselves some pretty positive wiki pages - and in general I would say it is a success.
Intense moderation eventually breaks down. Ultimately, the people doing the moderating are driven by the same selfishness as all humans. Even if you can find a handful who won't bend for their own gain, they will be forced out by those who do and see an opportunity.
This is the same problem law has, even at the global scale.
You can't moderate when almost every single person in the chain is a bad actor. Individuals will chip away at any structure or organization day by day, year by year, until they are eventually rewarded.
It would be one thing if it was a 20% increase in space usage, or if the whole game was smaller to start with, or if they had actually checked to see how much it assisted HDD users.
But over 6x the size with so little benefit for such a small segment of the players is very frustrating. Why wasn't this caught earlier? Why didn't anyone test? Why didn't anyone weigh the pros and cons?
It's kind of exemplary of HD2's technical state in general - which is a mix of poor performance and bugs. There was a period where almost every other mission became impossible to complete because it was bugged.
The negativity is frustration boiling over from years of a bad technical state for the game.
I do appreciate them making the right choice now though, of course.
It was a choice, not an oversight. They actively optimised for HDD users, because they believed that failing to do so could impact load times for both SSD and HDD users. There was no speed penalty in doing so for SSD users, just a disk usage penalty.
Helldivers II was also much smaller at launch than it is now. It was almost certainly a good choice at launch.
You make a million decisions in the beginning of every project. I'm certain they made the choice to do this "optimization" at an early point (or even incidentally copied the choice over from an earlier project) at a stage where the disk footprint was small (a game being 7GB when it could've been 1GB doesn't exactly set off alarm bells).
Then they just didn't reconsider the choice until, well, now.
Even at the end of development it’s a sensible choice. It’s the default strategy for catering to machines with slow disk access. The risk of some players experiencing slow load times is catastrophic at launch. In absence of solid user data, it’s a fine assumption to make.
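For context, the strategy in question is usually asset duplication: pack every asset a level needs into that level's contiguous chunk, so loading is one long sequential read with no seeks. A simplified sketch of why that multiplies install size (a generic illustration, not Arrowhead's actual build pipeline):

    #include <cstddef>
    #include <string>
    #include <unordered_set>
    #include <vector>

    struct Asset { std::string name; std::size_t bytes; };
    struct Level { std::string name; std::vector<Asset> assets; };

    // HDD-friendly: each level's chunk contains a copy of everything
    // it needs, so shared assets are stored once per level using them.
    std::size_t duplicated_size(const std::vector<Level>& levels) {
        std::size_t total = 0;
        for (const auto& lvl : levels)
            for (const auto& a : lvl.assets)
                total += a.bytes;        // duplicates counted every time
        return total;
    }

    // SSD-friendly: store each unique asset once and seek to it,
    // since random reads are nearly free on an SSD.
    std::size_t deduplicated_size(const std::vector<Level>& levels) {
        std::unordered_set<std::string> seen;
        std::size_t total = 0;
        for (const auto& lvl : levels)
            for (const auto& a : lvl.assets)
                if (seen.insert(a.name).second)
                    total += a.bytes;    // each unique asset once
        return total;
    }

With heavily shared assets, the ratio between the two can plausibly reach the 6x-7x people are complaining about.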
First impressions matter, is the thing. This was John Carmack's idea on how to sell interlacing to smartphone display makers for VR: the upsell he had was that there's one very important moment when a consumer sees a new phone: they pick it up, open something, and flick it, and that scroll effect had better be silky smooth at 60 FPS or more, or there's trouble. (His argument was that making that better would be a side effect of what he really wanted.)
>But over 6x the size with so little benefit for such a small segment of the players is very frustrating. Why wasn't this caught earlier? Why didn't anyone test? Why didn't anyone weigh the pros and cons?
Have you never worked in an organization that made software?
Damn near everything can be 10x as fast and using 1/10th the resources if someone bothered to take the time to find the optimizations. RARE is it that something is even in the same order of magnitude as its optimum implementation.
But this isn't an optimization. The 150+GB size is the "optimization", one that never actually helped with anything. The whole news here is "Helldivers 2 stopped intentionally screwing its customers".
I don't see why it's a surprise that people react "negatively", in the sense of being mad that (a) Helldivers 2 was intentionally screwing the customers before, and (b) everyone else is still doing it.
I think what makes this a bit different from the usual "time/value tradeoff" discussion is bloating the size by 6x-7x was the result of unnecessary work in the name of optimization instead of lack of cycles to spend on optimization.
Eh probably not, it's probably handled by some automated system when making release builds of the game. Sure, implementing that initially was probably some work (or maybe it was just checking a checkbox in some tool), but there's probably not much manual work involved anymore to keep it going.
Reverting it now though, when the game is out there on a million systems, requires significant investigation to ensure they're not making things significantly worse for anyone, plus a lot of testing to make sure it doesn't outright break stuff.
Reverting it now was certainly a pile of work, but that's neither here nor there for the part of the story bothering people. It's like they threw rocks through the windows years ago to make them slightly clearer to see through, and now put a ton of work in to undo that because they discovered it made no sense in reality.
It's great they did all the work to fix it after the fact, but that doesn't justify why it was worth throwing rocks through the window in the first place (which is different than not doing optimizations).
Optimization takes up time, and often it takes up the time of an expert.
Given that, people need to accept higher costs, longer development times, or reduced scope if they want better optimized games.
What's worse, merely trying to optimize software is not the same as successfully optimizing it. Time and money spent on optimization might yield no results: there might not be any more efficiency to be gained, the person doing the work may lack the technical skill, the gains may be part of a tradeoff that cannot be justified, or the person doing the work can't make the change (e.g., a 3rd-party library is the problem).
The lack of technical skill is a big one, IMO. I'm personally terrible at optimizing code, but I'm pretty good at building functional software in a short amount of time. We have a person on our team who is really good at it, and sometimes he'll come in after me to optimize work that I've done. But he'll spend several multiples of the time I took making it work and hammering out edge cases. Sometimes the savings are worth it.
The tradeoff they're talking about is to arrive at the same end product.
The reason games are typically released as "fetuses" is because it reduces the financial risk. Much like any product, you want to get it to market as soon as is sensible in order to see if it's worth continuing to spend time and money on it.
And this really shouldn't surprise professionals in an industry where everything's always about development velocity and releasing Minimum Viable Products as quickly into the market as possible.
> God why can’t it just be longer development time.
Where do you stop? What do the 5 tech designers do while the 2 engine programmers optimise every last byte of network traffic?
> I’m sick of the premature fetuses of games.
Come on, keep this sort of crap off here. Games being janky isn't new - look at old console games and they're basically duct-taped together. Go back to Half-Life 1 in 1998 - the Xen world is complete and utter trash. Go back farther and you have stuff that's literally unplayable [0], or things that were so bad they literally destroyed an entire industry [1], or rendered the game uncompletable [2].
Super Mario 64, widely recognized as one of the most iconic and influential games ever... was released with a build that didn't have compiler optimizations turned on. This was proven by decompiling it and, with the exact right compiler and tools, recompiling it with the non-optimized arguments. Recompiling with the optimizations turned on resulted in no problems and significant performance boosts.
One of the highest rated games ever released without devs turning on the "make it faster" button which would have required approximately zero effort and had zero downsides.
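For anyone who hasn't touched native builds: the "button" really is usually just a compiler flag. A generic illustration with modern gcc-style flags (SM64 itself was built with the era's IDO MIPS toolchain, so the exact invocation differed):

    // game.cpp -- the same source, two builds:
    //   g++ -O0 game.cpp -o game_debug     // no optimization, easy to debug
    //   g++ -O2 game.cpp -o game_release   // the "make it faster" button
    #include <cstddef>
    #include <cstdint>

    // Hot loops like this are where -O2 pays off: at -O0 every access
    // goes through the stack; at -O2 the loop gets unrolled/vectorized.
    std::uint64_t sum(const std::uint64_t* data, std::size_t n) {
        std::uint64_t total = 0;
        for (std::size_t i = 0; i < n; ++i)
            total += data[i];
        return total;
    }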
This kind of stuff happens because the end result A vs. B doesn't make that much of a difference.
And it's very hard to have a culture of quality that doesn't get overrun by zealots who will bankrupt you while they squeeze the last 0.001% of performance out of your product before releasing. It is very hard to have a culture of quality that does the important things and doesn't do the unimportant ones.
The people who obsess with quality go bankrupt and the people who obsess with releasing make money. So that's what we get.
A very fine ability for evaluating quality mixed with pragmatic choice for what and when to spend time on it is rare.
Because LLMs are just about all that actually exists as a product, even if an inconsistent one.
Maybe some day a completely different approach could actually make AI, but that's vapor at the moment. If it happens, there will be something to talk about.
Don't worry that much about 'AI' specifically. LLMs are an impressive piece of technology, but at the end of the day they're just language predictors - and bad ones a lot of the time. They can reassemble and remix what's already been written but with no understanding of it.
They can be an accelerator - they get extremely common boilerplate text work out of the way. But they can't replace any job that requires a functioning brain, since LLMs do not have one - nor ever will.
But in the end it doesn't matter. Companies do whatever they can to slash their labor requirements, pay people less, dodge regulations, etc. If not 'AI' it'll just be something else.
Text is an LLM's input and output, but, under the hood, the transformer network is capable of far more than mere reassembly and remixing of text. Transformers can approximate Turing completeness as their size scales, and they can encode entire algorithms in their weights. Therefore, I'd argue they can do far more than reassemble and remix. These aren't just Markov models anymore.
(I'd also argue that "understanding" and "functional brain" are unfalsifiable comparisons. What exactly distinguishes a functional brain from a Turing machine? Chess once required a functional brain to play, but has now been surpassed by computation. Saying "jobs that require a human brain" is tautological without any further distinction.)
Of course, LLMs are definitely missing plenty of brain skills like working in continuous time, with persistent state, with agency, in physical space, etc. But to say that an LLM "never will" is either semantic (you might call it something other than an LLM when next-generation capabilities are integrated), tautological (once it can do a human job, it's no longer a job that requires a human), or anthropocentric hubris.
That said, who knows what the time scale looks like for realizing such improvements – (decades, centuries, millennia).
I'd imagine most people aren't 100% positive on what they want to get beforehand. Sometimes you only realize you want something after passing by it. Maybe it's something you haven't gotten in a while and hadn't considered beforehand.
And even if you are completely sure of what you want in advance, having someone else do it is not always great. At least with Instacart, the person doing the shopping frequently didn't know where something was, just assumed it was 'out', and tried to substitute it (badly). There was all this awful delay and back-and-forth and crappy pictures with the person shopping to try to get the right thing.
Doing it yourself doesn't have that problem. You know what you want, why you want it, and what you're willing to bend on. No, X brand cheese is not a substitute for the Y brand I wanted, never do that. But yes, Z brand and type of milk is fine compared to what I wanted, and I know they are frequently out.
Grocery store employees aren't any better at this, btw. Especially since the stores like to re-arrange on a monthly basis.
Bitcoin, and really all crypto 'currencies' were never meant to be currencies at all. Maybe a couple naive people who created them originally believed that, but it was never the goal.
They are speculative assets for gambling with. They have been since day 1.
> Bitcoin, and really all crypto 'currencies' were never meant to be currencies at all.
To be fair, there is a significant amount of disagreement about what a "currency" is supposed to be. There is a large subset of people who believe that the desirable traits in a currency are exactly those things that make it function well as a speculative asset (notably, on average over a long time, value with respect to goods is at least flat and preferably increasing), while simultaneously not thinking the things that another large group of people sees as desirable for a currency (e.g., lack of extreme short-term volatility) are important.
I can't speak to the original designer of Bitcoin, but I wouldn't be surprised if it and most cryptocurrencies were designed to be currencies, just by people who have a very specific (and, IMV, wrong) idea of what a currency ought to be.
A currency is fungible, easily accessible, tradable and convertible with little overhead. And in order to function, above all else a currency must have stability and trust.
If people lose faith in a currency's future, then it has no real value.
If people believe a currency (or the government/system which supports it) is unstable, then it has no real value. Real-world global trade and investment is done on long timetables. You can't develop a product that won't start selling for 6+ years if you can't predict how the currency will behave over those 6 years.
No one had a 'wrong' idea of what currency should be. They saw an opportunity to scam people out of all their money by convincing them that gambling was an investment and that they were much smarter and more clever, and sticking it against 'the man' or 'the system' when in fact they were just being used.
There were only two notable groups in crypto: The scammers and the suckers.