I’ve never felt useful at any of the big tech companies I’ve worked at. It always feels pointless. Whether it’s that the projects I’ve been assigned were never worth anything, or that it’s my perspective on those projects, I have no idea.
lol, this post makes me feel like crying at my job at a BigTech co. He’s complaining about maybe a 30-minute feedback loop; mine is 2+ hours.
I write a piece of code, however small the change, then run the proc. First it takes ~20 minutes to compile. Then, since it is ML, it can only run on a remote server, which takes an easy 20 more minutes to start. After another 40 minutes I can confirm the job failed. The only way to debug is to read through a massive log file, for which we have a built-in log reader lol. The log file will have thousands of errors that literally don’t matter, and you’ll never have enough context to know which ones those are. You can simply ping someone and ask: does this error matter, could this be the reason my entire proc failed? Oh no, this is just a useless log line that fails all the time, I shouldn’t have wasted a day digging into this, ok thanks bye.
But that isn’t even the worst of it, not even close lol. We in fact have a custom DSL to define computational graphs, which of course has no linter, no compiler of any kind, and a very broken visualizer, yet our entire org runs on it. Syntax errors, logic errors, and actual race conditions are all caught the exact same way: as the process dying after trying to run on a remote server for 2+ hours with no useful error log. So our workflow is to grab a cup of coffee and stare at the graph, which can get thousands of lines long, to find what are at times completely trivial bugs that any half-decent language would have caught as a syntax error.
My experience in BigTechCo makes me completely understand GitHub Actions. My guess is many big companies have equally janky tools that harm dev productivity but still somehow handle absolutely massive workloads; GitHub just happened to share theirs with the public.
Can a browser expert please skim the code the agent wrote and let us know how it is? Is it comparable to Ladybird or Servo, and could it ever reach that capability soon?
I'm interested in this too. I was expecting just a Chromium reskin, but it does seem to be at least something more than that. https://news.ycombinator.com/item?id=46625189 claims it uses Taffy for CSS layout, but the docs also claim "Taffy for flex/grid, native for tables/block/inline".
TL;DR: the code is not a valid POC but throwaway-quality work that could never support a functioning web engine. It's very clearly hallucinated AI BS, which is what you get when you don't have a human expert in the loop.
I actually like using AI, but only to save me the typing.
Did you even read the article? Yes, he had one sentence on ICE, whose politics you may disagree with. But most of the article was about technical issues with GitHub as a platform and its neglect of non-AI features. Please take your low-quality comment to X.
Finally, I can’t believe I have to say it, but the creator of an open source project is free to infuse it with whatever values he wants. No one hates on SQLite for this; there’s no need to hate someone for adding his brand of politics / values to his passion project. Please go create your own “non-woke” language if you hate it so much.
I’ve liked living in Delhi recently; it’s much less congested than Bengaluru, which gnaws on my soul with its insane traffic. The only reasonable way to live in India is away from the main streets, ideally in a gated community within bikeable distance of work.
Something has been ignored by legislators for over a hundred years, and just now you have discovered its true meaning, which happens to perfectly align with your policy preferences.
Please, just be honest and say you want to enact a policy and use the US Supreme Court to do it, rather than gaslighting us into believing that words don’t mean what they do.
The cool thing about Silicon Valley is that serious people try stuff that may seem wild and unlikely, and on the off chance it works, all of humanity benefits. This looks like Atomic Semi, Joby Aviation, maybe even OpenAI in its early days.
The bad thing about Silicon Valley is that charlatans abuse this openness and friendly spirit and swindle investors out of millions with pipe dreams and worthless technology. I think the second is inevitable as Silicon Valley becomes more famous and higher status without a strong gatekeeping mechanism, which would be anathema to its open ethos anyway. Unfortunately this company is firmly in the second category: a performative startup, “changing the world” to satisfy the neurosis of founders who desperately want to be seen as risk-takers changing the world. In reality it will change nothing and end up in the dustbin of history. I hope he enjoys his 15 minutes of fame.
Fundamentally, gut feel from following the founder on Twitter. But if I had to explain: I don’t understand the point of speeding up sampling or getting a true RNG. Even for diffusion models this is not a big bottleneck, so it sounds more like a buzzword than practical technology.
Having a TRNG is easy: you just reverse-bias a Zener diode, or use any number of other strategies that rely on physics for noise. Hype is a strategy they're clearly taking, and I get why people in this thread are so dismissive (Extropic has been vague-posting for years and makes it sound like vaporware), but what does everyone think they're actually doing with the money? It's not a better dice roller...
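For context on why a physics-noise TRNG is "easy": raw bits from a physical source (like a reverse-biased diode) are biased, so hardware designs whiten them in post-processing. A minimal sketch of the classic von Neumann debiasing step; the simulated source and its 80% bias level here are purely illustrative, not anything from an actual chip:

```python
import random

def von_neumann_debias(bits):
    """Whiten a biased bit stream: for each pair of bits, emit 0 for
    (0,1), 1 for (1,0), and discard (0,0)/(1,1). If the input bits are
    independent, the output is unbiased regardless of the source's bias
    (at the cost of throwing away most of the stream)."""
    out = []
    for i in range(0, len(bits) - 1, 2):
        a, b = bits[i], bits[i + 1]
        if a != b:
            out.append(a)
    return out

# Simulate a heavily biased physical noise source (~80% ones).
random.seed(0)
raw = [1 if random.random() < 0.8 else 0 for _ in range(100_000)]
clean = von_neumann_debias(raw)
print(sum(clean) / len(clean))  # close to 0.5 despite the 0.8-biased input
```

This is why "better randomness" alone is a weak moat: the whitening is a few lines of logic, and real designs just use stronger conditioners (e.g. hash-based) on the same idea.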
What is it, if not a better dice roller? Isn't that what they're claiming it is? And also that this better dice rolling is very important (which, admittedly, I am not someone who can evaluate).
Yes, I think they claim they are a far better dice roller in both randomness and speed, and that this is very important. The first might be true, but I don’t see why the second is in any way true. All of these need to be true for this company to make sense:
1. They build a chip that does random sampling far better than any GPU (is this even proven yet?)
2. They use a model architecture that exploits this sampling advantage, which means most of the computation must be concentrated in sampling. This might be true for energy-based models or some future architecture we have no idea about. AFAIK, it is not even true for diffusion.
3. This model architecture must outcompete autoregressive models at economically useful tasks, whether language modeling, robotics, etc.; right now autoregressive transformers are still king across all tasks.
And then their chip will be bought by hyperscalers and their company will become successful. There are just so many ifs beyond building their core technology that this whole project makes no sense. You could say that’s true for all startups, but I don’t think it is; this is just ridiculous.
Waymo has promised to launch in London and Tokyo next year. New York, London, and Tokyo probably cover the entire spectrum of difficulty for self-driving cars. Maybe we need to include Mumbai as the final boss, but I would be happy to say self-driving is solved once those three cities have a working 24/7 self-driving fleet.
The final boss could be something like Scottish mountain roads, or one of the million cliffside beach roads in Greece where the "you have to reverse first" kind of situation happens every 30 seconds.
No no, the final boss is a dozen of them having to race around such roads, no crashes allowed, and they must all finish at exactly the same time. Blindfolded.