So how do you validate the data? You can apply all the changes to the existing record and validate the result, but then you need to hold everything in memory. Verifying the operations themselves, however, sounds dangerous... Any pointers?
Also, if someone is using this in production: any gotchas?
If you are using Java, you may want to check out the library I created for American Express and open sourced, unify-jdocs - it lets you work with JSON documents outside of POJO land. For validations, it also has the concept of a "typed" document, which lets you define a document structure against which all reads / writes are validated. Far simpler and, in my opinion, as powerful as JSONSchema. https://github.com/americanexpress/unify-jdocs.
The approach I've generally seen used is that you have a set of validations for the JSON document, and you apply them to the result of the patch operation.
You probably want some constraints on the kinds of patch objects you accept to avoid simple attacks (e.g. a really large patch, or overly complex patches). But you can probably come up with a set of rules that apply generally to the patch itself, without trying to validate that, say, the patch value for the address meets some business logic. Do that at the end, on the patched result, as sketched below.
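To make that concrete, here is a minimal sketch of the apply-then-validate flow in Python, using the `jsonpatch` and `jsonschema` packages; the schema, the `MAX_OPS` limit, and the allowed-operation set are illustrative assumptions, not anything prescribed by the JSON Patch spec.

```python
# Minimal sketch: sanity-check the patch, apply it, then validate the *result*.
# Uses the Python `jsonpatch` and `jsonschema` packages; swap in the
# equivalents for your stack. MAX_OPS and RESULT_SCHEMA are illustrative.
import jsonpatch
import jsonschema

MAX_OPS = 50  # cheap guard against absurdly large patches

RESULT_SCHEMA = {
    "type": "object",
    "required": ["name", "address"],
    "properties": {
        "name": {"type": "string"},
        "address": {"type": "string", "maxLength": 200},
    },
}

def apply_and_validate(document: dict, patch_ops: list) -> dict:
    # 1. Structural checks on the patch itself (size, allowed operations)...
    if len(patch_ops) > MAX_OPS:
        raise ValueError("patch too large")
    if any(op.get("op") not in {"add", "replace", "remove"} for op in patch_ops):
        raise ValueError("operation not allowed")

    # 2. ...then apply it to a copy and run the business/schema validation
    #    on the patched result, not on the individual operations.
    patched = jsonpatch.apply_patch(document, patch_ops)  # does not mutate `document`
    jsonschema.validate(instance=patched, schema=RESULT_SCHEMA)
    return patched
```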
Sure, and people fearful of horseless carriages in the 1890s are rightfully fearful of dying, since:
* horseless carriages won't avoid running into things whereas horses can see and instinctively won't run into a wall
* the greater speed of horseless carriages in the 1890s without modern safety infrastructure like signs and traffic lights does greatly increase the risk of injury
True, every bad technology is just Ford's car, the Wright brothers' plane, more than 256MB of RAM, and the internet. From blockchain to generative AI to FSD. No, don't think about all the other bad technologies that have a similar profile to these and never went anywhere (or worse); only think about the extreme edge cases that don't have a similar profile.
Not sure what the point is here? My point is that OP is not afraid of new tech per se, but instead doesn't like where it is now and finds it dangerous.
I'm sure in time the technology will evolve and there won't be a new car on the road that doesn't support FSD - just like it went with "horseless carriages". It will take some more time though, and not all companies will be equally advanced. From what I've heard (from other sources too), I wouldn't want to be driven by Tesla FSD at the moment.
> * horseless carriages won't avoid running into things whereas horses can see and instinctively won't run into a wall
Off topic, but interesting - if you think about it, we actually already had mostly-FSD in the 1890s. It just ate a whole lot of grass.
My point is that OP's post, which espouses a common fear, even if it may be justified for some, is not really interesting. I don't want to gatekeep HN, but personally I do not enjoy this type of content.
A secondary implied point is that new technologies which may seem dangerous can still be safe and useful when used properly, such as driving horseless carriages carefully in the 1890s or using FSD (Supervised) with proper supervision as directed.
Actually you can override it, but they made the process so convoluted and privacy-breaking that I would guess not many people use it. So yes, effectively they blocked it.
> ...for various reasons, including security concerns.
Security is just a red herring.
Who wins with this move? The cynic in me (who is usually annoyingly right) says Google. Mozilla loses even more trust from its users and Firefox now has a tool to disable ad blockers on websites of their biggest competitor^Wsponsor if they reach a suitable mutual agreement (read: G pays enough for it). Win-win for all the parties that have a say in this. Not users of course, but that's life.
> ...but there's no guarantee my hardware is compatible with Ubuntu LTS 22.04 because the vendor doesn't support that config...
So it's your vendor who doesn't care about backwards compatibility - Linux does. Whatever hardware you have, once it is supported it stays supported until it is really ancient, and even then you will have special builds that still support it. That's the beauty of open-source (or even source-available) licenses. No corporate interests that would render your solution obsolete. [0]
[0] assuming your hardware doesn't need some binary blobs (ahem, Nvidia, ahem) to work
(a) Of course it needs binary blobs. This is the real world where medium- to high-performance machines need binary blobs. The GNU dream never materialized.
(b) If Ubuntu (and I'm going to say "the Linux ecosystem in general") cared about backwards compatibility in the same sense Windows does, a minor-version bump to glibc wouldn't introduce API breakages that mean I can't build a codebase that's doing nothing new and special on my machine right now, purely because its required glibc subversion bumped from 31 to 32. They aren't doing anything new in the code; their JNI dependency just bumped up, and so they bumped up the whole codebase's requirements.
That's fine, but it's not The Windows Way. The Linux Way just doesn't treat compatibility issues the same way. It's a source-code-and-patch-it-yourself world. The approaches are completely alien to each other.
glibc goes to great lengths to ensure binary backwards-compatibility. If a binary interface has to change it uses symbol versioning to keep around the old interface - for example if you have a program compiled against glibc2.2 on x86-64 that calls timer_create(), it will call through the function __timer_create_old() that provides the old interface when run on a system with a newer libc.
The reverse scenario, which seems to be what you have - where you compile against a newer library version and then run it on an older one - is forwards compatibility, which is a different kettle of fish. Even on Windows, it's not like you can compile against the latest DirectX and run it on an older Windows?
It's not "pretty standard", but we're working towards it and it looks like a pretty great solution. Our problem is that CI job runners sleep most of the day (low number of commits), but then you have spikes where the jobs are waiting on each other and times get really long. Autoscaling sounds great - you can have lots of runners when you need them and only a single one (or maybe even none? not sure yet) otherwise.
I'm reminded of Isaac Asimov, who built a whole range of stories on the Three Laws of Robotics, with many situations arising where reality didn't match how the creators thought the robot should behave in a given situation.
It looks like we are making huge progress in that direction, with very similar problems arising.
Exactly. It just shows that we can't really control such complex systems. Kind of funny that he somehow got it right. Years ago I thought, nah, that can't happen, it sounds stupid.
What makes me think that LLMs may be a big thing is that complex language seems to be what distinguishes us from animals. So maybe this is what is required to invent everything else. Or, let's say, at least it is a major factor.
It's really just math though. Any "LLM" isn't really "thinking" in any spoken or written language, but rather in a massive series of weighted matrices (numbers).
I've commented a few times here and there about this AI hype, but might as well repeat myself: I think people largely misunderstand the technology, and I see major missing pieces that are non-trivial to solve before we really get to anything looking like iRobot (or insert any other sci-fi of your choice here). These input / output models can only go so far, even if they keep increasing in size. We don't just need two- or three-prompt memory, but full dynamic memory that the model can access throughout its lifetime, as well as the ability for the model to reflect and introspect on itself (much like human thought and communication). Without these things, an LLM will just remain an LLM, albeit larger and larger. Unfortunately I don't think size for size's sake will bring much more improvement to such models.
Aside from any of the aforementioned breakthroughs being incorporated, I see this type of chat GPT stuff plateauing in ~1-2 years.
Maybe that's what thinking is though. I mean our brains have neurons that connect to form natural matrices... who's to say that the nature of forcing energy through that mathematical structure isn't the very definition of thinking?
> Maybe that's what thinking is though. I mean our brains have neurons that connect to form natural matrices... who's to say that the nature of forcing energy through that mathematical structure isn't the very definition of thinking?
I don't know what to say. I think the AI hype has just gone too far, and now people believe random bullshit like in your comment.
Artificial neural networks look nothing like the neurons in our brain. They have very little in common. Artificial neural networks contain layers of neurons where each neuron is connected to all neurons of the previous layer, with a floating-point weight for each neuron-neuron pair and a floating-point bias per neuron. This in theory allows you to approximate how the neurons in your brain work, but even then it is just an approximation, and you may need multiple artificial neurons to simulate a single human neuron.
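For anyone unfamiliar with the terminology, here is a tiny, purely illustrative NumPy sketch of one such fully connected layer - one weight per (output, input) neuron pair plus a bias per output neuron; the layer sizes are made up.

```python
# Purely illustrative: one fully connected ("dense") layer as described above.
# Each output neuron has one weight per input neuron plus its own bias.
import numpy as np

rng = np.random.default_rng(0)

n_in, n_out = 4, 3                      # made-up layer sizes
W = rng.standard_normal((n_out, n_in))  # one float per (output, input) pair
b = rng.standard_normal(n_out)          # one float bias per output neuron

x = rng.standard_normal(n_in)           # activations from the previous layer
y = np.maximum(0.0, W @ x + b)          # weighted sum plus bias, then a ReLU
print(y)
```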
The next step up is spiking neural networks, which are actually biologically inspired, and basically nobody cares about them because backpropagation is hard. Why? Because spiking neural networks are not continuous functions. Instead, neurons send spikes and encode information in the timing of their spikes, and a neuron only sends its own spike once it crosses a certain threshold. So now you have nonlinear behavior. Again, you can simulate them using ANNs, but the primary difference is that spiking neural networks are naturally sparse, which is in complete opposition to your statement that "our brains have neurons that connect to form natural matrices". It couldn't be further from the truth. You are working backwards from the mathematical model of ANNs and telling people, based on that model, how the brain works, despite massive amounts of counter-evidence. Do you understand how ridiculous that is? That is only something economists do, because there is money to be made from lying - not biologists or any other scientists.
> So maybe this is what is required to invent everything else.
A really interesting point. I've always held that we are nowhere close to real AI because we fundamentally don't understand what intelligence is, and we are not building complex enough devices for intelligence to be an emergent property. However, that doesn't consider the possibility that with enough computing power and sufficiently sophisticated models, we could end up with intelligence accidentally bootstrapping itself out of other large models, even if all we are doing is creating linkages between models via API calls and other similarly "dumb" steps.
> even if all we are doing is creating linkages between models via API calls and other similarly "dumb" steps.
I mean isn't this what Neuroscience has discovered about the human brain?
Millions-of-years-old fish, lizard, and mammal brains... and the neocortex, which is new.
And destroying one part basically damages the whole person.
Asimov invented the Three Laws of Robotics to cause interesting stories, not as an actual proposal for how to guide robot behavior. The stories are about how they don't work.
Thus I'm not sure what 'making a huge progress in that direction' means. The direction where we attempt to guide AIs using an inadequate model deliberately designed to throw up ambiguity and paradoxes?
He didn't invent them so much as show how flawed a concept they are. Yes, I agree he used them as interesting narrative devices. But people seem to think this was a bit of world-building by him, when it was more social commentary. I think it's clever to contrast our linear thinking with the complex systems of an automated, networked society.
From what I remember, Asimov wanted to write science fiction stories about robots where robots were useful tools for humans, instead of the rampaging monsters robots usually were in stories written by other people. Asimov’s early robot stories had no specified rules for robots, but he soon thought about what the specific rules should be, and came up with some rules, and used them as a backdrop and lore for many (many) subsequent stories. The rules were therefore formed as a narrative tool, and we should not realistically expect anything more from them.
Are you trying to say I'm reading too much into a science fiction author's work? Maybe. It's fun to think about it. He wrote it for me to have fun with it, no?
I’m saying that he did not invent the rules to show what a flawed concept they were, nor for the purpose of social commentary. He merely wanted some simple rules so that robots could be considered “safe” by the world and characters in his stories.
The so-called “death of the author” may be a truth with regard to what you want to believe the stories are about, but when actual authorial intent is a documented fact, what the author intended is, IMHO, not up for interpretation.
> He merely wanted some simple rules so that robots could be considered “safe” by the world and characters in his stories.
On the contrary, the first story to feature the Three Laws had the laws conflict with each other and render the robot useless.
The entire point of the story is the counterintuitively bad emergent result of sensible-looking rules governing behaviour.
Later stories repeated this, finding new entertaining and interesting scenarios that showed the inadequacy of the laws.
Other stories did have them as background lore. But they originated as the center of the story, and very effectively, as we are talking about them 80 years later!
Maybe from a purely literary-analysis point of view you are correct. I wouldn't know; I didn't study literary analysis. But it feels like gatekeeping when you say I'm not allowed to interpret a science fiction story one way or another.