Copilot will prompt you before accessing untrusted URLs. It seems the crux of the vulnerability is that the user didn't need to consent before hitting a URL that was effectively an open redirect.
Does it do that using its own web fetch tool or is it smart enough to spot if it's about to run `curl` or `wget` or `python -c "import urllib.request; print(urllib.request.urlopen('https://www.example.com/').read())"`?
What are "untrusted URLs"? Or, more to the point: what are trusted URLs?
Prompt injection is just text, right? So if you can input some text and get a site to serve it, you win. There have got to be millions of places where someone could do this, including under *.google.com. This seems like a game of whack-a-mole they are doomed to lose.
For the same reason government in general (e.g. the US military) isn't funded by one big GoFundMe. The marginal value any individual actor gains from their investment in public research or services is almost zero. It only works when it's prescriptive on a large scale. See: the tragedy of the commons.
CrowdStrike could have deployed the same broken code in their Linux or macOS agents. There's not much for Windows to do if a kernel driver is segfaulting (when disabling it could be dangerous for users).
So the question is: why do we need CrowdStrike software in the first place? Why are our systems not secure enough, such that companies feel the need to install additional security software? Obviously the demand for a secure operating system is there. CrowdStrike is valued at $80B, so there's lots of money for Microsoft and other operating system vendors to grab.
I do understand that the main driver behind CrowdStrike installations is the compliance checkbox. The question still stands, though, unless we assume pure corruption. And I've heard the opinion from security experts that this software really does improve Windows security.
macOS and Linux do not have nearly as much of a need for ridiculous endpoint security tools like this to begin with.
The world running on Windows is a monumental waste of resources and a huge security threat. This will happen over and over again.
The fact that even dumb little terminals that are strictly responsible for showing flight arrivals and departures were impacted by this is hysterical. Why was that not an Android or ChromeOS device with an immutable filesystem, A/B blue/green update strategies, etc.?
Serious question: does anyone really think Linux antivirus tools are good or necessary, particularly if they are active-measure kernel things and not just passive scanners?
I have only seen people use them when Windows IT departments suddenly have to pretend to be cloud savvy, or when enterprisey infosec teams are looking for more vendors to bloat up their budgets. If it's written in contracts, it's not the customers demanding AV on ephemeral cloud servers; it's the home team bloating costs so they can cut them later for a raise and applause.
Aaaand whenever it goes that way, antivirus tools affect performance and stability with random problems, always hurting more than they help.
Nine times out of ten it's not even for security; it's for checking some kind of auditing compliance box. We're perpetuating this nightmare quagmire of shit and no one understands how it works.
Any details on what compliance regime specifically requires it for Linux, though, and whether it differentiates static servers from ephemeral ones? I'm just curious, since you always hear "compliance" but I've never actually seen the requirement coming from anywhere except Windows sysadmins who are out of their element.
Part of the issue is that compliance is so broad and will vary from industry to industry, state to state, and country to country. If you're in defense and work with the government, your requirements will be different from those in healthcare or the education sector.
That should work fine in the VS Code debugger; you just want to make sure that the transpiler you're using is generating sourcemaps. Generally they do by default. If you have issues, open a GitHub issue and I'll fix it :)
In the VS Code JS debugger, there's an option to "exclude caller" on a call frame, which prevents stacks with the given caller from pausing at a location. As mentioned elsewhere, browser devtools have something similar with "Never pause here." Do you think there's more that tools can do to make your process easier?
I maintain the VS Code debugger and found both the article and your comment interesting--there's a large overlap between "programs with anti-debugger techniques" and "programs that are hard to debug."
The overlap would be due to the JS obfuscation, which makes the code both hard to debug and hard to even run under a debugger. What is needed is a way to unravel the obfuscation. This is mostly driven by a massive lookup table which contains text strings to be substituted for the coded vars in the JS. For example, a var called _0xff09b8 might be the code for 'toString'. Harder examples may involve coded vars that are used to call a function which generates the array subscript needed for the table lookup. It is literally mind-bending.
What I'm saying is that we need a way to get that table (array) and perform the substitutions in order to recreate the original code as text instead of numbers. This is likely way beyond the scope of a debugging tool. Or is it?
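The substitution step itself is mechanical once you have the table. Here's a minimal sketch of the idea, where the table contents, the coded names, and the index map are all invented examples (real obfuscators shuffle and encode the table, so recovering it is the hard part):

```javascript
// Hypothetical string table recovered from the obfuscated bundle.
const stringTable = ['toString', 'charCodeAt', 'fromCharCode'];

// Map of coded identifiers to their index into the table, e.g. recovered
// by evaluating the obfuscator's lookup function in an isolated context.
const codedVars = { _0xff09b8: 0, _0x1a2b3c: 1 };

function deobfuscate(source) {
  // Replace each known coded identifier with a quoted string literal so
  // the result reads closer to the pre-obfuscation source.
  return source.replace(/_0x[0-9a-f]+/g, (name) =>
    name in codedVars ? JSON.stringify(stringTable[codedVars[name]]) : name
  );
}

console.log(deobfuscate('value[_0xff09b8](16)')); // value["toString"](16)
```

A debugger could plausibly help with the "evaluate the lookup function safely" part, since it already runs the code in a controlled context.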
Because your example is a breaking change, and breaking changes are hard to make in a runtime that needs to reasonably support two decades worth of web content.
For example, if you have a `binarySearch` function that returns -1 if an element isn't found, a developer might do something like `const result = arr[index]; if (result !== undefined) { ... }`. With the proposed change, that lookup would start returning the last element instead of `undefined` when the element isn't found.
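Spelled out with `indexOf` (which uses the same -1-means-not-found convention), the pattern that would silently break looks like this:

```javascript
// Old code often feeds a "not found" result (-1) straight into a subscript
// and relies on getting undefined back.
const arr = ['a', 'b', 'c'];
const index = arr.indexOf('z'); // -1: 'z' is not in the array

// Today arr[-1] looks up the property named '-1', which is undefined,
// so the guard below correctly treats the element as missing.
const result = arr[index];
if (result !== undefined) {
  // Not reached today. If arr[-1] were redefined to mean
  // arr[arr.length - 1], result would be 'c' and this branch would run
  // even though 'z' was never in the array.
}
console.log(result); // undefined
```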
We already have things like "use strict" because of backwards compatibility. Following the same idea, we could have something like "use ES2023" or something along those lines. The issue with JavaScript is that browsers have in-flux implementations of new features (as the browsers' parent companies see fit for their usage), and there's no cohesive point-in-time release process. I think "living" standards are part of the reason why the web stack is so jumbled.
But what do I care, whatever mess and complexity arises from these "good enough" implementations is left for the generation after us to deal with :)
It is also a breaking change to use new syntax and functions, since old browsers do not support new features. From this perspective, `arr[-1]` seems a fair breaking change.
No, because changing browsers to interpret `arr[-1]` as `arr[arr.length - 1]` breaks existing sites that expect `arr[-1]` to be interpreted as `arr['-1']`: That is, the value stored on object `arr` at key name '-1'.
Changing browsers to interpret `arr.get(-1)` as `arr[arr.length - 1]` doesn't affect any old code using `arr[-1]`.
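To make the key-name point concrete: arrays are objects, so a negative subscript is already a perfectly legal string property access that existing code may rely on.

```javascript
const arr = [10, 20, 30];

// Storing at a negative subscript creates an ordinary property with the
// string key '-1'. It is not an array slot and does not affect length.
arr[-1] = 'stashed';

console.log(arr[-1]);    // 'stashed'
console.log(arr['-1']);  // 'stashed' (the same property)
console.log(arr.length); // 3

// The new method sidesteps the conflict entirely.
console.log(arr.at(-1)); // 30
```

Redefining `arr[-1]` would change what the first two logs print for every page that ever used this pattern; adding `at()` changes nothing.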
It's not about supporting old browsers. It's about supporting old code.
I think you're confusing your application with the language itself.
Adding new syntax and functions to the language is not a breaking change. Old code will continue to work.
If you start using these new features in your application, and it no longer works on old browsers, then sure that's a breaking change. But that's a choice for you to make. The language is still backwards compatible.
There's a valid example of code that would be broken (`indexOf` returns `-1` as "not found"). Is it a good way of solving whatever the author was trying to do? Probably not, especially now that sets exist. Is it code you might conceivably find on hundreds of sites across the past decades of the world wide web? You bet.
Yes, we could introduce another "use strict". But we only just got rid of the one via ESM (which enforces strict mode). That was a one-off hacky solution to a hard problem coming off the end of a failed major version release of the language (look up ECMAScript 4 if you get a chance). We don't want to see a repeat of that.
All of this was hashed out during the "Harmony"[1] days. Versioned-JS was of course one possible future. Maybe even still is. But the prevailing decision coming out around that time and leading to ES5 and ES2015: We'll add "use strict" as a single-point-in-time breaking opt-in upgrade to fix a lot of the common problems, but let's otherwise stick to "One JavaScript"[2].
You may find [2] and [3] especially enlightening to understanding this thinking, and any other discussions from ES Discuss on the topic if you feel like digging into history.
Maybe this is simple in implementation, but it's definitely not simple in developer experience.
You grab some code in one of your old projects for implementing a binary search. Can you copy-paste it into a new project that targets a newer language version?
The question isn't as simple as "does it have syntax errors", because we're talking about changing semantics here. Given a set of semantic changes and a piece of code, figuring out (either as a human or a computer) whether the observable characteristics of that code have changed is somewhere between vexing and impossible. It's entirely possible, for example, that your code encounters changed semantics, but not in a way that changes the actual behavior of the code.
In this world it just becomes very, very difficult to reason about extremely common operations; it'd be a constant source of frustration. There's a good reason you rarely see languages versioning their behavior in impactful ways.
> Why is there still no simple way of handling changes like this?
This is nothing JS specific. Breaking changes are breaking changes.
If you can, don't introduce them.
> simple way to have a header in each file with the language version
One special aspect that differentiates JS from other languages:
It's both a language AND a universal runtime. A lot of JS that's executed is not JS that's written by humans but generated by a compiler/transpiler.
So adding a layer of header versioning is not a big win in terms of developer experience: it would anyway be the deployment toolchain that's responsible for dealing with such a versioning scheme. It would ideally be invisible to the developer.
You can add a polyfill: check if `Array.prototype.at` exists, and if it doesn't, create a function that does the same thing and add it to `Array.prototype`, so all `arr.at()` code works as expected.
Then, once every environment you target supports `at()` by default, you can remove the polyfill to reduce the size of your code.
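A minimal polyfill along those lines might look like this (a sketch, not the exact coercion rules the spec uses, but close enough to show the shape):

```javascript
if (!Array.prototype.at) {
  Object.defineProperty(Array.prototype, 'at', {
    // Non-enumerable, so for...in loops over arrays are not polluted
    // (the MooTools/flatten lesson from the smooshgate episode).
    enumerable: false,
    configurable: true,
    writable: true,
    value: function at(n) {
      // Truncate toward zero; NaN becomes 0.
      n = Math.trunc(n) || 0;
      // Negative indices count from the end.
      if (n < 0) n += this.length;
      if (n < 0 || n >= this.length) return undefined;
      return this[n];
    },
  });
}

console.log([1, 2, 3].at(-1)); // 3
console.log([1, 2, 3].at(5));  // undefined
```

Defining it as non-enumerable matters: a plain `Array.prototype.at = ...` assignment would make the method show up in `for...in` iteration over every array.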
They test a lot of websites before introducing new methods. Something somewhere may break but it's very unlikely and this pragmatic approach allows progress.
This is also why the language got Array.prototype.flat instead of flatten (flatten was breaking an old version of a popular library called MooTools): https://developer.chrome.com/blog/smooshgate/
And extending javascript's built-in objects has been considered bad practice since at least 2007.
Before that point, browser environments were so different that you needed to write code per-browser. Those theoretical concerns didn't really matter since in-practice you were essentially coding the same app in different scripting languages.
> extending javascript's built-in objects has been considered bad practice since at least 2007
Totally. It's just extending the prototype that causes the problem, though, not extending the class (`class MyClass extends Array`). This causes a lot of confusion among new JS devs, so I underline it at every opportunity.
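The distinction in code, using a made-up `Stack` subclass for illustration: subclassing scopes the new behavior to your own instances, while patching the prototype changes every array in the program and can collide with a future built-in of the same name.

```javascript
// Safe: subclassing. Only Stack instances get peek().
class Stack extends Array {
  peek() { return this[this.length - 1]; }
}

const s = Stack.from([1, 2, 3]); // static methods are inherited too
console.log(s.peek());           // 3
console.log([].peek);            // undefined: plain arrays are untouched

// Risky (shown, not recommended): every array in the program would gain
// the method, and a future standard 'peek' would silently conflict.
// Array.prototype.peek = function () { return this[this.length - 1]; };
```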
> Is your microbiome not "you"? It's as active a participant in your hormone balance as any other organ in your body.
Your microbiome is definitely not "you". You can take antibiotics and nuke your entire microbiome, and you'll be mostly fine. Nuke "any other organ in your body" and you're gonna have much bigger problems.
The tenants in an apartment are not the apartment. You're the apartment; you're letting the microbiome stay as tenants as long as they pay their rent (breaking down difficult foods to your advantage). Unruly tenants get thrown out. Of course it takes some effort to evict - this is the craving for particular foods that must be overcome to consciously stop eating those foods.
If you endure the unhappiness generated against you by your microbiome, you can change it. Eat what you know is good regardless of microbiome happiness, and you'll cultivate a microbiome that is happy when you eat that.
Your microbiome alters the very taste of foods in your mouth. It was hard for me to stop eating meat and start eating plants because the plants tasted like shit, but after some time sticking with it everything flipped - plants started tasting amazing, meat not so much anymore.
Well, why would you deliberately endure discomfort? That's one of the defining characteristics of our species: Our ability to consciously delay gratification to achieve better long term results.
If you starve your microbiome of sugar, then certain bacteria will die or go dormant.
Notwithstanding how you, or I, define “you”, once that change is made to your microbiome, you won’t find that coke to be very pleasurable.
I don't think it's about being less valid, but if you were to find out you were not making decisions for yourself, but rather that someone or something else was controlling your decision-making processes, would you make simple changes to take back control?
As I understand the blog post, the difficult (and buggy) part is not the addition of latency, but calculating how much latency to add and where to add it. I'm not sure how tc would help much here, and actually don't see anything to indicate they weren't using tc already.
I made the switch to a projector a couple years ago and advocate highly for them. The screen can disappear when not in use, and they're "dumb by default". New LED projectors have light sources rated to last 10k hours or more.
I use a projector as well instead of a TV, but they're definitely not a standard replacement for a normal TV in all situations. You need a dark room (so if you like to have the TV on while you do chores or other tasks, it won't work well with a projector), the fans can be a bit loud depending on your noise tolerance, and the contrast, color, and brightness/blacks won't be as good as a decent OLED TV unless you go for a very high-end projector and screen, which is going to be very expensive.
It's definitely worth considering, but there are pros and cons; it's not an acceptable replacement in all situations.
The other thing I've noticed is that you can't casually watch something in the company of visitors; it's either full movie night or nothing. That's a real use case I'm missing compared to a TV.
And you can't even make the picture on the wall smaller if you want, it always dominates completely.
When my mother is visiting, we just want to half-watch some game show and chitchat. We don't want to have a movie night, and we don't want to focus entirely on each other/conversation for a whole evening either; it's kind of exhausting.