For people younger than ~37, I'd remind them that crypto before 2000, especially in shipped commercial products, was playing under substantially different government restrictions.
Effectively and in short, you were prohibited by the US government from shipping strong encryption in any internationally distributed product. Which generally meant everything commercial.
Despite open source implementations of strong encryption existing (e.g. PGP et al.).
Now, no one bats an eye if you ship the most secure crypto you want. Then, it was a coin flip as to whether you'd feel the full weight of the US government legal apparatus.
Still is. To this day, we have to debate and justify ourselves to these people. They make us look like pedophiles for caring about this stuff. They just won't give up; they keep trying to pass these silly laws again and again. It's just a tiresome, never-ending struggle.
And that's in the US which is relatively good about this. Judges in my country were literally foaming at the mouth with rage when WhatsApp told them they couldn't provide decryption keys. Blocked the entire service for days out of spite, impacting hundreds of millions.
I down-voted this and I'll say why. I'm pretty dang liberal in my politics; my push back isn't because I'm carrying water for right wing groups.
QAnon is a current right-wing conspiracy group that claims powerful Democrats are trafficking children, but they aren't alone: the "we must protect our kids from XYZ" justification crosses political lines.
Back in the 90s there were a few years of "the satanic panic", during which wild claims were made about daycare centers doing unspeakable things to children, things that beggar belief just from a logistical perspective. People spent years in prison over this. There was no whiff then of it being a conservative cause -- it mixed the usual conspiracy theory dynamics along with the Christian moral panic dynamics.
Back in the 80s Tipper Gore, wife of then senator Al Gore, drove a campaign to label and censor music to "protect the children."
E.g., children were coached into giving answers and making up scenarios. For instance, one child claimed that they were taken in an airplane and flown to a secret location with clowns and sex, then flown back to the class in time for their 2pm pickup. There were stories about ritual animal sacrifice in their daycare room, and stories about children being murdered even though none were reported missing.
> the "we must protect our kids from XYZ" justification crosses political lines.
> There was no whiff then of it being a conservative cause -- it mixed the usual conspiracy theory dynamics along with the Christian moral panic dynamics.
Yes, and? The point is to repay lies and ad hominem with lies and ad hominem.
If opponents of strong encryption want a good-faith argument, they are free to admit that the actual reason strong encryption is "bad" is because it stops them from attacking and spying on everyone in the world, but I doubt they'll take that option.
How hard would it have been for a "rogue state" to get a copy of that floppy? I understand that times were different, you couldn't just PGP encrypt it and attach a 1.44 MB blob to an email, sending it at 24 kbps. You couldn't just upload it to an anonymous filesharing site.
But today it seems fundamentally obvious that once a single copy is leaked, it's all over... was that not true in 2000?
Gnutella, along with popular clients like LimeWire, was released around the same time as Windows 2000. People were doing decentralized filesharing of files larger than 1.44 MB just fine in 2000.
Filesharing at that time was just wild, by the way. It was far too easy to set up your client such that you were sharing the entire contents of your computer with the whole internet. More often than not, this was done by the kids in the family on the same machine where mom and dad had their work stuff plus their private finances.
So of course the files were leaked. If you were intending to share something that was illegal to distribute outside the US, you could easily get plausible deniability just by sharing everything on your computer and feigning ignorance.
Back in those days you didn't even need to be on LimeWire or eMule to look at the contents of someone's home PC. I remember around the late 90s/early 2000s, when I got DSL. This was before consumer-grade routers became a common thing in the household, so most people had their PC connected directly to their DSL modem. Browsing through Windows shares on other people's home PCs was one of the easiest things to do.
With DC++ and a fiber network connecting major universities in Germany at 10 MB/s, it was more than fire. I remember downloading the entire ** trilogy in minutes.
It was all extremely silly. Debian took a different approach: before 2005, they put all crypto packages in a separate "non-US" archive, hosted in the Netherlands. American developers weren't allowed to upload there. That way, Debian never exported crypto code from the United States; it only ever imported it.
Yep, US export restrictions ended up spurring foreign investment in crypto like Thawte (founded by Mark Shuttleworth) and SSLeay (later forked as OpenSSL).
There was a story of a hundred programmers taking the program PRINTED ON PAPER to a conference in Sweden to type it in again, because somehow export of the binaries was forbidden but not of the printed version. Is it true? Which event was this?
> A book comprised entirely of thousands of lines of source code looks pretty dull. But then so does a nondescript fragment of concrete -- unless it happens to be a piece of the Berlin Wall, which many people display on their mantels as a symbol of freedom opening up for millions of people. Perhaps in the long run, this book will help open up the US borders to the free flow of information.
It was. People were sharing pirated software on BBSes 40 years ago! Downloading a floppy might take an hour. In the 90's, I knew kids who got jobs at ISPs just so they could run warez FTP sites off of the T1.
Fast forward to 2000: T1 lines were still being used, but ADSL deployment was growing like wildfire. Some providers were legendary for offering symmetric DSL with extremely few limitations, for a fraction of the cost of a T1. That's what really kicked off a new generation of distributed file sharing. Nothing today compares; the dark web is a tiny blip in the ocean that was the 2000s file sharing scene.
I'm skeptical. In terms of actual usage, there are many, many more people online today. Many people are sharing torrents over their broadband connections: entire movies, all seasons of a show. We won't even get into the piracy of books! I think the piracy scene today is much bigger.
There is more bandwidth used today, but there was so much more shared material before. The most heinous illegal shit you could imagine would just float over the network through your node. Your dad's QuickBooks files were downloaded by strangers, along with your family photos. Corporate records, lists of social security numbers kept on a hospital computer. Anything you could imagine. It's vastly different today; the content you can get is curated, only select things get put onto services, and takedown notices often get them dropped. Nothing was taken down back in the day, nobody was watching, and there was no filter. I saw things at 17 that no kid today could possibly get access to. The nature of file sharing today just isn't an open tap like it used to be. You have to try hard to publish or retrieve stuff today; back then it was almost accidental.
We were sharing lots of 3-7MB files peer-to-peer at the time :D Napster, Limewire, Audiogalaxy, etc. Plenty of public FTP sites all over the place as well.
Even in the late 90s, 128kbps ISDN connections were not unheard of, and 256kbps DSL was rolling out as well.
Damn, Audiogalaxy! That takes me back! A simple Windows client for downloading (and, well, uploading); to search and download, you went to their website, logged in, and added stuff to your queue (although I barely remember what the website looked like). Sooner or later someone with the files you wanted would come online and your computer would begin downloading from theirs.
> Now, no one bats an eye if you ship the most secure crypto you want.
To me, there are only two plausible explanations for the change:
1. The three letter agencies gave up on backdooring cryptography.
2. The three letter agencies successfully subverted the entire chain of trust.
Only one of them is consistent with a workforce of highly motivated codebreaking professionals working for many decades with virtually unlimited resources and minimal oversight.
I think a 3rd option is actually much more likely and (semi) less conspiratorial:
3. NSA realized that "frontal assaults" against encryption were a lot less fruitful than simply finding ways to access info once it has been decrypted.
Would have to search for the quote, but Snowden himself said exactly that, something along the lines of "Encryption works, and the NSA doesn't have some obscure 'Too Many Secrets' encryption breaking machine. But endpoint security is so bad that the NSA has lots of tools that can read messages when you do." And indeed, that's exactly what we saw in things like the Snowden revelations, Pegasus, and I'd argue even things like side-channel attacks.
Plus, I don't even know what "The three letter agencies successfully subverted the entire chain of trust" means. In the case of something like TLS root certificates, that makes sense, but there are many, many forms of cryptography (like cryptocurrency) where no keys are any more privileged than any others - there is no "chain of trust" to speak of in the first place.
I've long (post-Snowden?) estimated the NSA's capabilities are roughly what you imply. Lots of implementation-specific attacks, plus a collection of stolen/coerced/reversed TLS certs so they can MITM a great deal of web traffic. US-based cloud represents another big backdoor for them to everyone's data there, I think.
They've presumably got a pretty vested interest in making sure most communications are legitimately secure against most common attacks - arguably good for national security overall, but doubly good for making sure that if anyone can find a novel way in, it's them, and not any of their adversarial peers.
There's a reason many corporate information security programs don't go overboard with mitigations for targeted, persistent, nation-state level attacks. Security is a set of compromises, and we've seen time and time again in industry that this sort of agency doesn't need to break your encryption to get what they need.
When the NSA, for example, has access to the Intel ME or AMD's version of it (and I think they do), then they surely don't need to break any encryption. They don't even need to hack. They would just have direct access to most desktops/servers.
Even this is too conspiratorial for me. Not because I believe the NSA wouldn't like access, but because it's not the best approach. Convincing Intel or AMD to have a hidden back door, and to somehow keep it hidden, is a nearly impossible task. Compare that with just hunting for 0-days like the rest of the world, which the NSA has shown to be quite good at.
Not saying there couldn't be a targeted supply chain attack (that's essentially what was revealed in some of the Snowden leaks, e.g. targeting networking cables leased by big tech companies), but I don't believe there is some widely dispersed secret backdoor, even if just for the reason that it's too hard to keep secret.
At a minimum, certain security-conscious customers (cough, DoD) were able to get Intel to include a hidden (not typically user-accessible) BIOS flag for disabling most features of the Management Engine. So they're at least concerned about it as a security risk. That doesn't necessarily mean they also have backdoors into it, but it's not crazy to think they might. It's hard to be too conspiratorially minded with respect to intelligence stuff, as long as you aren't making the mistake of treating suppositions as facts.
> Convincing Intel or AMD to have a hidden back door, and to somehow keep it hidden, is a nearly impossible task
Interesting, how would an x86 instruction with a hardcoded 256-bit key be detected? IIRC it's really hard to audit the instruction space of a CISC architecture.
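For a sense of scale, here's a back-of-envelope sketch in Python (the probe rate is a made-up assumption) of why naively enumerating the encoding space is hopeless. Real auditing tools prune the space aggressively with instruction-length tricks, but they still can't prove the absence of a deliberately hidden instruction:

    # Naive x86 encoding space vs. an optimistic probe rate (illustrative only).
    MAX_INSN_LEN = 15                  # architectural limit on x86 instruction length
    encodings = 256 ** MAX_INSN_LEN    # every possible 15-byte sequence
    probes_per_sec = 1e9               # assumed probe rate, generously
    years = encodings / probes_per_sec / (60 * 60 * 24 * 365)
    print(f"{encodings:.2e} encodings, ~{years:.1e} years at 1e9 probes/sec")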
Well sure, they would not use it for everyday standard cases, to limit exposure. Intel does have something to lose if this became public knowledge.
But I cannot believe they resisted the temptation to use that opportunity to get such an easy access to so many devices.
Huh? As far as I know, every Intel ME has access to the internet, can receive pushed firmware updates, and has write access to everything else on the system. It does not need a modified version; they can just use the official path, the normal Intel ME on target devices, if they can cloak their access as coming from the official server -- which I think could be achieved by using the official server's key and having another server pose as it.
But it has been a while that I read about it and I never took it apart myself, so maybe what I wrote is not possible for technical reasons.
I don't think that's the case. Don't you need a specific NIC, integrated properly, to get the Intel ME network features? Typically branded as "Intel vPro".
Otherwise, you need something in your OS to ship data back and forth between the ME and whatever NIC you have.
vPro, also known as AMT, is proprietary and it's for professional desktop and laptop systems. ME instead is based on IPMI and is for server-class systems.
Are they reusing the name to be more confusing? Intel ME calls to mind the management engine that's been embedded in most Intel based computers for the last 15 or so years.
The trouble is, as far as I know, that the ME cannot be deactivated. Even if you are running a really sensitive network.
Your option is to find some of the few Intel chips without it, or find another chip vendor.
This often means you can't use common off-the-shelf systems, so now you can be a victim of a targeted supply chain attack.
Attacking machines directly over the network is dangerous for them from the standpoint of detection, though. You can bet that any ME/PSP remote access exploits are used very carefully due to potential detection.
You're right though, I guess I didn't mean to say that the NSA would give up on, or would not want, back doors into widely deployed crypto algorithms. But even with Dual_EC_DRBG, the suspicions were widely known and discussed before it was a NIST standard (i.e. I guess you could say it was a conspiracy, but it wasn't really a secret conspiracy), and the standard was withdrawn in 2014.
IMHO, the IC gave up on the feasibility of maintaining hegemony over encryption, particularly in the face of non-corporate open source. You can't sue a book / t-shirt / anonymous contributors.
Consequently, they still have highly motivated and talented cryptanalysts and vast resources, but they're up against widely deployed, academically sound crypto systems.
Hypothetical encryption-breaking machines (e.g. large quantum computers) are too obviously a double-edged sword: who else has one? And given that possibility, wouldn't you switch to algorithms more secure against them?
In reality, the NSA's preference would likely be that no such machine exists, but rather that there are brute-force attacks that require incredibly large and expensive amounts of computational resources. Because if it's just a money problem, the US can feel more confident that they're near the top of the pile.
Which probably means that their most efficient target has shifted from mathematical forced decryption to implementation attacks. Even the strongest safe has a weakest point, which may still be strong but is the best option if you need to get in.
I don't know much about hardware, but is it not possible that there's a small part of a chip, somewhere deep in the highly complex systems we have, that simply intercepts data prior to encryption and, if some condition is met (say, a remote connection sets a flag via hardware-set keys), encrypts and sends the data elsewhere? Something like that, anyway. It seems possible, but I don't know how plausible it is, or whether something like the Linux kernel would even report on it if the hardware isn't well understood.
Anyway, just suggesting something that wouldn't require a quantum computer.
As pointed out by another comment above, exfiltration then becomes the risky step.
If that did exist, you'd still have to get packets out through an unknown network, running unknown detection tools. Possible, but dicey over the intermediate term.
Who's to say they didn't just plug a box in, run a fake workload on it, and put all network traffic it emits under a microscope?
Seems like you could just blast it out on one of the endless Microsoft telemetry or update channels that are chatting away all day, and either intercept it outside the network or with Microsoft's help. The only way to protect against that would be blocking all internet access.
I don't buy that it has to be just one or the other. Fundamentally, crypto is just very dense information, and once it became widely enough standardized by people who could easily share and apply it commercially, getting even the strongest crypto to the most basic user became extremely easy.
Short of blocking the very essence of digital data spread and transactions, the three-letter agencies and the giant governments behind them realized that there was no way to effectively put that particular genie back in the bottle without fucking over too many other extremely well-connected commercial interests.
Thus, while they didn't entirely give up on their bullshit, and keep looking for arguments for privacy subversion, they realized that roundabout methods were the more practical course.
That's where we stand today: a world in which there's no obvious way to block something that's so cheap to share and so easy for so many people to apply securely, but governed by technocrats who meanwhile do what they can to subvert it.
The fundamental math of crypto is secure, regardless of any conspiracy theories. AES-256, for example, can't just be broken by some secret Area 51 alien decoder ring. The mathematics of good modern crypto simply crush any human computing technology for breaking them regardless of budget. However, the agencies also know that in a complex world of half-assed civilian security and public habits, they still have enough methods to work with without delving into political firestorms.
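To put rough numbers on "crush any human computing technology", a quick sketch; the guess rate is a deliberately absurd assumption in the attacker's favor:

    # Brute-forcing the AES-256 key space at an absurdly optimistic rate.
    keys = 2 ** 256                        # AES-256 key space
    guesses_per_sec = 1e18                 # assume a planet-scale exascale guesser
    seconds_per_year = 60 * 60 * 24 * 365
    years = keys / guesses_per_sec / seconds_per_year
    print(f"~{years:.1e} years to enumerate the key space")  # ~3.7e+51 years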
Note that ACME (Let's Encrypt) means that anyone who can reliably man-in-the-middle a server can intercept SSL traffic (modulo certificate revocation lists and pinning, but those are mostly done by big sites with extremely broad attack surfaces).
Similarly, most consumer devices have a few zero-days each year, if not more, so if you really want to decrypt someone's stuff, you just need to wait a few months.
I think that both your explanations are probably incorrect though. It's a bit of "neither" in this case.
They continue to backdoor all sorts of stuff (they recently were marketing and selling backdoored "secure" cell phones to crooks), and most chains of trust are weak enough in practice.
> Note that ACME (Let's Encrypt) means that anyone who can reliably man-in-the-middle a server can intercept SSL traffic (modulo certificate revocation lists and pinning, but those are mostly done by big sites with extremely broad attack surfaces).
I don't understand why you think ACME means this. Can you explain?
Not the original poster, but if you can control responses to and from a server (MITM) you can get a TLS/SSL certificate issued for it easily. In the old days, getting a cert was quite a hassle! You used to have to fill out paperwork and perhaps even talk to a human. It could literally take weeks.
I don’t think a MITM would be sufficient to fool ACME. As Let’s Encrypt’s guide explains[1], an attacker in the middle would still fail to possess the target’s private key. As a result, the proof of possession check would fail.
The attacker could sign with their own key instead, but this is trivially observable to the target (they don’t end up with a correct cert, and it all gets logged in CT anyways.)
If you have a full MITM (you can do anything you like with all traffic to/from the target), you just do your own ACME validation with the target's domain without involving the target at all. Then you use that to MITM the SSL on any connections to the target (terminate SSL between the other side and your middlebox, then push the plaintext into your target, which is unaware anything has changed at all).
If the owner watches CT logs they will know about it (and you may need to jump through some more hoops once the target tries to renew their cert), but you get a lot of info in the meantime.
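To make the mechanics concrete, here's a minimal sketch of the HTTP-01 side of this in Python. It assumes the attacker already controls routing for the target's hostname; the token and key-authorization strings are hypothetical placeholders for what the CA and the attacker's own ACME account would produce:

    # Minimal HTTP-01 responder: whoever serves this path for the target's
    # hostname passes the CA's check -- no secret of the target is involved.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    TOKEN = "hypothetical-token"                          # supplied by the CA
    KEY_AUTHZ = "hypothetical-token.acct-key-thumbprint"  # token + "." + attacker's account key thumbprint

    class ChallengeHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path == f"/.well-known/acme-challenge/{TOKEN}":
                body = KEY_AUTHZ.encode()
                self.send_response(200)
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)
            else:
                self.send_error(404)

    # CAs connect on port 80 for HTTP-01, so this needs privileges (or forwarding).
    HTTPServer(("", 80), ChallengeHandler).serve_forever()

The point is that validation binds to network position, not to any secret the legitimate operator holds.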
Sure, but this has nothing to do with ACME itself. The attack model here is "if the attacker is effectively in control of the domain, then they can demonstrate that control." That's a way stronger posture than being able to maliciously MITM a specific ACME session, which (I think) is what the original concern was.
However, even with the full MITM here, this attack assumes that the attacker can proxy plaintext to the host. I'm not aware of many sites that allow sensitive actions (e.g. logging in) over HTTP anymore.
(And, as you note, this is detectable via CT. But it's fair to point out that many/most smaller operators probably aren't bothering to monitor public CT logs for unexpected issuances.)
> The attacker could just proxy the plaintext by issuing HTTPS requests to the backend server instead of issuing HTTP requests.
Yes, assuming the target has HTTPS. The context I originally assumed was one where the target doesn’t yet have a certificate and is using ACME to obtain one.
Separate from that, I agree that an attacker with the ability to demonstrate domain control can subvert issuance in a way that only CT, stapling, and other “post hoc” methods can detect.
Would the target get notified by LetsEncrypt about this scenario though? Let's say I set up Certbot on my server. I'm not watching CT logs. How would I know about the double issuance?
I don’t think it would be a double issuance; it’d be either a failed issuance or a single issuance with a single unexpected key. In other words: the target would end up in an error state, and they could use CT to determine what happened.
What if they're getting a new certificate and proxying the traffic? As long as the cert looks okay to the end user, they're not going to notice for a while.
Perhaps I’m misunderstanding what you’re saying, but this still doesn’t break the scheme: an attacker who interposes on ACME with their own private key is going to result in a CSR response for the wrong private key being sent back to the target server, which should cause an alertable failure. Even if this is somehow not checked (this would be a serious vulnerability in an ACME client!), the targeted server would end up serving a certificate that it can’t actually use (because it doesn’t have the private half).
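For illustration, the check in question is cheap. A sketch using the third-party cryptography package (the file names are assumed):

    # Verify that an issued certificate actually matches our private key --
    # the "alertable failure" a sane ACME client should surface.
    from cryptography import x509
    from cryptography.hazmat.primitives import serialization

    cert = x509.load_pem_x509_certificate(open("cert.pem", "rb").read())
    key = serialization.load_pem_private_key(open("key.pem", "rb").read(), password=None)

    def spki(pub):
        # DER-encoded SubjectPublicKeyInfo: a canonical form for comparing public keys
        return pub.public_bytes(serialization.Encoding.DER,
                                serialization.PublicFormat.SubjectPublicKeyInfo)

    assert spki(cert.public_key()) == spki(key.public_key()), "cert does not match our key"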
I think you're misunderstanding. Say as an attacker, I am able to get control of the DNS zone for a target. We will assume the site is not using an ACME issued cert, but some other provider.
I am now able to get a new certificate issued with ACME using the DNS-01 challenge. I set up one of my own servers as a proxy, HTTPS terminated with this new cert. I then have it proxy to the existing site (by IP address.) I then change the site's DNS to point to my own server. The users are no wiser, but I am able to intercept all traffic.
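For concreteness: DNS-01 only requires publishing a TXT record derived from the CA's token and the requester's own ACME account key (per RFC 8555), which is exactly why zone control is sufficient. A sketch with hypothetical inputs:

    # DNS-01: publish this value at _acme-challenge.<domain> as a TXT record.
    # Note the derivation involves only the requester's account key, never the target's.
    import base64, hashlib

    def dns01_txt_value(token: str, account_key_thumbprint: str) -> str:
        # key authorization = token || "." || JWK thumbprint of the ACME account key
        key_authz = f"{token}.{account_key_thumbprint}".encode()
        digest = hashlib.sha256(key_authz).digest()
        return base64.urlsafe_b64encode(digest).rstrip(b"=").decode()

    print(dns01_txt_value("hypothetical-token", "hypothetical-account-thumbprint"))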
Okay, I think I understand what you're saying now: this is similar to the attack described by 'rcxdude here[1].
I interpreted the original comment that started this thread to imply an attack on ACME itself, not the fact that ACME can't detect the difference between someone who legitimately controls a domain and someone who illegitimately controls a domain. As far as I know, that's considered a more general defect in the Web PKI, one that predates ACME substantially.
The HTTP-based challenge is similar in scope to the DNS one: an attacker would still need the target’s private key to actually impersonate the ACME session itself.
Put another way: this is still not an issue with ACME itself, but the fact that the Web PKI is built on top of unauthenticated substrates (primarily DNS). If someone (like your cloud provider) can demonstrate control of your domain, then it is ipso facto their domain as well. ACME can’t solve that any more than the previous generation of DV techniques could.
I understand. I just used the DNS challenge as an example. I generally use the DNS challenge since I can assign certs on my private network more easily (the zone is public.)
They aren't backdooring modern open-source encryption. They may have some elite knowledge about some esoteric corner of the code that allows them to theoretically throw a data center at the problem for a month or two, but the days of easy backdoors to decrypting everything in real time are gone imho. It is just too easy to implement mathematically-strong encryption these days. Too many people know how to do it from scratch. The NSA's real job is keeping American systems safe. That is done through creating the best encryption possible. They are very good at that job.
I see another plausible explanation: The NSA is concerned with maintaining security of its own / the government's infrastructure / is interested in finding breaches in infrastructures of others.
(this is speculation, I have no actual knowledge on this)
Only one is consistent with the documents that have been leaked since the change to export restrictions. The other is what the marketing department at Reynolds Wrap would like you to believe.
"Now, no one bats an eye if you ship the most secure crypto you want."
The most surprising thing to me is that, in speaking in the past several years with younger entrepreneurs, they're not even aware of the obligation to file for an export license for any/all software containing crypto (such as that submitted to the App Store).
I've not yet seen a case in which a mass market exemption isn't quickly granted, but devs still need to file - and re-file annually.
Not just US but other countries had their own restrictions. For example I think France didn't allow anything better than 40-bit encryption without key escrow.
For anybody who hasn't already read it, I highly recommend the book "Crypto" by Steven Levy. I was 30% of the way through before I started recognizing real-world events, news stories, whispered computer secrets, and realized that it wasn't fiction; it was recounting real history.
Fabulous book, I found it in a public library when I was 15 or so and it was a hell of an education. Not least because I was already reading about tor and i2p. I'd recommend it to anyone - the story about Phil Zimmerman printing the code to PGP in a book made me laugh my head off.
Yeah, I worked at a company up to a few years ago where it was actually a huge competitive advantage for us not being in the US, because the products we designed, manufactured and sold (full satcom terminals as well as the microwave converters in them) would have been ITAR if they came from the US (being ‘dual use’).
I had never heard of this particular aspect of demanufacturing, that's fascinating. Do you know of any products where this was a deciding factor, or at least a major consideration? (I recognize you probably can't easily cite internal corporate documents)
Also, you couldn’t just ship products with a spot where the crypto went and simply remove the crypto. API designs had to go through mental gymnastics to allow crypto without explicitly adding crypto. Which is why you have odd constructs that take strings as arguments and give you encryption back. Sometimes.
And since new languages copy patterns from old to remain familiar, these APIs are still frequently some of the most patience-testing.
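A hedged sketch of the pattern being described, in Python with invented names: the shipped product contains only a string-keyed lookup, and whatever "provider" is installed locally supplies the actual cipher code.

    # A crypto-shaped hole: the API never names an algorithm, so the product can
    # ship with no cipher code at all and have strong crypto plugged in later.
    _PROVIDERS = {}   # populated at runtime by whatever provider is installed

    def register(name, factory):
        _PROVIDERS[name] = factory

    def get_cipher(name):
        try:
            return _PROVIDERS[name]()
        except KeyError:
            raise ValueError(f"no provider registered for {name!r}")

Java's JCE, with its string-keyed Cipher.getInstance("AES") lookups and pluggable providers, is arguably the best-known survivor of this design pressure.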
It's not completely gone. If you implement crypto in an iOS app you have to get an "export license" even if you're not based in the US or publish your app there.
and I believe it was a major contributor to us having poor infrastructure for PKI protocols today, since these restrictions meant that it was pointless to try to bake them into standards
https://en.m.wikipedia.org/wiki/Crypto_Wars
It was a crazy, schizophrenic time.