All well and good, but sadly this relies on the hosts managing DNS to include specific entries in their DNS configuration for keys to use during the encryption process. Unfortunately the vast majority of hosts probably won’t be bothered to do this, similar to DNSSEC.
And HTTPS relies on hosts managing SSL certificates. Web services don't bother with them until adoption hits a critical mass, and then it looks weird and broken when you aren't using one.
This just needs some time to settle in.
I remember when absolutely no one used https and then in a matter of a couple years things got really fast. Now you can easily browse with https required and only occasionally find the odd website that doesn’t use it (mostly some internet relic). That was such a great transition when it happened though.
It felt like it happened practically overnight when Let’s Encrypt released.
Let’s Encrypt was a godsend. Getting a TLS certificate before sucked.
Yes. Thank these folks:
Mozilla employees Josh Aas and Eric Rescorla, together with Peter Eckersley at the Electronic Frontier Foundation and J. Alex Halderman at the University of Michigan. Internet Security Research Group, the company behind Let’s Encrypt, was incorporated in May 2013.
They created the ACME standard, the open source community got on board, and soon enough everyone bought in. A massive step forward for Internet security, and a testament to the benefits of open source.
So Firefox is basically the GOAT when it comes to internet security and privacy? They should team up with the signal guys.
Google preferring https sites was the motivator I saw for client demands.
SEO scores feed into the PPC cost in AdWords, so all of a sudden people were crying out for their sites to "have the padlock icon". What's 20 bucks for a cert when you're spending thousands of dollars a month?
And now it’s free with stuff like Let’s Encrypt.
Even with tools like Let's Encrypt, people are still not implementing HTTPS?
HTTPS is pretty much ubiquitous these days. It's mostly an issue on a few smaller websites and blogs whose owners haven't cared enough to bother getting a cert… But even that is rapidly going away. Even when a website has HTTPS, it's not entirely uncommon for some resources to be loaded over regular HTTP, and sometimes websites don't properly redirect you to the HTTPS version, making it possible to end up on the unencrypted version by accident.
HTTPS is great, and Let's Encrypt has been such a godsend for it… That said, it's not perfect, it has some limitations of its own, and not every website implements all of the mitigations that help HTTPS do its job, so HTTPS adoption is a bit of a mixed bag.
A big issue is that when you try to secure a previously insecure protocol, you often make downgrade attacks possible. For instance, if you just type "lemmy.world" into your web browser and somebody is able to intercept those packets, they could just reply "hey, I'm lemmy.world, I don't do HTTPS, let's talk unencrypted", and your browser would have no idea that it should be talking HTTPS instead of HTTP. One way to avoid this problem is to explicitly tell your browser to use HTTPS by going to "https://lemmy.world"; in that case the man-in-the-middle can't tell you to use HTTP instead, and won't be able to provide a valid certificate for lemmy.world (hopefully, anyway :P).
This is also what HSTS is for… It's a header the webserver sends to your browser saying "only talk to me with HTTPS", so once you've visited a site your browser will remember to only use HTTPS with it in the future. This only applies to websites you've visited before, though… To improve the protections a little there are HSTS preload lists (basically your browser can have a list of HTTPS websites baked into it, so it knows to use only HTTPS before you even visit), https://hstspreload.org/… Or we could just solve this problem with DNSSEC and DANE, which let you look up the TLS certificates that should be used for a domain in DNS.
That’s probably more of a rant than you wanted 😅… But basically, HTTPS adoption is really good these days in the sense that most websites will have a TLS certificate available (probably from Let’s Encrypt!), and will speak HTTPS. But, there’s still areas where we can improve internet security. I’m not sure how the adoption of HSTS is going, but I think it’s pretty low. DNSSEC adoption is abysmal and we should probably fix that.
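To make the HSTS part above concrete, here's a rough sketch (in Python, and deliberately simplified, not a browser-accurate parser) of how a browser could interpret a Strict-Transport-Security header once it receives one over HTTPS:

```python
# Simplified sketch of parsing an HSTS policy header such as
# "max-age=63072000; includeSubDomains; preload".

def parse_hsts(header: str) -> dict:
    """Parse a Strict-Transport-Security header into a policy dict."""
    policy = {"max_age": None, "include_subdomains": False, "preload": False}
    for directive in header.split(";"):
        directive = directive.strip().lower()
        if directive.startswith("max-age="):
            # How long (in seconds) the browser should remember HTTPS-only.
            policy["max_age"] = int(directive.split("=", 1)[1])
        elif directive == "includesubdomains":
            policy["include_subdomains"] = True
        elif directive == "preload":
            policy["preload"] = True
    return policy

print(parse_hsts("max-age=63072000; includeSubDomains; preload"))
```

A max-age of 63072000 seconds is two years; the `preload` directive is what signals consent for inclusion in the baked-in browser lists at hstspreload.org.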
HTTPS is pretty much ubiquitous these days.
It never used to be, though. The same will happen with ECH/ESNI eventually, especially if browsers push for it like they did with TLS.
Yeah, before Let's Encrypt it was a complete disaster. Support for ECH will definitely get better soon.
Cloudflare helped quite a bit too, although I wouldn't call that "true" TLS as part of the connection was unencrypted. In the old Cloudflare days, before Let's Encrypt existed and before Cloudflare had their self-signed origin certs, often the connection between the end user and Cloudflare was encrypted, but the connection from Cloudflare to the origin server wasn't. People were celebrating Cloudflare as a way to easily add TLS to a site, but in the background it was still plain text!
Even if a website has HTTPS, it’s not entirely uncommon for some resources to be loaded over regular HTTP
I think all browsers will refuse to load a resource over HTTP if the website is served over HTTPS.
This is not true. Browsers will happily use http even if https is available, and without other mitigations like HSTS or DANE there is no way for your browser to even know that a site supports https. Many websites will forcibly redirect you to https, but this is the server telling you “hey connect with https instead”. A man-in-the-middle can simply not tell you to use https. Browsers have started marking http sites as insecure and will warn you about sending passwords, however.
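The "server telling you to use HTTPS instead" step is just a redirect response. Here's a minimal sketch (function name and values are illustrative, not from any real server config) of what a server sends back for a plain-HTTP request:

```python
# Sketch of the server-side "hey, connect with HTTPS instead" step:
# answer every plain-HTTP request with a permanent redirect to the
# HTTPS origin, plus an HSTS header so the browser remembers.

def https_redirect(host: str, path: str) -> tuple[int, dict]:
    """Build the status code and headers for an HTTP -> HTTPS redirect."""
    return 301, {
        "Location": f"https://{host}{path}",
        # Note: browsers ignore HSTS received over plain HTTP; the header
        # only takes effect once delivered over the HTTPS connection itself.
        "Strict-Transport-Security": "max-age=63072000; includeSubDomains",
    }

status, headers = https_redirect("lemmy.world", "/communities")
print(status, headers["Location"])
```

This also illustrates the attack described above: the redirect travels over the unencrypted connection, so a man-in-the-middle can simply swallow it and keep talking plain HTTP, which is exactly the gap HSTS and preload lists try to close.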
I think I phrased it wrong, or there is a confusion with terms.
If a page is loaded with HTTPS, then images/CSS/JS/iFrames (resources) will not load over HTTP. The resources also have to be served via HTTPS.
If a page is loaded over HTTP, then resources (images/CSS/JS/iFrames) can be loaded over HTTPS.
My objection was to the "even if a server has HTTPS, some resources will still load over HTTP" part.
As far as I know, this is not strictly true either. I believe most browsers currently block mixed active content like JavaScript or iframes, but will happily load images and such over HTTP (although I would not be surprised if this is changing).
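The active/passive distinction can be sketched like this. This is a toy classifier, not how any browser is actually implemented, and the categories are simplified (modern browsers have also started auto-upgrading passive mixed content rather than just warning):

```python
# Toy model of the mixed-content distinction discussed above: "active"
# content (scripts, iframes, stylesheets) is blocked outright, while
# "passive" content (images, media) has historically been allowed with
# a warning. Categories here are simplified for illustration.

ACTIVE = {"script", "iframe", "stylesheet", "xhr"}
PASSIVE = {"image", "audio", "video"}

def mixed_content_action(page_scheme: str, resource_scheme: str,
                         resource_type: str) -> str:
    if page_scheme != "https" or resource_scheme == "https":
        return "load"   # no mixing at all
    if resource_type in ACTIVE:
        return "block"  # active mixed content: blocked
    return "warn"       # passive mixed content: loaded with a warning

print(mixed_content_action("https", "http", "script"))
print(mixed_content_action("https", "http", "image"))
```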
You’re right, but HTTPS implementation added real, tangible benefits that everyone could understand. I think ECH is a little more abstract for the average user, which is why I compared it to DNSSEC which has notoriously poor buy-in.
Obviously I hope ECH becomes a well-implemented standard. I’m just rather cynical that it’ll be the case.
Apparently, Cloudflare already supports ECH, and a not-insignificant number of websites use them.
Unfortunately, though, it's Cloudflare.
Can you give me more insight as to why you don’t like cloudflare? I’m barely informed about this.
They created ECH. When it's in use, which hosts you're visiting becomes data exclusive to them and the browser companies. You get marginal privacy through fewer companies being able to harvest your data.
It's marginal because that data is probably sold anyway.
That said, fewer competitors with the same data drives up its value when it does get sold, which benefits, you guessed it, the spec's author: Cloudflare.
I encourage everyone to read this
https://0xacab.org/dCF/deCloudflare/-/blob/master/readme/en.md
Wouldn’t it be better if reverse proxies simply had a “default key” meant to encrypt the SNI after an unencrypted “hello” is received?
Including DNS in this seems weird.
What would stop a MITM attacker from replacing the key? The server can’t sign the key if it doesn’t know which domain the client is trusting.
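That objection can be modeled with a toy simulation (no real crypto here, "encrypting to a key" is just tagging data with it, and all the names are made up for illustration):

```python
# Toy model of why an unauthenticated "default key" fails: if the ECH
# key arrives over the same unauthenticated channel as the hello itself,
# a man-in-the-middle can swap it and the client cannot tell.

def client_hello(server_key: str, hostname: str) -> tuple[str, str]:
    """Client 'encrypts' the SNI to whatever key it was handed."""
    return (server_key, hostname)

def mitm_swap(advertised_key: str, attacker_key: str) -> str:
    """Attacker replaces the advertised key in transit."""
    return attacker_key  # nothing authenticates the original key

seen_key = mitm_swap("proxy-default-key", "attacker-key")
key_used, sni = client_hello(seen_key, "secret.example")
# The attacker holds the key the SNI was 'encrypted' to, so it can read it.
print(key_used)
```

This is exactly why ECH distributes its keys via DNS HTTPS records instead: DNSSEC (or at minimum a trusted DoH resolver) provides an authentication path that the not-yet-established TLS channel can't bootstrap on its own.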
Cool. Nice work Mozilla.
When is this coming to ff mobile?
Usually with these kinds of engine features, the rollout is simultaneous on desktop and Android.
And it says it’s rolling out with version 118 here: https://support.mozilla.org/en-US/kb/understand-encrypted-client-hello
Does anyone know how to enable this for nginx?
It’s been a couple years since I was involved with ECH, but the implementations at the time were:
The one by the draft’s authors in golang (Cloudflare). This is the actual test server. It uses Cloudflare’s fork of golang with an enhanced crypto library. https://gist.github.com/cjpatton/da8814704b8daa48cb6c16eafdb8e402
BoringSSL, used for Chrome. There are nginx builds with BoringSSL, but I don't know if the settings are exposed.
https://boringssl.googlesource.com/boringssl/+/refs/heads/master/ssl/encrypted_client_hello.cc
WolfSSL which I never got around to playing with.
https://www.wolfssl.com/encrypted-client-hello-ech-now-supported-wolfssl/
NSS, which is Mozilla's TLS library. There is a test server buried in there somewhere for unit testing.
https://firefox-source-docs.mozilla.org/security/nss/index.html
With that, you ALSO need a DNS server that supports DNS over HTTPS (DoH) and HTTPS service binding records (https://datatracker.ietf.org/doc/draft-ietf-dnsop-svcb-https/).
Bind9 had branches for both, and I was able to merge the two to satisfy that requirement.
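For reference, an HTTPS service-binding record carrying an ECH config looks roughly like this in zone-file syntax (the base64 blob is a placeholder, not a real ECHConfigList):

```
; Hypothetical zone-file entry: an HTTPS (type 65) service-binding
; record advertising HTTP/2 + HTTP/3 and an ECH configuration.
; The ech= value below is a placeholder, not a usable ECHConfigList.
example.com. 300 IN HTTPS 1 . alpn="h2,h3" ech="AEX...placeholder...=="
```

The browser fetches this record (ideally over DoH) before connecting, which is where it gets the key to encrypt the ClientHello with.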
When connecting to such a server, you MUST NOT use a DNS resolver hosted by any organization along the path from client to server, as they can correlate the host from the DNS request with your encrypted client hello. You can actually man-in-the-middle ECH and decrypt the client hello by overriding the host's record when you control the DNS resolver. My project was testing this for parental controls.
Keep in mind, ECH really only benefits users connecting to a CDN. That is, when multiple services are behind the same IP. It masks which host the user is going to for any hop between the client and server.
Any data mining company worth their evils will have an IP to DNS index to figure out the host when only one is behind an IP.
This marginally gives some privacy to users. It hides the host from your ISP. It REALLY benefits browser companies and CDN hosts. What hosts a user is visiting now becomes exclusive data for those companies thereby driving up the value of the data. Assuming you aren’t being stupid with your addons.
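The "IP to DNS index" point can be sketched in a few lines. This is a toy with made-up records, but it shows why ECH only helps when many hosts share an IP:

```python
# Toy "IP -> hostnames" index of the kind data miners build. When an IP
# serves many hosts (a CDN), the destination stays ambiguous even though
# the IP is visible; when it serves exactly one host, the hostname falls
# straight out of the index and ECH hid nothing.

from collections import defaultdict

def build_reverse_index(dns_records: dict) -> dict:
    index = defaultdict(set)
    for hostname, ip in dns_records.items():
        index[ip].add(hostname)
    return index

records = {
    "blog.example":       "203.0.113.7",    # self-hosted: one host per IP
    "a.cdn-site.example": "198.51.100.1",   # CDN: many hosts per IP
    "b.cdn-site.example": "198.51.100.1",
}
index = build_reverse_index(records)
print(index["203.0.113.7"])   # the hostname is recoverable despite ECH
print(index["198.51.100.1"])  # here the SNI really was the only giveaway
```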
Users using DNS-based filtering may need to tweak their configuration in order to make use of ECH. Firefox needs to be configured with a DNS-over-HTTPS server in order to make use of ECH. Depending on whether the DNS filter is locally hosted or hosted by an online provider, instructions for connecting to it over DoH will differ and users of these services will need to check their accompanying documentation.
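For a locally hosted filter, the relevant Firefox knobs are the TRR (Trusted Recursive Resolver) preferences in about:config. The values below are illustrative, and the URI is a placeholder that assumes your filter actually exposes a DoH endpoint (Pi-hole doesn't natively; you'd typically put a DoH proxy in front of it):

```
// about:config sketch: point Firefox's built-in DoH client at your own
// filtering resolver instead of the default provider.
// network.trr.mode: 2 = DoH with fallback to system DNS, 3 = DoH only
network.trr.mode = 3
network.trr.uri  = https://pihole.local/dns-query   // placeholder endpoint
```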
Sooo, I’m a bit lost here. How do I ensure everything’s working when I’m using a pihole? I don’t think I’m understanding everything correctly
I think it requires you shut your pihole. Um. sorry. Ill let myself out.
It sounds like you’ll have to set your Pihole as the DNS server in Firefox’s settings, and then maybe from there it’ll work itself out? Or maybe the Pihole documentation will be updated in the next few days with some instructions on enabling this. I’m unsure myself to be honest.
That would make sense, wouldn’t it? I think I’m going to wait for the pihole team to inform about this.
So with this the ISP, or someone else sitting in the middle, would not even know the URL you’re accessing?
I don't think so; that'd be straight up impossible unless you're behind a VPN. Your ISP can see every connection made between you and any other server. A VPN encrypts the traffic between you and its servers, makes the requests on your behalf, and passes the results back to you, so your ISP only sees that you're using a VPN and can't see anything else.
As far as I understand it, ECH (alongside DoH, DNS over HTTPS) hides the domain name of your connections, but a direct IP address is always visible, and most of the time that's enough to determine the website, since ISPs can map IPs to hosts easily. However, the ISP won't be able to (easily) learn anything else about the connection, which remains encrypted between you and the server you're connecting to.
But still a very good feature nonetheless.
IPs of websites are fine to expose in this day and age, in my opinion and threat model.
With most sites hosted in the cloud behind rotating IPs, you get some obscurity there.
Agreed. Most of the servers are behind proxies anyway.
In my opinion, Firefox should give an option to forcefully enable ECH for users like me who have AdGuard Home/Pi-hole running on their home network. Currently, if DoH is disabled in Firefox's settings, ECH won't work, according to Firefox. 😦
How about you first standardize it?
It’s being worked on. https://datatracker.ietf.org/doc/html/draft-ietf-tls-esni
I know, but it’s a draft, so we have to disable it in our distro.
What is your distro?
My personal one’s called ZilchOS, but I was talking about RHEL in that sentence =D