He makes a 'technically correct' case for fully qualified domain names, but I'm not sure this is a real problem. Under what circumstances are things really improved by using them? If your DNS server is untrustworthy, this doesn't help. If your DNS server is trustworthy, do fully qualified domain names help you?
There's almost nothing on the web about 'Common Internet Scheme'. [0]
Also, it's a little ironic that we're reading a page on spoofing, from a site which doesn't support HTTPS.
> Under what circumstances are things really improved by using them?
Not necessarily "improved", but it could add to predictability. A lot of places probably use(d) "dev" as an internal sub-domain, and then ICANN went and approved Google's .dev TLD:
A similar thing happened where I work. Say our domain name was `example.com`, we had a fleet of hosts at `foo.build.example.com`, `bar.build.example.com`. The internal network handed out `example.com` as the DNS search string but web browsers always try the FQDN first. On the day the `.build` gTLD went live, people who use short names in their URLs (just about everyone) could no longer access these hosts and I was the one who got to figure out why.
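For anyone curious how that failure mode happens, here's a minimal sketch of glibc-style search-list expansion (simplified; real resolvers also honour `ndots` and other resolv.conf options, and the host and domain names below are the hypothetical ones from the anecdote):

```python
def candidate_names(name, search_list, ndots=1):
    """Return the order in which a stub resolver tries fully qualified names."""
    if name.endswith("."):  # explicit trailing dot: absolute name, no search list
        return [name]
    absolute = name + "."
    suffixed = [f"{name}.{suffix}." for suffix in search_list]
    if name.count(".") >= ndots:  # "enough" dots: try the name as absolute first
        return [absolute] + suffixed
    return suffixed + [absolute]

# 'foo.build' is tried as an absolute name *before* the search suffix is
# appended, so the day the .build gTLD went live, the public answer won.
print(candidate_names("foo.build", ["example.com"]))
# ['foo.build.', 'foo.build.example.com.']
```

Writing the full FQDN with the trailing dot (`foo.build.example.com.`) short-circuits all of this, which is the article's point.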
His issue wasn't that the domain they used became a TLD; his issue was that a subdomain they used became a TLD, and his clients' DNS resolvers were not configured to append the parent domain first (probably the search list wasn't populated).
If you are inside a large corporate network, that spans the globe and has many internal domain names, and a lot of DNS forwarding, it's a very real problem. Especially when trying to debug inconsistent name resolution.
1. Privacy matters. A medical website, or indeed Wikipedia, should prevent a snooping ISP from finding out you have been reading about an embarrassing condition. This is similar to the way librarians are extremely protective of their loan records [0]. Netflix use HTTPS for their streams, for the same reason (it does nothing to aid their DRM, it's purely about privacy) [1].
2. As someone here already mentioned, it prevents ads/trackers/malware being injected into the page by unscrupulous ISPs (this really has happened [2])
3. Modern browsers will (rightly) warn users not to trust the site. This makes the site look bad.
4. Some fancy browser features are disabled if you use unencrypted HTTP. Likely irrelevant for a static site though.
5. Let's turn the tables and ask why you wouldn't use HTTPS for a public-facing web server. There are just two reasons: firstly, reduced admin overhead not having to bother with certs, and secondly, it enables caching web proxies, which is only relevant if you're running a serious distribution platform like Steam, or a Linux package-management repo [3]
> Why can’t I use my own self-signed certificate, it’s the same thing.
It is not. I don't think you understand the role of CAs. Self-signed certs do not provide protection against connecting to an impersonator.
I’ll agree to disagree, because otherwise the conversation would be moot.
I just disagree that we should “HTTPS” everything and that’s partly because of the overhead. A medical site is different to a static html site like listed in the post.
ISP regardless of SSL know which sites I’m visiting. It’s there in the HTTP host header regardless of SSL or not. And transit security can easily be encrypted without SSL but people are too lazy to encrypt their content within the application.
“run acme encryptbot”
no thanks. When I want a pure server with nothing other than my application I don’t want to download a certificate bot, install (lang) and let it mess with my configuration.
I don’t dismiss that HTTPS is important but I feel platform is flawed. Owned by corporate greed. It’s security based upon pay us money or else model.
> I just disagree that we should “HTTPS” everything and that’s partly because of the overhead.
The overhead in any modern CPU is negligible. If you're worried about negligible overhead then why are you using an entire operating system to serve up web pages?
> ISP regardless of SSL know which sites I’m visiting. It’s there in the HTTP host header regardless of SSL or not.
The host header is encrypted over HTTPS. Furthermore, SSL isn't used in 2019 as it's insecure and was replaced with TLS. That might be pedantic but it seems to align with what this post is about so I'll mention it.
You're confusing the host header with the server name indication sent in the client hello. There is a huge difference between my ISP knowing I went to example.com (with TLS) or example.com/medical/how_to_deal_with_cancer.html (without HTTPS)
> And transit security can easily be encrypted without SSL but people are too lazy to encrypt their content within the application.
I'm not sure what you're suggesting here. The industry standard transport security for HTTP is TLS. Trying to re-invent the wheel is counterproductive and dangerous.
> “run acme encryptbot” no thanks. When I want a pure server with nothing other than my application I don’t want to download a certificate bot, install (lang) and let it mess with my configuration.
Then don't. The specification for the ACME protocol is open and available to you. You could automate the entire process and even choose a DNS-based challenge. Typical use of certbot, should you choose to use it, does not "mess with your configuration".
> I don’t dismiss that HTTPS is important but I feel platform is flawed. Owned by corporate greed. It’s security based upon pay us money or else model.
It literally isn't, because Let's Encrypt gives out certs for free. Additionally, so do AWS, Azure, and GCP. If you're paying for a certificate you're doing it wrong.
You have always been free to use self-signed certificates, but then the challenge of convincing your visitors that your certificate really is from you and not someone else who created a self-signed certificate becomes your problem to manage.
The computational overhead is negligible. This simply isn't a credible argument against HTTPS.
Netflix send petabytes of data over HTTPS. There's no excuse for anyone else.
> A medical site is different to a static html site like listed in the post.
Different in degree, but not in category. Your browsing habits on ordinary non-sensitive sites can still be used to profile you.
> ISP regardless of SSL know which sites I’m visiting
They can still look at the destination IP, yes, and they can probably look at your DNS requests, as secure DNS is currently only rarely used. It's still worth doing. Knowing that someone went on Wikipedia tells you almost nothing. Knowing the specific pages they went to tells you a great deal.
> It’s there in the HTTP host header regardless of SSL or not.
That's not the case. HTTPS encrypts all headers.
> transit security can easily be encrypted without SSL
The easy solution is HTTPS.
> people are too lazy to encrypt their content within the application.
A universal principle in cyber-security: rolling your own crypto scheme is generally a terrible idea. As I said, Steam and Apt are the exceptions; they use plain HTTP for delivery, and implement secure file verification using hashes. Even here, with competent people running a simple delivery scheme, there can be serious security issues [0].
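For illustration, the hash-verification idea those systems use boils down to something like this (a sketch; the payload and hash here are made up, and in real systems the expected hash comes from a signed index the attacker can't rewrite):

```python
import hashlib

def verify_download(data: bytes, expected_sha256: str) -> bool:
    """Accept a downloaded blob only if it matches the trusted hash."""
    return hashlib.sha256(data).hexdigest() == expected_sha256

payload = b"some package contents"          # hypothetical download
good = hashlib.sha256(payload).hexdigest()  # obtained via a trusted channel

print(verify_download(payload, good))                # True
print(verify_download(b"tampered contents", good))   # False
```

The scheme stands or falls on where `good` comes from: if it travels over the same unauthenticated channel as the payload, it verifies nothing.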
When it comes to web applications, you cannot implement your own secure delivery, for the obvious reason: an attacker can just replace your code. Sites like LastPass.com still have to use HTTPS to deliver the web app.
Even if it could be done, there would be no reason to. The browser offers you HTTPS, so you get a carefully designed, battle-tested protocol, and a carefully designed, battle-tested implementation. You are entirely shielded from all the complexities. You aren't going to do better in JavaScript.
> When I want a pure server with nothing other than my application I don’t want to download a certificate bot, install (lang) and let it mess with my configuration.
It's additional configuration work, yes, but I don't accept that it's much of an argument against HTTPS. You still have totally free choice over your tooling and languages.
> I don’t dismiss that HTTPS is important but I feel platform is flawed.
You've not presented a single good argument against the technical merits of HTTPS.
> Owned by corporate greed. It’s security based upon pay us money or else model.
> manufacturing cars is so cheap these days, even General Motors can do it...
Short answer: empirically, there's no question: HTTPS CPU overheads simply aren't a problem [0].
Long answer: that's disanalogous. Manufacturing works with economy of scale. Netflix have it worse than the rest of us.
They have to deliver tens of gigabytes per month per customer, using roughly the same hardware as the rest of us. According to that Ars Technica article, they did their usual Netflix-ey software optimisations with FreeBSD, but their hardware isn't customised for the job. Their scale doesn't much help them. If HTTPS overheads would put anyone out of business, it would be Netflix, but here we are: they voluntarily added HTTPS for their streams.
Again, as I think it bears stressing: these guys are transmitting about 15% of all data-flow across the Internet. Out of all the HTTPS traffic in the entire world (measured by gigabytes, admittedly discounting number of connections established), Netflix handles at least 15% of it. They do so voluntarily. Same goes for YouTube.
If you can think of a business model where HTTPS overheads would cause more trouble than they do for Netflix and YouTube, I'd like to hear it.
> That's not the case. HTTP uses SNI (for now)
Thanks, I wasn't aware of SNI. I don't think it gives much more away though, does it? Someone spying on my line can already see the IP address I'm connecting to. They already know I'm connecting to Wikipedia. They still can't see the specific page I'm requesting.
SNI only leaks information where multiple sites are hosted on the same public IP, right?
SSL/TLS is a problem even on 80586-class machines (souped-up 486s, not Intel Pentiums); there are a few YT videos showing how long it takes to bootstrap a modern SSH session on that sort of hardware. But honestly, how many people are browsing the net on stuff like that, much less Z80s?
As I stated I will agree to disagree. I have my own views based on my own experiences as you have yours. This ain’t new to me.
Security is always number one. I wasn’t trying to pitch “these are good arguments”. I am expressing my own, whether right or wrong, secure or insecure. No successful hacking attempts so far on my watch.
Everyone feels so safe using HTTPS but when it cracks we are going to be screwed. let’s not forget about HeartBleed and all the others.
What’s stops a corporation/CA from revoking a certificate because you threaten their business?
It’s moot.
I wouldn’t touch let’s encrypt with a barge poll. If I want a SSL i’d rather fork out in a paid cert as at least there is insurance behind.
When google can create their own CA authority how is that secure? Look at google certs, they’ve planted their own global root cert in your OS without you even approving it.
HTTPS is lazy, might be easy but poor mans security. But to end this debate some is better then none.
While HTTPS has problems, you have circumscribed a set of objections (ranging from HTTPS being compromised to "oh I won't touch Let's Encrypt") that, if we're being frank, seems to exist to give you something to object about, not to actually have a meaningful and good-faith conversation. One "agrees to disagree" on the fairly basic security concepts under discussion when they know they're just being contrary.
Not everyone uses OpenSSL, but within epsilon of everyone has used a nasty WAP that injects shit into pages. That is what, and only what, Let's Encrypt exists to solve--it ignores the ever-more-foolish conflation of encryption and identity and explicitly only provides the thinnest (domain-level) identity while providing good to excellent security against drive-by attackers.
Then why are we not fixing the problems that exist instead of plastering the hole within HTTPS? Or is that the best we’re ever going to have?
Corporations have ruined the internet yet we want to support them more. I refuse to gander anything created by the Linux Foundation. I dislike Linux. Would be my first point. Once upon a time I never held that view, but I do now.
I am happy to have a good-faith discussion but as I am currently on limited time. My objections on the whole matter may sound cynical but are still valid. We are ignorant towards security and a big hack is bound to happen sooner or later. All battle grounds have weaknesses.
I am on my mobile and don’t have the time right now; if I were at my workstation I would put my statements more concisely.
Respectfully, you were mistaken on several points of fact. It's quite clear that some of this stuff is new to you.
> No successful hacking attempts so far on my watch.
HTTPS doesn't protect a website against intruders, nor is it meant to. That's a whole different conversation.
> Everyone feels so safe using HTTPS but when it cracks we are going to be screwed
Is your issue with diversity of implementations, or with diversity of underlying algorithms?
The cyber-security community has collectively agreed that it's best to maintain relatively few cryptography codebases well, rather than maintain many of them. So yes, if there are flaws in libraries like OpenSSL, that's a big problem, but experience shows that this approach is far better than using half-baked crypto solutions.
The worst security of all is where the crypto is provided by a half-baked 'custom' solution thrown together by well-meaning amateurs, never subjected to critical review by serious cryptographers.
Ideally I'd like to see things go even further in the current direction: we should be using formally verified implementations of TLS. That would close the door on issues like Heartbleed. The only issues remaining would be side-channel attacks, and advances in cryptography that defeat the underlying cryptographic approach.
> What’s stops a corporation/CA from revoking a certificate because you threaten their business?
Has this ever happened? If it did, I'd report them to the CA/Browser Forum, and get a replacement cert from a different CA.
> I wouldn’t touch let’s encrypt with a barge poll. If I want a SSL i’d rather fork out in a paid cert as at least there is insurance behind.
I agree that you need to be able to trust your CA, especially as the web moves toward short-lived certs. If your CA has an issue, your whole site disappears from the web.
So far, I think all the major CAs have a pretty good reputation when it comes to reliability, but I'm not very experienced in that area.
> When google can create their own CA authority how is that secure? Look at google certs, they’ve planted their own global root cert in your OS without you even approving it.
I'm afraid I don't know what you're referring to. Does Chrome install root certs into Windows when installed? If so, I agree that Google shouldn't be doing that.
As for Google being a CA: I'm not too worried about that. When it comes to the imperfections of the CA system, there are worse issues than Google being on the list.
> HTTPS is lazy, might be easy but poor mans security.
What's the alternative? A web-of-trust approach would greatly complicate things, compared to the current approach using CAs.
I don't know what you mean by 'easy'. What do you have in mind? Again, whatever the solution, the website's own code cannot secure the channel.
Right now, HTTPS is the only game in town, and it works pretty well.
It’s all moot as I said. It is the only game in town and the game is whack-a-mole.
The mole will be whacked and we will be screwed. As is running out of IPv4 addresses.
I just decided not to stance on such issues like self-signed as it’s a pointless to go down that rabbit hole.
>Respectfully, you were mistaken on several points of fact. It's quite clear that some of this stuff is new to you
Sure, why not.
I’ve withheld my bragging, but thirteen years of depressing, soul-destroying System Operator/Administrator experience including Linux, Unix, Solaris and *BSD within high-security data centres, banks, pornography, animation, telecoms, who knows what else.
Two years is the max I can take because the field, software, hardware is so stale it hurts.
I still disagree you can encrypt and secure the channel within a app. Why would it be impossible?
I caught you mid-edit so forgive some stale quotes:
> It’s all moot as I said
It absolutely isn't. Security matters. HTTPS matters. This stuff is real.
When a website argues for why it shouldn't bother with HTTPS, it's generally because they don't know what they're talking about, and they end up putting users at risk. Personal favourite: [0]
> It’s the only game in town and the game is whack-a-mole.
HTTPS is doing pretty well. There's always some element of whack-a-mole with security matters like this. (Personally I favour different metaphorical mammals: cat and mouse.) Even OpenBSD doesn't have a perfect record. Perfection shouldn't be the enemy of progress.
We have online banking and online tax forms thanks to HTTPS, for instance. Without crypto, these simply wouldn't be possible.
When we read of lawyers using fax machines, that shouldn't reassure us that plaintext HTTP is perfectly fine, it should alarm us that such an important profession has such inadequate security practices.
> As is running out of IPv4 addresses.
I agree this isn't exactly a good situation, but it's not going to paralyse the Internet. Worst case: the price of an IP goes up. Best case: IPv6 at last.
> And yes, you can encrypt the channel within a app
You've ignored the point I made. Are we talking about a web app, or a conventional app?
If it's a conventional app: yes, you can layer your own crypto atop raw TCP, but it's a terrible idea. You should use HTTPS, or use TLS directly, or if you're in a Unix mood you could use SSH. There is no reason for doing otherwise. You cannot avoid the questions of key-management by handling the crypto yourself, you're not improving anything fundamentally; all you're doing is risking introducing easily-avoided security bugs.
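To make "use TLS directly" concrete: in Python, for instance, the standard library hands you the whole protocol, with certificate and hostname verification on by default (the `example.com` connection below is a hypothetical illustration, not run here):

```python
import ssl
import socket

# TLS over a plain TCP socket, no custom crypto required.
ctx = ssl.create_default_context()
print(ctx.check_hostname)                    # True: hostname is verified
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True: cert chain must validate

# Hypothetical use against a real server:
# with socket.create_connection(("example.com", 443)) as raw:
#     with ctx.wrap_socket(raw, server_hostname="example.com") as tls:
#         tls.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n")
```

All the key-exchange, validation, and record-layer details are handled for you; the defaults already refuse unverified peers.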
If it's a web app: you simply cannot, because you have to securely deliver the crypto software to the end-user, which you cannot do without HTTPS.
> I just decided not to debate those issues as such as self signed as it’s a pointless to go on about
Do please go ahead. The point of HN is discussion, after all.
> self signed certificates.
These do not protect you against a man-in-the-middle attack. They do not protect you from connecting to a host under the control of an attacker, posing as the real host. They only give you protection against passive data-capture. That, and a false sense of security.
I am aware of your link, but only because we don’t have a public CA for self-signed or a mechanism to cover. Self signed can provide false sense of security but it’s still a certificate which provides encrypted passage that if you were to hypothetically say they were being used genuinely.
They been left neglected. LetsEncrypt is essentially self-signed cert by submitter to a database owned by whoever. With a root certificate held up by Operating System creators.
If I was to create an OS and not include the LetsEncrypt CA then errors would be thrown.
As stated, I do not dismiss, nor am I arguing against it’s a insecure mechanism. It’s a requirement for the internet for now but putting all your eggs in one basket like LE, I’d rather let mine be cracked with insurance. However should be free regardless.
I am talking about a webApp where the application itself is the web server.
Within a webApp you can and If the simple principle of HTTP is “Give me index.html” to which the contents of the file are then submitted and spewed our by that browser
read file > encrypt with some cipher
decrypt > user
Sure the packet can be mangled but based off your browser checksum the data is still not accessible.
You can even then add your own packet checksum.
If packet not checksum resend or route alternatively.
Is a simple poll which uses text files instead of database. However without SSL and using naviserver (TCL) as my web server for now. Encrypted using Blowfish.
All HTML is encrypted from first visit to submission result. The template files are stored in Hex encrypted, then decrypted, them ecrypto’s again for the user and then decrypted.
All without javascript or libraries. None of that HTML can be modified in transit.
You can validate the text has been encrypted and decrypting of this by looking at HTML source and seeing a ico at the top of the page.
Although I will admit, this discussion has opened my eyes and I can now see where the MiTM attack will be, and HTTPs would be another coat of armour and that would which would be capturing which post you make.
However I now need to encrypt the users click packet of POST.
Connection Open > Html spewed > Connection closed
User clicks option > Post result encrypted > update results file >
I hate using my phone for in-depth discussions of my views which is why I’m obmitting answers + the UI of HN isn’t the greatest when you have a seven line box to type in.
Can we just end this? I’m not stupid, I understand, but my own beliefs disagrees with the concepts currently in place. And I am constantly being forced to do stuff which feels unnecessary, like having to use Gmail to actually get a valid email service rather than my own.
The internet I feel is on-life support to where it’s erupted with corporate greed who’s drilling it in to the ground. Like fracking.
Maybe I’m just stuck in the 80’s where the internet was less of a thing as it is now. If it was me HN would be a BBS.
> only because we don’t have a public CA for self-signed or a mechanism to cover
So, generate your own cert and then have a CA validate it? That's just a clumsier variation on what we have today, where the CA generates the cert for you with the validation 'baked in'. If you still need CAs, what's the advantage?
> Self signed can provide false sense of security but it’s still a certificate which provides encrypted passage that if you were to hypothetically say they were being used genuinely.
That's wrong. Using self-signed certs, the client doesn't have any idea whether it's connecting to the intended host. This is the entire point of CAs.
> LetsEncrypt is essentially self-signed cert by submitter to a database owned by whoever. With a root certificate held up by Operating System creators.
That's not the case.
1. You don't submit a cert to LetsEncrypt for approval. They give you the cert.
2. LetsEncrypt won't give you a cert unless you can prove you are truly the owner of the domain. We can quibble about whether their checks are adequate, but it's not correct to say it's the equivalent of a self-signed cert.
3. There is no root cert held by the OS creators. The OS maintains a list of trusted CAs, but the OS vendor doesn't do any signing.
> If I was to create an OS and not include the LetsEncrypt CA then errors would be thrown.
Correct.
> It’s a requirement for the internet for now but putting all your eggs in one basket like LE, I’d rather let mine be cracked with insurance.
Like I said, rolling your own crypto is a bad idea. This isn't a matter of opinion.
> Within a webApp you can and If the simple principle of HTTP is “Give me index.html” to which the contents of the file are then submitted and spewed our by that browser
> read file > encrypt with some cipher
> decrypt > user
> Sure the packet can be mangled but based off your browser checksum the data is still not accessible.
This is wrong. If data is sent over HTTP, it can be maliciously modified by an attacker. I suspect you're thinking of the use of checksums to detect errors in network transmission. That's not relevant here.
> You can even then add your own packet checksum.
You cannot do this in a way that has any real bearing on security. Where would you put it? In JavaScript, sent unprotected over plaintext HTTP?
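To spell out why such a checksum buys nothing over plaintext HTTP (the page contents here are hypothetical):

```python
import hashlib

# Whoever can rewrite the page in transit can rewrite the checksum to match.
page = b"<p>vote for option A</p>"
checksum = hashlib.sha256(page).hexdigest()  # served alongside the page

tampered = b"<p>vote for option B</p>"
forged = hashlib.sha256(tampered).hexdigest()  # attacker recomputes it

# The client's check passes on the tampered page, because the attacker
# controls both the page and the checksum it's compared against.
print(forged == hashlib.sha256(tampered).hexdigest())  # True
```

The checksum only detects accidental corruption; against an active attacker it needs to be authenticated, which is exactly what TLS provides.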
> Is a simple poll which uses text files instead of database. However without SSL and using naviserver (TCL) as my web server for now. Encrypted using Blowfish.
What's encrypted? The page is delivered over plaintext HTTP. That's why my browser is warning me.
> All HTML is encrypted from first visit to submission result.
The HTML is not encrypted, it's sent unprotected using HTTP. The page is completely unsecured.
> The template files are stored in Hex encrypted, then decrypted, them ecrypto’s again for the user and then decrypted.
So the data is stored in encrypted form, then decrypted prior to being sent over unprotected HTTP?
> HTTPs would be another coat of armour and that would which would be capturing which post you make.
No, HTTPS is the only armour you have. Everything else is just playing with sand.
There's a very good reason Google don't put crypto code in JavaScript, and it's not because they didn't think of it. It cannot be done.
> However I now need to encrypt the users click packet of POST.
> Connection Open > Html spewed > Connection closed User clicks option > Post result encrypted > update results file >
It's not clear what you're really saying here. I should emphasise yet again, that you do not have an alternative to HTTPS here.
> I hate using my phone for in-depth discussions of my views which is why I’m obmitting answers + the UI of HN isn’t the greatest when you have a seven line box to type in.
I'm sure you're right, I've never tried. My habit is to note down HackerNews comment ID numbers on my phone, then type up responses when I get back to my laptop.
> Can we just end this? I’m not stupid, I understand, but my own beliefs disagrees with the concepts currently in place.
I mean no insult here, but it's plain that you don't understand what HTTPS does, how it works, what CAs are for, why self-signed certificates are inadequate, and why it's hopeless to try to make plaintext HTTP equivalently secure.
You have not invented an HTTP-based alternative to HTTPS. There isn't one. There never will be one. It cannot be done.
> The internet I feel is on-life support to where it’s erupted with corporate greed who’s drilling it in to the ground. Like fracking.
Several people, including myself, have already told you about LetsEncrypt.
> That's wrong. Using self-signed certs, the client doesn't have any idea whether it's connecting to the intended host.
TOFU is a possibility. That's essentially what happens when you click through the self-signed cert warning, and opt to make that exception permanent. Not something that should be expected from a naïve end-user (which is why a scary warning is appropriate) but technically feasible nonetheless.
If you trust blindly on first use, you're wide open to an impersonation attack. That isn't a solution, it's giving up. That was my point.
This is why browsers are increasingly hostile to self-signed certs, which users should never see.
It's the same reason SSH clients prompt you to manually confirm the expected fingerprint when connecting to a new server, rather than accepting it on faith, which is what you're suggesting.
No one's saying that you should trust "blindly". The whole point of having a "hostile" warning when encountering a self-signed cert is protecting the user from impersonation attacks. But if you've established trust in other ways (such as by verifying the cert via an out-of-band channel, or because you set it up yourself) TOFU should be available.
> No one's saying that you should trust "blindly".
Gotcha. Quick digression: I'm not a fan of the term 'trust on first use', as it can very reasonably be interpreted to mean exactly that. I mean, that's what it says...
Wikipedia even gives it as one of its two almost antonymous definitions of the term, i.e. it can mean on first use, manually confirm whether to trust, or it can mean on first use, blindly trust.
> The whole point of having a "hostile" warning when encountering a self-signed cert is protecting the user from impersonation attacks.
There's a strong argument to be made that if you really want to protect the end-user from impersonation attacks, you shouldn't even give them the option of proceeding. Unfortunately, users are conditioned to click through warnings uncritically.
> if you've established trust in other ways (such as by verifying the cert via an out-of-band channel, or because you set it up yourself) TOFU should be available
In principle, sure, but what's the use-case here? We already have a secure, scalable, fool-proof solution to the problem: CAs. We insist upon CAs, because there's no good reason not to use them for all public-facing websites.
What's the use-case here? If you want to run your own personal network, and your own personal alternative DNS scheme, you're free to set up your own personal CA for the purpose. Corporate networks do this. I see no reason to chuck out the whole concept of CAs.
SSH is a different beast, as it's not intended for use by non-technical users. It's reasonable to insist on your sysadmin manually checking SSH fingerprints. Not so for non-technical users and HTTPS certs. Even in the case of SSH, CAs are a good solution if you're working at scale.
To my knowledge, the only thing approaching a serious alternative to CAs, is web-of-trust, but it's not really a practical alternative.
Excuse me. I would say I am now more aware of how HTTPS works; however, what it is, how it is, how it's a life support of the internet that's going to crash and burn, I did know.
If it's the only thing on the internet ensuring the security of HTTP, then the internet is dead.
For the last time, I do not want to use LetsEncrypt if I am to use SSL. As posted in my previous comment.
To be honest, I’m too bored to continue this discussion. I’ve said what I wanted to say long ago; take it as you wish. End of from me.
You could get one for under $10/y which is not what I’d call a fair bit of money.
I think the problem with HTTPS/SSL was that it tried to solve two problems at once (trust and encryption) without a practical way to separate them. You can argue that’s justified (what’s the point knowing the connection is encrypted if you don’t know who is on the other side), but those panicky browser alerts made self signed SSL certificates all but useless. That’s why we need letsencrypt now.
Even with let's encrypt it still seems odd to me that you need a 3rd party to confirm you own a domain name (and issue a certificate for it). The web trust model is broken.
Joke's on you. I once spent a whole morning failing to obtain a signed SSL certificate from one of the certificate authorities. The field description in a web form clearly stated to enter the FQDN, yet the form would not go through, giving only a very general error (it turned out later that the URL with a dot at the end was not validating; someone had copy-pasted a URL regex?). When I asked for help, the IT operations guys looked with disdain at the webdev unable to produce "a stupid certificate". The disadvantages of reading the instructions and field hints ¯\_(ツ)_/¯
I get the following error when I add a dot at the end of the address for a virtual host in Apache:
Misdirected Request
The client needs a new connection for this request as the requested host name does not match the Server Name Indication (SNI) in use for this connection.
Apache/2.4.25 (Debian) Server at example.com Port 443
When I click on the first link in iOS Safari I get sent to a search results page served by my (read: my parents’) ISP. That’s pretty disturbing.
The second link appears to work. It’s a page that says “It works!” but it’s not HTTPS so of course I have no way of knowing whether that’s the ISP playing tricks as well. ;)
Why is that disturbing? For me, the name doesn't resolve (http://pn/).
Assuming you are using your parents' ISP's default DNS servers, isn't it a safe, though less-than-desirable, result for the ISP to forward you to a search page when resolution fails?
Using a different DNS server not provided by the ISP would most likely solve the problem.
You can do so in either the router or your computer/phone. Two well known and performant public DNS servers are found at 1.1.1.1 (CloudFlare) and 8.8.8.8 (Google).
It’s just a TLD that actually resolves; Most don’t. For example, http://com./ doesn’t resolve even though it has “subdomains”.
This does though beg the question: can a second level domain (root website) have (what we call) subdomains and not resolve itself? For example, example.example.com would resolve, but example.com wouldn’t?
I find it wonderful that a link to a page talking about stripping a trailing period from a domain has been linked to with descriptive text from which the trailing period has been stripped from the domain
One issue I noticed is how browsers and sites handle the dot inconsistently; Edge browser used to "fix" the url, for example.
Google used to do a weird combination of rewriting and/or using the dot, depending on what part of the site you were on. What ended up happening roughly is that you could log into the FQDN, google would do logins for both dot/nodot, if you logged out of one, the other would still work (probably also a combination of Chrome keeping 2 sets of cookies)
I probably can't, but if I find my original notes I'll add a reply... I recall both Edge and Google fixing the problem, but there are other sites & browsers, I'm sure, that are still affected
Why would that list be blocking fully qualified domain names? Seems more like a bug than a feature, but if there is a good reason for it I'd be interested to learn
I would imagine the reason is that some ad provider used a fqdn to get around some badly written rule, and then someone added an even worse rule to block all fqdn to negate that trick.
Same here. Wrong filter:
/^(https?|wss?):\/\/([0-9a-z\._-]+)\.(accountant|bid|cf|click|club|com|cricket|date|download|faith|fun|ga|gdn|gq|info|link|loan|men|ml|net|network|ovh|party|pro|pw|racing|review|rocks|ru|science|site|space|stream|tk|top|trade|webcam|win|xyz|zone)\.\/(.*)/$document
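For anyone wanting to check: transliterated into Python (assuming `re` behaves like the blocker's regex engine on these inputs), the rule matches exactly when the URL carries the trailing dot:

```python
import re

# The filter's pattern: host, dot, a listed TLD, the FQDN's trailing dot, slash.
pattern = re.compile(
    r"^(https?|wss?)://([0-9a-z._-]+)"
    r"\.(accountant|bid|cf|click|club|com|cricket|date|download|faith|fun"
    r"|ga|gdn|gq|info|link|loan|men|ml|net|network|ovh|party|pro|pw|racing"
    r"|review|rocks|ru|science|site|space|stream|tk|top|trade|webcam|win"
    r"|xyz|zone)\./(.*)"
)

print(bool(pattern.match("https://example.com./")))  # True: FQDN form is blocked
print(bool(pattern.match("https://example.com/")))   # False: plain form passes
```

So every fully qualified URL on those TLDs is swept up, regardless of the site.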
Sadly many libraries get RFC and standards implementations wrong. I've run into similar edge cases before, and it's always frustrating to see it either not implemented at all (best case), or overlooked due to simplified implementation (e.g. regex instead of parsing), or that there's a bug report that's closed as wontfix because it would be too complicated to fix.
I feel dumber for having read this article. Tbf it’s from the early 2000s but it reads like a nasally academic trying to lecture people who work for a living about theory with little to no practical benefit.
[0] https://www.google.com/search?q="Common+Internet+Scheme"