I get the best practice and why relying on the User Agent is hopelessly flawed, but I've never come across a better way to get a sneak peek into the (first) Initial Page Request of a customer to your web app.
E.g. if you want to improve the page performance and weight of your app, you'll want to keep unused JS/CSS near zero. If you rely on feature detection, your critical path for "showing the initial screen of the app" becomes: download the feature detect, parse it, execute it, fetch the correct variants of your JS/CSS bundles, parse them, execute them, show the screen. That's not cool, imho.
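A minimal sketch of that waterfall, to make the extra hop concrete (the detection logic and file names here are illustrative, not from the comment):

```javascript
// Runtime feature detection that gates which bundles get fetched.
// Nothing below can even start downloading until this script has itself
// been downloaded, parsed, and executed -- that's the extra hop in the
// critical path being criticized above.
function chooseAssets(features) {
  // `features` would come from runtime checks in the browser, e.g.
  // { es2020: "noModule" in document.createElement("script"), webp: ... }
  return {
    js: features.es2020 ? "app.modern.js" : "app.legacy.js",
    css: features.webp ? "styles.webp.css" : "styles.fallback.css",
  };
}
```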
Changing the UA only changes the behavior of the web server. So unless that's some block of untouchable code owned by another team, made to 'optimize' page loads for different clients, you're not doing yourself any favors by changing it in your browser; you should load the actual client, since you can't simulate the real engine loading your web app differently.
> If you rely on feature detection, your critical path for "showing the initial screen of the app" becomes: download the feature detect, parse it, execute it, fetch the correct variants of your JS/CSS bundles, parse them, execute them, show the screen. That's not cool, imho.
Even for barebones web apps, your initial html response should have either empty data (but otherwise a visible UI) or a SSR'd version of the page shown while the JS downloads and executes.
1) If clients spoof their UA, they just get what they asked for. Couldn't care less.
2) With SSR you're actually making a case for sniffing the UA for the client's device class. Web apps might need to ship vastly different but overlapping code across different device classes. I'm not even talking about things that can be lazily loaded subsequently; I'm talking about the initial screen for a given URL.
3) I'm _not_ going to show the customer any loading spinner or just show a white page for 2s. We had that with Java Applets, remember?
All in all, I'm not saying we should get back into optimizing web apps for certain OSes/browsers/versions. What I'm saying is there may be good reasons why the mess of classifying the device on the (first) Initial Page Request is acceptable, given the alternatives.
> 1) If clients spoof their UA, they just get what they asked for. Couldn't care less.
I thought you were specifically talking about testing the behavior of your website on different clients by changing your browser's UA. The only legitimate use would be if you have a mobile-only stylesheet, like older WordPress themes (since you said "I've never come across a better way to get a sneak peek into the (first) Initial Page Request of a customer to your web app").
> 2) With SSR you're actually making a case for sniffing the UA for the client's device class. Web apps might need to ship vastly different but overlapping code across different device classes. I'm not even talking about things that can be lazily loaded subsequently; I'm talking about the initial screen for a given URL.
I'm not sure why this would be the case. For Teams, they can just load a purple top bar with the words "Microsoft Teams: Loading", right? That's a single responsive stylesheet.
> 3) I'm _not_ going to show the customer any loading spinner or just show a white page for 2s. We had that with Java Applets, remember?
What else are you going to show while the page loads? A fake UI with fake data? As soon as the JS starts executing and loading data, that data can start to populate the screen.
Does using `window.navigator.userAgent` still work? Does it still return the actual user-agent?
Yes, and because it does, people will use it. You (or standards bodies) can scream from the top of the mountains that it's deprecated, but it won't matter: if it's there, it will be used.
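A sketch of the kind of sniffing that keeps appearing precisely because the property still returns a real string (the checks and their ordering are illustrative; note that Chromium-based browsers now "reduce" the string, but still send one):

```javascript
// Naive UA sniffing. Ordering matters: an Edge UA also contains
// "Chrome/" and "Safari/", and a Chrome UA also contains "Safari/",
// so the more specific tokens must be checked first.
function browserFromUA(ua) {
  if (ua.includes("Firefox/")) return "firefox";
  if (ua.includes("Edg/")) return "edge";      // Edge's token is "Edg/", not "Edge/"
  if (ua.includes("Chrome/")) return "chrome";
  if (ua.includes("Safari/")) return "safari";
  return "unknown";
}
```

In a page this would be fed `window.navigator.userAgent`; here it takes the string as a parameter so it can be exercised outside a browser.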
But Microsoft should understand why this is an issue: they had to rename their Windows from 9 to 10 because applications checked the OS name string and just exited when it contained "Windows 9", a test meant to match Windows 95 and Windows 98.
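The reported pattern behind that naming decision is easy to sketch (the helper name is mine; the point is the prefix collision):

```javascript
// Matching the OS family by string prefix: "Windows 9" catches
// Windows 95 and Windows 98 -- and would have caught a hypothetical
// "Windows 9" too, which is why there isn't one.
function isWin9x(osName) {
  return osName.startsWith("Windows 9");
}
```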
Similarly, Mozilla recently had to freeze part of Firefox’s User-Agent string because some websites mistook Firefox 110 for IE 11 and blocked access, since they no longer supported IE 11. The websites misinterpreted “rv:110” in Firefox’s User-Agent string as “rv:11”.
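That misparse is easy to reproduce with a substring check, and a version-anchored check avoids it (the UA strings below are representative examples):

```javascript
const firefox110 =
  "Mozilla/5.0 (X11; Linux x86_64; rv:110.0) Gecko/20100101 Firefox/110.0";
const ie11 =
  "Mozilla/5.0 (Windows NT 10.0; Trident/7.0; rv:11.0) like Gecko";

// The broken test: "rv:110.0" contains the substring "rv:11".
const naiveIsIE11 = (ua) => ua.includes("rv:11");

// Anchoring the token to the full version (and requiring the Trident
// engine token) avoids the collision.
const isIE11 = (ua) => /\brv:11\./.test(ua) && ua.includes("Trident");
```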
Microsoft is a huge corporation with multiple different identities inside of it, like every international super-conglomerate. The people who work on the OS are very different from the people who work on the web stuff.
Because you’d be surprised how many developers aren’t familiar with these things, and how many of them think they’re cleverer than the herd and don’t validate their assumptions.
It wouldn’t surprise me if some junior dev picked it up after asking a senior dev how to detect the browser version, got a half-assed "hmm, check the user agent", and went on with that.
95% of the major bugs and security issues I see on a daily basis are due to this.
The UA isn’t the cause of security issues; the same thought process (or lack thereof) that led to the UA being used as a proxy for compatibility in this case is.
But in a more general view, relying on unreliable, user-controlled data for decision making is a pretty common pitfall in the security world.
There are genuine differences in browsers that need to be handled correctly and aren't easily observable or runtime-detectable. Multimedia (needed for camera/mic handling) and WebRTC still remain a giant landmine that requires UA testing 10 years later.
Web advocates can scream from the rooftops as much as they want that nobody should do UA testing, and then have basically no response when we encounter genuine browser bugs that can't be worked around.
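When the bug isn't runtime-detectable, about the only option left is gating the workaround on the engine. A hedged sketch of that shape (the bug, version, and function name here are placeholders, not a real known issue):

```javascript
// Apply a workaround only on, say, Safari 16, where some hypothetical
// WebRTC bug lives that no feature check can surface. Safari is the
// only major browser whose UA carries a "Version/NN" token.
function needsEchoWorkaround(ua) {
  const m = /Version\/(\d+)[\d.]*.*Safari\//.exec(ua);
  if (!m) return false;        // not Safari (Chrome's UA has no "Version/")
  return Number(m[1]) === 16;  // only the affected major version
}
```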
I use the same user agent in all of my web-scraper code; I just copy and paste my user-agent string after all these years.
It is still an old Firefox user agent I got somewhere. I should test my code and see if it still works after all these years. At the time, it was the only user-agent string that didn't lead to being rate-limited. It was almost like an invisibility cloak for my bots.
Not always that simple. Where I work, I need to provide a list of browsers customers can use, list those in the documentation, and make sure QA tests all of them and blocks any others.
Bugs in the app can lead to big issues, and telling customers to use whatever browser suits them and "if it works, it works" is not good enough. There are still large differences between browsers; a feature existing is not the same as the feature being compatible.
Of course customers can tweak the user agent and "lie" about their browser, but then the responsibility for issues shifts to them.
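A sketch of such a gate, assuming a (name, major version) pair has already been parsed out of the UA; the supported list and minimum versions are illustrative:

```javascript
// Documented minimum versions; anything absent or older gets blocked
// (or shown the unsupported-browser banner).
const SUPPORTED = { chrome: 110, firefox: 110, edge: 110 };

function isSupported(name, major) {
  const min = SUPPORTED[name.toLowerCase()];
  return min !== undefined && major >= min;
}
```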
I think it depends. Let's say I sell my finance application to an organization with 10k employees. They then use it to handle around 100 million in transactions per month.
If there are bugs, it may result in money being transferred to the wrong company, or money being transferred without proper approval. Unlikely bugs, maybe, but they could happen.
I can show a banner telling each of the 10k users about browser requirements. I guess that around 99% of the users will ignore that banner and happily proceed with unsupported browsers.
If I don't know if it will work with the browser they are using, why should I let them perform transactions using it? Sure, if it goes wrong maybe I can tell the CFO that their employee ignored the banner, but that won't make him happy.
For me, selling the service, there are zero benefits to allowing arbitrary browsers.
userAgent is deprecated.
https://developer.mozilla.org/en-US/docs/Web/API/Navigator/u...