This seems to argue that accessing a web app (say, an important piece of software that handles private encrypted data) is no less secure than loading software from a package repository, because a browser requesting JavaScript from a server and a software updater downloading binary code from a server are structurally identical.
The obvious response to this is offline signatures: For package managers, app stores, updaters and the like, the integrity of the update server itself doesn't really matter, because the installer verifies a cryptographic signature from an offline key.
This argument is acknowledged, but seems to be dismissed without a real explanation:
> More astutely, there's also the distinction that either a file system compromise or a key compromise is required to serve malicious code to users with TLS, but software repositories can be architected such that a key compromise is required, through the implementation of offline keys. Despite this, though, there are several examples of secure software repositories that successfully use TLS. This isn't a grave concern (and they probably won't be converting soon) because the decision to use offline keys is palliative, at best, and only marginally increases a system's level of security.
> However, I openly concede that there are cases where a software repository can offer a higher level of security than a browser is currently capable of. I plan to discuss this later.
AFAICT, this "later" doesn't happen in this article, though I might have missed something. I really don't see why offline signatures are only "palliative". Web servers are hacked all the time, DNS misconfigurations happen, but organizations losing control of their software signature keys is comparatively rare. Why would you give up a very effective level of defense if it's so easily available?
Probably not, if we consider threats operating in the same userspace where the defenses are implemented (e.g., as cited: keyloggers and malicious native programs).
HSMs may provide an entrypoint to the "root of secrecy" problem, though the way they interface with the system now becomes the single point of defense.
Years ago we were hopeful that the TPM would solve this problem. Yay, I rejoiced: an HSM on every computer! Just think of the possibilities!
Aside from the specification documents being eye-wateringly painful to decipher, and the design-by-committee process resulting in something so utterly obtuse to use that the devil himself may have had a hand in it...
They forgot one tiny little thing: most units until very recently were deployed as a discrete component on the SPI bus, with all the commands & secrets passing back and forth in plaintext with no authentication or encryption.
Unfortunately that rendered them unfit for purpose for a huge percentage of the use cases that were championed. It's almost as stupid as Windows uploading your BitLocker keys to Microsoft for 'safe keeping'; it's as if some devious three-letter agency had influenced the decisions to fundamentally weaken cryptography for everybody.
Even if we assume a proper protocol is in place (secrets remain in the HSM which is doing secure computation), the problem of trusting the hardware remains: how do we know a three letter agency isn't tampering with the HSM hardware assembly process to influence the keys being fused into it?
I don't necessarily care about the lettered agencies tampering with or backdooring hardware during the manufacturing process, because there's nothing I can do about it. But when the best encryption manufacturers have to offer can be defeated by an average CS student with a <$1000 logic analyzer, it shows a concerning defect in the process.
This line of thinking makes me want to give up, buy a cottage in the woods, adopt 20 cats, burn all my computers and start trading artisanal pottery at the local farmers market.
Having it as an extension just moves the weak point to the extension update mechanism.
Of course, apps can just distribute themselves as an extension, which is probably the best compromise right now, and something most web-based password managers, for example, already offer.
Crypto is always about having some people you trust, having some adversaries, and being able to interact with the trusted people without the adversary interfering or eavesdropping.
Most in-browser crypto seems to view the web server as both the adversary and the trusted party at the same time. That is obviously not going to work well in most cases.
> Most in-browser crypto seems to view the web server as both the adversary and the trusted party at the same time.
I think that only considering a single web server ("the web server") is an oversimplification that's crucial here.
I believe that the threat model in this case is that some web servers are fully trusted by the people you want to interact with (e.g. servers belonging to the website owner), whereas other servers are less trusted (e.g. servers belonging to CDNs, third-party repo owners like npm or GitHub, ad networks, etc.) - and many web pages/apps will load executable code and data from multiple sites in a single session.
That's fair, but even then the problem is better solved by technologies other than JS-based crypto, like SRI, CSP, iframe sandboxes, etc.
That said, if you trust your bootstrap server (whatever serves the initial HTML), then basically all the problems with JavaScript-based crypto disappear instantly. Most people experimenting in this space seem to be trying to protect against a malicious bootstrap server, and that is where everything goes off the rails.