This post really needed to be written -- I get this question a lot, too. I like the blind index concept; it saves you from having to find some weird deterministic encryption mode (like using an HMAC of the message as the CBC mode IV, or something insane like that), and it's more flexible because it allows fuzzy indexing.
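
A minimal sketch of the idea, in PHP (the key and column names are mine, not from the post):

    <?php
    // Sketch of a blind index (hypothetical key/column names).
    $indexKey = random_bytes(32); // in practice, loaded from key storage
    $email    = 'alice@example.com';

    // Normalize, then HMAC under a key separate from the encryption key.
    // Equal (normalized) emails give equal indexes, so lookups are plain
    // equality queries, but the index reveals nothing without the key.
    $blindIndex = hash_hmac('sha256', strtolower(trim($email)), $indexKey);

    // SELECT ... FROM users WHERE email_index = ?   -- bind $blindIndex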


Security is half a technical problem and half a usability problem (for developers). We need more emphasis on the usability half.

For example, we should be doing everything we can to ease the mental burden of writing secure code. Whenever we can reasonably eliminate the possibility of a vulnerability, we should. Even if someone would have to be an idiot to make a given mistake, just make the mistake impossible in the first place. We should also try to reduce the amount of potentially-malicious input, reduce the number of options and special cases, etc. Simplify.

Usability improvements pay off multiple times. They make developers' jobs easier because there's less code to write and the code that does get written is easier to reason about. They make security auditors' jobs easier because there are fewer "dumb" mistakes to check for, and that means more of the audit time can be spent looking for deeper flaws.

(Nitpick about the phrase "validating user input": The user's input should never be trusted, but that doesn't mean we should write code that tries to decide whether the input is "safe" or "unsafe", since that can be impossibly hard depending on what happens after the validation check. The code should just be secure no matter what the input is.)
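
To make that concrete, a sketch of the alternative: pass the input to the database as data rather than trying to classify it (hypothetical table and variable names):

    <?php
    // $pdo: an existing PDO connection. Instead of trying to decide
    // whether $email is "safe" to interpolate into SQL, hand it to
    // the database as a bound parameter, never as SQL text.
    $stmt = $pdo->prepare('SELECT id FROM users WHERE email = ?');
    $stmt->execute([$email]); // secure no matter what $email contains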


> Usability improvements pay off multiple times.

There's a bit in Google's paper about the Chubby lock service where they note that a big reason for the success of that project was all the concessions they made for developer usability.


The salt would need to be kept secret, so it shouldn't be called a salt; it should be called a key. The benefit of Scott's solution compared to this one is that you don't need to deal with all the usability problems associated with keeping something secret (where should it be stored on the server? how should it be backed up? how often should it be changed? etc.).


I usually consider this a "vulnerability" in the sense that the author probably intended to use AES, so they may have misunderstood the mcrypt API. Most importantly, they might have wanted AES-256 and missed the fact that mcrypt selects the key size based on the size of the key you give it.
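
For anyone who hasn't hit that gotcha, it looks roughly like this (my sketch, not the code from the post):

    <?php
    // Sketch of the mcrypt gotcha (mcrypt is the legacy API).
    $msg   = 'secret';
    $key16 = random_bytes(16);  $iv32 = random_bytes(32);
    $key32 = random_bytes(32);  $iv16 = random_bytes(16);

    // MCRYPT_RIJNDAEL_256 means a 256-bit BLOCK size, not AES-256,
    // and the key size is inferred from strlen($key). This is
    // Rijndael-256 with a 128-bit key -- not AES at all:
    $ct = mcrypt_encrypt(MCRYPT_RIJNDAEL_256, $key16, $msg, MCRYPT_MODE_CBC, $iv32);

    // AES-256 is Rijndael with a 128-bit block and a 32-byte key:
    $ct = mcrypt_encrypt(MCRYPT_RIJNDAEL_128, $key32, $msg, MCRYPT_MODE_CBC, $iv16);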

That doesn't appear to be the case this time, however, since the page acknowledges (in an update) the "256 bit block" and the fact that it isn't AES. So I should probably make a note of that in the CryptoFails post.

I'm unsure how well the analysis of AES (and the attacks against it) carries over to Rijndael-256, so I'd be hesitant to actually recommend it without asking a cryptographer... but, like you, I'd be very surprised if it turned out to be a source of vulnerability itself.


I wouldn't actively recommend it. I would worry if someone was using Rijndael-128/256 to make a hash function. But apart from that: the gain in reduced malleability probably offsets any reduced security margin; in other words, using a larger block makes the realistic attacks somewhat harder.

There are probably zero crypto implementations containing the string "AES" that use Rijndael-X/256 and aren't broken in some other comical way.


Yeah, when I made this program two years ago, I never intended it to be taken this seriously. I added a big disclaimer to the readme.


Hi everyone, I'm the author of that software.

I really didn't want this to blow up. It's absolutely NOT a solution to getting raided by the police. While that was the original inspiration for writing the tool, I was half-joking when I wrote the README about it being a defense against law enforcement.

I've moved the code into a different branch and added a disclaimer to the README. The most important line of the disclaimer is: "If you need to rely on SWATd, you have already lost."


I've written down some general principles we should follow, but any reasonable implementation of them seems pretty far off:

https://defuse.ca/triangle-of-secure-code-delivery.htm

tl;dr: (1) Reproducible builds, (2) Make sure everyone is getting the same thing (to detect targeted attacks) and (3) Cryptographic signing.

Package managers and app stores are the best we have right now, but they're missing (1) and (2). In the meantime, offering a PGP-signed installer file is a lot better than curl | sh.
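
A rough sketch of what (2) could look like client-side, with placeholder mirror URLs -- compare digests of the same release from several vantage points:

    <?php
    // Sketch of (2): detect a targeted attack by checking that
    // several independent sources serve byte-identical files.
    // Placeholder URLs; a real system would use diverse vantage points.
    $mirrors = [
        'https://mirror-a.example.com/app-1.0.tar.gz',
        'https://mirror-b.example.com/app-1.0.tar.gz',
    ];
    $digests = array_map(function ($url) {
        return hash('sha256', file_get_contents($url));
    }, $mirrors);
    if (count(array_unique($digests)) !== 1) {
        fwrite(STDERR, "Mirrors disagree -- possible targeted attack!\n");
    }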


I'm really surprised this hasn't gotten more attention from the commenters here. :(

^~ To the people scrolling by at 70 mph: READ THIS


`curl | sh` is protected by an SSL certificate issued by a CA that my system trusts.

"pgp-signed installer file" is protected by a key from a stranger.

Both are insecure.

Could you enumerate several points that show a "PGP-signed installer file" is __a lot better__ than curl | sh?


Say we're trying to download the PHP source code from php.net. If all that's protecting us is SSL, then if an adversary compromises the php.net servers (happens all the time, and actually has happened to PHP), they can immediately replace all the downloads with backdoored copies. Whereas with a PGP signature, the key can be stored off-line (even on an air-gapped system), so that even if the web server is compromised, the adversary can't make me believe the file is legit.

PGP can also be used in a trust-on-first-use manner. Get the public key once over an insecure channel, and if the attacker missed that single opportunity, you're safe until the key changes. With SSL, on the other hand, you're at risk every single time you make a connection, because any of hundreds of CAs has the power to sign that certificate, and as above, you have to assume the web server isn't compromised.
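
The TOFU idea in a few lines (a sketch; the file names are made up):

    <?php
    // Trust-on-first-use sketch. 'signer.asc' and 'pinned_key.fpr'
    // are hypothetical file names.
    $keyBytes    = file_get_contents('signer.asc'); // key fetched insecurely, once
    $fingerprint = hash('sha256', $keyBytes);

    if (!file_exists('pinned_key.fpr')) {
        file_put_contents('pinned_key.fpr', $fingerprint); // first use: pin it
    } elseif (!hash_equals(trim(file_get_contents('pinned_key.fpr')), $fingerprint)) {
        exit('Signing key changed -- refusing to trust it.');
    }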

Another reason PGP is important is mirroring. Big F/OSS projects let others volunteer to run mirrors. Even if those mirrors support SSL and the transport from the author to the mirror is encrypted, there's absolutely no guarantee that the mirrors themselves aren't malicious. The mirrors could be backdooring their own files, and the fact that you have an SSL connection to the mirror does nothing to prevent that. But with PGP signatures, you're assured the files come from the software's developer and haven't been tampered with by the mirror.

So the difference is: SSL secures the connection between your browser and the web server, while PGP ensures you're getting the file the software developer intended you to get. They authenticate different things.

I'd also argue hard against `curl | sh` because of the psychological effect (assuming it exists) of teaching users that it's OK to pipe random things from the web into sh.


If an adversary compromises the php.net server, then naturally the files will be signed by the adversary's keys. In both cases, all you need is the ability to replace files (the package files, the signature files, the HTML instructions), nothing else. I don't see how GPG is __a lot more__ secure here.

You need some other channel to communicate which keys should sign which files. You need some other channel to import the keys. Catch-22.

"trust-on-first-use" do you mean something like the certificate pinning?

Let's consider https://www.torproject.org, which uses GPG signatures for its packages.

If SSL is compromised, as you say, then all I need to fool you is to give you files signed with my keys (unless you know that you should use the 0x416F061063FEE659 key (magic secure channel), you've already imported it (again the magic channel), and Tor never changes the key).

Where should I go to check that 0x416F061063FEE659 and pool.sks-keyservers.net are the correct values (Google?), assuming the key and the key server (and the connection to it) are not compromised?

Ask yourself: when was the last time you actually tried to check that the instructions showing the key fingerprint and the key server to use were genuine?

Also, if you have paper walls, I wouldn't try too hard to make the door impenetrable. It's a trade-off: if you're downloading code from a stranger's repository, you won't gain much by replacing `curl https://github.com/... | sh` with a GPG-signed (by the same stranger) download.

Security is like onion rings: there are layers, but it's only as strong as its weakest link. We know that a real adversary will just hack your machine if necessary.


Steve Gibson is working on a system called "SQRL" that does exactly that, and should be very usable: https://www.grc.com/sqrl/sqrl.htm

edit: forgot some words


Here's the rough process I followed when I did that audit:

https://defuse.ca/b/hwwW9d3FkPGhM4T6xBIbhf

I think the reason I found so much in only 10 hours is that I had a good set of guesses about what could be wrong, based on what I've seen people get wrong before. From there it was just a matter of prioritizing which guesses to check. I did look at a lot of the code, although it was mostly guess-checking combined with a closer look at the cryptography code.

Because the audit was so short, the quality of the report suffered (ASCII, some mistakes, some severity ratings that I no longer agree with, etc.). My priority was to find as many problems as possible in the amount of time I was given, and then sort that out later.

To answer some other replies: I always report unbilled hours (in this case, none), since I think it's dishonest to say you worked fewer hours than you did. You would essentially be claiming to be more productive than you really are.


In cryptography, the burden of proof is on the one proposing the system: it's up to the system designer to prove it secure. The reason we stick to things like encrypt-then-HMAC, rather than rolling our own protocol, is that they HAVE been PROVEN secure. There are rigorous proofs [1,2] that HMAC and encrypt-then-HMAC are secure, assuming the underlying primitive is secure. There are no such proofs for Telegram's protocol, there are many "smells" indicating attacks are possible, and I'm sure you'll see some actual examples soon.

[1] https://eprint.iacr.org/2006/043.pdf [2] http://cseweb.ucsd.edu/~mihir/papers/oem.pdf
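
For concreteness, here's roughly what encrypt-then-HMAC looks like, sketched by hand in PHP -- in practice you'd use a vetted library rather than assembling it yourself:

    <?php
    // Encrypt-then-HMAC sketch. Two INDEPENDENT keys, one for
    // encryption and one for authentication.
    $encKey    = random_bytes(32);
    $macKey    = random_bytes(32);
    $plaintext = 'attack at dawn';

    $iv  = random_bytes(16);
    $ct  = openssl_encrypt($plaintext, 'aes-256-cbc', $encKey, OPENSSL_RAW_DATA, $iv);
    $tag = hash_hmac('sha256', $iv . $ct, $macKey, true); // MAC covers the IV too
    $message = $tag . $iv . $ct;

    // Receiver: verify the MAC in constant time BEFORE decrypting.
    $tag2 = hash_hmac('sha256', substr($message, 32), $macKey, true);
    if (!hash_equals(substr($message, 0, 32), $tag2)) {
        exit('Invalid MAC -- do not decrypt.');
    }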

Cryptography is HARD. It's so hard that it's hard to understand how hard it is. A large part of becoming a cryptographer is just learning how hard it is, and that you NEED security proofs, because it's just too easy to screw up.

I understand you're frustrated, but there's no need for the ad hominem attacks. tptacek is giving you good advice. We all want to see good crypto getting used. So why don't we work together to fix it instead of wasting our time defending a broken system? Honestly, replacing your protocol with encrypt-then-HMAC or the protocol from TextSecure isn't that big of a change, and it would make Telegram a lot better. So why not do it?

