nickbw's comments | Hacker News

> Twitter doesn’t have game mechanics

...

> You should totally follow me on Twitter

Yuh-huh. I think Twitter is genuinely useful, but I'm pretty sure scoring Big Numbers is the driving motivation behind plenty of following behavior there.


Haha, you totally got me there :)


Hehe. Sorry, that came across more accusatory than I intended. I totally agree with your overall point.

I think Twitter's follower numbers are an example of gamification done right. Useful information that's also a dopamine trigger.


Ditto.

I've always had the impression that, at least on HN, "weekend project" describes a level of "seriousness" rather than absolute time invested. It lets readers know what kind of scope and polish to expect when they click the link, and what sort of criticism would be helpful. Something like:

Startup > (Real Job) > Side Project > Weekend Project

If I click a weekend project post, I expect to see, e.g., a cute but not necessarily marketable idea, a clever technical hack that might not scale, or a cutting edge design that probably doesn't work in IE8. I expect feedback to focus on those things and not, say, funding advice.

ImageStash (since the article mentioned it ... yay, someone read my post!) took a good three months of weekends. But I thought of it as my Weekend Project because it was just something I started on a whim, and worked on sporadically when I was in the mood to code but burned out on both my Side Project and Real Job.


Thanks, I hadn't seen that one before. :)

Yeah, it's a similar concept, as is http://mlkshk.com/ with their new bookmarklet.

I think two things set imagestash apart:

1. The bookmarklet has many more features, and works in more cases. You can use it to snag multiple images at once, view any images on a page as a slideshow, expose images obscured by CSS or Javascript trickery, find full size images from thumbnails, or download a batch of images directly from their page as a .zip.

On browsers that support <canvas>, it will even get images behind a login wall where the server can't download them directly.

2. imgfave, mlkshk, imgur, etc. are all geared toward sharing first, and private collections second, if at all. Imagestash prioritizes the other way -- building your own collection is the primary goal, and sharing publicly is secondary/optional.



I think this is the salient point for startups. Painfully contrived "fun" can be a short-term win. Tech press and early adopters like it ... but only because they like being clever enough to appreciate it.

"Look at that, it gave me a badge! Normal people will eat this up!"

But then normal people fail to eat it up, and traffic goes nowhere.

Three possible ways to avoid needing contrived "fun":

1. Be immediately useful. E.g., save people money, or provide excellent search results. This is pretty straightforward, but tends to require either amazing engineering skills or an actual business model.

2. Be a social obligation. Facebook is home to plenty of third party apps that are desperately projecting contrived fun, but FB itself is rather somber. It doesn't have to be anything else.

3. Actually be fun. This is subtle, difficult, and maddeningly subjective.


Animal Planet shows animal faces. Anything with a recognizable mouth and set of eyes is still pretty engaging to our wetware.

(In fact, I just tried "Animal Planet" on Google images, and got mostly face shots -- human and otherwise.)


Good point, but I was responding to this quote: "You wouldn't watch that show unless it had a lot of humans in it, preferably attractive ones, showing their faces." Not sure why I deserved a downvote.


For brand new/"experimental" projects with both user permissions and a non-trivial set of features, built by a small team, I've always found it easiest to work like this:

1. Build features, ignoring permissions entirely.

2. When the feature set is relatively stable, default to disallowing everything.

3. Re-enable one feature at a time as you add appropriate permissions checks.

Step 1 looks horribly irresponsible if you don't know 2 is coming. But if you do, it avoids a false sense of security from half-finished permissions in rapidly changing code, and it keeps up early motivation since you're rolling out "exciting" features right away. And counting on step 2 ensures you're always checking whether something is allowed, instead of foolishly checking whether something isn't allowed.
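The default-deny gate that step 2 sets up can be sketched in a few lines. (Everything here — the `PERMISSIONS` table, the action names — is a hypothetical illustration, not code from any particular project.)

```javascript
// Default-deny permissions: anything without an explicit rule is disallowed.
// Features get re-enabled one at a time by adding entries to this table.
const PERMISSIONS = {
  'post:read': (user, post) => post.public || post.ownerId === user.id,
};

function allowed(action, user, resource) {
  const check = PERMISSIONS[action];
  return check ? check(user, resource) : false; // unknown action => denied
}

console.log(allowed('post:delete', { id: 1 }, {}));           // false: no rule yet
console.log(allowed('post:read', { id: 1 }, { ownerId: 1 })); // true: explicit rule
```

The key property is the fall-through: forgetting to write a check disables a feature instead of silently exposing it.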

Whether this scenario applies to Diaspora at all ... I don't know. Time and another release will tell. But I do think there are valid situations where authorization is "the kind of thing you add later" for appropriate values of "later".


Heehee. Nice!


It's using millisecond precision time for the nonce, yes. Nonce collisions in a normal volume chat are unlikely. If you'd like to suggest improved counter code, however, I'm all ears. :)

A corrupted server could alter messages without the key if it had both the plain and encrypted versions of a text. But to get that, the javascript would have to be compromised, and at that point the server might as well just steal the key anyway. This has already been discussed above, and if you're worried about it, the solution is actually pretty trivial. I especially like Whimsy's suggestion of a Grease Monkey script for verification.


It's not just two messages colliding on the exact millisecond. They can collide on subsequent blocks. So if Alice sends a 10 block message that starts at T_0 and Bob sends a 5 block message that starts at T_5, then the server learns information about half of Alice's plaintext and all of Bob's. Each client should use an independently-chosen and unpredictable IV.

You are incorrect that the corrupted server needs both the plain and encrypted versions of a message to send bogus messages. Without authentication, the server can flip arbitrary bits of a CTR message. This opens up several types of attacks. You should apply a MAC to the ciphertext.

I think that the encryption needs to happen entirely in the client and you can't rely on code downloaded from an untrusted server.


Thank you for the insightful comments, and for taking the time to read the code! I really appreciate it.

I've added a pseudo-random component to the nonce, and a MAC to the messages.

I certainly agree that you can't rely on code from untrusted servers, but I think doing the encryption in the client with code that is publicly viewable, even if it can be compromised at any time by a malicious server, is the best we can do for web apps unless/until major browser vendors incorporate client-to-client encryption. Right now there is no reliable protection against a compromised server, but I would like to at least see web apps strive to be more accountable.


Transparently? Without anyone being the wiser?

The javascript is there for the auditing. The server-side code is not, but you're completely free to analyze the client-side code to verify that it never sends your password to the server. I've avoided minifying any of it (save jquery.js, which you can diff against the official release to make sure I haven't modified it) to make it more auditable.

It's true that you probably won't check the javascript every time to make sure bonchat.org hasn't started serving up a compromised version. Just like I don't tcpdump my network traffic every time I boot up my OS X machine to make sure FileVault isn't secretly beaming my password home to Apple. The point is that I could. More practically (hah), I can randomly sample.

Actually, it would be pretty easy to verify the javascript each time. As long as you're satisfied that any version of the js is secure, you can save a copy to your hard drive and write a script (curl | diff?) to verify the server's copy every time you load up a bonchat.

No, it can't guarantee that the server is free from tampering. Nothing can. But I believe it's the first web chat secure/transparent enough that you can protect your data even if the server is compromised.


"The javascript is there for the auditing" doesn't mean anything. Javascript is the worst possible environment for running crypto code: it's accepting and running code delivered over a network from untrusted hosts bound in all sorts of unpredictable ways to an extremely complicated markup format optimized for display, and, on top of that, almost every single feature in the language can be overridden from the code.

Even if you were among the 0.00000001% of potential users who could possibly look at a block cipher implemented in Javascript and judge whether it was intact (and among the 0% of people who would actually do this), you can't just look at one piece of code in JS and know what it's actually going to do. You have to have a way of assuring every single bound function in the entire Javascript runtime.

That mechanism of ensuring the entire state of a Javascript context in a browser? It doesn't exist yet in any modern browser.

Your last sentence? What a huge dodge. If your server is compromised, your users lose. My browser trusts your server absolutely. If you'd like me to demonstrate this to you vividly, you can catch me offline and we can arrange terms.


Your objections seem to boil down to a superstitious distrust of javascript. The web may be a messy platform, but javascript is not a particularly difficult language to read, and (if I may say so) the relevant chunks of bonchat are written in a pretty plain style.

I'm not making any promises of 100% perfect security with no effort and no room for attacks. Bonchat is merely an experiment in securing content against servers as well as network snoops.

I trust Linux more than Windows. I haven't personally audited all the code on my Linux box, and I don't know any one person who even has the skill to do so. But the code is there to be audited, which gives me more confidence than when I use an opaque operating system. The same applies here. Bonchat isn't perfect, it's just trying to be easier to keep honest than a normal web app.


You say "po-TAY-to", I say "po-TAH-to".

You say "to-MAY-to", I say "no thank you".

You say "superstitious distrust of Javascript", I say "a day job finding, breaking, and fixing the horrible things people try to get away with doing in Javascript". (Or, less charitably: "knowing how Javascript works in browsers.")

Trust me on this one. It's a cool little hack. It's even useful if you get rid of the vanity crypto. But you are asking for someone to write a really mean blog post about you and your actual understanding of how crypto works. That's drama you don't need. Don't bother with the AES stuff.


If you would like to propose improved crypto code, I would love it. Honestly.

But "javascript is a messy language" is not inherently an attack. You can obfuscate just about any language. Do you actually have an attack in mind based on the fact that it's implemented in the browser?

It's true I don't have a deep understanding of the AES algorithms, and the AES code, as stated in the attribution, isn't even mine. Again, I'd love improved code. But you have yet to make any rational argument that javascript in the browser is inherently unsuited to encryption.

I completely agree that the many attempts to make SSL irrelevant by doing all the encryption in JS (and usually horribly naive JS) are foolish. That's not the point. Bonchat isn't a shopping cart or a mail reader. SSL is for securing communication to the server. Bonchat is an experiment in securing communication against the server. Do you have a better way than client-side encryption?


Do you actually have an attack in mind based on the fact that it's implemented in the browser?

You, the owner of the server, change the code. That's the attack. There's no way for me to tell my friend Charlie that he can use the service and get secure communication, unless he installs a plugin for his browser to verify that the server has not changed the data it sends the user from the time when I verified the correctness of the code. And if he has to install a plugin to safely use this service, which is now never permitted to change its code, he might as well just install a plugin that has the code, or install a separate application for this purpose.


You seem intent on evading the point. Most security systems don't depend on me being able to read the code every time I use it.


Most web security systems don't even give you the option. You sends your data off and you trusts your server. You can't read the code at all because it lives on a box you don't have access to.

Any security system you didn't code yourself does rely on someone you trust having read it at some point.

Is it more secure than some audited local code you compiled months ago and haven't touched and don't have to re-download? No, of course not. Is it more secure than chatting on Facebook? Potentially, yes. The necessary transparency is there.

This is not an innovation in protecting your data from network snoops. In that sense it's no improvement on SSL and it's not trying or claiming to be.

What it does do better than other web apps is protect your conversations from the middleman you can't avoid -- the web server itself.


Nick. Please. It doesn't do this thing you keep claiming it does, because you left out any mechanism for the client to verify the code the server sends. You keep implying, directly and indirectly, that clients can do this by hand. People keep telling you, no they can't. When I give a specific and unintuitive reason why people can't manually verify your code in a browser, you throw out a smokescreen about how Javascript is a fine language to write things in, which of course has nothing to do with the issue at hand.

(By the way, thing you don't want to hear from the guy offering you super-secure communication system? "You have a superstitious paranoia about Javascript".)

And, as Steve Weis took the time to point out, even if you had invented some new system by which browsers could verify the state of a Javascript interpreter fed by an app that, among other things, had until a few hours ago a bunch of obvious Javascript injection flaws, you still wouldn't have been secure, because you don't really understand how CTR mode works and implemented colliding nonces. (Why did you bother with CTR mode anyways?) You also don't seem to understand the relation between encryption and authentication; not having a MAC would just be an embarrassing oversight if you hadn't then argued with Weis about why it wasn't necessary.

The security model behind this application is just a really bad idea, Nick. I know that's tough to hear, since you obviously put some time and effort into it, but you're going to need to zag instead of zig now and come up with something cool about this chat system other than the notion that it's somehow more secure than Campfire or any other https chat.


Actually I really appreciate Weis' comments, because they're actual concrete problems and implementable solutions. (A bit-flipping attack is not particularly interesting by itself, since the server can inject gibberish any time it wants anyway, unless it can be used to insert meaningful content. But I will be happy to add both a MAC and a better nonce.)

What I don't understand is why you think any unaudited code is secure. There is no mechanism by which code from an unknown source becomes trustworthy without someone reading it, or being able to read it.

Nowhere am I suggesting that understanding the javascript source is quick or easy, but it is in fact possible, which is the difference between encrypting client-to-server vs. client-to-client. Source code you can view is not automatically trustworthy. Of course not. But it is easier to trust than code you will never be allowed to see. Can we at least agree on that much?

As far as trustworthiness, I would say: local code > remote code run on the client > remote code run on the server > closed source. This makes (a theoretically fatal-bug-free) bonchat-like system worse than PGP, but better than Facebook IM.

It sounds like your real complaint is with the "marketing", such as it is. Perhaps I should do a better job labeling it as experimental, or a proof of concept? These would be actual concrete criticisms.

Folks complaining of XSS vulnerabilities seem to be under the mistaken impression that it somehow secures you against the people you're talking to, which is not the problem it's trying to solve. Unescaped HTML chat is vulnerable to XSS, which is why you shouldn't exchange raw HTML with random strangers. You wouldn't open an HTML attachment from a stranger, but you'd open one from your good web developer friend.

(A very fair complaint is that, although XSS is irrelevant to the intended use, I should've anticipated that people would want to test/demo the basic chat features by broadly distributing passwords. This turns it into, essentially, a plain text chat with two URLs you have to enter. Which is useless, but perhaps amusing, so I've added default HTML escaping.)


Nick, "bit flipping attacks" imply that attackers can make messages say whatever they want to. They do not simply mean attackers can inject "random gibberish".

Furthermore, your nonce problem doesn't mean attackers can inject messages; it means attackers can cryptanalyze messages.

It would be helpful, before trying to implement cryptographic security for other people, for you to spend some time researching cryptography. A great way to do that is to learn how to attack cryptosystems in other applications. Maybe that should be your next project.

Best of luck, Nick.


I appreciate your concern, Thomas, but you seem to be telling me that:

1. no one will be able to gauge the integrity of a web app by reviewing the code, and

2. I should fix the security problems sweis found by reviewing the code.

I wholeheartedly agree with #2 (and I'm working on it now), but it rather contradicts #1.

I threw this thing together because I'm a long-time open source fan and I've been watching as more and more code people entrust their privacy to moves behind the opaque border of the web app cloud. I'd like to find ways to deal with that.

Bug reports are interesting and useful, and I'm extremely grateful for them. My implementation is flawed, no doubt in ways beyond the ones sweis has already pointed out, and discovering those flaws is one reason I posted it somewhere like HN.

But simply declaring that javascript is an unsuitable language for encryption, or that web clients are an unsuitable environment, doesn't add anything factual. I think code transparency is a boon to security, and I would like more web apps to at least try to make their security reviewable, even if reviewing is a job for experts. If you don't think that code transparency adds anything useful, well, fair enough. I respectfully disagree.


There are some solutions to this, right? For example, you could solve the problem for Firefox by writing a GreaseMonkey script to verify the JS is the same as last time, or just has the right hash.

Another way would be to make it an option to only use the GreaseMonkey encryption - trust it to rewrite the JS on the page, so the user can control updates to the JS.

