khandekars's comments

Disagree. Even bullets shouldn't be displayed while the user types the password. Why should a security camera in an office know that the user's password is ten, twelve, or twenty-nine characters long?


If you have a well-chosen password and it is that long, knowing its length won't be enough for the office snoop to crack it. I'd be more worried about the camera watching which keys I'm pressing.

On the other hand, if I start typing my password too soon after the login box appears on my laptop, it eats the first character. I would never have worked out why I was finding it so hard to log in if the password box did not display bullets. (See also dodgy keyboards.)


The dodgy keyboard is a good point; it makes a compelling case in favour of bullets.


Security cameras can just record which keys you press as you type the password.

If you cover your hands, the problem space is still small enough to guess, just based on where you are covering and for how long.

Only systems that don't have you type the whole password are viable if you are being closely scrutinized, e.g. ones that ask for only selected characters from your passphrase.


Thanks. OT: how do we protect against such snooping? I thought recording the keys was harder for the cameras, since the keyboard would be masked by the user's body or a partially closed tray. In a cyber cafe, for example, the keyboard normally sits below the tabletop in a tray. This is also true at many companies in India, where they use desktops.


Actually, password fields often display an incorrect number of bullets (maybe not while the user is typing, but after they're finished). The idea is to mask the password length as well.
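
A minimal sketch of that trick (the constant eight-bullet width is an arbitrary choice for illustration, not any particular toolkit's behaviour): echo one bullet per keystroke for live feedback, then swap to a fixed-width mask once the field is committed.

    #include <iostream>
    #include <string>

    // One bullet per keystroke while typing, so the user gets feedback.
    std::string mask_while_typing(const std::string& password) {
        return std::string(password.size(), '*');
    }

    // After the field is committed, show a constant-width decoy so the
    // on-screen text leaks nothing about the real length.
    std::string mask_when_done(const std::string& /*password*/) {
        return std::string(8, '*');
    }

    int main() {
        std::string pw = "correct horse battery staple";
        std::cout << mask_while_typing(pw) << "\n";  // 28 bullets while typing
        std::cout << mask_when_done(pw) << "\n";     // always 8 bullets afterwards
    }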


"Founders at Work" by Jessica Livingston. Full of good stories.

"My years with General Motors" by Alfred P. Sloan, Jr. How he built the organization is quite an interesting read.


Processor: 32-bit PowerPC 450 at 850 MHz. The supercomputer has 294,912 processor cores. Instead of running the cores at a higher clock speed, they are quite correctly exploiting parallelism. It's a net win for performance and energy efficiency. I guess that the desktops of the future will tend to follow a similar trend.


"I guess that the desktops of the future will tend to follow a similar trend."

They won't, because desktop software is quite different from HPC software.


Well, yes and no. A game developer will do whatever they can to get realistic fluid simulation, for example. People already use more compute power for fog and smoke in games than was used on weather forecasting 20 years ago...

Remember history. Anything people could do on the "mainframe", they eventually wanted to do on the desktop.


I'd like to believe that desktop software will move towards parallelism, maybe not to the same extent as HPC, but in a similar direction, in the sense that it will certainly move away from a strictly single-core approach. It won't be easy; e.g., none of today's web browsers fully exploits all the cores of a box. But it will be fun to watch.

Off topic: I've been in your fan club since Dec. 1999, :) thanks to your "Hack the planet" blog: http://wmf.editthispage.com


The cost of finding all the single-thread bottlenecks and parallelizing them is immense -- akin to the Manhattan Project -- and the payoff would be that Intel and AMD could sell different (perhaps lower power) processors than they do today. Why bother? We have processors that work perfectly well for desktop software.


True. At the same time, we keep expecting more from our machines, fuelling feature growth and a consequent demand for faster processing.


Yes, so future apps may prefer to run on the Larrabee cores, but the fat cores that old apps depend on cannot be removed.


I can't help but wonder how much you could do with a supercomputer budget working with these (http://www.amax.com/CS_nVidiaTeslaDetail.asp?cs_id=PSC2). It works out to ~$10 per 1 GHz core.


The problem with GPUs is that I/O latency is very high compared to your average supercomputer. You can do a crazy amount of computation locally on one card, but for problems that aren't "embarrassingly parallel", i.e. those that require a lot of low-latency inter-node communication, you'll immediately be limited by latency.

If nVidia or AMD release GPU-based stream processors with onboard or daughterboard-based interconnects directly accessible from the code running on the GPU, THAT's when they'll start eating into CPU market share.

If you're buying a supercomputer, you'll want to make sure to spend at least 50% on the interconnect or you're in for a big disappointment.


How would the NVidia GPU personal supercomputer do on large matrix-matrix multiplication?


Depends how large. If it fits in video memory, very well. If not, pretty badly.
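
To make the "fits in video memory" point concrete, here is a minimal sketch of the usual workaround when it doesn't fit: tile the product so that each block does fit on the card. The per-tile multiply below runs on the CPU as a stand-in for whatever gemm routine the card provides; only the blocking logic is the point.

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    // C = A * B for row-major N x N matrices, computed one
    // (block x block) tile at a time. In a GPU version each tile of A
    // and B would be copied to device memory and multiplied there.
    void blocked_matmul(const std::vector<double>& A,
                        const std::vector<double>& B,
                        std::vector<double>& C,
                        std::size_t N, std::size_t block) {
        C.assign(N * N, 0.0);
        for (std::size_t i0 = 0; i0 < N; i0 += block)
            for (std::size_t j0 = 0; j0 < N; j0 += block)
                for (std::size_t k0 = 0; k0 < N; k0 += block)
                    // One tile-sized product, small enough for the card.
                    for (std::size_t i = i0; i < std::min(i0 + block, N); ++i)
                        for (std::size_t k = k0; k < std::min(k0 + block, N); ++k)
                            for (std::size_t j = j0; j < std::min(j0 + block, N); ++j)
                                C[i * N + j] += A[i * N + k] * B[k * N + j];
    }

    int main() {
        const std::size_t N = 512, block = 128;
        std::vector<double> A(N * N, 1.0), B(N * N, 1.0), C;
        blocked_matmul(A, B, C, N, block);
        return C[0] == static_cast<double>(N) ? 0 : 1;  // every entry should equal N
    }

Once the matrices no longer fit on the card, every one of those tiles has to cross the PCIe bus, which is where the "pretty badly" comes from.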


Supercomputers are already focused on "embarrassingly parallel" problems; otherwise 300,000 cores aren't going to do much for you anyway. However, I agree that interconnect speed would be a major issue for many supercomputer workloads. Still, I suspect that if you had access to a $10+ million supercomputer built from a million GPU cores, plenty of people would love to work with such a beast.


No, these are not just racks and racks of individual machines. The system presents the programmer with a single system image - it "looks" like one huge expanse of memory.


We have a Blue Gene at Argonne; it's not SSI. It is, however, not designed for embarrassingly parallel workloads: you use libraries like MPI to run tightly coupled message-passing applications (which are very sensitive to latency). You can run many-task-type applications too, and people have.


The basic speed-of-light limitation means that accessing distant nodes is going to have high latency even if there is reasonable bandwidth. Ignoring that is a bad idea from an efficiency standpoint. And, unlike in PC programming, the cost of the machine makes people far more focused on optimizing their code for the architecture than on abstracting the architecture away to help the developer out.


Yes, the plumbing takes care of that for you. Oracle does similar tricks if you run it on NUMA hardware.


It takes care of it to some extent, but you still have to be aware of it as the programmer. MPI and the associated infrastructure are set up such that they'll pick the right nodes to keep the network topology and your code's topology well matched. But you have to do your best as a programmer to hide the latency by spending the wait time doing other things.
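
A minimal sketch of that latency-hiding pattern with plain MPI non-blocking calls (a generic ring-style halo exchange, not Blue Gene-specific code): post the sends and receives, do the work that doesn't need remote data while the messages are in flight, then wait and finish the rest.

    #include <mpi.h>
    #include <vector>

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        const int n = 1024;
        std::vector<double> local(n, rank), halo(n, 0.0);
        int left = (rank - 1 + size) % size;
        int right = (rank + 1) % size;

        // Post the communication first...
        MPI_Request reqs[2];
        MPI_Irecv(halo.data(), n, MPI_DOUBLE, left, 0, MPI_COMM_WORLD, &reqs[0]);
        MPI_Isend(local.data(), n, MPI_DOUBLE, right, 0, MPI_COMM_WORLD, &reqs[1]);

        // ...then do interior work that only touches `local`, hiding the
        // network latency behind useful computation...

        MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);

        // ...and only now do the boundary work that needs `halo`.

        MPI_Finalize();
        return 0;
    }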


Thanks. Having actually used Mono on Linux, he shared some interesting observations. Clojure surely looks promising, but Python or Scala can be equally fine choices at the moment.


This poses some pretty interesting questions.

1. Is it mandatory to install such filter software on Linux boxes also?

2. How do they handle the case where the filter software is chroot'ed in a jail, so that the individual is complying with the letter of the law by installing and running the software, but managing to avoid the ill-effects?

I'm not speaking about censorship etc., just plain curious.


If you decide to compile MzScheme 372 from source on a Fedora 10 box that has SELinux in enforcing mode, you need to take a few (simple) additional steps:

http://aadnyavali.wordpress.com/2009/06/07/making-arc-3-play...

I'm still downloading Fedora 11, but suspect that the same exercise will be needed for that as well.


The C++0x standard has been in "real soon now" mode for quite some time. I'm definitely excited by the possibilities, but somewhat unhappy about the current state of compilers; see e.g. http://wiki.apache.org/stdcxx/C++0xCompilerSupport and http://gcc.gnu.org/projects/cxx0x.html

In such a scenario, writing code that runs with different OS-compiler combinations while leveraging the cool features such as concepts, lambda expressions and closures, variadic templates and Unicode is going to be difficult in the short term. Of course, if a project sticks to a single compiler, then it won't be much of an issue.

Nowhere near the power offered by LISP, but huge leap of convenience while writing in C++
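
For anyone who hasn't tried them yet, a tiny sketch of what a C++0x lambda with a capture looks like (it needs one of the experimental -std=c++0x modes tracked on the pages above):

    #include <algorithm>
    #include <iostream>
    #include <vector>

    int main() {
        std::vector<int> v = {5, 2, 8, 1};
        int threshold = 4;

        // The lambda captures `threshold` by value and closes over it,
        // replacing the usual one-off functor struct.
        int above = std::count_if(v.begin(), v.end(),
                                  [threshold](int x) { return x > threshold; });

        std::cout << above << " elements above " << threshold << "\n";
    }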


How does LISP's runtime compare? Most of the time I'm concerned with speed for applications (and near-zero response time).


There was a formatting problem with my previous comment. The statement "Nowhere near the power offered by LISP, but huge leap of convenience while writing in C++" was a footnote for "lambda expressions and closures." The asterisk got eaten when I posted the comment, :(


Looks like github+topcoder for ideas.


Thanks, it's good.

In examples/README, it mentions examples/zenon -- certification of Tom's output using zenon and Coq. Apparently, the "zvtov" tool required for that is part of FoCaLize. Given that Coq was used to create a surveyable proof of the four-color theorem, it sounds impressive.

Links:

FoCaLize -- http://focalize.inria.fr/

Coq -- http://coq.inria.fr/

http://en.wikipedia.org/wiki/Coq#Four_color_theorem_and_ssre...


Many times I have wished for what's described below, but perhaps it will take quite a few decades to achieve.

If we could have a global infrastructure wherein we tell the car the place we wish to visit, and the car automatically figures out the optimal route based on constantly changing traffic and weather conditions, handles refuelling, and obeys all rules such as one-way streets and speed limits, it would be a truly amazing achievement.

The technology is there, fragmented in thousands of pieces; it needs to be woven into a cohesive whole in a reliable and safe manner.

In a nutshell, it would save an entire civilization the time spent driving its vehicles, :)

