
Trying to measure how software performance has improved in the way you are looking at it doesn't really work: there is no point putting time into optimising things that already run fast enough. Many algorithms have been improved dramatically. People use this kind of string matching for DNA, and nobody is busy creating slower algorithms; they are only getting faster (e.g. www-igm.univ-mlv.fr/~lecroq/articles/cpm2009fl.pdf). Prime factorisation, optimisation, etc. are all being actively researched.
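
As a toy illustration (this is not the algorithm from the linked paper, just a hypothetical sketch): a naive quadratic substring scan and the optimised search built into Python's str.find return the same answer; the improvement is purely in how fast they get there.

```python
def naive_find(text, pattern):
    # O(len(text) * len(pattern)) worst case: try a full comparison
    # at every possible offset.
    n, m = len(text), len(pattern)
    for i in range(n - m + 1):
        if text[i:i + m] == pattern:
            return i
    return -1

# Same answer as the faster built-in search, just slower to get there.
dna = "ACGT" * 1000 + "GATTACA" + "ACGT" * 1000
assert naive_find(dna, "GATTACA") == dna.find("GATTACA")
```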

In other areas people are doing things that weren't even possible on old hardware; think about 3D graphics. Real-time lighting approximations are doing really clever stuff, just look at the latest Unreal Engine demo. If you ran these algorithms on old hardware you would get the same quality render in less time than the techniques of the day. Modern ray tracing engines use more sophisticated algorithms, giving better results in less time.

Beyond the algorithmic work, optimising for modern hardware is different from optimising for old hardware: an optimal program for one isn't optimal for the other, and vice versa. On a modern CPU, if a program needs data from RAM it has to wait hundreds of CPU cycles. Even a hit in the L3 cache, which sits on the processor die, takes on the order of 75 cycles. So recalculating a value can actually be faster than fetching a stored copy from memory.
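
A hypothetical sketch of that trade-off (the names and table size are mine, and Python timings won't reflect real cache behaviour): the lookup table and the recomputation are interchangeable, so on modern hardware you can pick whichever avoids the memory stall.

```python
import math

TABLE_SIZE = 1 << 16

# Classic trick for old hardware: precompute sin() into a table.
SIN_TABLE = [math.sin(2 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

def sin_lookup(i):
    # One memory access; on a modern CPU this can miss cache and
    # stall for hundreds of cycles.
    return SIN_TABLE[i % TABLE_SIZE]

def sin_recompute(i):
    # A few ALU operations and no memory traffic; often the faster
    # choice today even though it "does more work".
    return math.sin(2 * math.pi * (i % TABLE_SIZE) / TABLE_SIZE)
```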

Similar things happen with branches. I was looking at some GPU shaders recently and found a simple raymarching loop (parallax occlusion mapping). The obvious way to write it is to stop as soon as the ray intersects the surface. Instead, it is actually faster to always run the loop for a fixed 10 steps, because this removes the branch. You do twice as many loop iterations on average, but the code runs faster.
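
A sketch of the idea in Python (not the actual shader; the step count and names are made up). On a GPU the select in the fixed-count version compiles to branch-free code and every lane runs the same ten iterations, which is what makes it faster despite the extra work:

```python
STEPS = 10
STEP_SIZE = 0.1

def march_early_exit(height):
    # Obvious version: stop as soon as we pass the surface.
    depth = 0.0
    for _ in range(STEPS):
        if depth >= height:        # divergent branch on a GPU
            break
        depth += STEP_SIZE
    return depth

def march_fixed_steps(height):
    # Branch-free version: always run all STEPS iterations and keep
    # the shallowest depth that was at or past the surface.
    depth = 0.0
    hit = float("inf")
    for _ in range(STEPS):
        # A select, not a branch: every lane does the same work.
        candidate = depth if depth >= height else float("inf")
        hit = min(hit, candidate)
        depth += STEP_SIZE
    return min(hit, depth)         # no hit: fall through to the final depth

# Both versions agree; only the control flow differs.
assert march_early_exit(0.35) == march_fixed_steps(0.35)
```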

Of course you can afford to be sloppy as well. People are 'sloppy' so that their code is faster to write, easier to read and less likely to have bugs (3 lines of Python is pretty likely to be easy to read and bug-free). I would say software is advancing in many different ways, ways that hardware has enabled.



The numerous references to hardware in your comment only serve to illustrate my point.

Of course there have been some new algorithms and improvements to existing ones since 1984, but nothing approaching Moore's Law. The dramatic changes the blog post highlighted are due to hardware, not software.

Take away the hardware advances (hold hardware constant) and look at software as the variable; then we can have a meaningful discussion of software improvements over time. IOW, run new software on old hardware.

To measure advances in hardware, hold software as the constant. IOW, run old software on new hardware.

If both are variables, it's difficult to assess how much software has improved on its own, without the benefit of new hardware.

Linus Torvalds' comments on the future of RISC, CPU instructions, and compatibility versus "cool new features" in the /. interview were interesting.





