So the Tacoma Narrows Bridge collapse was indistinguishable from a software failure because all software is the product of humans? That makes no sense.
The MCO (Mars Climate Orbiter) failure came about because people took the correct output of one program and used it, without the required unit conversion, as input to another program, which then performed exactly as it was supposed to on the bad input.
This is an interesting study in human-computer interaction and how to make that robust, but I don't see how you can possibly apply it to the question of how to make "reliable software."
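That failure mode can be sketched in a few lines. This is a minimal illustration, not the actual MCO code: the function names are hypothetical, but the pound-force-seconds vs. newton-seconds mismatch and the conversion factor are the real ones. Each program is correct by its own spec, and the system still fails.

```python
# Hypothetical sketch of the MCO-style interface failure: program A emits
# thruster impulse in pound-force seconds; program B reads the same
# number as newton seconds. Names are illustrative; the factor is real.
LBF_S_TO_N_S = 4.44822  # 1 lbf*s expressed in N*s

def ground_software_output(impulse_lbf_s: float) -> float:
    """Program A: correct output -- by its own spec, in lbf*s."""
    return impulse_lbf_s

def trajectory_model(impulse_n_s: float) -> float:
    """Program B: performs exactly as designed on whatever it is given."""
    return impulse_n_s

raw = ground_software_output(10.0)   # 10 lbf*s, correct
modeled = trajectory_model(raw)      # silently read as 10 N*s
true_n_s = raw * LBF_S_TO_N_S        # the conversion nobody performed
print(modeled, true_n_s)             # off by a factor of ~4.45
```

Neither function has a bug; the defect lives entirely in the interface between them.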
Ahh, now you're affirming the consequent. I said all software is the product of humans. I did not say that all human products are software.
Where our disagreement seems to lie is in where the boundaries of software systems are drawn. You appear to be claiming that they sit at the granularity of individual programs, whereas I am claiming that the entire software system must be considered. If you were to write a bash script that pipes the output of curl (presumably an HTML file) to /dev/dsp0 and a horrible screeching noise emanates from your speakers, what you have produced is a software error. It does not matter that each of the individual components is working as intended; the system as a whole is not (unless you actually intended to produce that screech, of course).
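The pipe scenario above can be demonstrated without real network access or an audio device. In this sketch, `fetch_page` and `play_pcm` are hypothetical stand-ins for curl and /dev/dsp0: each one meets its own spec perfectly, and composing them is still a software error.

```python
# Stand-in for the `curl | /dev/dsp0` pipe: no network, no sound card.
def fetch_page() -> bytes:
    """Stand-in for curl: correctly returns an HTML document."""
    return b"<html><body>hello</body></html>"

def play_pcm(samples: bytes) -> str:
    """Stand-in for /dev/dsp0: correctly 'plays' whatever bytes arrive."""
    return f"played {len(samples)} bytes as raw PCM"

# Each component works as intended; the system as a whole does not.
print(play_pcm(fetch_page()))  # HTML bytes treated as audio -> screech
```

The point is that "working as intended" is a property of a component against its own spec, while "software error" is a property of the composed system.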
I agree with your example of a pipe. But what if you manually retyped the output instead, and were supposed to carry out a format conversion as you did so?
My understanding of the MCO failure was that it was a manual step in the process that failed: the humans were supposed to do something, and didn't. I don't see how that can be defined as software, or anything even close. It's analogous to seeing a highway sign that says the speed limit is 80 km/h, setting my car's cruise control to 80 MPH, and then saying that it was a software defect that caused me to get a speeding ticket.
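The cruise-control analogy is the same silent unit mismatch in miniature; a quick check of the numbers, using the standard mile-to-kilometre factor:

```python
# The analogy in numbers: the sign says 80 km/h, the driver sets 80 MPH,
# and no component in between ever complains.
MPH_TO_KMH = 1.609344  # exact: one statute mile is 1.609344 km

limit_kmh = 80.0
set_mph = 80.0
actual_kmh = set_mph * MPH_TO_KMH

print(f"limit {limit_kmh} km/h, actual {actual_kmh:.1f} km/h")
# well over the limit, yet every component did exactly what it was told
```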