
This is one reason we don't use cloud-based source code hosting. All it takes is one idiot fork or an accident and wham, code everywhere.


Thinkpad love there as well. His "custom" computer appears to be an X220 tablet in an enclosure.


Would make sense, easily available commodity hardware that is reliable and in a decently small form-factor.


That sounds like a whole lot of eggs in one very expensive basket. Plus we can get that density with standard kit I reckon.


It may be one basket, but IBM high end kit is the nuclear fallout shelter of baskets.

E.g. we used to have an IBM Enterprise Storage System (aka the Shark) back in the day (around 2000), and it was the size of an American fridge, full of drawers of drives. You could just yank any drawer safe in the knowledge that all the RAID volumes were distributed over multiple drawers. If a SCSI controller failed, you could yank a drawer of SCSI controllers and hot-swap them safe in the knowledge they were fully redundant.

The "brains" of the thing consisted of a fully redundant pair of AIX RS/6000 servers, and you could yank either one of them without losing data (all writes were committed to at least non-volatile memory on both servers before being acknowledged). Either server also had at least hot-swap RAM (a RAID-like memory controller) and may have had hot-swap CPUs (tell the OS to move all threads off a CPU, swap it, switch back).
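The write path described there is plain synchronous replication; a minimal sketch of the idea (Python, with invented names - the real thing was firmware, not this):

    class Replica:
        def __init__(self):
            self.nvram = []  # stand-in for battery-backed memory

        def commit(self, block):
            self.nvram.append(block)
            return True

    def write(block, replicas):
        # Acknowledge only after every replica has committed the
        # block to (at least) non-volatile memory.
        if all(r.commit(block) for r in replicas):
            return "ack"
        raise IOError("write not durable on all replicas")

    print(write(b"data", [Replica(), Replica()]))  # -> ack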

On top of that, it had a phone connection and would dial out to report any early warnings of problems directly to IBM who'd send out a technician before anything even failed as long as you kept paying your support plan.

So yes, you can get that density with standard kit easily, and probably much cheaper too. Assuming you have enough skilled staff to manage it. The reason IBM still manages to sell this kind of kit, on the other hand, is because what they are really selling is peace of mind that most issues are Someone Else's Problem. For some people it makes sense to pay a lot for that.


I wish I had that much confidence.

Back in the late 1990s I was involved in provisioning a large Sun e15k. Not indestructible, but nearly.

It broke. You know what happened? The factory roof leaked and poured water into the DC sub-building; the roof then collapsed onto the e15k, which promptly blew up and caused a spectacular fire, a halon dump and about a month of work arguing with insurance companies and guys with shovels.

In that circumstance, it doesn't matter what promises the vendor makes. That's still all your eggs in one basket.

"Buy two and keep one somewhere else" didn't help either, as the network termination, switching and routing layers were down and all the people using it were about 300 miles away from the backup location anyway. So some poor fucker had to dismantle the backup e15k and disk arrays, bring them in a large truck[1] to the original location and erect a temp DC in a portakabin outside the building.

Edit: We would have been better served with two smaller DCs with off the shelf kit on the same site but different buildings running a mirrored arrangement. All for pocket change compared to a zSeries...

That's what the company I work for now does. We have off the shelf kit, SAN replication, ESX, redundant routing and multiple peers in different locations.

[1] imagine the shit if that truck crashed.


That's why you never deploy to just one location no matter how reliable the actual kit is.

You'd be in exactly the same situation if you had off the shelf "normal" servers in a rack. The point is one IBM mainframe is generally going to be more reliable than the vast majority of "homegrown" setups in a single location.

If you're comparing against a setup in multiple locations, then you should compare against two or more of these.

And there too, these kinds of solutions are far more reliable if you are willing to pay the money. E.g. IBM provides a range of options up to full synchronous mirroring of mainframe setups at distances up to about 200 km, where both systems can be administered as one identical unit (the distance limit is down to latency). They also provide a range of other options for various performance vs. amount of data you can potentially lose vs. cost tradeoffs.

> Buy two and keep one somewhere else didn't help either as the network termination, switching and routing layers were down and all the people using it were about 300 miles away from the backup location anyway.

And this wouldn't have been any better if you had two racks of kit instead of two mainframes.

> All for pocket change compared to a zSeries...

There we agree. I'll likely never buy or recommend one of these, for the reason that I tend to work on cost sensitive projects.


Except that you're going to pay a lot more than you would for those off the shelf "normal" servers in a rack. Probably enough that you can afford doubly-redundant normal servers for the cost of a non-redundant IBM mainframe, with quite a bit of cash left over.


Yes but when that system falls over, your boss is yelling at you, and you're on the hook. With IBM, you can all yell at IBM. And that's why big enterprise companies buy IBM.


It's also why IBM has seen decreasing revenue for 13 straight quarters.


Until it's time to build that new datacenter.


This story is obviously a bit ridiculous nowadays, since no one that can afford an ESS is buying a single site. In fact, most can't due to legal regulations about having data redundancy. These regulations typically lead to having a secondary site across town and a tertiary site across the country.


We have an AS/400 (excuse me, iSeries) and the damn thing is rock solid. It also alerts us and IBM when it needs maintenance. It's basically a tank with a logistics chain.


System/38, later AS/400, was one of the most brilliantly designed systems of the time that I've seen:

https://homes.cs.washington.edu/~levy/capabook/Chapter8.pdf

Designed for business apps, future-proofing, integrated database, largely self-managing, capability-security, continuing on solid (POWER) hardware... did about everything right. That's why we regularly fix crashed Windows and 'NIX machines but my company's AS/400 has been running for around 10 years.

I've always wanted a modern, clean-slate version of the System/38 without relics from that time and with any tricks we've learned since. Throw in hardware acceleration for garbage collection and some NonStop-style tricks for fault tolerance to have a beast of a machine.


Strangely, I've always wanted a modern version of the Burroughs Large Systems, but I like stack machines and have been a fan of Forth and Postscript.


It's not strange for anyone who's read this:

http://www.smecc.org/The%20Architecture%20%20of%20the%20Burr...

A similarly amazing machine that IBM's System/38 learned from a little bit. Somebody posted a link to an emulator but honestly I don't want to dredge through that. Like you said, a modern system that reimplemented its best attributes without the limitations or baggage would be nice.

Mainframes are complex enough that there are rarely projects to implement them, but there's lots of work on safer CPUs. See crash-safe.org's early publications for a CPU that combined Burroughs-style checks, the Alpha ISA, and functional programming at the system level. Given your stack preference, you might like these:

http://www.jopdesign.com/

https://www.cs.utexas.edu/~jared/ssp-hase-submission.pdf

http://www.ccs.neu.edu/home/pete/acl206/papers/hardin.pdf


I started out my career in IT as an AS/400 operator / Netware 3.12 admin, and while AS/400 / iSeries aren't "en vogue" these days, I have a lot of respect for those machines. As you say, they are rock solid. One of the places I worked for had an even older machine, an IBM S/36 (predecessor to the AS/400) and while ancient, it just kept plugging away, day after day after day after day...

OTOH, you couldn't pay me to program in RPG/400 using SEU. Building menus, or playing around with a little CL on the '400 is one thing. But RPG programming sucks. Well, it did anyway. Maybe things have gotten better. I understand the ILE stuff made RPG less column-oriented and closer to a free-form language, but I never had a chance to use that.


I remember original RPG as being the electronic descendant of the old IBM unit record machines, with their plug boards and mechanical processing cycles. That heritage likely predates even COBOL. IBM added many extensions over the years, and at one of my mainframe workplaces we even did online CICS programming with RPG (not fun at all!).


Who are you and how have you stolen my[1] early career history?!

Let me guess you started in the early 90's, right?

I remember looking at HTTP for the first time and feeling like it was 5250 display files writ anew. The Y2K mess made me jump into web dev full time.

-----

[1]: https://news.ycombinator.com/item?id=9816696


> Who are you and how have you stolen my[1] early career history?!

Hahaha... well, it's a long story, regarding the early part of my career. Especially the whole bit about exactly how I got involved with AS/400's in the first place.

> Let me guess you started in the early 90's, right?

Almost. I graduated H.S. in '91, started programming in '92 or so, but didn't start my first IT job until 1997.


The AS/400 (err, iSeries) never gets any love. If you need line-of-business applications that just work all the time, it'd be a great choice.


This used to be my integration of choice. Microsoft web front end and AS/400 back end for the warehouse. DB2 is a beast.


That's kinda the point of a mainframe, is it not?


Spending money? Yep.


No, putting all your eggs into one (hopefully redundant) basket so you only need to yell at one person.


Except that the OS is still provided by a different company to your actual hardware, so there's plenty of room for blame-passing, and most of the OS development is being done on x86 machines by people with no access to IBM Power hardware of any kind.


If I never hear "one throat to choke" again, I'll die a happy man.

This only works well if you can negotiate an acceptable SLA, your main vendor doesn't balk when integrating with subcontractors or other vendors and if you have a rock-solid vendor manager on your side enforcing the SLA.

Needless to say, it often doesn't work that way.


Oh, that works great when you lose power to the rack. Or the datacentre. Or the SAN fails. Or the core routers. Or any of the many other SPOFs that can and do occur in a datacentre.


All of which are accounted for by having two or more of these, combined with the feature they call (I'm not kidding) Geographically Dispersed Parallel Sysplex (GDPS).

You can hook up multiple IBM mainframes remotely and set them up to automatically ensure consistent replication of machine state to various extents depending on your reliability vs. performance tradeoffs and replication distance (latency being the issue), all the way up to active-active operation across systems.

So in other words: It works far better than the failover options most people deploy on their off the shelf servers in their self-wired racks (and yes, I run my own setup across off the shelf servers; and no, they're not nearly as redundant as a pair of IBM mainframes).


Problem is, we kitted out two 42U racks in two DCs with HP and EMC kit on VMware, and got four humans for five years, for less than the comparable quote from IBM. And we've tested replication and failover to the same extent, and didn't have to rewrite the 2 million lines or so of code we have...


> All of which are accounted for by having two or more of these, combined with the feature they call (I'm not kidding) Geographically Dispersed Parallel Sysplex (GDPS).

And it is an awesome thing - although I didn't realise it supported z/VM these days, rather than just z/OS.

In any case, you've still got two baskets, which was my point.


That's why you buy two and put them in different DCs.


And that's why IBM has their own bank branch to help customers figure out how to afford that?


Well, one of their big customer segments is the banking sector...


That would be most of their customers I imagine. I work with one who still runs mainframes.


There's nothing wrong with dynamic allocation in a kernel. It is, for example, better than having fixed-size process tables and all the crap that comes with them.

"holy crap I've got to recompile my kernel to get more processes" is so 1995...


You know what else is "so 1995"? 400-day uptimes.


Not really. I have CentOS boxes that have been up for over 3 years.


(if you can remember how to use it each time)


Guess nobody else reads GNU humor.

https://www.gnu.org/fun/jokes/ed.msg.html


Ed is actually really easy to use once you understand that it has a small command set. a, c, i, d are all you need.


and that it is very similar to ex's command set
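For anyone curious how little there is to it, here's a toy session, driven from Python purely so it's reproducible (assumes GNU ed is on your PATH; demo.txt is a made-up filename):

    import subprocess

    # a/c/i/d plus w (write) and q (quit) is most of ed.
    # A lone "." on its own line ends input mode.
    session = "\n".join([
        "a",            # append text after the current line
        "hello",
        "world",
        ".",
        "2d",           # delete line 2 ("world")
        "1i",           # insert before line 1
        "say:",
        ".",
        "w demo.txt",   # write the buffer out
        "q",
        "",
    ])
    subprocess.run(["ed", "-s"], input=session, text=True, check=True)
    print(open("demo.txt").read())  # "say:" then "hello"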


True. We use composition to attach things to an object rather than develop a deep taxonomy through inheritance.
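A minimal sketch of what that looks like in practice (Python, all names invented; the point is that Logger and Retrier are attached as collaborators, not inherited):

    class Logger:
        def log(self, msg):
            print(f"[log] {msg}")

    class Retrier:
        def __init__(self, attempts=3):
            self.attempts = attempts

    class HttpClient:
        # Composition: behaviours are plugged in, rather than
        # inherited from a LoggingRetryingHttpClientBase.
        def __init__(self, logger, retrier):
            self.logger = logger
            self.retrier = retrier

        def get(self, url):
            for n in range(self.retrier.attempts):
                self.logger.log(f"GET {url} (attempt {n + 1})")
                # ... actual request elided ...

    HttpClient(Logger(), Retrier()).get("http://example.com")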


I've been open to it. I went on a journey of SICP, Common Lisp, Haskell and F# over three years.

I went back to my OO roots and C#/C++ and carried on mostly as I did before. Why?

Simply that it's easier to rationalise a large system in terms of objects and actions. Even atypical OO languages like Go use this model.

What I did take away was limited mutability (not total immutability), some functional paradigms such as map/filter, and a well-founded opinion that one shouldn't listen to programming religious wars and should use what works for you.


> Simply that its easier to rationalise a large system in terms of objects and actions.

I'm currently of the opinion that OO is basically a form of module system: useful for carving up large problems into more manageable ones, but not necessarily a good model for writing algorithms.


We did it in the UK. It's fine.

Apart from some things like miles and gallons, which would require a massive synchronised change, that is.

I'd also like it if we drove on the RHS here as well so we can get decent import vehicles.


> Apart from some things like miles and gallons which would require a massive synchronised change that is.

It's less the synchronisation and more the expense of replacing all road signs throughout the kingdom, making it a political non-priority.


I'd happily help foot the bill.

So how much is that 1000 mile journey going to cost?

1000/44.9 * 4.456 * 1.119 = where's my calculator?


Yes, couldn't agree more with that.

Also dates in numeric order, i.e. yyyy/mm/dd, you know, like all the other numbers we deal with, not dd/mm/yyyy or the crazy mm/dd/yy.


Use a hyphen instead of a slash and you have ISO 8601: yyyy-mm-dd


I had - I think - independently come up with this format for naming log files, and I was so very, very happy when I found out it is an ISO standard.

A little disappointed, too, because every single time I have a great idea like this, I find out that somebody else had it before me. But still.
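And the reason it works so well for log files: with yyyy-mm-dd, lexicographic order and chronological order coincide, so a plain sort does the right thing. A quick sketch (Python; the app- prefix is just an example):

    from datetime import date

    # date.isoformat() is exactly ISO 8601's yyyy-mm-dd.
    names = [f"app-{d.isoformat()}.log" for d in (
        date(2015, 12, 31), date(2015, 2, 1), date(2014, 6, 15),
    )]
    print(sorted(names))  # plain string sort == chronological sort
    # ['app-2014-06-15.log', 'app-2015-02-01.log', 'app-2015-12-31.log']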


Sold.


8601 also gives you proper week numbering (which Europeans tend to like) + weeks start on Monday, after the weekend.
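Both are queryable directly, e.g. in Python, whose datetime module follows ISO 8601 here (dates picked arbitrarily):

    from datetime import date

    # isocalendar() returns (ISO year, ISO week, ISO weekday),
    # with weeks starting on Monday and Monday == 1.
    print(date(2015, 7, 6).isocalendar())   # year 2015, week 28, weekday 1 (Monday)
    print(date(2015, 7, 12).isocalendar())  # year 2015, week 28, weekday 7 (Sunday)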


Well it's all the weekend. Saturday is the last end of the week and Sunday is the front end of the week.


In the US perhaps, not in Europe


In the UK too traditionally, although that’s changing (by convention).


Yet, in America, people say things like "do you have plans for the weekend?" Or, "What did you do last weekend?"

Nobody ever says, "Do you have plans for the upcoming two days which, respectively, constitute the end of this week and the start of the next one?"

So, Americans and Brits are inconsistent. They have "the weekend" which is a block of two days when salaried people with regular working hours don't work; and they have Sunday as not the week end, but rather the beginning; or the "front end" of the next week. Which means that the two days cannot be the weekend; they are two different ends of two different weeks.

> that's changing (by convention)

It's changing because people have to confront the above reasoning and realize that a week beginning in the middle of something that they have been calling "the weekend" for decades is silly.


Neither does anybody say "What are you doing for the holidays constituted of Christmas Day and selected other days around it?"


Christmas has 3 days.


In some countries the weekend isn't Saturday and Sunday though. So the start of the week is kind of arbitrary.


Saudi Arabia used to have its weekend on Thursday & Friday. Recently they've switched to Friday & Saturday.


I believe in my entire life I've encountered exactly one situation in which the day that is defined as the first day of the week actually made a difference to anything: my kids' swim school schedule, where Week N of the term runs from Sunday through to the following Saturday inclusive.

Since the working week (and the school week) here starts on Monday regardless of whether Sunday or Monday is regarded as the first day of the week, it seems to have always been a distinction without a difference to me.


Yes. Putting the year first and with 4 digits is the only safe way because, AFAIK, nobody anywhere uses yyyy/dd/mm. It's nice that it's also in order of significance, but the main advantage is that it's unambiguous.

You'd think people would have learnt from Y2K but somehow we still see 2-digit years which make dates like 03/04/05 impossible to even guess at. It's slightly better since 2012 where a 2-digit year can't also be a month, but we'll have to wait till 2032 for 2-digit years to unambiguously mean year.

This is an area where I believe localization makes things worse, not better. If every website showed dates with the year first, people would easily understand, regardless of whatever silly local convention they have. As it is, whenever I see a ##/##/## date, I have to think about what the website might be trying to do (do they know what country I'm from? What country I'm in now? Are they using their own local convention?) and what possible dates it might mean. "I think that happened around August, so 08/10/14 is probably not the 8th of October." Localized dates just make no sense at all on the internet.
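The ambiguity is easy to demonstrate; all three readings of the same string are valid dates (Python):

    from datetime import datetime

    s = "03/04/05"
    for fmt, convention in [
        ("%d/%m/%y", "dd/mm/yy (UK)"),
        ("%m/%d/%y", "mm/dd/yy (US)"),
        ("%y/%m/%d", "yy/mm/dd"),
    ]:
        print(convention, "->", datetime.strptime(s, fmt).date())
    # dd/mm/yy (UK) -> 2005-04-03
    # mm/dd/yy (US) -> 2005-03-04
    # yy/mm/dd -> 2003-04-05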


Wait until Teslas become cheaper.

Until then, buy a Lada Niva. No one will want to steal it and it doesn't have anything complicated in it that can be hacked.


1993 Corolla with decayed paint. Utterly, utterly, reliable. Appears undesirable. It will also guarantee that you'll never get laid.


Nope. 90s Toyotas have eminently resalable parts and no immobilizer, making them far and away the most-stolen cars in America.

There are very high-end car thieves who want flashy cars, but the vast majority of car theft is about 1) ease and 2) what the parts are worth.


No way. Late-model Toyotas and Hondas get stolen a ton. Spare parts are still useful for tuners and sport compact car ricers.

Get an old American junker.


1993 is late model?


Good cars. Too wet here in the UK for something to last that long. My 2006 Fiat is on its way out already...


I know someone who bought a 1988 Honda. Made a sharp turn; the front wheel fell off.

