Reading this article and the HN comments, I'm realising something nice: to me, Walter Bright was always up there with the unapproachable gurus of programming. Clearly, he doesn't see himself like that at all, which is a very encouraging thing to realise. It's nice to be reminded that many people I look up to are really just down here, feet on the ground, and that maybe I could accomplish similarly great things by just (keeping) giving it a go.
Reading the D forums you see how friendly and approachable he really is. Hand in hand with that is great patience, I'd say.
Where I fall down is that without encouragement I'm not a 'completer-finisher'. Walter has that great ability to grind out the finished quality product. I love D for its expressiveness and speed, here's hoping it wins out.
You only hear about the people who succeeded when everyone told them they would fail. The many that failed when everyone said they'd fail don't often make the news.
People usually have good reasons why they say negative things. It's worth understanding the negative sentiment.
For example, victims of certain types of fraud are often warned repeatedly by friends and family that they are going to be defrauded. Some still go ahead and lose staggering sums of money.
It's important to really listen to the naysayers, if only to understand the risks. Then you can make an informed choice as to whether you'll try your hand at something that's potentially costly.
I just needed a hobby to keep me from getting bored and depressed. I had adapted a screenplay and made plans to write a novel, but my interest in programming had been sparked by an early exposure to Super Star Trek and the profound realisation that this illusion was being created by a magical recipe.
I wanted to learn these arcane incantations so that I could fill this black void with my own Universe. Whether it be CRT, LCD, LED, or Plasma, I have been trying to find a way to bring worlds of light to the darkness of a computer display ever since. Yet, as the scale of my endeavour dawned on me I realised that it would be better to spend a couple of years creating tools that I was comfortable using than rely on C++ as others recommended.
Unfortunately, my estimate was out by an order of magnitude.
Some twenty years later, I have only just wrangled the process of research and (re)design to the point where I am finally ready to write a comprehensive specification of my multiparadigm "live" programming language and its alternative document-centric graphical user interface - and even then only after a self-imposed deadline set at the beginning of last month. Otherwise, I'm sure I would still be amusing myself exploring endless "rabbit holes".
Despite this delay, I feel that my project is stronger as a result. I really didn't know enough about computer science when I started, and I have found that I needed to grok OOP and FP to know that they aren't appropriate for my needs. I was denied the opportunity to study the subject at school and had to travel to Foyle's in London to buy obscure computer books for years before unlimited broadband became affordable. Admittedly, I was paranoid that I would find myself several years into programming my videogame only to discover that a "high productivity", "spare me the details" programming language was weak in some respect and could not accommodate the retrofit of some unanticipated, but very necessary, feature. Hence, a lot of the work I have done has been defensive: trying to create a future roadmap that specifies how concurrency and parallelism would work, even if I plan to leave the implementation of these features until much later.
Hopefully, it won't be too long before I am using an integrated suite of development tools (created with my own language), to build my own procedurally-generated intergalactic MMORTSFPSRPG (Massively-Multiplayer Real-Time Strategy First-Person Shooter Role-Playing Game), or "adventure" if you prefer. Without my language/tools I very much doubt I would be able to complete such a grandiose endeavour unaided, and I very much prefer working alone without social expectations or professional deadlines - despite how much of my disposable income it has cost.
I'm not sure if you're serious or not, but you tell a good story! If you are serious and you really want to deliver something rather than just enjoy daydreaming about how great it's going to be (something I suffer from myself sometimes), then you should know that it's time to put fingers to keyboard. You've got a dozen years of effort in front of you. Have a look at the story of Robert Szeleney and SkyOS, for instance.
Well, I've written about thirty million words on the subject, but I don't think it will take me 12 years to implement an initial prototype - more like two, and then several more years for all the tools that I want to make with it, which will come to define its API. It should evolve over time with only me using it. I don't want to find myself in the same situation as Dennis Ritchie and Bjarne Stroustrup, when a redesign was impeded by their early versions being adopted by even comparatively few users, and I want the total freedom to change the design of my language if absolutely necessary, with it only breaking code that I have written with it myself. Indeed, the likelihood of me making my language public is quite small, as I know how every new language gets a hostile reception on here, Reddit, and Slashdot. Really, I'm only doing it to help solve my own problems, like Larry Wall did with his Perl scripting language.
If I had started implementation sooner I would have made something naive and half-baked. I did not have the benefit of a formal education in Computer Science, and I probably wouldn't have attempted a project of this size if I had known all of the work that was involved. Walter Bright wrote a compiler by himself, Paul Woakes wrote Mercenary unaided, and Elite was made by just two people, with David Braben on graphics and Ian Bell doing the trading (okay, three if you count the novel included in the box by Robert Holdstock) - but that is still just 1% of the staff of a Ubisoft game like Assassin's Creed, which employs a whole bunch of artists, animators, scriptwriters, voice actors and composers for an estimated thousand man-years to create its content-rich, high-production-value experience. All of which can be circumvented with procedural content generation, as in No Man's Sky (initially just four developers), and supplementary user-generated content, such as the seven million user-created levels in Little Big Planet or the roughly seven thousand competitive Halo 3 maps on the Forgehub community website.
Rather than waste my own and everyone else's time "doing the sensible thing" and writing another Tetris clone, I've gone and jumped straight to what I wanted to do, mindful that I will need productivity-boosting tools in order to make it, that to write those all by myself I will need a highly productive exploratory programming language, and that in order to make THAT it would help if I knew what the hell I was doing and did PLENTY of preparatory reading, so that I didn't go into it uninformed.
I agree too. I don't regret a single second.
True, the business didn't work out.
But what I've learned is so much more than if I had kept working a perm job...
The startup experience changed my life for the better.
It will help me build another business without making the same mistakes.
I've learnt who my real friends are.
How to control the fear of uncertainty.
Etc etc etc...
It's funny (not really) that the set of people who actually do things and the set of people who reflexively post negative comments on the internet barely intersect, if at all.
This has been posted many times, but the words of Teddy Roosevelt bear frequent rereading:
"It is not the critic who counts; not the man who points out how the strong man stumbles, or where the doer of deeds could have done them better. The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood; who strives valiantly; who errs, who comes short again and again, because there is no effort without error and shortcoming; but who does actually strive to do the deeds; who knows great enthusiasms, the great devotions; who spends himself in a worthy cause; who at the best knows in the end the triumph of high achievement, and who at the worst, if he fails, at least fails while daring greatly, so that his place shall never be with those cold and timid souls who neither know victory nor defeat."
The critics often seem to be saying, implicitly, "_I_ could do it better". It's a claim of superiority. I guess it's beguiling, because criticism of this form is so much easier than genuine creation. I think many of the critics don't consciously realize this, since they've never created something of great worth, and have no idea how difficult it is.
I've known very accomplished people who could be haters. But usually accomplished people seem to be appreciative of creative projects, and understanding of flaws (while wanting them eliminated). I think it's because they understand just how difficult it is to do good creative work. They know that someone can bring immense gifts and effort to bear on a project, and still have it go awry.
Citation needed. Probably the best programmer I ever worked with was the acidic guy who shot most suggestions down and at one point almost started a fight with another developer.
> I'm glad I didn't listen to the naysayers, the Debbie Downers, and of course that nameless C guru from long ago, whom I owe thanks to.
That's great to hear that you didn't let the negativity stop you.
I was recently told "If you're going to get bogged down trying to get the lexer/parser to work, you're not ready to work on a full blown language/compiler." Hopefully my story will go the same way as yours.
I'd also recommend that one master algebra before attempting calculus. I don't think that's negativity, it's just recommending an order in which to do things. Negativity would be "you're not smart enough to handle calculus."
Not that anyone has ever followed my advice, anyway :-)
And that's probably good advice. Take it as: toy around with lexers and parsers a bit before moving on to more complicated subjects. (Or cheat, and stick with a Lisp like syntax.)
Well, having a working lexer/parser is needed to make a compiler, but I see very little reason to write your own parser. Lexer, yeah, but from what I've heard parsers are tedious to write by hand and better left to parser generators.
Anyway, the thing that helps me get stuff done is to focus on getting one piece of the program right, then move on to another part.
Another good saying: all good software starts simple and evolves into a complex solution. So maybe start with a Lisp syntax and evolve it into something more complex when you're ready.
Interestingly, there is another article by Walter Bright on writing your own compiler on DrDobbs [1] in which he explicitly recommends writing the parser yourself, and I have received similar advice from others who know how to implement compilers. My personal experience implementing a subset of C turned out similar.
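For anyone weighing the two approaches, a hand-written recursive-descent parser is smaller than its reputation suggests. Below is a minimal sketch, my own toy example rather than anything from Walter's article, of a lexer plus a recursive-descent parser for arithmetic expressions that evaluates as it parses:

```rust
// Hand-written lexer + recursive-descent parser for a toy grammar:
//   expr   := term   (('+'|'-') term)*
//   factor := NUMBER | '(' expr ')'
//   term   := factor (('*'|'/') factor)*
// It evaluates as it parses, for brevity; a real compiler would build an AST.

#[derive(Debug, PartialEq, Clone)]
enum Token {
    Num(f64),
    Plus, Minus, Star, Slash,
    LParen, RParen,
}

fn lex(src: &str) -> Vec<Token> {
    let mut tokens = Vec::new();
    let mut chars = src.chars().peekable();
    while let Some(&c) = chars.peek() {
        match c {
            ' ' | '\t' => { chars.next(); }
            '+' => { chars.next(); tokens.push(Token::Plus); }
            '-' => { chars.next(); tokens.push(Token::Minus); }
            '*' => { chars.next(); tokens.push(Token::Star); }
            '/' => { chars.next(); tokens.push(Token::Slash); }
            '(' => { chars.next(); tokens.push(Token::LParen); }
            ')' => { chars.next(); tokens.push(Token::RParen); }
            '0'..='9' => {
                // Accumulate digits (and a decimal point) into one number token.
                let mut n = String::new();
                while let Some(&d) = chars.peek() {
                    if d.is_ascii_digit() || d == '.' { n.push(d); chars.next(); } else { break; }
                }
                tokens.push(Token::Num(n.parse().unwrap()));
            }
            _ => panic!("unexpected character: {c}"),
        }
    }
    tokens
}

struct Parser { tokens: Vec<Token>, pos: usize }

impl Parser {
    fn peek(&self) -> Option<&Token> { self.tokens.get(self.pos) }
    fn next(&mut self) -> Option<Token> {
        let t = self.tokens.get(self.pos).cloned();
        self.pos += 1;
        t
    }

    // expr := term (('+'|'-') term)*  -- lowest precedence level
    fn expr(&mut self) -> f64 {
        let mut v = self.term();
        while let Some(op) = self.peek().cloned() {
            match op {
                Token::Plus  => { self.next(); v += self.term(); }
                Token::Minus => { self.next(); v -= self.term(); }
                _ => break,
            }
        }
        v
    }

    // term := factor (('*'|'/') factor)*  -- binds tighter than +/-
    fn term(&mut self) -> f64 {
        let mut v = self.factor();
        while let Some(op) = self.peek().cloned() {
            match op {
                Token::Star  => { self.next(); v *= self.factor(); }
                Token::Slash => { self.next(); v /= self.factor(); }
                _ => break,
            }
        }
        v
    }

    // factor := NUMBER | '(' expr ')'
    fn factor(&mut self) -> f64 {
        match self.next() {
            Some(Token::Num(n)) => n,
            Some(Token::LParen) => {
                let v = self.expr();
                assert_eq!(self.next(), Some(Token::RParen), "expected ')'");
                v
            }
            t => panic!("unexpected token: {t:?}"),
        }
    }
}

fn main() {
    let mut p = Parser { tokens: lex("2 * (3 + 4) - 5"), pos: 0 };
    println!("{}", p.expr()); // 9
}
```

Note how operator precedence falls directly out of the grammar: each level of the grammar becomes one function, which is why recursive descent is such a popular choice for hand-written compilers.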
Nice article! I tried out D a while back and I liked it a lot. I'm not sure I'll pick it back up again, but it's certainly a possibility, especially if I feel compelled to write C++ again... yikes. I'm a language geek myself; I'm working on a language, my third so far, but this one is the farthest I've gotten yet and I'm really excited about it.
One question I have for Walter, when you were writing your compilers (both back in the day, and for D), how much did you just try to figure out yourself, and how much did you rely on literature to guide you? I tend to find it more fun to just try to do it myself, and only turn to literature when I'm really stuck. But obviously there's a bunch of valuable things to be found in books and articles, and they can save you from bad decisions or going down fruitless rabbit holes. As someone who's very experienced in language implementation, do you tend to invent your own algorithms, or rely more on academic and industry literature?
I tended to have many amazing insights, only to discover I'd just reinvented the wheel. It's because I do not have an academic comp sci background.
I learned how to do data flow analysis by taking a short course put on by Ullman and Hennessy.
I learned about GC from the famous GC book.
I learned a lot from people who started out saying "Walter, you damned ignorant fool! [...]"
I've learned a lot about things that sound like great ideas but just don't work, from bitter experience. I try to pass this stuff on to the next generation, but they are usually determined to make the same mistakes. Oh well.
Walter, I always wanted to ask you - what do you do for a living? Is it something related to D or compilers in general (consulting, maybe)? Also interesting whether the Digital Mars commercial compiler had/has any success.
How would one compare/contrast D vs. Rust? Both concerning the problems they try to solve / the niches they target, and the way they choose to solve them.
I've almost never heard or read anything about Rust from a D developer, or the other way around; it's a bit like they live in parallel universes :)
There was an interesting discussion on a golang thread about Go vs C++ vs D, with great insights in the conversation between Andrei Alexandrescu and some of the Go guys (can't remember where to look for the link right now...), but it only made clear the fact that Go and D target very different niches, so it was basically an apples-to-oranges comparison... but D and Rust would be an interesting comparison: they really are in the same "zone", yet the communities seem very different and each seems not to know or care about what the other is doing (there was a Rust guy on that thread mentioning the way Rust uses the pointer type system for memory safety, and the others said something along the lines of "can a type system really be used for that?!", which clearly suggested the two groups don't share their ideas a lot).
Rust and D will largely inhabit the same conceptual space (both can easily be used to write efficient, low-level programs), but their individual styles are so different that I think in practice any given programmer will have a clear preference for one or the other, with little overlap between the communities.
Rust has an overriding emphasis on guaranteeing memory safety and correctness, and when the programmer must resort to potentially unsafe code it isolates the unsafe portions of the source code so that they can be more easily audited by hand and thoroughly tested. It specifically incorporates safety features with zero runtime overhead (with the exception of array bounds checking, which can be turned off on a case-by-case basis) in order to appeal to C++ programmers who need to work close to the metal.
D also emphasizes greater safety than C++, but not to the fanatical degree that Rust does. D smooths and streamlines the experience of writing C++-level code and favors expressiveness, with especially impressive compile-time programming abilities.
The most important conceptual division between D and Rust is that D guarantees memory safety via garbage collection (which can be disabled, at the cost of losing memory safety), whereas Rust guarantees memory safety via compile-time checks (which makes Rust code less convenient to write, since it forces you to think about the lifetimes of your data, but provides the benefits of memory safety in environments where GC overhead is unacceptable or where the code must run without an accompanying runtime present).
Both languages share many similarities to C++, but this is merely convergent evolution. D deliberately began with a very strong C++ influence, whereas Rust began as more of an OCaml-like language and gradually evolved toward C++ due to the pressures of designing a production-grade systems language. The result is that Rust favors ideas more prevalent in functional languages: immutability-by-default, algebraic data types, everything-is-an-expression, and so on.
A good oversimplification might be something like: D was conceived by people who were tired of how clunky C++ is. Rust was conceived by people who were tired of how unsafe C++ is.
I guess it is a matter of how the two communities look at the problems.
Currently I spend more time on D forums than Rust ones, and to be honest given my type of work I can only use JVM/.NET/C++ languages, as customers have the last word.
However as language/compiler geek I do follow many discussions.
Trying to avoid a flamewar here.
D has a GC and follows the school of thought from Cedar, Oberon, Modula-3 and so forth, where it is assumed you can have a systems programming language with GC, which also allows for manual allocation when required to do so.
Still, there's room for improvement there in terms of performance.
Rust leaves GC to the library, at the expense of a more complex type system, as a means to allow the compiler to reason about automatic memory usage.
Both provide very powerful and modern abstraction mechanisms.
Which one is better? I think it is a question of use cases.
"I confided this to a colleague, and he suggested a lunch with him and the local C guru"
This stood out to me as the #1 difference between me and successful people. I always notice it in every article or bio of someone successful. It's something that has never happened, and would never happen, to me at any point in time, but in the bios it's always there:
The first person I mentioned my idea to suggested a lunch.
I believe the parent comment is implying that in every biography by someone who does something impressive, there are moments that show the environment they were in. Some things to note about the statement "I confided this to a colleague, and he suggested a lunch with him and the local C guru":
1. He had a colleague to speak to about this issue. This indicates he and the colleague had a relationship conducive to speaking about such problems, and was in a position to speak to him in the first place.
2. That colleague was engaged enough to want to hear more.
3. The colleague knew a "local C guru" and in turn had enough of a relationship with that guru to arrange a lunch between the three of them.
All of these things point to an environment that helps foster this kind of work. It shows that the author was hard working enough already to be in that position.
I don't know if others interpreted his statement to mean the same things, but that's what I have.
Note that the experience was entirely negative. Nobody offered to help write any code, invest any money, do any marketing of any kind, etc.
Even after I wrote the compiler and was shipping it, a different colleague asked me one day: "Walter, I have a friend that needs a C compiler. Which one do you recommend?"
Me: "Why, mine (Datalight C), of course!"
Colleague, laughingly, "Not yours, Walter, a real compiler!"
A friend of mine, who was in earshot of this little exchange, thought it was most hilarious.
If you want to do things like this, you've got to have a pretty thick skin for this sort of "help" from your colleagues.
I once used D to implement a toy LISP in under a thousand lines, including the native parts and loads of built-in functions. I can't imagine doing it this easily, this cleanly, and with this performance using either C or C++ (I even had fixnums and Clojure-like vectors and maps, although not persistent ones).
There's a lot of features to learn, and a lot of ways to use them. But the result is something almost as powerful as LISP, almost as safe as Haskell, as low-level as C and even more with inline assembly, and most importantly as productive as Ruby. Definitely a lot to digest as a first language!
Congrats on making the top 20 most used languages, well deserved!
It's interesting how games have driven the development of computer systems. Unix, for example, was motivated by Ken Thompson's work porting his game Space Travel to the PDP-7.
I loved his Zortech compiler, especially the nice GUI, which beat all others, but mostly the C library which came with it. I used a lot of his libraries for a while, until I decided that C/C++ was a dead end.
With D he found his mojo, but people are still raving about Rust and are still mostly ignoring Walter. Big thanks to Facebook for deciding on D. Only ObjC came close but D is better.
Yeah, he and I traded emails briefly 7 or 8 years ago, I think, but didn't really have that much to talk about.
But we go way back -- we were in the same class and same dorm at MIT (before he dropped out and wrote BDS C), and were housemates for about a year c. 1981-82.
Ever hear of MINCE? It was a micro-Emacs that first ran on CP/M, later MS-DOS. The first version was developed using BDS C.
Ah. Well, I was one of the founders of Mark of the Unicorn, and helped write MINCE (though most of the credit belongs to my colleagues Jason Linhart and Craig Finseth).
I have a bachelor's in mechanical engineering. I eventually moved to software, and Walter Bright's words exactly echo why:
My degree was in mechanical engineering. But mechanical engineering was frustrating because of the large expense involved with building anything ... With programming, on the other hand, I could build the most intricate machines imaginable at no expense, needing only access to a computer.
I had a rather critical view of D at first. Another C-like language again - will we never get rid of these bracket-oriented languages! Then came Go and Rust, and my opinion has changed; I see it more favorably, as it does not seem to be the worst of them.
Why do they use a name that would make it almost impossible to search information on it? I can understand "C" on a pre-internet world, but "D" in 2014?