tomasGiden's comments | Hacker News

I would differentiate between iterative development and incremental development.

Incremental development is like painting a picture line by line, like a printer: you add new pieces to the final result without affecting the old pieces.

Iterative development is where you do the big brush strokes first and then add more and more detail depending on what you learn from each previous brush stroke. You can also stop at any time, once you think the result is good enough.

If you are making a new type of system and don’t know what issues will come up or what customers will value (a highly complex environment), iterative is the thing to do.

But if you have a very predictable environment and you are implementing a standard or a very well specified system (it can be highly complicated yet not very complex), you might as well do incremental development.

Roughly speaking, though: there is of course no perfect specification short of the final implementation, so there are always learnings, and thus always some iterative parts to it.


Guy here with kids in Swedish school. In general I support the direction of learning basic analogue skills and detoxing from the constant dopamine hits of the digital world.

BUT one of my kids has Asperger’s and it is extremely hard for him to muster up the energy to do something ”boring”. So gamified learning on an iPad works very well for him. Also, doing math on an iPad where he sees only one equation at a time, instead of full pages of equations to solve, makes it much easier for him to get started.

With these kids you learn not to focus on parenting/teaching principles and instead focus on the goals. I’ll do whatever it takes to get him to go to school and learn, whether that means driving him the 700 m to school while he watches YouTube or having him do math on an iPad.

So as long as the push for more analogue tools is just a general direction and allows for individual exceptions, I’m all for it.

Sadly today’s Swedish government seems more focused on being seen as hard on kids, crime, immigrants etc (basically everything except environmental protections) than actually following scientific principles.


I just tried out your app for the first time; it’s also my first time trying to learn Spanish. I feel exactly like the user you describe, but it is because I have to click ”Don’t remember” for 70-80% of the words.

I’ve always had difficulty remembering vocabulary. I remember cramming German in school 30 years back. We had 20 words we had to learn per week and I could sit a whole night repeating and repeating them just because they wouldn’t stick. And then in the morning they were all gone anyway. So I gather I am a bad language learner.

In your algorithm, do you assume everyone’s recall is the same, or do you optimize for a recall rate which makes everyone fail a certain percentage of the words? If so, knowing that I am supposed to not remember 70% would be a good reminder in the app to not feel bad.
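For context on what such an algorithm might look like: many spaced-repetition schedulers model memory with an exponential forgetting curve, P(recall) = exp(-t/S), and pick the next review interval so that the predicted recall at review time equals a fixed target rate. A minimal sketch of that idea (not this app's actual algorithm; the function name, the curve, and all numbers are illustrative assumptions):

```python
import math

def next_interval(stability_days: float, target_recall: float = 0.9) -> float:
    """Days until predicted recall drops to the target rate.

    Assumes an exponential forgetting curve P(t) = exp(-t / S).
    Solving exp(-t / S) = target for t gives t = -S * ln(target).
    """
    return -stability_days * math.log(target_recall)

# A card with 10 days of "stability", reviewed when predicted recall hits 90%:
print(round(next_interval(10.0, 0.9), 2))  # ~1.05 days
# Lowering the target to 70% stretches the interval (more reviews are failed):
print(round(next_interval(10.0, 0.7), 2))  # ~3.57 days
```

Under a model like this, a scheduler tuned to a fixed target recall will, by design, make every user fail roughly the same fraction of reviews; per-user difficulty then shows up as shorter intervals rather than a higher failure rate.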


How about in-app purchases and subscriptions? The code is already there. Is it abusive?

Is it abusive because it is tied to hardware?

No, I see it as the opposite. I see it as Volkswagen simplifying production by limiting variability and giving you the option to get a less capable product at a cheaper price.

A 6- and an 8-core processor are probably the same die, produced at the same cost. Maybe 2 cores were turned off because they were faulty, or maybe they were turned off because some people don’t have the need and money for 8 cores. Does it matter? Now they can still buy a computer. Is that a bad thing?


> How about in-app purchases and subscriptions? The code is already there. Is it abusive?

Sometimes yes and sometimes no. Pure software is a bit different from hardware, as copies are effectively zero cost. The same goes for e-books, music, etc. Not that they get a full free pass; these media can also engage in abusive practices.

> Is it abusive because it is tied to hardware?

Yes. Another example of the absurdity: imagine you want to buy half an apple, and the store charges you enough to cover the cost of a full apple, then pulls out a full apple and destroys half of it before handing it to you. Does that seem ok? Putting in extra effort to make something worse is bad.

> A 6- and an 8-core processor are probably the same die, produced at the same cost. Maybe 2 cores were turned off because they were faulty, or maybe they were turned off because some people don’t have the need and money for 8 cores.

Big difference between these two cases. If the two extra cores were faulty, then charging a lower price makes sense. Like paying less for used tires that have some wear on them. But taking a perfectly good chip and purposefully disabling two cores is like taking a belt sander to a new tire and then charging less.


It's not the same comparison at all. It's part of the reason why IT people developing electric shitboxes have ruined the car industry.


I personally think the answer to this is ”yes” and have never bought any product that did this. I don't even do subscriptions.

The turbo example is insane. It's literally unwanted dead weight probably slightly negatively affecting fuel economy.


I did some benchmarking of BlobFuse2 vs NFS vs azcopy on Azure for a CT imaging reconstruction a year or so back. As I remember it, it was not clear whether Fuse (copy on demand) or azcopy (copy all necessary data before starting the workload) was the winner. The use case and the specific application’s access pattern really mattered A LOT:

* Reading full files favored azcopy (even if parts were read only when they were needed).

* If the application closed and reopened each file multiple times, it favored azcopy.

* If only a small part of many files was read, it favored Fuse.

Also, the 3rd-party library we were calling to do the reconstruction had a limit on the number of threads reading in parallel when preloading projection image data (optimized for what was reasonable on local storage), so that favored azcopy.

Don’t remember that NFS ever came out ahead.

So: benchmark, benchmark, benchmark, and see what possibilities you have for adapting the preloading behavior before choosing.
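The access-pattern effects above can be checked with a tiny harness before committing to a strategy. A generic sketch (paths are hypothetical; point it once at a blobfuse2 mount and once at a directory pre-populated with azcopy, and compare whole-file reads against partial reads):

```python
import time
from pathlib import Path
from typing import Optional

def time_reads(root: str, chunk: Optional[int] = None) -> float:
    """Time reading every file under `root`, returning elapsed seconds.

    chunk=None reads whole files (a pattern that tends to favor
    pre-copied data, e.g. azcopy); a small chunk reads only the first
    `chunk` bytes of each file (a pattern that tends to favor
    on-demand mounts, e.g. blobfuse2/FUSE).
    """
    start = time.perf_counter()
    for path in sorted(Path(root).rglob("*")):
        if path.is_file():
            with open(path, "rb") as f:
                if chunk is None:
                    f.read()
                else:
                    f.read(chunk)
    return time.perf_counter() - start

# Hypothetical comparison (paths are placeholders):
# full = time_reads("/mnt/blobfuse/projections")        # whole files
# head = time_reads("/mnt/blobfuse/projections", 4096)  # first 4 KiB only
```

Caches distort results badly here, so each variant should be run on a fresh mount (or after dropping caches) and repeated a few times.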


With Fuse you can make it transparent to the application: it just exposes the mount with all the files, and when your application reads them, the data is pulled from object storage. azcopy, on the other hand, is a utility that copies the data to your disk up front.


This! When you are doing something simple (as in: there are known best practices), you do want people to have the same formal education. They’ll talk the same language and everything will be smooth. Nobody wants a self-taught surgeon or pilot on the team. There is a best practice for washing your hands, and you want your surgeon to know it.

But when you are in the complex domain (as in there are no known good practices), what you want is many different viewpoints on the team. So getting people with different backgrounds (different academic background, tinkerers, different cultures, different work experience etc) together is the way to go.

Same with the discussion about remote work. People do not seem to get that there is no single best way; it depends on the type of work. If it’s simple or complicated, let people stay at home to concentrate. If it is complex, give them the opportunity, and the knowledge that it’s worthwhile, to meet up by a whiteboard. And what’s best may of course differ from day to day.


I worked in the telecom business 15 years ago on 4G (LTE) and there it was considered a big savior compared to how it was done before.

Basically, before, they had a lot of error handling code; it was a significant part of the code base (I don’t remember exactly, but let’s say 50%), and this error handling code had the worst quality, because it is very hard to inject faults everywhere. So the error handling code had a lot of bugs in it, which made the system fail to recover properly.

But DbC was a godsend in that you no longer tried to handle errors inside the program. Now the only thing that mattered was that a service should be able to handle clients and other services failing. And failure at a few well defined interfaces is much easier to handle. So the quality became much better.

What about the crashes then? Well, by actually crashing and getting really good failure-point detection, it was much easier to find bugs and remove them. So the failures grew fewer and fewer. Also, at that time I believe there were 70 ms between voice packets, so as long as the service could recover within that timeframe, no cell phone users would suffer.

Plus of course much less error prone error handling code to write.

And as someone else said, DbC should never be turned off in production. Of course, in embedded systems, speed is not so important as long as it is fast enough not to miss any deadlines. And since you code it so it doesn’t miss deadlines during integration and verification with DbC enabled, there is no reason to turn the contracts off in production.
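The "crash at the contract instead of handling errors internally" style can be sketched with plain assertions: contracts guard a well-defined interface, a violation crashes the service immediately, and recovery is a fast restart rather than in-line error handling. A toy illustration (not the actual telecom code; the function, the work estimate, and the 70 ms budget as a precondition bound are all made up for the example):

```python
def schedule_voice_packet(payload: bytes, deadline_ms: float) -> float:
    """Contract-checked scheduling step; returns remaining slack in ms.

    The contracts stay enabled in production: a violation crashes the
    service, which is restarted well within the inter-packet budget,
    so callers never notice.
    """
    # Preconditions: fail loudly at the interface instead of
    # sprinkling defensive error handling through the internals.
    assert payload, "contract violated: payload must be non-empty"
    assert 0.0 < deadline_ms <= 70.0, "contract violated: deadline outside packet budget"

    processing_ms = len(payload) * 0.001  # stand-in for the real work estimate
    slack_ms = deadline_ms - processing_ms

    # Postcondition: we never schedule past the deadline.
    assert slack_ms >= 0.0, "contract violated: missed deadline"
    return slack_ms

print(schedule_voice_packet(b"\x00" * 1000, 70.0))  # → 69.0
```

The point of the style is visible even in the toy: the function body contains no `if error: try to patch things up` paths, only the happy path plus contracts, so all failure handling concentrates at the restart boundary.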


Nice.

Bertrand Meyer in his usual painstakingly detailed manner explains how to integrate DbC with Exception/Error handling in his paper Applying Design by Contract linked to here - https://news.ycombinator.com/item?id=42133876


I loved the books as a child and as a father I love reading them for my children. And they love them too.

Some things have not aged well in them, though. Thinking specifically of the gender roles; they don’t match the Sweden of today. Basically, all the men are out working and having a good time while the women are taking care of the children and their husbands. But I sometimes make a lesson of it and tell the kids that it used to be more like that, and ask them which chores my wife and I each do and who takes care of them. Then we can laugh about it a bit together instead of me grinding my teeth. “Mom’s work is never done”.


I think complexity frameworks (like Cynefin) describe it pretty well. When the complexity is low, there are best practices (use a specific gauge of wire in an electrical installation in a house, or surgeons cleaning according to a specific process before surgery), but as the complexity goes up, best practices are replaced with different good practices, and good practices with the exploration of different ideas. Certificates are very good when there are best practices, but their value diminishes as the complexity increases.

So, how complex is software production? I’d say that there are seldom best practices but often good practices (for example DDD, clean code and the testing pyramid) on the technical side. And then a lot of exploration on the business side (iterative development).

So is a certificate of value? Maybe if you do Wordpress templates but not when you push the boundary of LLMs. And there’s a gray zone in between.


The job of an embedded engineer can vary wildly, and it gets hard to define what embedded software even is. I’ve worked on microcontrollers in elevators and battery management systems for battery packs on the low end, and on application processors, many-core processors, DSPs and soft cores in FPGAs in telecom on the high end. Sometimes you don’t even notice the hardware. It all depends on the job and the size of the company (do they have a platform team abstracting all the hardware away?).

As others say, many companies in the embedded space have had a very hard time realizing they are software companies and their practices are very old school and frustrating.

Talking salaries (Sweden): yeah, it’s a bit higher in the cloud space, but not wildly so.

My recommendation is to start working at a not-tiny company and on an existing product. Then it’s more about adding logic than knowing everything about RTOSes and bootloaders. You will pick those things up as you go.

