I interviewed someone recently who worked at Meta a couple of years ago. He was a software engineer, was paid a bunch of money to mostly update dashboards all day, and eventually quit because it was neither interesting nor challenging.

No, that's not true either. A quick Google search will reveal many examples, in particular the "Cantor set".

It's not determined by the derivative; it's the antiderivative, as someone else mentioned. The derivative is the rate of change of a function. The "area under the curve" of a function's graph measures how much the function is "accumulating", which is intuitively a sum of rates of change (taken to an infinitesimal limit).
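To make that concrete, here's the standard statement of the fundamental theorem of calculus (a general fact, not something quoted from the article):

    % the area under the rate of change f'(x) over [a, b]
    % is exactly how much f accumulates over [a, b]
    \int_a^b f'(x)\,dx = f(b) - f(a)

    % conversely, the "area so far" function is an antiderivative:
    % F(x) = \int_a^x f(t)\,dt  satisfies  F'(x) = f(x)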

Thanks for bringing some intuition!

This is some high quality content. Love the visual animations to go along with the mathematical ideas. Did a great job helping to tie the algebra to geometric intuition, but I think the importance of commutators could have gotten a little bit more exposition.


The disk model of hyperbolic geometry is constructed to map the hyperbolic plane (which has infinite area) into the finite interior of the disk. To capture this, the usual Euclidean notion of distance is distorted by a factor that allows "distances" to go to infinity as a curve approaches the boundary of the disk.
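For reference, the distortion in the Poincaré disk model has a standard textbook form (the usual formula, not something quoted from the article):

    % hyperbolic length element for curves inside the unit disk |z| < 1
    ds = \frac{2\,|dz|}{1 - |z|^2}

The scale factor 2/(1 - |z|^2) diverges as |z| approaches 1, so any curve running out to the boundary has infinite hyperbolic length; that's how a plane of infinite area fits inside a finite disk.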


Binary search minimizes the expected number of moves until you find the target. If you are already ahead, this is a natural thing to want to do. The reason this doesn't work when you're behind is that your opponent can also do that and probabilistically maintain their lead.


I know that it minimizes the expected number of moves. But the goal is to maximize the probability that you win in fewer moves than your opponent, not to minimize the expected number of moves. Given that your opponent is playing some riskier strategy, it's not intuitively obvious to me that the optimal moves for those two objectives are the same.
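The thread doesn't spell out the exact rules, so here's a toy Monte Carlo sketch under assumed rules (each player races on their own secret number in 1..N with higher/lower feedback; the names N, TRIALS, guesses_needed, bisect, and risky are all mine) that makes the question concrete: it compares an always-bisect player against a higher-variance player and counts how often the riskier one finishes in strictly fewer guesses.

    import random

    N = 100           # size of the search space (assumed, not from the thread)
    TRIALS = 100_000  # number of simulated races

    def guesses_needed(strategy, n=N):
        """Guesses until a random secret in 1..n is found, given higher/lower feedback."""
        secret = random.randint(1, n)
        lo, hi = 1, n
        count = 0
        while True:
            count += 1
            g = strategy(lo, hi)
            if g == secret:
                return count
            if g < secret:
                lo = g + 1
            else:
                hi = g - 1

    def bisect(lo, hi):
        # "safe" play: always split the remaining interval in half
        return (lo + hi) // 2

    def risky(lo, hi):
        # higher-variance play: aim at the low quarter, hoping for a lucky big cut
        return lo + (hi - lo) // 4

    wins = sum(guesses_needed(risky) < guesses_needed(bisect) for _ in range(TRIALS))
    print(f"risky finishes strictly first in {wins / TRIALS:.1%} of races")

Whether "minimize expected guesses" and "maximize the chance of finishing first" recommend the same move under your actual rules is exactly the kind of thing this lets you check.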


If it helps your intuition: even with 3-4 candidates remaining, you'll still win on the next turn. Above that, your chances of getting it right outright are too low compared to the value of the reduction (assuming there is an option that eliminates enough).

This could be made more complicated/interesting if you play a series of games and are awarded points based on either how many rounds it took to win or how many remaining cards you still had.


I think the user means that bullies in school face few consequences, but a bully at work may get called in by HR and potentially disciplined.


Oh fair enough. My apologies Mr Worf. I don't fully agree - plenty of shitty behavior gets ignored (or even encouraged) even in a workplace - but there's definitely some truth here.


It's way easier to change workplaces than to change schools, so I don't think it can get as extreme.


With a workplace, if you can gather evidence and document it, there's a significant chance of a lawsuit with a payout.

Schools are generally protected against that, and your only hope is to replace the school board, whose members are commonly bullies themselves.


Yep, so HR people occasionally learn a hard lesson when they neglect their job in order to cover for their friends.


I don't know about other use cases, but AI is definitely a game changer for software development. You still need to know what you're doing and test/think critically about what it gives you, but the body of software problems that you can conceptually treat as "boilerplate" becomes massively larger with the help of a good AI coding tool.


I've just had to "fix" a bunch of shit that was thrown over the wall that "sort of did the job" that came from someone using AI.

It's a game changer for some people who only need it to mostly get things started and pretend they did their job, and a work generator for anyone who actually needs to get things working.

The code was shockingly bad, and had to be rewritten to be able to do step 2 of the task.


In my mind that is a problem with your lazy developer colleague, not with AI as a whole. You can't expect it to be right on the first try (just like human code); you have to iterate with it and have the experience to know when it's off track and when you have to take over.


> In my mind that is a problem with your lazy developer colleague, not with AI as a whole. You can't expect it to be right on the first try (just like human code); you have to iterate with it and have the experience to know when it's off track and when you have to take over.

The problem with this IMO is when a human writes the code, they know the code they wrote, and have a sense of ownership in terms of correctness and quality.

Current industry workflows attempt to improve quality and ownership with PR reviews.

Most folks I see using AI coding don't know all the corner cases they might encounter, but more importantly don't know the code or feel any real ownership over it.

The AI typed it, and the AI said it's correct. And whatever meager tests exist either passed or got a 1 line change to make them pass.

Quality is going down among those who rely on tools to produce code they don't know. This has a cost associated with it that's been deferred.

Sometimes this is fine, like a POC where you're comfortable with tossing the code out.

This isn't fine for businesses that need to be able to plan out work in the future. That requires knowing the system, more so than just reading the code base.


If only it was this once, and only this person.


It's like Stack Overflow, but much faster and it doesn't insult you. That's useful, but it's so much less than what the companies are claiming it is.


It seems like there is a universal sense in which statements like "1+1=2" or "7 is a prime number" are true, no?


I disagree; it is not universal. 1+1=2 is just a specific system of notation with consistency. There was a time when no human conceptualized this idea of 1+1=2; they did not have numerals or know about addition. Before you get to 1+1=2, you need a bunch of prior concepts that are themselves contingent on culture and history.


> 1+1=2 is just a specific system of notation with consistency.

So, the fact that this system of notation has consistency is itself a truth, isn't it?


If by "universal" we mean median adult human which are apt and willing to engage in basic mathematical thoughts, yes. That’s certainly already a very greatly reduced set of entities compared to everything in existence, though.


Mathematics does its best, but it's still a language, and a fallible one. It's trying to explain things, and concepts like "prime number" and "one" can be shaken by later improvements in understanding.


Almost certainly a Western intelligence operation to send a message to Putin.

