Hacker News

It's a bit scary to see that one of the highest-voted answers to this question (188 points) is completely wrong. It says that the (0,0) hotspot simplified the calculations for a cursor position update, because you didn't have to add any (X,Y) offset.

https://ux.stackexchange.com/a/52349/43259

The problem with this idea is that the arrow pointer was never the only cursor. On the first Macintosh, there were many others including the text I-beam and a couple of kinds of crosshairs. And you could define any cursor of your own by providing a bitmap and transparency mask and the hotspot position.

You can see some of these cursors in the original Inside Macintosh Volume I and also in previous works from PARC.

https://web.archive.org/web/20230114223619/https://vintageap...

Page 50 of the PDF (page I-38 of the document) shows some sample cursors.

Page 158 of the PDF (page I-146 of the document) has the pixel detail and hotspot locations for several cursors.

Fun fact! The hotspot for the arrow cursor was not (0,0) but was (1,1).

Can anyone explain why? I think I used to know, but it has long since escaped my memory and I would appreciate a refresher.

This page also has the definition of the Cursor structure:

  TYPE Bits16 = Array[0..15] OF INTEGER;

  Cursor = RECORD
      data:    Bits16;  {cursor image}
      mask:    Bits16;  {cursor mask}
      hotSpot: Point;   {point aligned with mouse}
  END;
Point is defined on page I-139 and is more or less what you would expect, a pair of vertical and horizontal coordinates.

To be clear, the scary part is not that someone came up with the idea that (0,0) saved a few instructions. In fact, the notion came up elsewhere in this HN discussion. It's a perfectly reasonable hypothesis, until you realize that there are many cursor shapes that require different hotspots.
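To make that concrete, here is a rough sketch of the arithmetic involved in drawing a cursor (the helper name is hypothetical, not the actual QuickDraw implementation):

```python
def blit_origin(mouse_pos, hotspot):
    """Top-left corner at which to draw the cursor bitmap so that
    its hotspot lands exactly on the mouse position."""
    mx, my = mouse_pos
    hx, hy = hotspot
    return (mx - hx, my - hy)

# Arrow-style cursor, hotspot near the top-left corner:
print(blit_origin((100, 100), (1, 1)))   # -> (99, 99)

# Crosshair, hotspot at the center of a 16x16 bitmap:
print(blit_origin((100, 100), (8, 8)))   # -> (92, 92)
```

The subtraction happens either way; a (0,0) hotspot merely turns it into subtracting zero, and only for that one cursor shape.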

The scary part is that 188 people upvoted this answer!



It's only scary at the beginning. Then you get used to it. Every single social media site - including HN - has uninformed people agreeing that a correct-sounding answer must be right. My friend the tax accountant gets downvoted for clarifying how taxes actually work. My wife the linguist gets downvotes for explaining that no, that's not how language works. It's not scary - it's typical.


The way I internalize it: public voting selects for layman plausibility, not correctness.

Because laymen massively outnumber experts, the layman vote always overwhelms the informed one, so the reaction of people who don't know the subject is the only thing that matters. Truth only seems to matter because most subjects either can be somewhat intuited by non-experts, or are in a niche that you're not in, so "layman plausibility" includes your reaction, too. But the true nature of the dialog reveals itself as soon as people talk about something you're an expert on.

Answers like this aren’t a bug in a truth machine, they’re a plausibility machine working as designed.


> The way I internalize it: public voting selects for layman plausibility, not correctness.

To lend credence to this idea, I reflexively upvoted you despite not having read any experts on this voting phenomenon.


In that way, it’s a bit like an LLM choosing the most likely answer based on the mass of training material.


Humans are nearly all mimics, at least 98%+. They are LLMs. It's a survival optimization (energy spent copying the existing vs creating/innovating/distributing). It's only fitting that we'd create LLMs in the human mold.

LLMs are to human mimics what AGI will be to human creators/innovators (and then some of course).


> Humans are nearly all mimics, at least 98%+. They are LLMs.

We are GIs, at least 98%+. LLM-like behavior may exist in our cognitive repertoire, but we certainly aren't limited to it. Can an LLM drive locomotion?

I never understood AGI as generating sui generis ideas as a requirement. I thought that AGIs could also be uncreative mimics.


> Can an LLM drive locomotion?

Can't see any principled reason it couldn't, if it was a big enough, sufficiently trained one, running on fast-enough hardware, if you represent the sensor data in its token vocabulary, and have the reverse for control outputs.

Quite probably not the most efficient way to drive locomotion, though.
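A toy sketch of the "sensor data in its token vocabulary" idea: quantize continuous readings into a small discrete vocabulary, with the inverse mapping for control outputs. (The range and vocabulary size here are arbitrary illustrative choices, not from any real system.)

```python
N_BINS = 256          # assumed vocabulary size for one sensor channel
LO, HI = -1.0, 1.0    # assumed sensor range

def to_token(x):
    """Map a continuous reading to a discrete token id via uniform binning."""
    x = max(LO, min(HI, x))  # clamp to the representable range
    return int((x - LO) / (HI - LO) * (N_BINS - 1))

def to_value(tok):
    """Inverse map, e.g. for turning predicted tokens into control outputs."""
    return LO + tok / (N_BINS - 1) * (HI - LO)

tok = to_token(0.5)
print(tok, round(to_value(tok), 3))   # -> 191 0.498
```

The round trip loses at most half a bin width, which is the usual trade-off when forcing continuous signals through a discrete vocabulary.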

> I never understood AGI as generating sui generis ideas as a requirement.

Creativity is among the applications of intelligence that I would deem included in the "G" in AGI; OTOH, like most proposed binary categories, it's probably more useful to view generality as a matter of degree than as a crisp binary attribute.


>Can an LLM drive locomotion

If it's been trained to, but I'm not sure that's been the focus recently, though I think some research has been done into it. Turns out prediction engines with attention are useful for more than just predicting text; our bodies and brains work on learned assumptions and behaviours.

But certainly I imagine transformer + attention can = learning to walk/perform a task. An LLM specifically, no... because it's trained on language and not motion; it's all in the name. But even then, perhaps motion can be turned into language (non-English tokens, though) and an LLM still used; I know people are working on funky stuff like that as well.


It's basically the question of Turing machines and universal computation. As a lay person I just wouldn't expect an LLM to be good at forms of intelligence that use something other than sequential symbols.


And, it would seem, that training material is mostly wrong...


And just think, its training material is all this upvoted - and then believed and repeated - BS.


As we know in the age of the internet, truth doesn't matter, only popularity does.


The internet has taught me how many brilliant people there are out there. And how massively outnumbered they are by the rest of us!


there's another reason for some optimism about a voting-truth connection: wisdom of the crowds. As long as there isn't a strong bias to people's estimate, the average will converge on the truth.
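A quick simulation of that caveat (illustrative numbers only): an unbiased crowd's average lands near the truth, while a shared bias shifts the whole average no matter how large the crowd gets.

```python
import random

random.seed(0)
TRUTH = 100.0

def crowd_mean(n, bias=0.0, noise=30.0):
    # Each voter's estimate = truth + personal noise + shared bias.
    return sum(TRUTH + bias + random.gauss(0, noise) for _ in range(n)) / n

print(round(crowd_mean(100_000), 1))             # lands near 100.0
print(round(crowd_mean(100_000, bias=25.0), 1))  # lands near 125.0
```

More voters shrink the noise term (standard error falls as 1/sqrt(n)) but do nothing to the bias term, which is exactly the "as long as there isn't a strong bias" condition.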


> there's another reason for some optimism about a voting-truth connection: wisdom of the crowds. As long as there isn't a strong bias to people's estimate, the average will converge on the truth.

Hmmm ... that doesn't seem to match what actually happens. After false beliefs holding back humanity for its entire history, science came along and produced actual, working, truth. And science is the opposite of what you say: The crowds don't matter, only the facts. Newton was not a crowd, and the crowds didn't produce anything remotely as true and valuable for all those years. The crowds persecuted Galileo (and many others).

"In matters of science, the authority of thousands is not worth the humble reasoning of one single person." - attributed to Galileo

As someone pointed out, I think here on HN, the intuition of the crowds sucks. If it was any good, we'd have had the right physics in 5,000 BCE not starting in the 17th century.


I thought Newton was a mathematician, not a scientist.

> the intuition of the crowds sucks. If it was any good, we'd have had the right physics in 5,000 BCE not starting in the 17th century.

Eh. People used to stay in their lane. Only these days can you get a city person voting on proper farming techniques.


Newton was a mathematician and arguably the most important scientist in history. I recommend his biography - it's amazing reading.


I'm the kind of person who is completely disinterested in biographies.


Fine, but then why talk about Newton if you are aware you know nothing about them? Talk about what you know.


It's not as if I haven't been exposed to his laws of motion in physics courses. I just think of them as more math (or heck, even philosophy) than science.


I'm always interested in unique perspectives, but at the same time, English has a meaning outside any individual's concept of it.

> Newton was ... not a scientist.

That has a meaning, and it's false. Whatever you personally think of it, Newton was a scientist. I don't love a wild goose chase.


I guess so. It's hard for me to think of anyone prior to about the mid 1800s as a scientist, but sure, he qualifies by the standards of the day.

I still don't understand why people view Linnaeus' classification as scientific though. I guess maybe because it functioned as a hypothesis of common descent later on?


> I thought Newton was a mathematician, not a scientist.

Newton was a mathematician, scientist, alchemist, theologian (though, by the view of most Christians at the time and now, quite a heterodox one), and high government official who conducted undercover investigations personally. People can sometimes do more than one thing, and Newton did...a lot.


> As long as there isn't a strong bias to people's estimate, the average will converge on the truth.

Yes, as long as the truth is the most significant systematic influence on beliefs, any reasonable method of aggregating belief will converge on the truth with sufficient numbers.

Unfortunately, the required condition for convergence on the truth is often not true, and there is no way of reliably determining when it is true other than determining the truth independently and determining if belief converges on it.

Significant effects on belief about facts from cognitive/perceptual biases, especially where the fact is not something easily observable like “is it raining at this instant where you are standing” are not rare, and these biases often align for similarly situated individuals.


I am quite unsure as to the veracity of the claim that "the average will converge [upon] the truth". I recall cases being made (as asides) for the opposite conclusion. Intuitively even, this idea of equating truth with convergence towards the average opinion appears contradictory, counterfactual, and ahistorical. Excuse my being brash, but a "wisdom of crowds" seems to me oxymoronic on its face. I'd love to be persuaded otherwise though, mainly due to my perception of a lack of credence towards your view. Perhaps I have misunderstood your qualifier: "As long as there isn't a strong bias to people's estimate . . . "?

Off the top of my head, I can't imagine any scenario in which a mixed population of laypeople and academics/experts would converge towards the same (vote average) findings as a sample of a handful of experts/academics. For example, would The Average converge towards correct mathematics or physics answers? Besides trivial, non-technical questions that do not require complex analysis, I think not. (See: False Memory: Mandela Effect. [0]) My point is that groups' thinking is liable to be compromised. (After all, what has been more important to a human, evolutionarily: the truth or social access?) Also see: Information Cascade. [1]

My position is that if averages for answers to questions were taken, from the 'crowd' of the whole Earth, then these would diverge significantly and routinely from The Truth. If there are cases in which you feel this not to be the case, I would inquisitively consider such scenarios, waveBidder.

[0]: https://en.m.wikipedia.org/wiki/False_memory#Mandela_effect

[1]: https://en.m.wikipedia.org/wiki/Information_cascade


> I can't imagine any scenario in which a mixed population of laypeople and academics/experts would converge towards the same (vote average) findings as a sample of a handful of experts/academics.

Then you get crap where the experts, even when they agree, "dumb it down" for the crowds. This leads the masses who actually do pay attention to experts to think the wrong ideas are truth.

> After all, what has been more important to a human — evolutionarily: the truth or social access?

I don't think this is required for people to be very wrong. Caring about the truth can easily lead to assuming other people who speak authoritatively know what they're talking about, or to speaking authoritatively yourself when you think you're right.


As a peer comment mentioned, the wisdom of the crowds only functions when people operate independently. When people collaborate, our answers turn to junk again. And any sort of voting system is an inherent collaboration because you are basically seeing what's 'trending' by definition, so it destroys any sort of wisdom of the masses.

The only way you might have it work is if random people were shown random posts from random topics, and asked to vote on them. And the ranking was based upon that feedback. There's problems there as well, but probably far fewer than in the current system.


> And any sort of voting system is an inherent collaboration because you are basically seeing what's 'trending' by definition

Massively aggravated by "sorting by top" defaults for both original posts and separately for the comments on those posts.


Unfortunately not, because wisdom of the crowds requires not only a lack of bias but independence which, let’s face it, is usually impossible achieve.


That only works when people bet that their guess is correct.


Wisdom of the crowds is obviously dog shit.


I think this also partly explains the LLM hype — people can be as confidently incorrect as LLMs, or maybe LLMs are as confidently incorrect as humans since they are trained on text from social media.


I hope we reach a collective maturity on this. LLMs have, so far as I've noticed, left a trail of mediocrity. I'll of course not notice the parts that are good, so there is some confirmation bias here.

And I hate it. If not used appropriately, it's automated output from the left side of the Dunning-Kruger curve. The bullshit asymmetry has gotten more skewed, and it's tiresome.


That's just not my experience.

I'm using Miqu 70B Q4 and it immediately replaced Mixtral 8x7B Q5 for me.

I screen almost all of my responses in relationships through it, with deep context on why its modifying things, total paradigm shifts like a therapist is showing me more effective conversation styles. I wasn't seen as having low emotional intelligence to begin with and the results have been great.

Translations

Coding syntax

Entire code bases

Nuanced legal aspects of industries (deep conversations about obscure drug and treatment pricing by region and billing method, which matched reality)

More stuff about different kinds of insurance and how to navigate insurance brokers, to great effect

Whenever a contractor or professional outside of my knowledge domain gives me a word salad, I make a note of what they said verbatim and have the LLM translate it for me. Then I come back to them with informed responses they can't bullshit around. I got my HVAC fixed by pointing out what they were probably missing, which they were previously too prideful to notice, consider, or admit. Got a payment coming from my landlord for this because they caused my energy bill to be higher.

Large document analyses, which I thought was the final boss. I’m only giving these things an 8,000 token context window and things have been great and coherent


How does it compare to ChatGPT 4? That's the only thing I've "vetted". It'll be subtly wrong about something, and if I point it out, I'll be "of course you are right!".

And, if you are wrong about what you said it was wrong about, it'll still almost always say how right you are.


For my use cases they are very close to ChatGPT 4, and I primarily use GPT-4 for multimodal prompts and responses: synchronous voice conversations, DALL-E 3 images, uploading images to it.

they all lean to be agreeable out the box, but the aforementioned two will stick to their guns harder and tell you that you're wrong. you have to ask all of them to take the other side for more insight.

With ChatGPT4, for example, I posted a conversation where I felt that a woman I was dating gave a response to my followup that was way negative and way out of left field. It told me she had a disproportionately negative response to a benign text. Then in another session I posted my message and told it to predict her response, and it predicted a variety of responses, some of which were like the woman's, and this time it told me why. This means it was being too agreeable and affirming my feelings the first time, unprompted, while actually giving insight in the second session without knowing there was an existing reaction to navigate.

(Analogous to how we have to patch our speech to never blame the victim even though we know there are measures they could have taken to mitigate that scenario. While if the same person asks in advance we would give them advice.)

Dumbfounded that its predictive qualities were better than its affirmation-by-default trait, I told it to act like the woman's friends who have no context of me if they saw my message, I told it to act like redditors on /r/relationship_advice responding to the woman who similarly have no context beyond what OP feels. you have to create outside observers, and you can run all of these alternate realities within 3 minutes. It will begin crafting responses that break the conversation molds you might be more familiar with and get better results, but if all that sounds too much, you can simply tell it to disagree with you.

in LM Studio you can modify the system prompt and change the temperament


I appreciate your responses. It seems to me that your uses are to a much larger degree interpretative and also somewhat creative.

LLMs are great here. My concern is mostly that their capacity for factual accuracy is confused with their expressed confidence. Assessment of one's own abilities is usually optimistic in humans; LLMs just model that language. Facts are baked in as linguistic patterns rather than as a knowledge store. And you can often invert an answer to a scary degree.

There was recently an article on Forbes, where the journalistic "source" was the output of Bard.

The most terrifying part about LLMs to me, is not that it's sometimes wrong. But that it's often enough right and excels at certain stuff, that we use it as a source of knowledge. It really isn't.

On the other hand, the creative part, or dialogues regarding creative processes. It's mind bogglingly amazing.


The LLM output matches what the crowd expects.


It's amazing how far that can take you. I saw a post on another social media site about something being wrong, and a comment said it's not wrong, it was just missing a "not". Which was the exact reason it was entirely wrong.

So people can state absolute absurdities and have people agree.


"So people can state absolute absurdities and have people agree"

That's Reddit's mission statement.


Happens on HN all the time too.


Some people are able to correct typos when reading.


Not the case there. The context of this was definitely not a typo.


> my friend the tax accountant gets downvoted for clarifying how taxes actually work.

Let me guess: Tax brackets? That's the one thing that most regular workers in the US just don't seem to understand (and arguably, many people knowingly spread falsehoods to further some agenda).


Decent guess, but nah. Something to do with corporate tax accounting. Can't remember the details because that's out of my element.


Tax write offs would be my guess. Every employee of charities that partner with retail locations for PoS donations must die a little inside each time some fool confidently asserts that they never give to those charities because it's all a scam. The money, they will assert with the confidence only someone so wrong can muster, is just used for write offs so the executives can have a big bonus and the company gets to claim they donated all the money. Bonus points if they assert that the charity doesn't even get the donated amount.


I think the basic thing about taxes that is least understood is the difference between gross income and taxable income (the latter is the amount that tax brackets apply to). A close second is the difference between tax liability, and refund/balance due on the tax return.


Just try to convince the average person that “getting a big refund” is a bad thing, since it means you gave the U.S. government an interest free loan.


Oh yes, that's another fun one! Your yearly refund (or balance due) should be as close to 0 as possible; otherwise you're either over- or under-withholding. Then again, I've met some people who use it as a kind of piggy bank because they wouldn't be disciplined enough to save up for bigger purchases otherwise and... well, I can't even, but if it works for them, there are worse things to spend money on.


I have income from multiple sources and they are not aware of each other. For example, they will all keep paying social security even when I’ve exceeded the max deduction. It is far too complicated to correct the finance departments of multiple companies. I just reconcile it all at the end of the year and get a refund. Got a better strategy I can use?


Woah, it’s okay to have different tax situations. I started a business one year and got a pretty big refund. But we’re not out there bragging about how we get big refunds every year like it’s some goal to aim for and accomplishment to be proud of if achieved. That’s the mentality people are criticizing.


It sounds like you’re not one of the people they met who use it like a piggy bank. From my perspective, they’re just describing the habits of people who are used to not having any money: gotta spend this windfall quick because money doesn’t last long. It’s irrational and ultimately harmful but it’s borne from the practice of spending all of your money every month on non-trivial things and still being required to increase debt in order to stay in your apartment, e.g., credit card spending.


It's not necessarily irrational. For example, for some people if they ever have any extra money, someone else will immediately spend it for them. If the earner wants to make a larger purchase, perhaps something that will cost short term but pay off in the long term, they need some mechanism to save, outside of the regular controls that apply to daily life.

You may think this situation is still irrational, that the other person is being irrational. But again, there are many life situations out there. Perhaps they have lived in situations where they had to fight for what they needed. Perhaps they lived with an earner who would spend their money on drugs if it wasn't taken away, and yet if the non-earner saved it up themselves, the earner would find it and spend it.

The supposedly rational thing may depend on everyone around you to also be rational, and everyone around them, etc. And given that we are human, and humans are not fully rational...


If your separate income streams are pretty predictable and so is the overwithholding, and if you care enough: you can put a negative number in the "extra withholding" box on your W-4.

I wouldn't say this is a better strategy, but you can definitely min/max this even if your income is not stable, by extrapolating your expected income and expected withholding a few times a year and adjusting your W-4 based on your calculations.


Wow I wouldn't trust that. I'd add extra exemptions plus a positive withholding if needed.


You can file a W-4 with exemptions and avoid overwithholding! This is a fixable problem.


How do I use a w4 to fix the social security problem without incurring underpayment of state and federal? Exemptions apply to all the taxes, no?


Not sure about state, but you don't pay Federal Income and FICA separately. They are just numbers that get added together. You just pay money, and the IRS splits it up after they collect. If you "overpay" Income by $1000 and "underpay" FICA by a $1000, you're done, no problem.


https://www.irs.gov/forms-pubs/about-form-w-4

Fill it out correctly and your employers will do the right thing.


I think the W-4 only applies to federal income tax. There's no field that instructs employers how much to pay in FICA (their share and yours). At best you can reduce withholdings to account for the excess FICA payments.


It's all total tax burden. Dollars are fungible.


FICA cap is per employer, not total. Is that what you're referring to?


FICA cap is not per employer. Well, it is from a withholding perspective (only because it would be impractical to make employers monitor withholding outside their control), but once you do your taxes for that year, you’ll get everything you paid in over the cap refunded.


Right, but employers aren't allowed to coordinate to calculate whether they hit the cap together or not. They don't have that discretion. See

https://www.ecfr.gov/current/title-26/part-31#p-31.3121(a)(1...


By the interest free loan logic, you should have your employer withhold zero and then you put your taxes in a high yield savings account and pay them all as late as possible.


The IRS already thought of this - they charge you interest on the money you owed them (with some exceptions, like waiving it the first year it happens, only charging you if you withheld less than last year, etc).


The interest rate is also much higher than you can earn on anything risk free (8% right now) plus there’s penalties on top.


Yeah, it sucks that the IRS is such a buzz-kill here, with 4-5% HYSA's, it would be nice to just let all the taxes sit there and pay one lump sum in April.


I owed a decent chunk more one year due to investments I sold, and left the money in tbills since I knew I was withholding at least as much as the previous year.


Not sure I understand. Taxes are due in April. You don't get charged a year of interest on the amount you owe when filing…



You owe taxes all year round. What you're doing in April is settling up and filing for the year. If you run your own business, you're required to pay lump estimated taxes quarterly in addition to your annual filing. If you're too short, you owe, and you owe interest.

The IRS has a safe harbor rule: if you pay 90% of what you owe this year, or at least 100% of what you owed last year, they won't penalize you.

It's actually one of the reasons I personally do over-withhold. I do some contract work on the side, and rather than calculating and sending in estimated taxes every quarter, I just have my regular job send in about 25% of the contracting income I expect to make. On years when I did as much contract work as I expected, I basically get nothing back or I might owe $200. On years where I don't, sure, I gave the government an interest-free loan, but I also didn't have to think about my taxes for the whole year.


Whenever these things are talked about in percentages, it is worth bringing it down to dollars to make sure the effort is worth it: the average "interest" on a refund is probably about $100 ($4K refund / 2 × 5%, less tax).

$100 isn’t nothing but it’s not everything either.
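The back-of-envelope behind that figure, with the same assumed numbers (a $4K refund that accrues roughly linearly over the year, a 5% HYSA, and an assumed 22% marginal rate on the interest):

```python
refund = 4_000             # total overpaid across the year
avg_balance = refund / 2   # builds up roughly linearly, so average is ~half
hysa_rate = 0.05
tax_on_interest = 0.22     # assumed marginal rate; yours may differ

gross = avg_balance * hysa_rate        # interest you could have earned
net = gross * (1 - tax_on_interest)    # what you'd actually keep
print(round(gross), round(net))        # -> 100 78
```

So after tax the forgone interest is closer to $80 than $100, which only strengthens the "is the effort worth it" point.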


If you get a refund it means you overpaid your taxes. The amount you've overpaid can be considered a zero interest loan to the government. If you hadn't overpaid your taxes you could have invested that money.


Try convincing a tech person that the shares they get as compensation from their employer are equal to getting money and buying those shares at the time they get them. I also think tech workers tend to over-estimate what "average person" really is because they mostly know "above average people".


Yes, this is another example of irrational money behavior that is common in the tech world. According to a poll on our company slack, only about 50% of the employees plan to sell any of their RSUs and maybe 15% sell all.


That's not how the Earned Income Tax Credit works....


(Not from the US) Why is it a good thing to lend for free to the US gov ? Because the regional banks aren’t that stable ?


People like getting the big lump sum and some don't even realize it was their money all along that they just overpaid throughout the year. It's not a good thing for individuals to overpay.


Nah, because for people with poor financial skills, the ability to save is very difficult (even if they had the "Extra" money in their account each pay period instead of paying extra taxes). So even though you're technically getting your money "back", for some people they would not have been successful to 'save' so much without it being forced on them.


“Look! I filed my taxes and I got money back! Yay money!” (Could have had that money all along.)


The people that enjoy a tax refund would not really even notice the small amount they "could have had all along" by adjusting their withholding amounts.


It is not a good thing, because it is interest free and inflation exists. If you had had that money earlier, you could have put it in a high yield savings account or paid down debt.


It is not interest free. E.g., I was paid $480 in interest on overpayments last year.


It is interest free if the IRS pays you within 45 days of the filing deadline (or of when you actually file, if later). If they're slower, then they pay interest.


There are a number of different overpayment interest regimes[1]. Mine was paid based on time elapsed from the time of overpayment (overpaid quarterly estimated taxes).

[1]: https://www.irs.gov/payments/interest#pay


You pay tax on those overpayments.

But they only pay that if they don’t return it that year (for normal W-2 employees).


Yes, I got a 1099-INT from the IRS.


In some cases (depending on your tax situation) other forms of bonds (municipal/state) may be better because of how or if they get taxed.


I did not deliberately overpay as an investment strategy :-).

Last time I looked, you needed to be in a higher tax bracket than I was to make Muni bonds worth it, in part because my state (WA) does not have income tax. Something like the 35% bracket.

And anyway, my investment strategy is not long bonds at this point in my life.


I never understood why people think that giving an interest free loan to our government is a bad thing.


The average person is a financial train wreck of dumpster fires.


And yet here you are trying to spread an agenda in a thread about mouse pointers that taxes are too low because the majority of people are too stupid to understand tax brackets.


But Reddit is exceptionally bad at this though. It's basically about what sounds the most positive for the upvoter's way of thinking rather than anything else.


Reddit is a place where you get downvoted for linking something that proves what someone was saying is wrong just because it goes against the site’s overall narrative. Lies are encouraged if they’re the correct lies.


Depends on the sub. Political ones are the worst.


The exact same can be said for academia


Is it not possible to be both scary and typical?


This is why I ask for qualifications when someone has an authoritative tone.


It is the Gell-Mann Amnesia effect, but on social media:

> The phenomenon of people trusting newspapers for topics which they are not knowledgeable about, despite recognizing them to be extremely inaccurate on certain topics which they are knowledgeable about.


The difference is that (subjectively) there used to be less of this on HN. The herd moves much more aggressively now, rather than allowing room for debate. The generational differences are much more pronounced; the politics are not a matter of Democrat vs. Republican, but something equally vindictive. It isn't particularly pleasant, but what are the alternatives?


> It's not scary - it's typical.

It's really a great strength of the human species. We may not exceed animals in any other quality, except for persistence hunting, but we are exceptional at copying the behavior of other individuals of our species without considering whether it's sensible or not. Even monkeys don't do this as much as humans.


Hackernews is similar to ChatGPT in that regard. Lots of correct sounding answers that are really just a word salad.


> It's not scary - it's typical.

It's no surprise that people who lack expertise will downvote an actual expert. I see it happen every day when people try to be the smartest person in the meeting, or simply cannot allow anyone to think they are wrong.


Most often it is because people are too lazy to take the time to understand the explanation they are being given.

Sometimes that is because experts just say "I am an expert so trust me" without proper explanations or links to explanations or evidence.


I've noticed whenever a topic comes up that I have a lot of knowledge in, people almost always chime in with incorrect or just flat out made up stuff. I always remain suspicious of anything I read in any comment section. Including here on HN.


Whole political movements are built on this kind of momentum.


Yeah this behaviour is pretty normal for humans/tribal animals, I don't know why people are surprised really.

I mean just look to politics, that already explains enough!


truth is stranger than fiction

in my experience that's true; it's less familiar


[flagged]


To be honest, I was debating on even posting my message initially. It was off topic but I figured it would've been ignored. If I knew it was going to derail the discussion about Stratoscope's analysis that much, I wouldn't've posted it.

Edit: Also grammar nazi'ing has little to do with linguistics and more to do with being a jerk...usually.


You can see similar things in the Apple Lisa source code as well: https://info.computerhistory.org/apple-lisa-code

The linked SO page is pure speculation.

History isn't just a bunch of logical thought exercises, it's an assembling of documentation and evidence.

As far as I can see, there is no contemporaneous documentation claiming intentionality so the question remains unanswered.

A smoking gun would be a file with a name like cursor.bitmap or some code like "declare cursor_default = [ [ 1, 0 ... ] ];" from a major source (ms/xerox/apple) say, pre-1988 or so, with some comment above it explaining the rationale of why that cursor style in particular. I'd even accept a more minor source like Acorn, Digital Research, Quarterdeck, NeWS, VisiOn or MIT Athena (X).

Finding something that talks about say, lightpens and then defends the mouse cursor style in that way is working backwards from a hypothesis. It's weak and doesn't preclude other possibilities. Let's be rigorous and get it right.


> A smoking gun would be a file with a name like cursor.bitmap or some code like "declare cursor_default = [ [ 1, 0 ... ] ];" from a major source (ms/xerox/apple) say, pre-1988 or so, with some comment above it explaining the rationale of why that cursor style in particular.

The Inside Macintosh pages from 1985 I cited above may be what you're looking for.

Especially page 158 (I-146).

It doesn't give a long-winded rationale for why you need an X/Y hotspot offset; it does much better than that. It shows you several cursors with their hotspots, so you can see why a hotspot is needed. And it lists the data structure to support it.
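To make the mechanism concrete, here is a hedged sketch (my own illustration, not Apple's QuickDraw code) of why the hotspot field matters: the cursor image is drawn offset so that its hotspot pixel, not its top-left corner, lands on the mouse position. Field names mirror the Cursor record quoted above; the compositing rule ("mask selects, data paints") is a simplification.

```python
# Hypothetical sketch of hotspot-aware cursor drawing on a 1-bit screen.
# Not Apple's code; field names follow the Cursor record from Inside
# Macintosh, and the compositing rule is deliberately simplified.

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Cursor:
    data: List[int]            # 16 rows, each a 16-bit cursor image row
    mask: List[int]            # 16 rows of transparency mask
    hot_spot: Tuple[int, int]  # (v, h) point aligned with the mouse

def draw_cursor(screen: List[List[int]], cur: Cursor,
                mouse_v: int, mouse_h: int) -> None:
    """Paint the cursor so its hotspot lands on the mouse position."""
    top = mouse_v - cur.hot_spot[0]    # subtract the hotspot offset --
    left = mouse_h - cur.hot_spot[1]   # this is all the "extra" math
    for row in range(16):
        for col in range(16):
            bit = 1 << (15 - col)
            if cur.mask[row] & bit:    # opaque pixel: paint it
                v, h = top + row, left + col
                if 0 <= v < len(screen) and 0 <= h < len(screen[0]):
                    screen[v][h] = 1 if cur.data[row] & bit else 0
```

Two subtractions per update is the entire cost of supporting arbitrary hotspots, which is why "saves an offset calculation" is such a weak rationale for the (0,0) story.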


but that is 4 years later than the xerox optical mouse tech report, and from a different company which copied their default mouse pointer style from xerox. it doesn't bear on the question of whether xerox was implementing cursors without hotspot coordinates at the time that they adopted the left-leaning shape

(i suspect xerox mouse cursors always had variable hotspot coordinates because it's, what, six microseconds extra in the screen update to subtract them? and i think smalltalk-76 mouse cursors have hotspots. but 01988 or even 01985 is way too late)


In one of my comments on the Stack Exchange answer, I linked to a couple of Xerox Alto cursors with different hotspots:

https://ux.stackexchange.com/questions/52336/why-is-the-mous...


this is great information, thanks! i think it confirms that even back in the mid-70s they used different hotspots


Within 8 thousand years, people will figure out variable length storage and processing for integers. I promise.


do not underestimate human stupidity


The arrow has a white outline around it, so the hotspot is at the tip of the black arrow, at (1,1).


Bingo! Now that you jogged my memory, I can confirm this.

The next question is why you need a white outline around the black arrow.

This is easy to answer: if you didn't do that, what would the black arrow look like against a black background?


Some DE solved that by having an inverse outline.


Took me a really long pause to figure out what DE meant, so to save others from similar waste: “desktop environment”.


I'm pretty sure even on Windows there is the option of having the whole cursor be the inverse of the background.


The format has a special color for it; you can mix inverse pixels anywhere in a classic Windows cursor, IIRC.
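For the curious, the classic monochrome cursor format works (as I recall it; this is a sketch from memory, not authoritative) with a pair of bitmasks per pixel: the result is `(screen AND and_bit) XOR xor_bit`. The "special color" is the AND=1, XOR=1 combination, which inverts whatever is underneath, and it can indeed be used on any pixel.

```python
# Sketch of the classic AND/XOR monochrome cursor compositing rule.
# Per pixel: result = (screen_bit AND and_bit) XOR xor_bit.
#   AND=0, XOR=0 -> forced black
#   AND=0, XOR=1 -> forced white
#   AND=1, XOR=0 -> transparent (screen shows through)
#   AND=1, XOR=1 -> inverted screen pixel (the "special color")

def composite_pixel(screen_bit: int, and_bit: int, xor_bit: int) -> int:
    return (screen_bit & and_bit) ^ xor_bit
```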


And if I'm not wrong, it still applies to today's Mac interface. The cursor still has a white outline all around.


Yup. You can even customise both the inner and outer colours as an accessibility feature!


What??? TIL.

A lot of the accessibility features are actually neat even to those without the need for them.


Yeah, it’s become super useful for me to color code the cursors between my work and personal Macs.


Seriously. Anyone over the age of about 18 should spend a bit of time going through them, there are many useful things (zoom part of screen is a powerful help on macOS for example).


I was just about to say that.

There's an amazing video by Posy documenting mouse cursor history, and even provides his own cursor pack:

https://www.youtube.com/watch?v=YThelfB2fvg

http://www.michieldb.nl/other/cursors/


As you said ten years ago https://news.ycombinator.com/item?id=7253841

The scary part is that you will likely be saying it again in another ten years and again and then you’ll die as “that weird cursor offset obsessed fanatic”.


Wouldn’t be hard to get a cursor engraved on a tombstone. “Returned to origin (1,1)”.


> The hotspot for the arrow cursor was not (0,0) but was (1,1).

> Can anyone explain why?

My assumption (not having an old Mac or documentation to confirm it...) is that the tip of the cursor had to be at (1, 1) to allow for a pixel's worth of mask around the outer edge of the tip.


>the arrow pointer was never the only cursor. On the first Macintosh

the first macintosh was very late to the party, there had already been GUI cursors for about a decade at PARC, and cursor styles had settled down to some standards.

in the early days of GUI cursors on relatively low-resolution displays (by today's standards), an important issue was to reduce the amount of calculation and squinting the human had to do to identify the hotspot, so you could accurately select/swipe what you wanted. the tilted arrow cursor points right at its hotspot quite effectively even if the tip pixel is blurred, as does the i-beam (whose vertical offset is not as important to know accurately). the five-fingered hand for moving bulk selections also does not require accurate placement, although I think the hotspot is at the end of a finger.

early GUIs let you edit your own cursors and hotspots.


As I understand it the cursor angles are mostly a function of cursors originally being pixel art. In pixel art you need nice integer ratios to your angles or the line starts looking wobbly.

If you then design your cursor to be nice and pointy so it doesn't obscure the thing you're trying to click too much you end up with two angles where if you bisect them you're no longer at an integer ratio angle. So some fudging for the cursor tail is required.

Of course these days cursors are generally high resolution vector art, so none of the integer ratio angle concerns apply, but I assume most vector cursors originally got traced from their pixel predecessors.

This is all from memory and I was quite young back when any of this was relevant, so caveat emptor.
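The "nice integer ratio" point can be illustrated with a tiny sketch (my own, hypothetical): rasterize the edge y = floor(x * slope) and measure the length of each stair step. A slope like 1/2 gives perfectly uniform steps, while an in-between slope like 2/5 gives uneven runs, which reads as a wobbly edge at icon sizes.

```python
# Illustrative sketch: rasterize a line edge by flooring y = x * slope,
# then collect the run length of each horizontal stair step. Uniform
# runs look crisp in pixel art; mixed run lengths look wobbly.

def step_runs(slope: float, width: int) -> list:
    ys = [int(x * slope) for x in range(width)]
    runs, count = [], 1
    for a, b in zip(ys, ys[1:]):
        if b == a:
            count += 1
        else:
            runs.append(count)
            count = 1
    runs.append(count)
    return runs
```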


> Fun fact! The hotspot for the arrow cursor was not (0,0) but was (1,1).

Perhaps it's because cursors have a one pixel wide black border around them to enhance contrast, but users associate the cursor's position with the first bit of white (or color) at the tip. (0,0) is colored black for a typical cursor.

Edit: ninja'ed further down.


I think you touched on a wider problem. People's shallow understanding of the world translates to a shallow worldview and shallow policies. It's kind of scary to me how much from my high school sociology class group projects became political policy decades later. Simplistic reductions, when in real life even unclogging a toilet can have complicated steps, nuanced decisions, and many caveats.


It drives me up the wall! Permit me a digression: so much has been written about the early FPS era, but discussions of rocket jumping often skip straight to Quake and omit Rise of the Triad, despite rocket jumping being necessary to complete the game! ROTT's shareware release was the same day as Marathon, another game that does come up in these discussions.


The second-highest answer is an incorrect just-so myth. It even includes a screenshot of the historically correct answer!


I was hoping that it would be lower than 188 when I clicked. It's not (196). :-(


It’s obviously wrong, to me, because of how little latency performing two additions would actually add to the system.



