> Using the word “Mentoring” is anthropomorphic and subconsciously makes you think it will learn.
I think this is a bit pedantic. Obviously the parent you’re replying to is referring to the concept of “in-context learning”, which is the actual industry / academic term for this. So you feed it a paper, and then it can use that info, and it needs steering / “mentoring” to be guided in the right direction.
Heck, the whole name “machine learning” suggests these things can actually learn, and “reasoning” suggests they can reason, instead of being fancy, directed autocomplete. Etc.
In other news: data hydration doesn’t actually make your data wet. People use / misuse words all the time, and that causes their meaning to evolve.
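To make the “in-context learning” point concrete, here’s a minimal sketch. It assumes the OpenAI Python client with an API key in the environment; the model name and the paper file are purely illustrative. The point it demonstrates: the “learned” paper exists only inside the prompt, and a fresh call without it remembers nothing.

```python
# Sketch of "in-context learning": the model only "knows" the paper while
# it sits in the prompt; nothing is ever written back to the weights.
# Assumes the OpenAI Python client; model name and file are illustrative.
from openai import OpenAI

client = OpenAI()
paper_text = open("paper.txt").read()  # hypothetical paper to "mentor" it with

# With the paper in context, the model can use that info.
with_context = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "Answer using only the provided paper."},
        {"role": "user", "content": f"{paper_text}\n\nSummarize the key claim."},
    ],
)

# A fresh call without the paper has no memory of the previous one:
# the "learning" lasted exactly as long as the context did.
without_context = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Summarize the key claim of the paper."}],
)

print(with_context.choices[0].message.content)
print(without_context.choices[0].message.content)
```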
Anthropomorphism is a subtle marketing tool used by these big AI companies, who are financially incentivized to push the myth of AGI and want everyone to believe they're right on the cusp of achieving it. It's good to be pedantic in this case, we shouldn't anthropomorphize these tools.
This is just a “hurr durr AI companies evil” argument without substance.
It’s the people that are the problem. Nobody told the grandparent to use “mentoring” as a word, and my argument is that it’s a complete overreaction to classify them as anthropomorphizing AIs. I’d argue defaulting to that accusation would be an insult to them, and it’s super pedantic.
But in-context learning is like a student only remembering what they’re being taught for the duration of the discussion. That’s not really how mentoring is meant to work, so pointing out the issues with the metaphor seems pretty reasonable.
In other news: That words can change meaning doesn’t mean that every possible change in meaning would be beneficial to communication and therefore desirable. Would you advocate in support of someone suggesting to use “left” to mean “right” simply on the basis words can change in meaning?
I agree it’s pedantic and personally don’t get bent out of shape about people anthropomorphizing the LLMs. But I do think you get better results if you keep the text-prediction-machine mental model in your head as you work with them.
And that can be very hard to do, given that the UI we most often interact with them through is a chat session.
Absolutely, but there is no evidence that the grandparent was doing that. All they did was use the word “mentoring”, and my argument is not that anthropomorphizing isn’t a problem - it is - but that the response to this particular HN comment is super pedantic.
Obviously the real people who classify AI as human intelligence aren’t going to be the top comment on a thread about reviewing LLMs’ PhD-level papers. They are in very different, much more problematic areas of the internet.
It really is failing more, and it’s well known amongst industry experts. It’s the oldest, largest, and most utilized region of AWS.
I’ve heard people say that the underlying physical infrastructure is older, but I think that’s somewhat speculative, although reasonable. The current outage is attributed to a “thermal event”, which does indeed suggest a physical hardware cause.
It’s also the most complex region for AWS themselves, as it’s the “control plane” for many of their global services.
Most of the other regions are fairly stable. Ohio (us-east-2) is a great choice if you're just starting out. Not sure about ca-central-1, but I've never heard anything bad about it.
It wasn't heavily utilized when I worked at AWS, which was until 2024.
If your customers are clustered in Toronto and Montreal, it probably makes a lot of sense to use ca-central-1. If you've got a lot of customers in Western Canada, us-west-2 is gonna have better network latency.
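If you'd rather measure than guess, here's a quick-and-dirty sketch. It times only the TCP handshake to the public regional DynamoDB endpoints (which really do exist per region), so treat the numbers as a crude proxy for real application latency, not a benchmark.

```python
# Rough sketch: compare TCP connect latency to two regional AWS endpoints
# from wherever your users are. Handshake time is only a crude proxy.
import socket
import time

def connect_ms(host: str, port: int = 443, samples: int = 5) -> float:
    """Median TCP connect time to host:port, in milliseconds."""
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            pass
        times.append((time.perf_counter() - start) * 1000)
    return sorted(times)[len(times) // 2]

for region in ("ca-central-1", "us-west-2"):
    host = f"dynamodb.{region}.amazonaws.com"
    print(f"{region}: {connect_ms(host):.1f} ms")
```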
Other than a couple regions that had problems with their local network infrastructure (sa-east-1 was like that), there's little or nothing to differentiate the regions in terms of physical infrastructure and architecture.
For me it also lacks observability. It has been a few years since I last used Clojure, but I found manifold to be a much better fit for actual production code that you want to optimize.
I loved ztellman’s “everything must flow” talk on the topic.
That means we have had various exchanges in the past, but I’m operating under a different username here.
I’ve had several PRs accepted into it. I think manifold got a lot of things right, if only Zach hadn’t left the community.
Unfortunately I left the community for similar reasons, I have a different vision of how the language should evolve than the people in charge, but wasn’t as vocal about it. I suspect there are more people like me.
It’s fine, like Rich Hickey said, it’s his project and we have no right to expect anything.
I am forever glad for what Clojure taught me, it made me a much better developer.
Zach wanted to introduce a community-driven steering committee, make the language easier for beginners to adopt, and standardize library choices.
This did not align with Cognitect’s centralized stewardship.
This was around the time there was quite some commotion in the community around this, with Rich posting his infamous “Open Source Is Not About You” post, which was in direct response to @cemerick’s Twitter post: https://x.com/cemerick/status/1067111260611850240
That was in Nov 2018. Two months later Tellman published his "Elements of Clojure". I don't remember the date when he retired from the community. And I don't remember him publicly saying anything about that drama, but I do know for a fact that he joined a team with a ton of Scala. How do I know that? Not sure; he might have told someone I know, or I might have heard it from him in person. Honestly, I don't remember.
I can't speak for Zach, but in 2019 on The REPL podcast #23 (https://www.therepl.net/episodes/23/), at 00:41:01 and 00:45:31, he talks about being a bit unhappy with how the core team communicated about his arity-optimized vec/hash class proposal.
He then talks about Aphyr's and Chas Emerick's similar experiences, and laments how in the earliest days, it was still possible to contribute, and how when core development closed off, it was never articulated up front until "Open Source Is Not About You", which is its own can of worms.
Overall, it's a good and nuanced discussion, but it's obvious he wasn't in unreserved love with the language, so I'm not surprised he left.
The Linux kernel doesn’t differentiate between security bugs and other bugs, which I think is the main complaint here. They have the same process.
So the issue is bigger than the mishandling of a single issue, it’s a fundamental process issue around security for one of the most impactful projects in the entire space.
Instruction following is a specific fine tuning / post training phase, yes.
That’s why you see “base” vs “instruct” models for example — base is just that, the basic language model that models language, but doesn’t follow instructions yet.
The open-weights models especially have lots of variants, e.g. tuned for math, tuned for code, tuned for deep thinking, etc.
But it’s definitely a post-training thing, usually done by generating synthetic data using other models.
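A hands-on way to see the base-vs-instruct difference, sketched with Hugging Face transformers. The Qwen model names are just one example of a base/instruct pair; any similar pairing would show the same thing: the base model continues text, while the instruct variant was post-trained to treat the input as an instruction (wrapped in its chat template).

```python
# Sketch of base vs instruct using Hugging Face transformers.
# The Qwen pair is illustrative; any base/instruct pairing works.
from transformers import AutoModelForCausalLM, AutoTokenizer

prompt = "Write a haiku about autumn."

# Base model: plain next-token prediction, so it may just ramble on
# from the prompt instead of obeying it.
base_tok = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B")
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B")
ids = base_tok(prompt, return_tensors="pt").input_ids
print(base_tok.decode(base.generate(ids, max_new_tokens=40)[0]))

# Instruct model: the chat template wraps the prompt in the special
# tokens that instruction post-training taught the model to follow.
inst_tok = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")
inst = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")
chat = inst_tok.apply_chat_template(
    [{"role": "user", "content": prompt}],
    add_generation_prompt=True,
    return_tensors="pt",
)
print(inst_tok.decode(inst.generate(chat, max_new_tokens=40)[0]))
```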
This isn’t my experience, but I think it depends highly on the segment. We have mainly senior C++ devs (database company), and it’s still a challenge to find great engineers.
I think the current job market isn’t “one size fits all”. Having said that, obviously if they’re getting laid off, they may very well be in the segment that’s less desirable.
I've got a couple of friends that left London to go back to Poland during covid. They first continued to work remotely, but ended up switching to Polish companies because the pay was better.
Yes, I think salaries are still a bit lower, but the gap has closed a lot. And the cost of living is lower in Poland, plus there is a tax break for self-employed contractors that means you only pay ~20% tax compared to ~40% in the UK.
With those two factors you could easily end up better off overall, especially if you have kids.
The kids factor is even bigger if you move back close to relatives. The ability to drop your children at grandma's instead of paying for childcare is an easy 1k a month you're saving.
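Back-of-the-envelope, with purely illustrative numbers: only the ~20%/~40% tax rates and the 1k/month childcare figure come from the comments above, the gross salary is made up.

```python
# Illustrative arithmetic only; the gross figure is a made-up round number.
gross = 60_000  # hypothetical annual gross, same currency on both sides

uk_net = gross * (1 - 0.40)    # ~40% UK tax from the comment above
pl_net = gross * (1 - 0.20)    # ~20% Polish contractor tax
childcare_saving = 12 * 1_000  # grandma instead of paid childcare

print(f"UK net:             {uk_net:,.0f}")
print(f"PL net:             {pl_net:,.0f}")
print(f"PL net + childcare: {pl_net + childcare_saving:,.0f}")
# At identical gross pay, the tax gap alone is 12,000 a year here,
# and skipping paid childcare adds another 12,000 on top.
```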
Daycare has been completely free in Poland since 2024 (you need to submit an application to ZUS, but there are no limits; it's always accepted), even the private ones.
You only pay separately for food (10 zł per day the child actually attends the daycare).
I switched from a Polish company to a German one (both remote), but my pay is more or less the same.
The difference is that in Poland to get that money I have to be a "top performer" with a lot of stress and not a lot of time, while in Germany I can be just a mid dev.