retsibsi's comments | Hacker News

The article explicitly says that the author looked at the diffs; it distinguishes this from "sitting down and actually reading the code", which they didn't do. So when plastic041 says the author spent 7 months vibe coding "without ever looking at source code", it's not unreasonable for dewey to assume that "looking at source code", in this context, actually means something stronger and excludes just looking at the diffs.

Does the original reply actually make sense in context? I can't see how.

It's a response to someone saying "you can't draw any conclusions of IQ significantly before 1950 from how the line behaves after 1950", and it says "And that’s because IQ is a statistical distribution, not an absolute measurement of intelligence."

This seems like a non sequitur to me. Am I missing something? (Bear in mind that the 'line' under discussion is an increase in unstandardised scores.)


On a given set of 1000 questions, the trend across the set of all IQ test-takers has been to answer slightly more of them correctly each year, progressively raising unstandardized scores ever since IQ testing was formalized in the 1950s.

Extrapolation is the most questionable statistical tool. While extrapolation ad absurdum is a way to show a formal predicate-logic argument to be incorrect or underspecified, it is an almost fully general attack against real datasets, which basically always have some trend line that ultimately crosses a sensible threshold like a zero bound. Showing this, however you form the trend line, is not saying a whole lot.

Extrapolation prior to 1950 is not a very useful tool to evaluate intelligence trends, and this is entirely separate from the periodic recalibration of IQ tests to keep the average at 100 (however many correct answers out of 1000 this corresponds to).
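
To make the extrapolation-ad-absurdum point concrete, here's a minimal sketch. The numbers are illustrative assumptions, not measurements from any dataset: a Flynn-sized gain of 3 points per decade, anchored at a normed mean of 100 in 1950, extrapolated linearly backwards.

    # Illustrative only: assume a linear gain of 3 IQ points per decade,
    # anchored at a normed mean of 100 in 1950 (both numbers assumed).
    slope_per_year = 3 / 10
    mean_1950 = 100

    def extrapolated_mean(year):
        # Naive linear extrapolation, forwards or backwards.
        return mean_1950 + slope_per_year * (year - 1950)

    print(extrapolated_mean(1900))  # 85.0 -- already dubious
    print(extrapolated_mean(1700))  # 25.0 -- absurd
    print(extrapolated_mean(1617))  # ~0   -- the trend line hits the zero bound

The trend line crossing zero around 1617 obviously doesn't mean nobody could think in 1617; it just means a linear fit says nothing that far outside the data.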


This is another non sequitur ... it doesn't address retsibsi's point or their question. It has nothing to do with cluckindan's comment, which is what this subthread was about.

It's because there are multiple levels of misconceptions as well as "violent agreements".

retsibsi is correct. You can't draw (meaningful) conclusions about IQ before 1950, because extrapolating from the post-1950 data gets dumber the farther back you reach, simply because of how extrapolation works.

This has nothing to do with the fact that IQ is a statistical distribution that we keep re-norming so that it "should always average 100". The Flynn effect is not in serious dispute; it's just an effect that pertains to nonstandardized results.


> And how do you define pain and pleasure?

They're not reducible, but I don't know if that means we don't have definitions; we can describe them well enough that most people (who aren't p-zombies or playing the sceptical philosopher role) know pretty well what we mean. All of our definitions have to bottom out somewhere...

> Do insects feel pain?

Nobody (except the insects) can know for sure. Our inability to know whether X is true doesn't imply X is meaningless, though.


> But how can X be a good indicator for something I want to determine if I can’t measure X either?

In the comment that started this subthread, qsera was responding to someone who said "Imo we don't even have a definition of [consciousness]". If qsera meant that we can measure consciousness in terms of pleasure and pain, then of course I agree that they were just pushing the problem back a step. But I don't think that's what they meant.


The person they were responding to said "Open models have the same performance on coding tasks now." AFAIK this is bullshit, but I'd love to be corrected if I'm wrong.


I don't mean this in an "I know better" way, just genuine curiosity: why couldn't you record a solution with pauses and then strip them from the replay file?


I tried but the change in behaviour immediately before and after the pause could be seen in the playback.

It's the time it takes to go "uhh, I'm stuck, I'd better pause" and then the bit before your brain kicks in following a pause.
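
(For reference, the stripping itself is the easy part. Assuming a replay is just a sorted list of timestamped inputs - a hypothetical format, not any particular game's - collapsing long idle gaps is a few lines. The problem is exactly what's described above: the inputs on either side of the collapsed gap still betray the hesitation.)

    # Hypothetical replay format: a sorted list of (timestamp_ms, input) pairs.
    # Collapse any idle gap longer than threshold_ms down to threshold_ms.
    def strip_pauses(events, threshold_ms=2000):
        out, shift, prev_t = [], 0, None
        for t, action in events:
            if prev_t is not None and (t - prev_t) > threshold_ms:
                shift += (t - prev_t) - threshold_ms  # swallow the excess idle time
            out.append((t - shift, action))
            prev_t = t
        return out

    # A 9-second think before the third input becomes a 2-second gap:
    print(strip_pauses([(0, "left"), (100, "jump"), (9100, "left")]))
    # [(0, 'left'), (100, 'jump'), (2100, 'left')]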


But if winning the game requires you to do shitty science and defraud the public, why play it at all? There's no desperation justification here, because anyone who can succeed in academia almost certainly has the brains and credentials to get a decent non-academic job.


Because, for one thing, some people are shitty frauds, and they're not bothered by it. Those people see messed-up incentives as an opportunity.

Do serious workers tend to get out of the field, if the incentives are wrongheaded enough? Sure. Some. Does that fix the incentives or the outcomes within that field? No, not at all.


Because it's not a requirement, and most people are not intentionally or accidentally defrauding the government.

The issue is that there is no incentive to do the additional work necessary to generate reproducible results because of the pressure to constantly generate sufficiently novel results to publish.

If you spend the additional time required to produce fully reproducible results and your competition doesn't, you're probably going to lose the game (where the game is obtaining more funding).

Not generating reproducible results doesn't mean you're a fraud, but the absence of a requirement to generate them in order to publish means that it's easier for fraudsters to operate than it would be with that requirement.


> anyone who can succeed in academia almost certainly has the brains and credentials to get a decent non-academic job.

I suspect the way this usually gets started is similar to embezzlement schemes. “Oh I’ll just borrow a few dollars from the till and pay it back tomorrow” is akin to “The manuscript is due tonight so I’ll just touch up this microphotograph to look like the other one that had bad focus.”

That escalates into forging invoices on the one hand and completely fabricated data on the other. By that point they’re in too deep to stop until they get caught.


> because anyone who can succeed in academia almost certainly has the brains and credentials to get a decent non-academic job.

That's not obviously true at all.


Because you've just spent 10-15 years of master's, PhD, and postdoc training learning how to do exactly one thing, and you're probably IN that system for another 5-10 years before realizing how totally corrupt it is.


It's definitely important to change the game, because there will (sadly) always be a supply of unscrupulous people if dishonesty is rewarded. But I do think the incentive-focused approach sometimes undermines itself. One of the ways to disincentivize dishonesty is to have strong social sanctions against dishonest people, so it's (arguably) pretty stupid to weaken this with a "don't hate the player" attitude. And we tend to work harder to prevent and punish offenses that stir our emotions, so if everyone is blasé about academic dishonesty then we'll probably continue to see lax enforcement and weak penalties.


I think this is the right tension, in that bad incentives matter, but that does not remove personal responsibility. We probably need both stronger accountability for clear misconduct and better systems that make rigor, transparency, and verification easier to pursue in the first place. The second piece gets much less attention than it should. That is a big part of what we’re trying to tackle at Liberata: https://liberata.info/beta-signup


This is definitely a good approach but I don't think it's the only one!

I absolutely agree that the idea that exercise has to be unpleasant is wrong and harmful. But there's a middle ground where the things you actively enjoy aren't sufficient to keep you fit, and so you develop a habit of doing regular exercise even when you don't feel like it and even if it's a bit boring and effortful.

Everyone's different but IME this works well provided you build up the effort level gradually, and never feel the need to push yourself to a really unpleasant degree. Eventually habit, the knowledge that it's good for you in the long run, and the fact that it usually makes you feel better in the short run make it pretty easy to stick with.


"it does suggest that avoiding your triggers [...] provides no benefit"

This is the part I'm sceptical of. When I look this up, I mostly find articles like https://theconversation.com/proceed-with-caution-the-trouble... (and the underlying studies), which mainly address the question of whether reading a trigger warning and then consuming the potentially triggering content is better than just consuming the potentially triggering content without a warning.

(The article also mentions a finding that trigger warnings have "no meaningful effect on an individual's [...] avoidance of this content"; but I think that's entirely compatible with a world where most people consume the content regardless of the warning, some are more drawn to it because of the warning, and some (including the few who are truly vulnerable) avoid it because of the warning. The effect on those vulnerable few is what's most relevant here. The article does briefly mention "unhealthy avoidance behaviours", but in the context of one university's opinion and without supporting evidence.)

What's the best evidence against trigger warnings as a means of enabling traumatised people to make an informed decision on when (and whether) to confront their triggers?


> The article does briefly mention "unhealthy avoidance behaviours", but in the context of one university's opinion and without supporting evidence.

There's not much additional context there because avoidant behavior is basically universally understood to be a bad thing in the long-term treatment of PTSD (the period immediately or shortly after the event is a different situation). There's no real serious argument against this idea, so when avoidant behavior is discussed it doesn't require an explanation of why it's bad, in the same way that an article targeted at cardiologists isn't going to explain why poor ejection fraction is an issue - it's baseline knowledge for the target audience.

The results are mixed on whether it encourages avoidance - some studies like https://www.sciencedirect.com/science/article/abs/pii/S00221... indicate that it does, others found no effect or negligible increases.

To be clear, I'm not definitively stating it causes avoidant behavior - I am saying that it might, which would be one of those 'worst case' scenarios.

Trauma groups have been part of the meta-analyses that indicate no real change in avoidance, and some studies have found the 'forbidden fruit' effect even in trauma groups, but in similar quantities to the ones that show an increase in avoidant behavior.

Fundamentally, it's hard to argue in favor of trigger warnings from a 'helping people with their PTSD' standpoint if you believe the science.

1) For them to have the effect you claim is desirable, people would need to avoid the content - but avoidant behavior is a negative when it comes to overcoming PTSD

2) The science largely indicates that they don't cause people to change their behavior in this manner at all - so as far as the desired effect goes, they don't seem to do anything.

3) There's some evidence that they might increase avoidant behavior (science would call this bad!) and some evidence they might increase people's exposure due to the 'forbidden fruit' effect (which would be bad from the standpoint of the supposed desired effect, and not necessarily good from the scientific standpoint - being unnaturally pushed towards something might also be negative vs. more 'natural' exposure, particularly when coupled with the next point)

4) A variety of studies have shown that trigger warnings increase anticipatory anxiety when they appear, which is of course a negative for anyone. I haven't been able to find any studies that specifically engage with anticipatory anxiety from trigger warnings plus follow-up exposure via the 'forbidden fruit' effect, so unlike the rest this isn't backed by science, but my gut instinct is that it would be more likely to be negative vs. something more organic. I could very well be wrong there.

I don't see any combination of piecing together these studies that could lead to a belief that trigger warnings provide value from a therapeutic standpoint.


Can you point me to some strong evidence that it's reliably counterproductive to avoid reading a book or watching a show that contains a trigger? I get that avoidance, in the sense of trying to push away all thoughts of the trauma and avoid all possible reminders, is generally considered counterproductive. And exposure, at the right times and in the right ways, can be very helpful (or absolutely necessary). But there's a big difference between those facts and the idea that it's bad for a PTSD sufferer to have the option of sometimes deciding not to actively expose themselves to triggering media.


https://www.ptsd.va.gov/understand/what/avoidance.asp

> A combat Veteran may stop watching the news or using social media because of stories or posts about war or current military events.

https://www.verywellmind.com/ptsd-and-emotional-avoidance-27...

> The avoidance cluster of PTSD symptoms involves efforts to avoid distressing memories, thoughts, or feelings, and external reminders like discussions about the traumatic event or encounters with people or places associated with it.

I don't see how specifically avoiding content that contains triggers is anything but the avoidance behavior discussed above - avoiding the news or discussions about war is pretty explicitly facilitated by trigger warnings, whether before a clip plays on the news, in the warnings people post at the top of their social media content, etc. And media containing that content falls pretty explicitly under "external reminders".

Like, I don't think someone who has been physically tortured and is dealing with PTSD should watch Hostel or other torture porn, and I don't think a vet with PTSD should watch a compilation video of some of the worst horrors of war. So I'm not arguing for massive exposure or intentional forced exposure, etc. But the fundamental issue is that going out of your way to prevent yourself from being exposed to it at all, which is what trigger warnings would facilitate if they worked, is pretty definitionally avoidant behavior.


I've seen evidence that reading a trigger warning and then consuming the content might be worse than just consuming the content without a trigger warning.

But is there any good reason to doubt that trigger warnings can be helpful in the obvious way: someone sees the trigger warning and makes an informed decision to avoid the content?


The nature of that setup makes it incredibly difficult to research. You'd essentially have to prove a negative.

Of course, that won't stop people who are anti-trigger-warning from using the irrelevant research (they don't work if you don't heed them... duh) to push their agenda.

