Hacker News | curuinor's comments

It is adversely selected, but it's not debt, it's equity, so price action can go real fast and nobody will be burned except folks who soberly-or-not opted into this. Everyone _knows_ Elon is the way he is, so nobody will be _surprised_ at things. No surprise, no crisis.


They're going to force an S&P 500 index listing on IPO day, so we're all going to be forced to baghold this regardless of whether we want to or not, unless you've got $0 in any major retirement fund.


So far only Nasdaq has changed its rules and will allow fast entry after 15 trading days. S&P has not changed its rules, at least not yet. Total indexed capital tracking the Nasdaq is about $1.4T vs $16T for the S&P 500. The stated reason for fast-tracking is that the indices are supposed to be a broad representation of the market, and leaving a $2T company out would be a significant tracking error.

I do agree that the optics of this aren’t great, and it’s rather easy to be cynical about motives.


I did a bit of research on this some time ago and it's not as bad as I originally thought. Index funds only need to count the liquid float of the company. So if SpaceX's total valuation is $2 trillion but the float is 5%, then they need to count it as $100 billion for the purposes of index weight. Still more than I want, but not catastrophic.
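The float adjustment described above is simple arithmetic; all figures below are the commenter's hypothetical numbers, not published data:

```python
# Hypothetical float-adjusted index weight, per the comment above.
# All figures are illustrative, not real SpaceX or index data.
total_valuation = 2_000_000_000_000   # $2T total valuation (hypothetical)
free_float = 0.05                     # 5% of shares publicly tradable (hypothetical)

# Index providers weight by float-adjusted market cap, not full valuation.
float_adjusted_cap = total_valuation * free_float   # $100B

# Resulting weight in a hypothetical $16T total-float index:
index_total_float_cap = 16_000_000_000_000
weight = float_adjusted_cap / index_total_float_cap   # ~0.006, i.e. well under 1%

print(f"float-adjusted cap: ${float_adjusted_cap:,.0f}")
print("index weight:", weight)
```

So a 5% float shrinks the index footprint by a factor of 20 relative to the headline valuation.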


Oh yes, thanks for reminding me. I’m going to cash out the 401(k).


You’ll pay massive penalties on that. Another option is options (heh), but I’m not finance-literate enough to know how to pull it off.


You only pay penalties if you withdraw from the 401(k). Most 401(k) plans have some kind of money market fund, bond fund, or similar.


You can just reallocate away from an index fund.


I’ve made my peace with the “massive penalties”. I benefited from employer match in the past. I want the money now, not when I retire.


You gotta do what you think is best, but I hope for future you's sake you decide to not pull the money out. Or if you do you have other retirement plans.

I'm trying to help my parents now that they're at retirement age and am seeing first-hand what not planning for your future looks like. They hit retirement with nothing but a small Social Security check every month. Not even enough to cover rent in most places.

I don't know how much you have in your 401k, but it will be worth literally hundreds of thousands more if you wait until retirement to pull it out. You aren't just paying the penalties now, you're paying for potentially decades of lost compounding.


Retirement plan is rappelling accident before dotage.


Well can't argue with that lol

But if by some tragedy you don't die young, your older self is gonna be pissed at younger you for costing him hundreds of thousands of dollars.


You could just buy deep out of money SP500 puts expiring in 1+ year. That way you would be "insured" against the bubble popping.

The thing is, every dollar you spend on insurance is a dollar (and its interest) you lose. Furthermore, we don't know when it will pop. 1 year? 5 years?

The more reasonable solution is probably to gradually reduce exposure to US markets by selling SP500 shares and moving into Europe and emerging-markets ETFs. No need to cash out the 401k.
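The insurance-cost point above is easy to see with a toy payoff calculation. Every number here (index level, strike, premium) is made up for illustration; real option pricing is far more involved, and the premium recurs every year you stay hedged:

```python
# Sketch of "deep out-of-the-money put as crash insurance".
# All numbers are hypothetical, chosen only to show the trade-off.
index_level = 6000          # hypothetical S&P 500 level today
strike = 4200               # put strike ~30% below spot (deep OTM)
premium = 40                # hypothetical premium per index unit, per year

def put_payoff_at_expiry(level_at_expiry: float) -> float:
    """Net profit/loss of one put held to expiry."""
    intrinsic = max(strike - level_at_expiry, 0.0)
    return intrinsic - premium

# If the bubble never pops, the premium is simply lost, every year:
print(put_payoff_at_expiry(6500))   # prints -40.0 (pure drag on returns)
# If the index crashes ~40%, the put pays off:
print(put_payoff_at_expiry(3600))   # prints 560 (4200 - 3600 - 40)
```

This is the "every dollar you spend on insurance is a dollar you lose" point in miniature: the hedge only wins if the crash arrives before the accumulated premiums exceed the payout.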


You should backtest this strategy over the last 20 years before you make serious decisions off of the vibe from internet comments


20 years is not enough.

If you just look at the past 20 years, the US has had exceptional returns compared to the rest of the world.

The thing is, historically, high PE ratios like what we're seeing in the US do not correlate with short term returns that are as high. Expected future returns decrease as the PE ratios go up in a pretty linear fashion.

https://am.jpmorgan.com/us/en/asset-management/institutional...


Why 20 years? Just because we know, post hoc, that the USA outperformed other places in the last 20 years in no way means the next 20 years will be the same.

If you want a different point to backtest from, try Japan in the 80s and early 90s


What's the point of backtesting? Does backtesting say anything about the future?


The point of backtesting is to allow you to do what you want to do with a veneer of being data driven.


What are you basing this on?

I'm not an expert, but it looks to me like 80% of my allocation won't be tracking SpaceX, because it's mid cap or small cap etc., and the 20% that's in the Vanguard growth index might? I assume whoever sets the rules for the fund could change them to say companies must be listed for X months if they want to avoid this, right?

And I can change my allocation.

edit: Actually wait, isn't it only nasdaq 100 that's tracking it early, after 15 days rather than 3 months of trading? So 0% of my 401k is exposed to buying it quickly after IPO already, I think.


So far they're only getting fast-tracked into the Nasdaq 100, not the S&P 500.


401k rollovers into IRA aren't that hard these days and you could always use that IRA to have a more customized strategy, more specifically direct indexing of a major fund minus key ticker symbols you don't want exposure to. Of course, that all presumes that you won't regret excluding this long term.


The question is, is everyone integrating a special SpaceX correction in their algorithmic trading? Because if a dip in the index due to SpaceX causes old algorithms to think it’s a more structural issue (well, more than it is), and sell on that indicator, will that cause a cascade?


Obviously not. If algos work in China, they will work with SpaceX.


If your retirement fund is an IRA you can invest it in any stock you want. For a 401k you probably have some fund options that are not exposed to the S&P500, like emerging markets or fixed income


Maybe this already exists, but it would be great if one of the major index ETFs omitted all the firms with problematic board governance like there is at Tesla, SpaceX.


The S&P 500 had a rule from 2017 to 2023 barring companies with dual classes of shares (the sort that allow founders to maintain control, like GOOG and META have) from ever being in the index if they went public after the rule was instituted. To be clear, META and GOOG were both in the index; the rule was to prevent new companies from coming along and doing the same. (I think it was related to SNAP going public?)

They removed it largely because investors wanted higher returns and the tech companies with such dual classes (1) were doing really well, so the S&P ended up caving on the rule.

1: Perennial hot button around here Palantir did this in a more extreme fashion than most. The three founders' F class shares will always hold 49.9999% of the votes, and the early investors' B class shares have 10 votes each, compared to the publicly traded A class shares' 1 vote.


My money's all in Bitcoin *pats himself on the back*


Kinda shocked SpaceX hasn't bailed out the DOGE-holders at this point..


the power of yet


The point of a rug pull is for the holders to lose money not to be bailed out.


Friendly reminder that SpaceX is going straight to the index; Elon agitated for it. The 401(k) of everybody in America is serving as a bailout fund for X and now Cursor, and whatever other trash he hoovers up.


They are going straight to the Nasdaq. Most index investors are invested in the S&P 500


Nasdaq is an exchange. S&P 500 is an index.

S&P 500 includes companies from multiple exchanges. Like Nvidia, which lists on Nasdaq.


Nasdaq 100…

https://www.morningstar.com/funds/spacex-ipo-how-index-funds...

> Nasdaq was the first to consider a rule change that would grant mega IPOs like SpaceX early admission to its flagship Nasdaq-100 index. The exchange and index provider began a consultation period in February to assess the viability of and industry response to a proposed “fast entry” rule. The change was approved on March 30 and will be effective on May 1.


Omnissiah-bothering, I call it.


let's see the site, if we can't have a primary source


Bayesian inference is, to be overly simple, a way to write probabilistic if-statements and fit them from data. The "if" statement in this case is "if the bug is there...", and of course it's often the case that in actual software that if statement is probabilistic in nature. This thing does git bisect with a flaky bug with bayesian inference handling the flakiness to get you a decent estimate of where the bug is in the git history. It seems to be usable, or at least as usable as a Show HN thingy is expected to be.
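The idea can be sketched concretely. This is a minimal illustration of Bayesian bisection over a flaky bug, not the submitted tool's actual code; the commit count, failure probabilities, and observations are all made up:

```python
# Minimal sketch of Bayesian inference over "which commit introduced a
# flaky bug". Prior: the bug was equally likely introduced at any commit.
# A test at commit i fails with probability p_fail_buggy if the bug is
# present there (flaky, so < 1.0) and p_fail_clean otherwise (false
# positives possible). Both rates are hypothetical.

def update(posterior, tested_commit, failed,
           p_fail_buggy=0.6, p_fail_clean=0.05):
    """One Bayes update. A bug introduced at commit j is present at
    tested_commit exactly when j <= tested_commit."""
    unnormalized = []
    for j, p in enumerate(posterior):
        p_fail = p_fail_buggy if j <= tested_commit else p_fail_clean
        likelihood = p_fail if failed else 1.0 - p_fail
        unnormalized.append(p * likelihood)
    total = sum(unnormalized)
    return [p / total for p in unnormalized]

n = 10                           # 10 commits; the bug actually enters at #6
posterior = [1.0 / n] * n        # uniform prior

# Simulated flaky test runs: (commit tested, did the test fail?)
for commit, failed in [(5, False), (7, True), (6, True), (5, False), (6, True)]:
    posterior = update(posterior, commit, failed)

best = max(range(n), key=lambda j: posterior[j])
print("most likely first bad commit:", best)   # 6, despite the flakiness
```

Unlike plain `git bisect`, a single flaky failure doesn't derail the search; it just shifts probability mass, and repeated runs at the same commit sharpen the estimate.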


Less than, not more than


Connectionist models have lots of theory by theoreticians explicitly pissed off about Chomsky's assertion that there is an inbuilt ability for language. Jay McClelland's office had a little corkboard thingy with Chomsky mockery on the side, for example. Putting forth even the implicature that the present direct descendants are intellectual descendants of Chomsky is like saying Protestants are intellectual descendants of Pope Leo X.


Perhaps a failure of communication -- I was indeed attempting to say that Chomsky was wrong and his ideas were interesting, but more or less a dead end.


>Jay McClelland's office had a little corkboard thingy with Chomsky mockery on the side, for example.

I've never understood why the idea of linguistic nativism is so upsetting to people.


Indeed, operating human lips, teeth, tongue, and larynx is far beyond language models.


Apologies if I'm stepping on a joke, but just in case: Nativism is about cognitive capacities, not sensorimotor ones. All apes could easily communicate just as well as Helen Keller, yet none of them have ever asked a question, much less written a book!


No joke. Same sensorimotor neurons in the human speech apparatus have cognitive analogues, developed together over vast expanses of history.


Give language models 500 million years and let's revisit this. One of the reasons robots have a harder time reaching parity than higher intelligence does is that evolution has been cooking them for a long time.


Well, that anecdote is referencing the Scruffies vs. Neats war[1], within which the nativism debate was merely a somewhat-archaic undercurrent.

IMHO, a lot of the more specifically anti-nativist sentiments of today are based in linguistics itself rather than philosophy, CS, or CogSci, where again it is part of a broader (and much dumber) debate: whether linguistics is the empirical study of languages or the theoretical study of language itself. People get really nasty when they're told that they work in an offshoot field for some reason, which is why I blame them for the ever-too-common misunderstandings of Chomsky -- the most common being "Universal Grammar has been disproven because babies don't speak English in the womb".

If Chomsky weren't so obviously right, this would be a worrying development! Luckily I expect it to be little more than a footnote in history, so it's merely infuriating rather than depressing.

[1] Minsky, 1991: https://ojs.aaai.org/aimagazine/index.php/aimagazine/article...


metabase.com, but metabase is intended for business analyst types and is AGPL, with shenanigans for embedding and an enterprise edition thing


Man, I've seen the SQL Metabase emits, it's not great. Like, doing a massive join across 10 tables and selecting all the columns from all the tables - to only return the average of one column from one table.



The LLM tics are strong in this writeup:

"No manual overrides, no exceptions."

"Our VDP isn't just a bug bounty—it's a security partnership"


Wow, you hit a nerve with that one. There have been some quick edits on the page.

Another:

> Security isn't just a checkbox for us; it's fundamental to our mission.


They delved deep and spent a whole 2 minutes with ChatGPT 4o getting those explanations and apologies in play.


That’s the part that makes me laugh. If you’re going to try to pass off ChatGPT as your own work, at least pay for the good model.


Hey CodeRabbit employees

> The researchers identified that Rubocop, one of our tools, was running outside our secure sandbox environment — a configuration that deviated from our standard security protocols.

This is still ultra-LLM-speak (and no, not just because of the em-dash).


A few years ago such phrases would have been candidates for a game of bullshit bingo, now all the BS has been ingested by LLMs and is being regurgitated upon us in purified form...


Absolutely. In my experience every AI startup is full of AI maximalists. They use AI for everything they can - in part because they believe in the hype, in part to keep up to date with model capabilities. They would absolutely go so far as to write such an important piece of text using an LLM.


The NFT smell completely permeates the AI "industry." Can't wait for this bubble to pop.


For anyone following along in the comments here. Code Rabbit's CEO posted some of the details today, after this post hit HN.

The usual "we take full responsibility" platitudes.


I would like to see a diff of the consequences of taking full vs half-hearted responsibility.


I’m sure an “intern” did it.


I wonder how many of these intern-type tasks LLMs have taken away. The type of tasks I did as a newbie might have seemed not so relevant to the main responsibilities, but they helped me gain institutional knowledge and generally get a feel for "how things work" and who/how to talk to to make progress. Now the intern will probably do it using LLMs instead of talking to other people. Maybe the results will be better, but that interaction is gone.


I think there is an infinite capacity for LLMs to be both beneficial, or negative. I look back at learning and think, man, how amazing would it have been if I could have had a personalized tutor helping guide me and teach me about the concepts I was having trouble with in school. I think about when I was learning to program and didn’t have the words to describe the question I was trying to ask and felt stupid or an inconvenience when trying to ask to more experienced devs.

Then on the flip side, I’m not just worried about an intern using an LLM. I’m worried about the unmonitored LLM performing intern, junior, and ops tasks, and then companies simply using “an LLM did it” as a scapegoat for their extreme cost cutting.


I would love to know the acceptable version.


Something not copy-pasted from an LLM would be more acceptable.


I feel like that would also be unacceptable.


Not a single mention of env vars. Just shifting the blame to rubocop.


They seem to have left out a point in their "Our immediate response" section:

- within 8 months: published the details after researchers publish it first.


Hmm, is it normal practice to rotate secrets before fixing the vulnerability?


They first disabled Rubocop to prevent further exploitation, then rotated keys. If they had waited for the fix to deploy, that would have meant letting compromised keys remain valid for 9 more hours. According to their response, all other tools were already sandboxed.

However their response doesn't remediate putting secrets into environment variables in the first place - that is apparently acceptable to them and sets off a red flag for me.


"According to their response all other tools were already sandboxed."

Everything else was fine, just this one tool chosen by the security researcher out of a dozen of tools was not sandboxed.


Yeah, I thought the same. They were really unlucky, the only analyzer that let you include and run code was the one outside of the sandbox. What were the chances?


> putting secrets into environment variables in the first place - that is apparently acceptable to them and sets off a red flag for me

Isn't that standard? The other options I've seen are .env files (amazing dev experience but not as secure), and AWS Secrets Manager and similar competition like Infisical. Even in the latter, you need keys to authenticate with the secrets manager and I believe it's recommended to store those as env vars.

Edit: Formatting
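The trade-off being debated can be sketched briefly. The env-var read below is standard `os.environ` usage; the vault client and its `fetch` method are hypothetical stand-ins, not a real SDK:

```python
import os

# Sketch of the two approaches discussed above. Env vars are the common
# baseline, but any code that escapes a sandbox can dump the whole
# environment (os.environ, /proc/<pid>/environ). A vault lookup keeps
# long-lived secrets out of the ambient process state.

def get_secret_from_env(name: str) -> str:
    """Baseline: secret injected as an environment variable; fail fast."""
    value = os.environ.get(name)
    if value is None:
        raise RuntimeError(f"missing required secret: {name}")
    return value

def get_secret_from_vault(client, name: str) -> str:
    """Secret fetched on demand; only a short-lived auth token is ambient.
    The client API here is made up for illustration."""
    return client.fetch(name)   # hypothetical SDK call

# Demo value only; never hardcode real secrets.
os.environ["DEMO_API_KEY"] = "not-a-real-key"
print(get_secret_from_env("DEMO_API_KEY"))
```

Even with a vault, the credential that authenticates to the vault has to live somewhere, which is why the parent's "isn't that standard?" question is fair; the win is scoping and lifetime, not eliminating ambient secrets entirely.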


You can use native authentication methods with Infisical that don't require you to use keys to authenticate with your secrets manager:

https://infisical.com/docs/documentation/platform/identities...

https://infisical.com/docs/documentation/platform/identities...


Duh. Thanks for pointing that out.


That post happened after the HN post?


They weren't published together. They managed to get the researchers to add CodeRabbit's talking points in after the fact, check out the blue text on the right hand side.

https://web.archive.org/web/diff/20250819165333/202508192240...



If I were a CodeRabbit customer, I'd still be pretty concerned after reading that.

How can CodeRabbit be certain that the GitHub App key was not exfiltrated and used to sign malicious tokens for customer repos (or even used for that in-situ)? I'm not sure if GitHub supports restricting the source IPs of API requests, but if it does, it'd be a trivial mitigation - and one that is absent from the blog post.

The claim that "no malicious activity occurred" implies that they audited the activities of every repo that used Rubocop (or any other potential unsandboxed tool) from the point that support was added for it until the point that the vulnerability was fixed. That's a big claim.

And why only publish this now, when the Kudelski article makes it to the top of HN, over six months after it was disclosed to them?
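For context on why an exfiltrated App private key is so serious: it signs short-lived JWTs that GitHub exchanges for installation access tokens on any repo the app is installed on. Below is a stdlib-only sketch of the JWT claims involved (`iat`/`exp`/`iss` per GitHub's App auth scheme); the app ID is made up, and the actual RS256 signing step, which needs the private key, is only indicated in a comment:

```python
import base64
import json
import time

# Sketch of the GitHub App JWT an attacker could mint with a leaked
# private key. Claims follow GitHub's App auth scheme; signing is omitted.

def b64url(data: bytes) -> bytes:
    """Base64url without padding, as JWTs require."""
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def build_unsigned_app_jwt(app_id: str) -> bytes:
    now = int(time.time())
    header = {"alg": "RS256", "typ": "JWT"}
    claims = {
        "iat": now - 60,    # issued-at, backdated for clock drift
        "exp": now + 600,   # GitHub caps App JWTs at 10 minutes
        "iss": app_id,      # the App's ID
    }
    signing_input = (b64url(json.dumps(header).encode()) + b"." +
                     b64url(json.dumps(claims).encode()))
    # With the private key, an attacker would append the signature:
    #   signing_input + b"." + b64url(rsa_sign_rs256(signing_input, key))
    # and then trade the JWT for installation tokens on customer repos.
    return signing_input

token = build_unsigned_app_jwt("12345")   # hypothetical app ID
print(token.decode()[:40], "...")
```

Because each JWT lives only minutes and tokens are minted per installation, the *only* durable evidence of abuse would be GitHub-side audit logs of token issuance, which is exactly why the "no malicious activity occurred" claim needs backing.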


> No customer data was accessed and the vulnerability was quickly remediated within hours of disclosure

How do they know this? Do they have any audit logs confirming it? A malicious actor could have been using this for months for all they know.


> How do they know this

They know because it would affect their fundraising, so obviously customer data wasn't affected.


Hey, this is Howon from CodeRabbit. We use a cloud provider's key vault for application secrets, including the GH private key.


what does that mean? Were the leaked keys irrelevant?


This is the third or fourth time you’ve spammed this exact comment in response to people’s perfectly legitimate questions. What is this clown-show bullshit?

