It is adversely selected, but it's not debt, it's equity, so price action can go real fast and nobody will be burned except folks who soberly-or-not opted into this. Everyone _knows_ Elon is the way he is, so nobody will be _surprised_ at things. No surprise, no crisis.
They're going to force an S&P 500 index listing on IPO day, so we're all going to be forced to baghold this whether we want to or not, unless you've got $0 in any major retirement fund.
So far only Nasdaq has changed its rules and will allow fast entry after 15 trading days. S&P has not changed its rules, at least not yet. Total capital indexed to the Nasdaq-100 is about $1.4T vs. $16T for the S&P 500. The stated reason for fast-tracking is that the indices are supposed to be a broad representation of the market, and leaving a $2T company out would be a significant tracking error.
I do agree that the optics of this aren’t great, and it’s rather easy to be cynical about motives.
I did a bit of research on this some time ago and it's not as bad as I originally thought. Index funds would only need to count the liquid float of the company. So if SpaceX's total valuation is $2 trillion but the float is 5%, they'd count it as $100 billion for index-weighting purposes. Still more than I want, but not catastrophic.
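To make the arithmetic concrete, here's a rough sketch of float-adjusted weighting with made-up numbers (the $50T total for the index is an assumption, not the real figure):

```python
# Rough illustration of float-adjusted index weighting (illustrative numbers only).
total_valuation = 2_000_000_000_000      # hypothetical $2T SpaceX valuation
free_float = 0.05                        # assume only 5% of shares actually trade

float_adjusted_cap = total_valuation * free_float        # $100B counted by the index
index_float_adjusted_total = 50_000_000_000_000          # assumed ~$50T for the whole index

weight = float_adjusted_cap / index_float_adjusted_total
print(f"Counted cap: ${float_adjusted_cap/1e9:.0f}B, index weight ~= {weight:.2%}")
```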
You gotta do what you think is best, but I hope for future you's sake you decide not to pull the money out. Or, if you do, that you have other retirement plans.
I'm trying to help my parents now that they're at retirement age, and I'm seeing firsthand what not planning for your future looks like. They hit retirement with nothing but a small Social Security check every month. Not even enough to cover rent in most places.
I don't know how much you have in your 401k, but it will be worth literally hundreds of thousands more if you wait until retirement to pull it out. You aren't just paying the penalties now, you're giving up potentially decades of compounding.
You could just buy deep out-of-the-money S&P 500 puts expiring in 1+ year. That way you would be "insured" against the bubble popping.
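A minimal sketch of how that "insurance" behaves, assuming a hypothetical strike and premium (real option pricing, taxes, and roll costs are ignored):

```python
# Toy payoff of index exposure plus a deep OTM protective put (made-up numbers).
def protected_value(index_level, strike=4000.0, premium=50.0):
    put_payoff = max(strike - index_level, 0.0)    # the put only pays below the strike
    return index_level + put_payoff - premium      # one unit of index + one put

for level in (6500, 5000, 4000, 3000, 2000):
    print(level, protected_value(level))   # losses are capped near the strike, minus the premium
```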
The thing is, every dollar you spend on insurance is a dollar (and its interest) you lose. Furthermore, we don't know when it will pop. 1 year? 5 years?
The more reasonable solution is probably to gradually reduce exposure to US markets by selling S&P 500 shares and rotating into Europe and emerging-markets ETFs. No need to cash out the 401k.
If you just look at the past 20 years, the US has had exceptional returns compared to the rest of the world.
The thing is, historically, PE ratios as high as what we're seeing in the US do not go along with correspondingly high short-term returns. Expected future returns decrease in a pretty linear fashion as PE ratios go up.
Why 20 years? Knowing, post hoc, that the US outperformed other markets over the last 20 years in no way means the next 20 years will be the same.
If you want a different point to backtest from, try Japan in the 80s and early 90s
I'm not an expert, but it looks to me like 80% of my allocation won't be tracking SpaceX, because it's mid cap or small cap etc., and the 20% that's in the Vanguard growth index might? I assume whoever sets the rules for the fund could change them to say companies must be listed for X months, if they want to avoid this, right?
And I can change my allocation.
edit: Actually wait, isn't it only the Nasdaq-100 that's tracking it early, after 15 days of trading rather than 3 months? So I think 0% of my 401k is exposed to buying it quickly after the IPO.
401k rollovers into an IRA aren't that hard these days, and you could always use that IRA for a more customized strategy, specifically direct indexing of a major fund minus the key ticker symbols you don't want exposure to. Of course, that all presumes you won't regret excluding this long term.
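Roughly, "direct indexing minus key tickers" just means holding the constituents yourself, dropping the ones you object to, and re-normalizing the weights. A toy sketch with made-up weights:

```python
# Direct indexing with exclusions: drop unwanted tickers, re-normalize the rest.
index_weights = {"AAPL": 0.07, "MSFT": 0.065, "SPACEX": 0.02, "TSLA": 0.018, "REST": 0.827}
excluded = {"SPACEX", "TSLA"}

kept = {t: w for t, w in index_weights.items() if t not in excluded}
total = sum(kept.values())
target_weights = {t: w / total for t, w in kept.items()}   # weights sum to 1 again

print(target_weights)
```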
The question is, is everyone integrating a special SpaceX correction into their algorithmic trading? Because if a dip in the index due to SpaceX causes old algorithms to think it's a more structural issue (well, more than it is) and sell on that indicator, will that cause a cascade?
If your retirement fund is an IRA, you can invest it in any stock you want. For a 401k you probably have some fund options that are not exposed to the S&P 500, like emerging markets or fixed income.
Maybe this already exists, but it would be great if one of the major index ETFs omitted all the firms with problematic board governance like there is at Tesla and SpaceX.
The S&P 500 had a rule from 2017 to 2023 that prevented companies with dual share classes (the sort that allow founders to maintain control, like what GOOG and META did) from ever being in the index if they went public after the rule was instituted. To be clear, META and GOOG were both in the index; the rule was to prevent new companies from coming along and doing the same. (I think it was related to SNAP going public?)
They removed it largely because investors wanted the returns: the tech companies with such dual classes (1) were doing really well, and the S&P ended up caving on the rule.
1: Perennial hot button around here Palantir did this in a more extreme fashion than most. The three founders' F class shares will always hold 49.9999% of the votes, and the early investors' B class shares have 10 votes each, compared to the publicly traded A class shares' 1 vote.
Friendly reminder that SpaceX is going straight into the index; Elon agitated for it. Everybody in America's 401k is serving as a bailout fund for X, and now Cursor, and whatever other trash he hoovers up.
> Nasdaq was the first to consider a rule change that would grant mega IPOs like SpaceX early admission to its flagship Nasdaq-100 index. The exchange and index provider began a consultation period in February to assess the viability of and industry response to a proposed “fast entry” rule. The change was approved on March 30 and will be effective on May 1.
Bayesian inference is, to be overly simple, a way to write probabilistic if-statements and fit them from data. The "if" statement in this case is "if the bug is there...", and of course in actual software that if-statement is often probabilistic in nature. This thing does a git bisect on a flaky bug, with Bayesian inference handling the flakiness to get you a decent estimate of where the bug is in the git history. It seems to be usable, or at least as usable as a Show HN thingy is expected to be.
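If it helps, here's a toy version of the idea (not the linked project's actual code): keep a posterior over which commit introduced the bug, and update it after each test run, treating failures as only probabilistic evidence.

```python
# Toy Bayesian bisect: commits are ordered 0..n-1, and "the bug was introduced
# at commit k" means every commit >= k contains it. Test results are flaky.
def update(posterior, tested_idx, failed, p_repro=0.6, p_false_fail=0.02):
    new = []
    for k, prior in enumerate(posterior):
        bug_present = k <= tested_idx                 # bug introduced at or before the tested commit
        p_fail = p_repro if bug_present else p_false_fail
        likelihood = p_fail if failed else (1.0 - p_fail)
        new.append(prior * likelihood)
    total = sum(new)
    return [p / total for p in new]

n_commits = 10
posterior = [1.0 / n_commits] * n_commits                   # uniform prior over commits
posterior = update(posterior, tested_idx=5, failed=True)    # flaky failure at commit 5
posterior = update(posterior, tested_idx=2, failed=False)   # clean run at commit 2
print(max(range(n_commits), key=lambda k: posterior[k]))    # current best guess
```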
Connectionist models have lots of theory by theoreticians explicitly pissed off about Chomsky's assertion that there is an inbuilt ability for language. Jay McClelland's office had a little corkboard thingy with Chomsky mockery on the side, for example. Putting forth even the implicature that the present direct descendants are intellectual descendants of Chomsky is like saying Protestants are intellectual descendants of Pope Leo X.
Perhaps a failure of communication -- I was indeed attempting to say that Chomsky was wrong and his ideas were interesting, but more or less a dead end.
Apologies if I'm stepping on a joke, but just in case: Nativism is about cognitive capacities, not sensorimotor ones. All apes could easily communicate just as well as Helen Keller, yet none of them have ever asked a question, much less written a book!
Give language models 500 million years and let's revisit this. One of the reasons it's harder for robots to reach parity than for higher intelligence is that evolution has been cooking sensorimotor skills for a very long time.
Well, that anecdote is referencing the Scruffies v. Neats war[1], within which the nativism debate was merely a somewhat-archaic undercurrent.
IMHO, a lot of the more specifically anti-nativist sentiments of today are based in linguistics itself rather than philosophy, CS, or CogSci, where again it is part of a broader (and much dumber) debate: whether linguistics is the empirical study of languages or the theoretical study of language itself. People get really nasty when they're told that they work in an offshoot field for some reason, which is why I blame them for the ever-too-common misunderstandings of Chomsky -- the most common being "Universal Grammar has been disproven because babies don't speak English in the womb".
If Chomsky weren't so obviously right, this would be a worrying development! Luckily I expect it to be little more than a footnote in history, so it's merely infuriating rather than depressing.
Man, I've seen the SQL Metabase emits, and it's not great. Like, doing a massive join across 10 tables and selecting all the columns from all the tables, only to return the average of one column from one table.
> The researchers identified that Rubocop, one of our tools, was running outside our secure sandbox environment — a configuration that deviated from our standard security protocols.
This is still ultra-LLM-speak (and no, not just because of the em-dash).
A few years ago such phrases would have been candidates for a game of bullshit bingo, now all the BS has been ingested by LLMs and is being regurgitated upon us in purified form...
Absolutely. In my experience every AI startup is full of AI maximalists. They use AI for everything they can - in part because they believe in the hype, in part to keep up to date with model capabilities. They would absolutely go so far as to write such an important piece of text using an LLM.
I wonder how many of these intern-type tasks LLMs have taken away. The tasks I did as a newbie might have seemed not so relevant to the main responsibilities, but they helped me build institutional knowledge, get a feel for "how things work", and learn whom to approach, and how, to make progress. Now the intern will probably do it using LLMs instead of talking to other people. Maybe the results will be better, but that interaction is gone.
I think there is an infinite capacity for LLMs to be both beneficial and harmful. I look back at learning and think, man, how amazing would it have been to have had a personalized tutor helping guide me and teach me the concepts I was having trouble with in school. I think about when I was learning to program and didn't have the words to describe the question I was trying to ask, and felt stupid, or like an inconvenience, when trying to ask more experienced devs.
Then on the flip side, I’m not just worried about an intern using an LLM. I’m worried about the unmonitored LLM performing intern, junior, and ops tasks, and then companies simply using “an LLM did it” as a scapegoat for their extreme cost cutting.
They first disabled Rubocop to prevent further exploitation, then rotated keys. If they had waited to deploy the fix, that would have meant letting the compromised keys remain valid for 9 more hours. According to their response, all other tools were already sandboxed.
However, their response doesn't remediate putting secrets into environment variables in the first place - that is apparently acceptable to them, and it raises a red flag for me.
Yeah, I thought the same. They were really unlucky, the only analyzer that let you include and run code was the one outside of the sandbox. What were the chances?
> putting secrets into environment variables in the first place - that is apparently acceptable to them and sets off a red flag for me
Isn't that standard? The other options I've seen are .env files (amazing dev experience but not as secure), and AWS Secrets Manager or similar competitors like Infisical. Even with the latter, you need keys to authenticate with the secrets manager, and I believe it's recommended to store those as env vars.
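For what it's worth, the usual pattern with a secrets manager is that only the bootstrap credentials (env vars or, better, an IAM role) are ambient, and everything sensitive is fetched at runtime. A sketch with boto3; the secret name is made up:

```python
# Fetch a secret at runtime instead of baking it into the environment.
import boto3

client = boto3.client("secretsmanager")  # bootstrap creds come from env vars or an IAM role
resp = client.get_secret_value(SecretId="my-app/github-private-key")  # hypothetical secret name
private_key = resp["SecretString"]
```

With an instance or task IAM role you avoid even the bootstrap keys, which is the main argument for it over plain env vars.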
They weren't published together. They managed to get the researchers to add CodeRabbit's talking points in after the fact, check out the blue text on the right hand side.
If I were a CodeRabbit customer, I'd still be pretty concerned after reading that.
How can CodeRabbit be certain that the GitHub App key was not exfiltrated and used to sign malicious tokens for customer repos (or even used for that in situ)? I'm not sure if GitHub supports restricting the source IPs of API requests, but if it does, it'd be a trivial mitigation - and one that is absent from the blog post.
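For context on why exfiltration of that key would be so serious: a GitHub App authenticates by signing a short-lived JWT with the private key and exchanging it for installation tokens, so anyone holding the key can mint tokens for any installation. A rough sketch (using PyJWT; the app and installation IDs are placeholders):

```python
# Whoever holds the GitHub App private key can do this for any installation.
import time
import jwt        # PyJWT
import requests

APP_ID = "123456"                                   # placeholder app ID
private_key = open("github-app-private-key.pem").read()

payload = {"iat": int(time.time()) - 60, "exp": int(time.time()) + 600, "iss": APP_ID}
app_jwt = jwt.encode(payload, private_key, algorithm="RS256")

# Exchange the app JWT for a repo-scoped installation token.
resp = requests.post(
    "https://api.github.com/app/installations/INSTALLATION_ID/access_tokens",
    headers={"Authorization": f"Bearer {app_jwt}", "Accept": "application/vnd.github+json"},
)
installation_token = resp.json().get("token")
```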
The claim that "no malicious activity occurred" implies that they audited the activities of every repo that used Rubocop (or any other potential unsandboxed tool) from the point that support was added for it until the point that the vulnerability was fixed. That's a big claim.
And why only publish this now, when the Kudelski article makes it to the top of HN, over six months after it was disclosed to them?
This is the third or fourth time you’ve spammed this exact comment in response to people’s perfectly legitimate questions. What is this clown-show bullshit?