Hacker Newsnew | past | comments | ask | show | jobs | submit | lupire's commentslogin

Fall guys.

Highest Profile Individuals Convicted Kareem Serageldin (Credit Suisse): Widely recognized as the only high-level Wall Street executive to serve prison time directly related to the GFC.


No that's still about a decade out of date. TARP jailed IIRC 30 bank CEOs, it's just the cases took until 2017 or so and the meme had already implanted itself in people's brains. DoJ got so tired of people saying this that they put a database of all their convictions up but unfortunately it got DOGEd last year.

Many of the TARP convictions (the ones that involved the SEC) can still be found here, though:

https://www.sec.gov/enforcement-litigation/litigation-releas...


Very cool website. Looking through a few of those examples, holy Jesus there is a lot of fraud out there.

Fun read.


I'm sad DOGE killed the TARP litigation database because there was some wild stuff on it


What about the fraud that led up to the GFC -- pre-TARP? I think that's what people meme about.


The TARP investigations jailed people for that; that was it's main purpose. Taking the funding window required an audit, and people either lied on the audit (and got busted for that) or admitted to illegal lending or valuation on it (and got busted for that)


Yahoo doesn't have journalism. It's a syndication portal. https://www.benzinga.com/markets/tech/26/04/51828848/ubers-a...


Syndicated from https://www.benzinga.com/markets/tech/26/04/51828848/ubers-a...

It's a poorly written junk article upvoted based on Uber/Anthropomorphic sentiment. I recommend flagging it.


In 2026? Absolutely.


My AI assistant is configured to monitor the face-side camera feed and switch windows when someone standing enters the frame.


Syntax is a stupid thing to test in trade school. That's compiler stuff.


You're telling me the path to an A is to hook up the Teacher AI to a Student AI.


I think you're missing the point because PP addressed that concern.


Yes, and as you'd expect, this is how LLMs work today, in general, for control codes. But different elems use different control codes for different purposes, such as separating system prompt from user prompt.

But even if you tag inputs however your this is good, you can't force an LLM to it treat input type A as input type B, all you can do is try to weight against it! LLMs have no rules, only weights. Pre and post filters cam try to help, but they can't directly control the LLM text generation, they can only analyze and most inputs/output using their own heuristics.


How are you defining "banner blindness"?

The foundation of LLMs is Attention.


"Banner blindness [...] describes people’s tendency to ignore page elements that they perceive (correctly or incorrectly) to be ads." https://www.nngroup.com/articles/banner-blindness-old-and-ne...

So people can focus their attention to parts of content, specifically parts they find irrelevant or adversarial (like ads). LLMs on the other hand pay attention to everything or if they focus on something, it is hard to steer them away from irrelevant or adversarial parts.


Banner blindness is a phenomenon where humans build resistance to previously-effective ad formats, making them much less effective than they previously used to be.

You can find a "hook" to effectively manipulate people with advertising, but that hook gets less and less effective as it is exploited. LLMs don't have this property, except across training generations.


Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: