There are likely many, many definitions of what "maximizing learning" means, and those differences drive different approaches to education. The debate is not over "whether education should seek to maximize students’ learning."
That could be taken as a fairly naive statement -- there are countless successful examples of non test-driven education out there.
When was this? "Startup" suggests recent, but aloha shirts have been the most common outfit downtown every day of the week (except for maybe lawyers in court) since the nineties at least.
Or are you saying that you were super casual M-Th and put on a collared shirt on Friday?
> At the same time, according to the Puget Sound Business Journal, the Seattle metro area spends more than $1 billion fighting homelessness every year. That’s nearly $100,000 for every homeless man, woman, and child in King County, yet the crisis seems only to have deepened, with more addiction, more crime, and more tent encampments in residential neighborhoods. By any measure, the city’s efforts are not working.
I'm familiar with stats like this for other cities, but in my experience, it's common for those statistics to be misinterpreted.
Often these funds go towards housing people so they are no longer homeless, so you might have X people who are currently homeless, but Y people relying on that $1B to remain housed.
Because of that, it's inaccurate to divide $1B by X and then claim that's how much is going towards individuals who are still homeless anyway; it's more accurate to divide that figure by X+Y.
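To make the distinction concrete, here's a sketch with made-up numbers (the counts for X and Y below are purely illustrative, not actual Seattle figures; only the $1B comes from the quoted article):

```python
# Illustrative per-capita math; headcounts are hypothetical, not real Seattle data.
total_spend = 1_000_000_000        # annual spending figure cited in the article
currently_homeless = 10_000        # X: people still on the streets (hypothetical)
housed_by_programs = 30_000        # Y: people kept housed by that spending (hypothetical)

# Naive version: divide total spending by only those still homeless.
naive_per_capita = total_spend / currently_homeless

# More accurate version: divide by everyone the spending actually supports.
accurate_per_capita = total_spend / (currently_homeless + housed_by_programs)

print(f"naive: ${naive_per_capita:,.0f} per person")       # naive: $100,000 per person
print(f"accurate: ${accurate_per_capita:,.0f} per person")  # accurate: $25,000 per person
```

With these hypothetical numbers, the naive division overstates per-person spending on the still-homeless by a factor of four.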
This stat isn't even being misinterpreted, it's being lied about. The $1 billion is an imaginary number that includes theoretical lost tourism revenue, capital costs for buildings used by welfare services counted as if they were annual spending, etc.
What that obscenely slanted first link doesn't mention is that each year around 10k homeless people in Seattle get off the streets.
It isn't as if that money is being flushed down a hole; results are being achieved on an individual basis. But for reasons that are nationwide in scope, the overall problem (the number of homeless people on the streets of Seattle) isn't getting any better.
National problems require national solutions, but half the Senate is perfectly happy to offload the cost of social ills onto coastal metros.
The chronic homeless cases (the ones that can’t recover quickly) eat up around 80% of that money, so the figure still isn’t that far off. The low-hanging-fruit cases just need housing, and aren’t going to destroy it or use it to make meth; they don’t require anywhere near $100k/year in services.
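Following that reasoning with hypothetical numbers (the ~80% share is from the comment above; the chronic-case headcount is made up for illustration):

```python
# Hypothetical split of spending toward chronic cases.
total_spend = 1_000_000_000   # annual figure cited earlier in the thread
chronic_share = 0.80          # ~80% of spending, per the comment
chronic_count = 8_000         # hypothetical number of chronic cases

# If most of the money concentrates on the hardest cases,
# per-person spending on them stays near the headline figure.
per_chronic = total_spend * chronic_share / chronic_count
print(f"${per_chronic:,.0f} per chronic case per year")  # $100,000 per chronic case per year
```

So even after correcting the naive division, the per-person cost for the chronic subset can land back near $100k, which is the commenter's point.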
>> Seattle metro area spends more than $1 billion fighting homelessness every year.
Think about how monumentally worse the problem might be if they didn't spend that money. There are very likely tens of thousands more people who would be homeless if not for government support/programs. Just imagine how many people would be on the streets in a week if EI and disability income streams were cut.
No, only the ones that are acting nuts on the streets: the ones you are more likely to notice, and the ones that aren’t helped without a lot of resources.
How inefficient is this system, actually? I doubt one homeless man in Seattle can cost twice as much as I earn per year. What are you spending all these dollars on? A free visit to a prostitute every week? Oh, I forgot, that's illegal in the US...
They come from Christopher Rufo, a terrible person who is funded by creationist right wingers and these days is spending his time working with DeSantis to whip up fear of Critical Race Theory in schools.
Give ten kids from the same class a self study online course and give another ten a private tutor, and they won't see the same score distribution. Where's the signal there?
SAT results don't have a line item that notes the amount of wealth or privilege that went into preparing.
> Immigrant families eligible for reduced price lunch are able to scrounge up the money for these tests.
Some families can't. Other families aren't aware, or aren't interested. But we judge the kids in the family for that.
That said... I don't know how the _new_ system will work at fighting that privilege -- there are still lots of ways for it to disguise itself. But we have to at least acknowledge the issues with the SAT.
But, to me at least, this goes beyond privilege. This is about diversity of skills and diversity of learner profiles and moving away from linear quantification of potential.
The effects of studying "for the test," as you put it, have been measured; improved test-taking skills tend to be worth ~30 points, which is not that significant. This matches my anecdotal experience and that of people I know who run SAT prep courses.
It's far more effective to actually teach students the material, either by teaching them new concepts or by firming up their understanding of ones they've already been exposed to. Particularly in math, many high school students have shaky understandings of fractions or algebra. Firming up these foundations can often lead to a >100-point increase (given sufficient lead time). Those foundations are something the test is actually looking for, since numeracy and strong algebra skills are a strong predictor of success in Calculus.
It's true that tutoring grants unfair advantages, but this is going to be true in any system that uses skills as part of its selection criteria.
> The effects of studying "for the test," as you put it, have been measured; improved test-taking skills tend to be worth ~30 points, which is not that significant. This matches my anecdotal experience and that of people I know who run SAT prep courses.
I see this often but I suspect that it is lumping "Took a prep class for 1 hour on a Saturday" and "Spent 6 hours a week for 52 weeks with a tutor" in the same category.
Any tutor who only gets a 30 point increase won't be seeing much business among the folks I know.
However, I do agree with you that firming up skills is a remarkably quick way to get a significant boost. Being able to add 2 + 2 and come up with 4, repeatedly and accurately, is often a big deal on these tests, even with a calculator.
I'm familiar with that 30 point differential because it shows up in research.
You're right that 30 points isn't that much if you're thinking about the whole distribution, but I guarantee it can be significant around the selection threshold. That threshold might be implicit or explicit, but it's there, and if it's enough to nudge applicants past it, it's significant.
Sure, but having read a large chunk of educational literature I'm not aware of any alternatives with fewer distortions from parental aid. Grades correlate more highly with a good home life than test scores do, for example.
As long as we have "prestige" universities there's going to be some form of skills testing, and no one has ever designed an un-gameable test that can be administered nationally. The question we have to ask ourselves then is how we can reduce game-ability and I doubt we can make improvements that are more than incremental.
The JEE solved this problem by changing the format of the test each year. It's not disclosed before the exam, so it is really hard to form a meaningful strategy that consistently helps you.
> That said... I don't know how the _new_ system will work at fighting that privilege -- there are still lots of ways for it to disguise itself. But we have to at least acknowledge the issues with the SAT.
I'm in favor of using tests like the SAT as cheaper diagnostic tests, to help with student placements and accommodations, not for admissions. It's too bad this is being lost with the removal of the testing requirements, but I guess it doesn't matter much, as the tests were never used this way in the first place, despite providing this information. https://cepa.stanford.edu/sites/default/files/ACT%20Paper%20... (note that I don't agree with the conclusions of this paper, merely the identified diagnostic criteria)
> This is about diversity of skills and diversity of learner profiles
I might believe that if I didn't believe that the diversity would mostly be token, with the majority of students in selective schools fitting a handful of templates.
I built one of those Wordle clones that made it into the app store. Built it mostly to see how long it would take me to do it with flutter (it didn't take long). The app doesn't collect any data or connect to the interwebs.
Not really sure how I made it through the review process, but once it got through I was subsequently blocked from updating it, with the same policy cited as the OP.
So now it's up there with a double letter bug, the stock flutter icon, and 80k downloads :)
Side note: I did get a takedown notice, which I was expecting, but it wasn't from the NYT. It was from a company purporting to hold trademarks on Lingo covering both look and feel and game mechanics. I tried to contact them, but nobody has responded, and it actually looks like the trademark was abandoned. Plus, you can't trademark a game mechanic, afaik.
The comments on these articles seem to generally come out in favor of standardized testing (specifically the SAT). I find that interesting because college admissions seemingly generalizes to candidate evaluation. And if you were to suggest a standard leetcode test for software engineering job candidates? Man, this place would go up in flames.
A single leetcode exam with a large enough cohort, repeated say twice a year, graded on a curve, and administered in a reasonably secure and controlled testing center would seem a pretty reasonable alternative. Do it once, get a high enough grade, and then use it for the rest of your career.
It starts to sound pretty enticing: essentially a certification that, at least one time, you performed at a certain level. Of course, verification is a bit more complicated, but not impossible anymore.
I’m not for or against these leetcode style tests, but I think the difference is that they see a correlation between admission test score and how well they’d actually do on the course. Can the companies using “leetcode” tests prove the same?
A common thing I see missed in arguments against things like leetcode is that good hiring pipelines would not look at just leetcode scores. You want leetcode + good projects + system design. Leetcode is excellent for testing data structures and algorithms, reasoning about code, edge cases, etc. These are absolutely necessary but not sufficient to build good software.
My comment was to encourage commenting when there's a meaningful thing to say, regardless of whether it seems like a bikeshedding pattern.
These extra comments are noise, and I don't even want to write this, but note that we should avoid comments that don't have a positive purpose. Hopefully this one will lead to fewer of these.