The other week my wife and I were disagreeing over whether a house was green or blue. I was shocked when every passerby we asked agreed with her that it was green. I was absolutely 100% sure it was blue. Turns out according to this site, my boundary is greener than 95% of the population! Funny to see this proved out here!
Speaking of, I'd be curious about a similar experiment but one that compares how grotesque, for lack of a better word, certain words sound. The word bleen makes me uncomfortable, I think because my brain automatically goes to spleen; grue isn't my favorite either but I prefer it to bleen.
I'm curious how universal that is though. Do others have similarly aligned preferences for one word over the other, or are our feelings about them more evenly spread?
I'm not a native speaker, and my brain auto-corrected bleen to green. It doesn't make me uncomfortable, but I'd prefer grue because my brain immediately understands we're talking about the umbrella term. If grue is said out of context, I imagine Gru from Despicable Me; when written, I imagine gruel. But again, because I'm not a native speaker, instead of yucky food I think of that episode of Masha and the Bear where they end up with a houseful of porridge.
In the sitcom Mad About You there is an episode where Jamie tells Paul to put on a tie. Specifies the "navy blue one". "I don't own a navy tie." "Yes you do, it's the one that you think is dark green."
My wife and I go round and round about what is and isn't blue and/or green.
That’s amusing because I am the converse: my boundary is bluer than 98% of the population. To a first approximation, blue is a very specific thing and all the other colors appear strongly non-blue to me. I do wonder where this preference came from, but it explains all the puzzling interactions between my wife and me over the years.
That's fascinating. I think this has to be biological. When I call something blue I don't think it has to do with just what I've learned but also that the color feels more like deep blue than it does like deep green.
...I get different numbers depending on which eye I use, but both are fairly center. I didn't expect blue-green to be affected though! My left eye can't see certain shades of red as well as my right eye. Bright sunlight makes it more noticeable, but my own skin looks weirdly (sickly) yellowish with one eye and normal with the other.
Whenever it's come up at home, my spouse simply insists "I don't need to know the difference between aqua, turquoise, and seafoam. They're all blue." At this point I just nod and agree, it's not worth the fight anymore. ;)
...I never found another person with the same experience. Here we are. For me though, it's not that sunlight makes it more noticeable, it's that I will see the same shades until I've had too much sunlight—eventually my left eye gets tired, I guess, and sees a lot less red than my right eye. After sleeping it resets and I see the same shade in both eyes.
Maybe I should talk to a researcher about this.
I realized at a young age that one of my eyes receives a more blue-shifted image and the other's image is more red. It's difficult to tell by rapidly opening/closing one eye at a time, but by using my fist positioned with my thumb resting on my brow between my eyes, then rolling it left and right quickly to cover up one eye and focusing on what I'm looking at, it's a stark difference. I do it every so often to see if it's changed with age. I especially enjoy looking at the sky or white sheets of paper.
Blue his house
With a blue little window
And a blue Corvette
And everything is blue for him
And himself and everybody around
'Cause he ain't got nobody to listen (to listen)
My boundary is also greener than 95% of the population. I think it's because I mentally separate cyan from green and blue, but still see cyan as a shade of blue. If you asked me what color it was without forcing green or blue, I'd have answered cyan on most of them.
I have this with a coat, but it's blue vs. gray. It would be interesting to generalize this tool not just to other colours, but to other colour properties like saturation, not just hue.
I had the same discussion with my wife about the color of a river in Albania. The test says my boundary is bluer than 85% of the population - sounds about right!
Here's something I don't understand about the argument that all conscious experience is some kind of illusion.
An illusion necessarily implies an entity experiencing the illusion. So what's that entity? It can't be illusions all the way down.
I think some people would rather claim that they themselves don't exist rather than admit that there might be something spiritual or unexplainable about existence. And I find that very strange.
The brain is like a sail that's catching the wind of the soul. The wind pushes and shapes the sail, and the sail limits the shape of the wind inside it. If the sail is in bad condition, it changes how the wind catches it, or prevents it from catching altogether.
So the brain is animated by the soul, and also limits and shapes its experience. When we're affected by anesthetic, or we're badly injured, or have a stroke, our conscious experience is impacted, while we're here on this plane. Eventually we leave this form and experience reality more truly. This could be one reason why NDEs happen - the brain is so badly damaged that it fails to even contain the soul and we approach a more death-like state.
It is a nice thought, but it still has a prerequisite that there are "souls" floating around that get ensnared by human bodies. If that is true, then life is a kind of prison we endure before being liberated by death.
Even if we assume those giant leaps of faith are true, it still means "I" go away when I die. My soul will only remember "me" as a brief torture where I was forced into a human shell and had to endure being "me" before being released once more to be a higher being. All my struggles, battles, self-improvement, etc. will be meaningless and my kids will be a cruel trick I played that imprisoned other souls.
I won't deny it takes giant leaps of faith. I think the questions are inherently unanswerable, so any theory that's not in conflict with things we can know scientifically, and which doesn't lead to harming others, is as good as it can get. We can't really do better than thoughtful faith in this domain (and indeed I would (controversially) argue that the notion of inevitable scientific progress in this area is also a sort of faith, since there hasn't been any progress on the "hard problem" of consciousness).
To your points, I would say that
> life is a kind of prison we endure before being liberated by death
Not exactly. More like a journey that we go through for mysterious reasons. Maybe so our soul can grow by learning lessons and facing challenges that can only exist when the stakes are real.
> "I" go away when I die
Not necessarily. Certain aspects of you, which are more contingent on brain traits, e.g. intelligence, some temperament. But the deepest self wouldn't disappear.
> All my struggles, battles, self-improvement, etc. will be meaningless
I see what you mean, but I wouldn't call it meaningless just because it wasn't "completely real." Also, I do believe that lessons learned here are learned for good. The soul is here to grow.
> my kids will be a cruel trick I played that imprisoned other souls.
I don't really see how you got here. In this theory, your kids are also here for their own purpose. Relationships and love are still meaningful and real. Life is cruel and involves a great deal of suffering, but that doesn't mean that existence is inherently bad.
Lots to say there. The last few centuries have shown that many things which previously seemed inexplicable have been convincingly explained without resort to the supernatural. So a material basis of conscious experience seems a good bet.
Related, and hinted at by my original comment: the brain is capable of generating truly profound experiences. There is a tendency to ascribe them to something 'beyond ourselves' but again, advances in medicine and neuroscience have shown that these are explicable, subject to manipulation by chemical and electrical signals, which again suggests a material basis for conscious experience.
It's true that many things have yielded to science. And yet, what we discuss (the "hard problem of consciousness") hasn't. In my opinion, the burden is on you to prove that progress in other questions implies inevitable progress on an unrelated question that hasn't budged at all.
I said this in my other comment but, when you say the brain generates truly profound experiences, you beg the question (in the philosophical sense of the phrase). It's all in the word "experience." For in order for an experience to happen, some entity has to be experiencing. For there to be an illusion, there has to be an entity being deceived. And then how do you explain that entity? It can't be illusory experiences all the way down.
Any honest person has to see the connection between experience and the material brain. But I don't think it's honest to say it's obvious that experience is entirely material. The connection is deeply mysterious and may never be understood. I personally would rather accept that than claim that I don't really exist just so that everything can be explained.
The evidence is abundant and continues to chip away at the "hard problem". For example, we can through anesthesia turn on and off conscious experience. Through various drugs we can manipulate the character of conscious awareness, inducing ecstasy, visions, abiding serenity, terror, pain, grief... all states that were previously described as ineffable.
To say we haven't made progress on understanding consciousness is to move the goalposts; we continue narrowing the 'hard problem' and eventually it seems like there will be nothing left other than a misunderstanding, something like the resolution of Zeno's paradox.
I don't mean to be insulting but, you don't seem to understand what the hard problem is. It is not "is the brain intimately linked with conscious experience?" I would agree we've made progress on that question. It is the harder question of "why is there conscious experience at all? Why does it feel the way it does?" I would argue no progress has been made on this whatsoever, and possibly can't be done.
You can try to claim that this question is meaningless, but that doesn't seem principled to me, not to mention that it completely ignores the fact that gestures broadly all this is happening.
In light of the fact that the entire universe is perceptible only through conscious awareness, the 'hard' question is equivalent to the question "why is there anything instead of nothing?" When asked this way, it's clearly not answerable. Everything short of that seems to have a material answer.
Edit: happy to chat more about this, as it's deeply interesting to me and I do want to understand your perspective. It may need a longer form than this thread allows. I've added a link to get in contact with me on my about page.
I'd be happy to talk more as I am passionate about this. I think the idea that there is no soul is actually extremely dehumanizing, and involves someone essentially saying "I don't really exist" (even if they redefine "I exist" to mean something more Materialist, it is, in my view, still saying that). I'll ping you on bluesky.
This is cool but I'm confused - it says it generates the bones at "build time" not at "runtime." But you're calling this headless bone generator while your app is running, right? That sounds like runtime.
Insider trading is prevalent on both sides. But the brazen daily market manipulation done by this administration is different. If you don't see that, you are willingly blind.
It’s not different because of who’s doing it but due to scale: creating news which steers markets is worse than trading based on insider information, and there’s a scale question as well (billions versus millions). I want them both prosecuted but in terms of priorities I’d favor the police going after armed robbers over porch thieves.
I think this sort of thinking is why we have people who continue to vote against their own self interests. The fact is we really don't have enough information to make meaningful conclusions about which party is causing more harm with insider trading. It would take a team of accountants and lawyers years to confidently measure this. A random individual isn't going to assess this well.
But because people confidently draw often-incorrect or baseless conclusions based on vibes and what largely Democrat-controlled corporate news media tells them, they fall into an us-versus-them mentality along party lines instead of better understanding that both sides are screwing us over tremendously, rather than just accepting a perceived lesser evil.
I am not a fan of the democrats - I think they are corrupt and have in many cases given up on being democratic, totally beholden to their donors.
That said, Trump is unbelievably obviously corrupt and doing immense, immediate, and obvious damage to the country in pursuit of personal enrichment. If you don't see this, you are willingly blind.
> what largely democrat controlled corporate news media tells them
If you believe this, ask why, and which side benefits from you being so misinformed. There’s a reason why the right wing spends billions of dollars, and encouraging people to blame “both sides” is a key part of it.
Yes of course we are all brainwashed by the famously democrat controlled media at _the Wall Street Journal_ which is owned by prominent leftist Rupert Murdoch. bro you are being robbed in broad daylight.
> unless you believe in magic, it's only a matter of time until we reach the point at which machine intelligence is indistinguishable from human intelligence
I find this flippancy about the greatest mystery in the universe extremely arrogant and incurious and wish it wouldn't be so prevalent.
In theory a computer should be able to model any physical process so I do agree it's only a matter of time. That said I don't think I will be alive to see it honestly.
The current tech won't get us there just like the steam engine or the internal combustion engine didn't get us to the Moon. And getting to AGI is probably more like getting to Mars.
I personally think we have some tremendous energy problems (and the negative externalities derived from “solutions”) to contend with before we even sniff AGI.
Considering we can only approximate irrational numbers, I’m not sure that’s a given. Maybe we’ll have a breakthrough with some type of analog computing, but we could also just hit physical limits on energy or precision.
Yeah, the Church-Turing thesis suggests that a computer can compute any computable function, or the universality of a computable substrate. Maybe there's a confusion that computational universality implies everything-universality?
> > In theory a computer should be able to model any physical process
> Wait, which theory is that?
The Church-Turing-Deutsch Principle. (Which isn’t a theory in the empirical sense, but somewhat more speculative.)
> Or are we suggesting that you can build a computer out of whatever physical process you want to model?
Well, you obviously can do that. Whether that computer is Turing equivalent, more limited, or potentially a hypercomputer is...well, Church-Turing-Deutsch says the last is always false, but good luck proving it.
Hans Moravec introduced the idea of the "landscape of human competence" , a topology representing the peaks and valleys of human capabilities. Art, writing, coding, game playing. Elevation corresponds to cognitive difficulty, and the landscape maps to everything humans are capable of doing. AI is represented as the rising waterline - when Moravec created the idea, AI was more or less constrained to a few scattered lakes, with humans clearly demonstrating superiority nearly everywhere. After transformers, the waterline began to rise, and today we no longer have a vast contiguous majority, but are left with a scattered handful of islands, and the waterline continues to rise.
It's not arrogant or incurious to acknowledge the flood, but it might be to deny that the flood is happening.
If you think there are fundamental human qualities or capabilities that AI can't ever have, you might put in the work to articulate that, instead of projecting negativity onto people who have watched the vast majority of the human competencies landscape get completely submerged over the last 10 years. The islands we have remaining don't really suggest any unifying principle underlying things that AI is still bad at, but instead they highlight the lack of technical capabilities and various engineering tracks to solve for. Many of the problems are solved in principle, but are economically infeasible; for all intents and purposes, you might consider those islands completely submerged as well.
I think you would need to work very hard to prove that the topology you are describing is well-formed enough for this analogy to make sense. For one: "cognitive difficulty" is not really a crisply defined quantity such that expressing it as a function of some input vector makes obvious sense (to me anyways). What's the cognitive difficulty of deciding what to have for dinner? What's the cognitive difficulty of making my 5 year plan? What's the cognitive difficulty of imagining a nice gift to get my wife for her birthday? There are so many things humans do which are heavily 'contingent' (in the sense of having sensitivity to the local culture, history, personal experience, etc) that the idea of being able to assign everything a single, decidable scalar to represent 'difficulty' seems like an extremely tall order to me. And that's setting aside whether the ambient vector space of 'human capabilities' is even really a sensible construct (a proposition that I also doubt quite heavily).
All this to say that describing what's happening as a 'rising tide' seems misleading to me. Techno-sociological development is super messy already, let's not make it more complex by pinning ourselves to inaccurate and potentially misleading analogies. The introduction of the car did not 'push humans higher onto a set of capability peaks', it implied a total reorganization of behavior and technologies (highways, commuting, and suburban sprawl); using the terms of your analogy humans built new landmasses on top of the water.
1. Implying that there are only "a few islands left" shows a strong bias towards assuming that only the things humans do in the digital realm are relevant, when in fact the vast majority of things humans do are not in the digital sphere at all.
2. It's pretty clear that when most people say machine intelligence is close, right now, they are alluding to LLM- or deep-learning-based approaches. I don't think you should assume they mean machines will catch up in 100 years. They seem to imply it will be by 2030 or something.
To address both points - there appear to be no individual, well-defined tasks that humans can do that you cannot train a machine to do. Some tasks are inefficient, some uneconomical, and others impractical, but there appear to be no tasks that in principle machines cannot do. What is missing is broad generalization, human-equivalent time horizons, continuous learning, and embodiment.
Robotics has passed the point of superhuman performance for any given task. Software has passed the point of superhuman performance for any given task.
Regardless of the particular technique or embodiment, the constraints aren't "is it possible in principle" but "is it too expensive" and "is this allowed by the pertinent principles and regulations and laws"
We don't have AGI that learns and adapts in real time like humans. We do have incredibly powerful algorithms that can learn from whatever data we throw at them, but there are many domains where it's impractical, ruinously expensive, illegal, or otherwise not possible to use AI for other good reasons.
The few islands left to humanity are not fundamental barriers. We haven't solved intelligence, or achieved RSI or ASI or AGI yet; those were never the important thresholds.
AI has always been a question about good enough, and it looks like we've gone solidly past the good enough line into "we can probably automate everything" even if we don't solve the big problems over 5 or 10 years or beyond. I think it's very unlikely we don't solve intelligence by 2030, but even if AI stalls out where it's at right now, and all we get is the incremental improvements and engineering optimizations on current SOTA, we have enough to automate anything humans do at levels exceeding human capabilities.
What AGI and ASI do is make humans economically obsolete. Good enough AI means there might be some places where humans are needed for generalization and adaptability until the exhaustive tedious work gets done for a particular application that enables a robot or software system to be competent enough to handle the work.
A hiker on a mountain might as well imagine that at the end of their journey they will step off onto the moon. But it's just a mirage. As us humans have externalized more and more of our understanding of the world into books, movies, websites and the like, our methods of plumbing this treasury for just the needed tidbits have developed as well. But it's still just working off that externalized collective understanding. This includes heuristics for combining different facts to produce new ones, sure, but still dependent on brilliant individuals to raise the "island peaks" which ultimately pulls up the level of the collective intelligence as well.
While a 2 dimensional projection of intelligence may be a satisfying rhetorical device, I think it’s an extremely mathematically naive interpretation.
Not only is intelligence probably most accurately modeled as something extremely high dimensional, it’s probably also extremely nonlinearly traversed by learning methods, both organic and artificial. Not a topology very easily “flooded”.
It wasn't a formal model or a theorem, it was an observation about reality. Humans are indeed gradually being overtaken on almost all fronts by AI. But by all means, if you want to take issue with Moravec's framing of the issue, feel free.
Explaining it as something like "realizable instantiation of physical computation occurring in the universe mapping to an ultra-sparse, discrete point cloud embedded in the Euclidean parameter space of all computable functions" could definitely be more precise, but you're either going to need a topology like a landscape or a bumpy sphere to visualize it, and then you're going to need to spend more time showing the effects of things like scaling laws, available compute, where the known boundaries of human intelligence lie, and so on, and so forth, and by then you've lost everyone, probably even the ML professor.
It's a good enough metaphor that maps to a real thing.
> It's a good enough metaphor that maps to a real thing.
My entire point, which I’m not sure you addressed is that no, it’s not a good metaphor. Water “floods” a 3d topology in a predictable manner with regards to the volume the topology can contain. The entire argument is that progress is observable, predictable, and limitless, and the “islands” are a rhetorical device. My argument was turning the rhetorical device around and pointing out that we know so little about intelligence and AI that describing it in this way is not meaningful beyond sounding intellectual.
Sure, but it's entirely possible this point lies way past the expiry date of the universe itself (if there is such a thing). Plus, I do believe in magic - the magic of Life, the Universe, and Everything. And "42" doesn't dispel it for me.
Yeah, I was thinking this too, but he did say "indistinguishable". I guess if you are an intellectual you can buy into that. Fortunately, consciousness and intelligence are much bigger than we can comprehend as human beings. We want to break everything down into understandable bites, but the truth is we are barely scratching the surface of what the brain does and what constitutes intelligence.
>I find this flippancy about the greatest mystery in the universe extremely arrogant and incurious and wish it wouldn't be so prevalent.
There's absolutely nothing mysterious about human intelligence unless you refuse to give it a clear definition. All the people waffling on about AGI refuse to give a clear, measurable definition of intelligence, because if they defined it exactly then it would be possible to clearly determine whether a given machine does or does not meet those criteria. It's just 21st century woo peddling.
> There's absolutely nothing mysterious about human intelligence unless you refuse to give it a clear definition
This begs the question[0] by assuming that it can be given a clear, measurable definition. A large part of the mystery of consciousness and intelligence is that it's hard to define, measure, or explain; the most characteristic aspects (i.e. those relating to a subjective experience) are, in principle, impossible to measure or verify[1]. To say that it's not mysterious once you give it a clear, measurable definition is basically saying "it's not mysterious once you remove all aspects that make it mysterious."
Something's broken - the most expensive ball, "TaylorMade 2021 TP5x (3+1 Box) 4DZ", is listed as $169.95 for 1 ball. Clicking through, this is actually for 4 dozen balls, making the actual per-ball price $3.54.
Yeah. I'm trying to figure out how to combat these inconsistencies. Right now, I have some manual overrides, but not sure it's sustainable to keep manually overriding inconsistent listings.
Any thoughts? Should I default to what's in the product title instead of the unit count? Not sure the best way to combat this.
Yeah... I'm just now realizing how relying on unit count in the listing is more problematic than I thought. Sellers say their unit count is 1, but the product title says it is 4 dozen. I need to figure out how to fix these inconsistencies.
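One option for "default to the product title" is to parse a count out of the title and prefer it whenever it disagrees with the seller-reported unit count. A minimal sketch of that idea is below; the function names (`count_from_title`, `per_ball_price`) and the regex patterns are hypothetical illustrations, not a complete parser for real marketplace titles.

```python
import re
from typing import Optional

def count_from_title(title: str) -> Optional[int]:
    """Try to recover a ball count from a listing title (best effort)."""
    t = title.lower()
    # e.g. "(3+1 Box)": promo packs; assumes a dozen balls per box
    m = re.search(r"\((\d+)\s*\+\s*(\d+)\s*box\)", t)
    if m:
        return (int(m.group(1)) + int(m.group(2))) * 12
    # e.g. "4 dozen", "2 doz", "4dz"
    m = re.search(r"(\d+)\s*(dozen|doz|dz)\b", t)
    if m:
        return int(m.group(1)) * 12
    # e.g. "24 balls", "12 pack", "48 ct"
    m = re.search(r"(\d+)\s*(balls?|pack|count|ct)\b", t)
    if m:
        return int(m.group(1))
    return None

def per_ball_price(price: float, listed_units: int, title: str) -> float:
    """Use the title-derived count when it conflicts with the listed one."""
    count = count_from_title(title)
    if count is not None and count != listed_units:
        return price / count
    return price / listed_units

# The TP5x listing from the parent comment: 1 "unit" but really 4 dozen.
print(round(per_ball_price(169.95, 1, "TaylorMade 2021 TP5x (3+1 Box) 4DZ"), 2))  # prints 3.54
```

Rather than silently overriding, it may be safer to only flag listings where the two counts conflict and queue them for the manual-override list you already have, so a bad title parse can't make prices worse.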