Particle physics experiments have stopped answering to grand theories (aeon.co)
97 points by pseudolus on June 21, 2018 | 58 comments


I see a human problem with an incremental experiment-driven model, instead of a theoretical model.

It's easier to get money to build a new collider if there's something specific to look for. "We're expecting to find the Higgs, but we need a better tool to look for it."

Trying to do the same with a data-driven model is a hard sell. "We haven't found anything, please give us a better machine so we might find something, but we don't know what"


> Trying to do the same with a data-driven model is a hard sell

Just finished watching a SLAC Public Lecture [1]. Dr. Arkani-Hamed mentioned that the LHC produces about 1 billion collisions every second. Of those, about 10 per second are top quarks. Certain hypothesized particles are expected to be detected at a rate of one per minute, per hour, or even per day.

Blindly data mining amidst those kinds of frequencies doesn't make sense. There is a reason the scientific method starts with hypotheses.

[1] https://youtu.be/t-C5RubqtRA?t=33m18s
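
To make that concrete, here is a rough back-of-envelope sketch using only the rates quoted above (the once-per-day figure stands in for one of those rare hypothetical particles; everything else is just arithmetic):

    collisions_per_second = 1e9      # LHC collision rate quoted in the lecture
    top_quarks_per_second = 10       # top-quark rate quoted in the lecture
    seconds_per_day = 86_400

    rare_signals_per_day = 1         # a hypothetical particle produced about once per day
    collisions_per_day = collisions_per_second * seconds_per_day

    # Fraction of all collisions that contain the rare signal
    signal_fraction = rare_signals_per_day / collisions_per_day
    print(f"signal fraction: {signal_fraction:.1e}")   # ~1.2e-14

    # A "blind" search therefore has to reject roughly 14 orders of magnitude of
    # background, which is why you need a hypothesis telling you what to select on.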


> Trying to do the same with a data-driven model is a hard sell

Ex-physicist here. JumpCrisscross is absolutely right. It's a hard sell, and it should be. Saying "I want a better machine but I don't know what I'm looking for" is only slightly better than "I want to put objects together but I don't know what machine I'm building."

The space of what's conceivably possible is too large to explore if you're not guided by a hypothesis, even if you had no money constraints. It's not a human problem; reality is just complex.


Since you called yourself an "ex-physicist", would you mind saying why you quit and/or what you are doing now?


I quit because A) I realized that academia is a bit rotten in some specific respects (the linked article and the post by peterburkimsher are not instances of what I have in mind; I think both are mostly wrong and/or pointless), and B) money.

Regarding what I'm doing now, let me know if you want to talk in private. Nothing too exciting though.


This is not that atypical. I am in the same situation: PhD in physics (particle, BTW) and left for industry.

The main reasons were: I am not that good a physicist, feudalism in academia (socially speaking), I fell in love with IT, and an extraordinary opportunity in industry. All the stars aligned for a week and I took the opportunity.

I love physics (and science in general), but what I do now in industry is fantastic (broadly speaking, IT). The opportunities for advancement other than age-based ones are better too (not that this is a rule in academia, but it was the case where I was).

And money is such that my family is secure no matter what happens to me.


This comment exhibits an increasingly common fallacy in discourse -- glorifying "data" in the spirit of overly naive empiricism.

Suppose I seek funding from the NSF for the following experiment: I'm going to smash peas into a wall (or collide peas traveling in opposite directions), and note down the splatter patterns. Using Advanced Machine Learning (TM) I shall extract correlations in that data, and make new discoveries!

It is only theory that can decide whether this experiment is worth pursuing or not, and unavoidably so. That's why I would be laughed off for wanting to perform the peas experiment, but we fund billions of dollars of experiments which do the same thing but with protons instead of peas!

One needs to have a theoretical framework which guides the experiments we perform and the way in which we interpret data. There is always this subtle symbiotic interplay between theory and experiment, where each one guides the next step of the other.

Wanting to get rid of theory and just do experiments is pointless and stupid. If anyone believes otherwise, I have a bridge to sell, cuz you know... empiricism is all about experiments... you can do experiments on the bridge to generate data... and data is awesome and will lead to discoveries! :-)


No chance of getting funding by justifying 'this $multi-billion collider may or may not find new physics outside the Standard Model'.

The LHC only got funding because it was guaranteed to answer the question 'is there a Higgs Boson'.


Nobody guaranteed anything with the LHC.

It's good to have a clear hypothesis for an experiment (especially an expensive one), and the predictions for the mass of the Higgs boson were a good starting hypothesis for the LHC.

But there was also the expectation that, colliding at higher energies than ever before, there would be the opportunity for unexpected discoveries.

The proof that the brief for the LHC was more than just "find the Higgs" is that the LHC found the Higgs, yet is still operating--and is just about to get a time-consuming and expensive upgrade in luminosity.


Your last statement needs a citation; I think that's the line mainly used to sell it to the public in pop sci headlines.


The people who funded it were politicians who cater to a public that reads the pop-sci magazines. I wonder whether any significant number of the people who actually made the funding decision looked at anything beyond "this will make you look like you support Science".


Funding for large science projects that I've seen, at least how the NSF does it, "filters up": the proposal is passed through a technically aware committee before being passed up to less technically aware people. There is also the P5 committee, which makes high-level recommendations about general directions and projects that can more holistically drive "important" (or at least what they deem to be so) science goals forward, and its advice tends to guide many levels of the funding process pretty strongly (if your proposal gets a low grade from them, you aren't getting funded, period). At the level of really big science, this "filtering up" process goes all the way to Congress, where of course you then have people with essentially zero science knowledge making the final decision. But to get to that point, I'd wager most if not all science proposals will have passed many valid technical hurdles set by knowledgeable people.


Experiment-first physics worked perfectly well from prehistory up to the first half of the 20th century. In fact, it has a much better track record than theory-first, so much better that they are barely comparable.

Also, theory first is barely working nowadays too. The entire field has a problem, and that problem looks much more like "all the easy problems are solved already" than "we must think things through before testing them".

Anyway, I do agree that experiment-first is much harder to sell. People dig for a good looking theory, even if it's worthless.


I tend to agree (as an ex particle physicist who spent 8 years hunting for hypothetical supersymmetric particles at the LHC), but to be honest what works is the scientific method, and that requires both blind evidence-gathering and theory-driven falsification to produce new knowledge. In particle physics we switched paradigms after the invention of QCD and the electroweak theory, and now it's difficult to get back to purely discovery-driven research, as the risk seems too high to both sponsors and researchers.


Hats off to the author and physicists. Imagine a blind and deaf person painting and writing songs.

I highly recommend Feynman's QED book [0]; it's easy to read and at the same time blows your mind. You'll be proud of yourself after :)

[0] en.wikipedia.org/wiki/QED:_The_Strange_Theory_of_Light_and_Matter


There is a blind (since birth) painter who is considerably famous - https://www.youtube.com/watch?v=JTDQcSS809c


It's a good article, but I have one big complaint: the terminology of bottom-up and top-down is reversed!

In the context of politics, magister-dixit or top-down has a negative connotation, and democracy or grass-roots or bottom-up has a positive connotation.

In physics we should disregard the political connotation.

Yes, historically most new physics was top-down observation, i.e. noticing small (unexplained) deviations, first modeling the deviation, and then eventually realising the underlying cause. A good example is the discovery of elliptical planetary orbits: while most planets did seem to move in perfect circles, one of them had a measurable deviation. Kepler exhaustively tried to fit it with "state of the art" mathematics (degree-4 polynomials, or ovals), because he incorrectly assumed others would already have tried the simpler conic sections (ellipses for closed orbits). Only after the ovals kept failing did he try the too-obvious-otherwise-it-would-already-have-been-discovered conic sections, and found a very precise match.

Examples of bottom-up physics are the early theories of statistical mechanics like Ludwig Boltzmann's, or the atomistic theory of chemistry: with little experimental evidence, they could derive physically realistic behaviours for large ensembles of particles, and only much later were molecules and atoms discovered. Bottom-up is postulating smaller particles and investigating how they would influence the behaviour of bigger collections, and then trying to prescribe experiments to prove their predictions.

So when the author of this article talks about bottom-up, he is actually describing the return to top-down (explain as you measure deviations), and when he talks about top-down, he seems to actually refer to the bottom-up postulating of new physics with hopefully falsifiable tests...

Again, a good article, but its nomenclature of top-down and bottom-up seems reversed to me.


Not quite. When talking about bottom-up and top-down, one can talk about small to large scales in size (microscopic to macroscopic), or one can talk about small energies to large energies. The two descriptions are inversely related, because small length scales correspond to large energies and vice versa (roughly due to the Heisenberg uncertainty principle). Conventionally, when particle physicists talk about top-down and bottom-up, they are talking about energy scales: "top" is high energy, more fundamental, smaller distances, and "bottom" is low energy, more emergent, larger distances.
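
In rough symbols (just the standard uncertainty-principle estimate), the correspondence is

    E ~ hbar * c / (length scale probed)

so resolving smaller distances requires higher energies, which is why the two orderings run in opposite directions.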


First, thank you very much for responding,

but still I have the impression that bottom refers to the more fundamental perspective, and top refers to the more general perspective...

i.e. the fundamental equations for electromagnetism are the Maxwell equations with epsilon, mu and c for vacuum, and then one can use these as the microscopic equations for other media and generate effective equations for those media, so that the new Maxwell equations are similar but modified: different scalar epsilon, mu, c, or perhaps for birefringent crystals the epsilon and mu are generalized to matrices/tensors, or for nonlinear optics there are second- and third-order tensors... and those new effective equations are top and the fundamental ones are bottom... at least that's how I think most people would denote top and bottom...

I kind of see what you mean, but I don't think top and bottom on energy scales is very significant; I think the derivability of new effective constitutive equations/properties from fundamental equations/properties is more what top vs bottom is about...
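
To put those constitutive relations in symbols (schematic, standard textbook forms rather than anything specific to this thread):

    D   = epsilon * E                                          (simple medium: epsilon a scalar)
    D_i = sum_j epsilon_ij * E_j                               (birefringent crystal: epsilon a tensor)
    P   = epsilon_0 * ( chi1*E + chi2*E^2 + chi3*E^3 + ... )   (nonlinear optics, schematically)

the effective epsilon and chi's are, in principle, derivable from the vacuum equations plus the microscopic structure of the medium.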

but I agree, at a certain point it's not a discussion of physics but just dictionary wars...


The next big movement in understanding the physical laws of the universe is going to come from the areas we least understand. Right now, that's astrophysics.

As John Mather put it, right now we're not even sure what the right questions to ask are... but we definitely don't understand it yet.


A completely clueless question: do they dramatically reconfigure the LHC and other colliders for each type of experiment?

Or can you do something like Google did with Tri Alpha Energy [1] to explore interesting parts of the state space, and just generate tons of data for people to chew on?

Or does reconfiguring it for human-directed experiments effectively explore the state space enough that there's data to chew on just fine?

Asking out of curiosity. I know the smartest people in the world are working on this stuff; not trying to “hey what if they just tried X…” on this :-)

[1] https://ai.googleblog.com/2017/07/so-there-i-was-firing-mega...


The latter. They generate much more data than they can even retain, so they filter it in multiple stages.


I think your answer (with which I agree) is a confirmation of the former. The questions to be answered must be posed first, so that the low-level filtering can throw out what we aren't interested in, because we can't record and store all the hits for every event. (Although I think one should consider it: perhaps if a huge part of the budget had always gone to storage, we might have petabyte databases for lower energies, which could now be stored on consumer HDDs, and new patterns in old experiments could be found, while at the time of those old experiments this may have seemed senseless, as no individual physicist could back then hope to have a local copy. Similarly, today it would generate preposterous amounts of data, but if we record it now (even if it's expensive), then physicists 20 years from now might be able to store it on their personal media...)


IANAP, but as far as I know, it's both.

There are filters that throw away most (nearly all) of the data generated by the LHC as soon as it's acquired. Some experiments require adapting those filters to gather different data. But most of the time the filters in place do collect the relevant data, and experimental analysis consists of walking through tons of already collected data.
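
As a purely illustrative sketch of the multi-stage idea (this is not the actual LHC trigger logic; the event format, thresholds and rates below are made up):

    import random

    def hardware_trigger(event):
        # Stage 1: a fast, coarse cut, e.g. "was enough energy deposited?"
        return event["total_energy"] > 100.0

    def software_trigger(event):
        # Stage 2: a slower, more detailed selection, e.g. "does it match a
        # signature some analysis actually asked for?"
        return event["n_muons"] >= 2 or event["total_energy"] > 500.0

    def toy_event():
        # Stand-in for one collision's detector readout
        return {"total_energy": random.expovariate(1 / 50.0),
                "n_muons": random.choice([0, 0, 0, 0, 1, 2])}

    events = (toy_event() for _ in range(1_000_000))
    kept = [e for e in events if hardware_trigger(e) and software_trigger(e)]
    print(f"kept {len(kept)} of 1,000,000 toy events")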


The point is near the end:

"All these challenges arise because of physics’ adherence to reductive unification. Admittedly, the method has a distinguished pedigree. During my PhD and early career in the 1990s, it was all the rage among theorists, and the fiendishly complex mathematics of string theory was its apogee. But none of our top-down efforts seem to be yielding fruit. One of the difficulties of trying to get at underlying principles is that it requires us to make a lot of theoretical presuppositions, any one of which could end up being wrong."

"Instead, many of us have switched from the old top-down style of working to a more humble, bottom-up approach. Instead of trying to drill down to the bedrock by coming up with a grand theory and testing it, now we’re just looking for any hints in the experimental data, and working bit by bit from there."

In practice, both ways of looking at the evidence are needed. And in pure science, sometimes a lot has to be done in many different directions before some of the non-obvious directions bear fruit. One of the big dangers is designing experiments that will surely "confirm." Big insights also come when something that most people expect is not confirmed, like the famous

https://en.wikipedia.org/wiki/Michelson%E2%80%93Morley_exper...

Without these experiments Einstein wouldn't have been able to invent General Relativity. Popular culture talks too much about Einstein but sadly doesn't understand, and hardly even knows, Michelson.

https://en.wikipedia.org/wiki/Albert_A._Michelson

His experiment, from the end of the 19th century, is also the basis of the LIGO experiments that just recently confirmed gravitational waves:

https://www.ligo.caltech.edu/page/ligos-ifo

"Although much more sophisticated, at their cores, LIGO's interferometers are fundamentally Michelson Interferometers, a device invented in the 1880's."

The 1880s experiments didn't "confirm" what was "expected", but they are one of the most impressive science success stories, looking back from what we know now.

Of course, LIGO itself is a technological marvel compared to what was possible in 1880, and we should celebrate all the advanced experiments that extend our reach. Not "confirming" the expected can be an even bigger story, even if it disappoints those whose "pet" "expected" theory is not confirmed.


Kind of tangential but this reminds me of Feynman:

> It doesn't matter how beautiful your theory is, it doesn't matter how smart you are. If it doesn't agree with experiment, it's wrong.


The funny thing about that quote is that Feynman ignored it all the time. Back during the post-WWII explosion, Feynman would get told about some exciting new result and would regularly say that the result was wrong because it contradicted theory and must be experimental error. And it usually was error. A lot of those anecdotes are in his oral history: https://www.aip.org/history-programs/niels-bohr-library/oral...


An even funnier aspect is that he was well aware of a problem that is still unsolved today, which he thoroughly expounds in the Feynman Lectures, volume 2, chapter 28: the mass of the electron, with all the historical attempts at fixing the theory. He mentions (correctly) but does not show that even in the quantum version the problem remains, and even though he knows there's an inconsistency (the electron mass comes out both finite and infinite, a plain contradiction), he goes through all the different attempts at resolving it, and for each attempted solution describes why that way of trying to remove the inconsistency is doomed... He keeps the best argument for the end... the sledgehammer: there is another electron-like particle (the muon), with the same charge but a totally different mass... the only conclusion is that point charges must be composite!

EDIT: my comment seems random the way it is, but the quote implies that a theory must be correct/consistent in order to matter at all... while he knows there is an unresolved inconsistency!


Experimental data comes with confidence levels. There's no point in getting excited about extravagant data at low confidence.

This does not detract from his phrase in any way.


Yes, it does. He didn't say, "it doesn't matter how beautiful your theory is, unless of course, it's sufficiently beautiful that it has high posterior probability such that you will dismiss an indefinite but potentially large number of experiments showing it's wrong until enough experiments have been done that you have to admit that it's wrong because it disagrees with them".

Of course, Feynman was right to dismiss those experiments. It's his quote which is cute, oversimplified, and wrong. It's Duhem-Quine all over.


I have a better one :)

"You have to worry about your own work and ignore what everyone else is doing" - stepsandleaps.wordpress.com/2017/10/17/feynmans-breakthrough-disregard-others/


> The popular culture talks too much about Einstein but sadly doesn't understand and hardly even knows Michelson.

I'm not sure if that's true. At the very least, his story is given a lot of attention in my country's high school physics syllabus.


So, the LIGO experiment confirmed that Einstein's theory is wrong. Right?


I’m not sure if you are referring to something specific in that post but no, quite the opposite - LIGO confirmed Gravitational waves, predicted by Einstein’s general relativity.

The Michelson-Morley experiments were a failure in that they didn’t find what they were looking for (Aether), but relativity came out of trying to understand that “failure” (rather, the null results of the experiment - calling not seeing something you expect a failure is an oversimplification)


Look, we have 3 very similar experiments (Michelson-Morley, the Sagnac effect, and LIGO) and 3 very different and complex interpretations of their results, while 1 simple explanation can fit them all.

I must note that Michelson-Morley didn't find an Aether _wind_, which is expected if you take into account the Sagnac effect.


Maxwell's equations show that the speed of light is invariant in all inertial reference frames. In other words, the speed of light is the same no matter what speed you are going. This was a bigger inspiration to Einstein than the MM experiments.
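
(Concretely, the wave equation that falls out of Maxwell's equations propagates at

    c = 1 / sqrt(epsilon_0 * mu_0)

which depends only on the vacuum constants, with no reference to how fast the source or the observer is moving.)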


Without the experiment, the equations are just symbols that may or may not have any connection to reality.


The speed of sound is invariant in all inertial reference frames. The speed of sound of a supersonic plane is constant. The speed of sound in a supersonic plane is constant. Air does not exist: it's the contraction and expansion of space-time, not air. (Joke)


Wow, an article explaining the Standard Model and unification, but also mentioning "E=mc2 (energy equals the square of the mass)"!


You just forced me to skim the article for that nugget!


Oooh, ouch.


Minor point, but:

> A test case for the bottom-up methodology is the bottom meson, a composite particle made of something called a bottom quark and another known as a lighter quark.

I thought mesons were just a quark + an antiquark, I suppose they meant another (anti)quark that's lighter than the bottom quark?


Mesons come in a wide variety. Some are a quark plus a different anti-quark (e.g. up + anti-down). Neutral mesons are not just quarks plus their own anti-quarks; they are usually quantum superpositions of multiple quark/anti-quark pairs. For example, the pi-0 meson is a superposition of up + anti-up minus down + anti-down, all divided by the square root of 2 (quantum mechanics is weird). However, it gets even more complicated when spin comes into play: an up + anti-down meson could be a charged pion with spin 0 and a mass of 139 MeV, or it could be a rho meson with spin 1 and a mass of 775 MeV.
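
Laid out side by side (quark content only, ignoring colour; the charged-meson masses are the ones quoted above, the neutral pion's ~135 MeV added for comparison):

    pi0  = ( u ubar - d dbar ) / sqrt(2)    spin 0, mass ~135 MeV
    pi+  =   u dbar                         spin 0, mass ~139 MeV
    rho+ =   u dbar                         spin 1, mass ~775 MeV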


Just have to add a comment on some things.

It is unfortunate that they refer to the 26 dimensions...

> [...] particles as tiny vibrating loops of string that exist in somewhere between 10 and 26 dimensions.

... of what is known as "bosonic string theory". It is called bosonic because it only has bosons (e.g. photons) and no fermions (e.g. electrons). This is obviously not a realistic theory for that reason, and it also suffers from other serious problems. But this was the first formulation of string theory, not even meant as a fundamental theory of gravity, and if you open some of the famous textbooks on string theory, you do find that they start with bosonic string theory. This is because it is a gentler introduction.

The "between 10 and 26" comment is also a bit unfortunate. String theory is ten dimensional (space-time, meaning I include time in there). A lot of physics is formulated in terms of perturbation theory, meaning you have that the full result is expressed as an infinite sum of smaller and smaller terms, and you can truncated this infinite series and get a approximate result. This holds if the parameter you are expanding indeed is ever-smaller, which it isn't necessarily. One of those parameters (string theory has two of them built in) is the string coupling "g_s". If you start taking this parameter large than one, so the perturbation breaks down, string theory (type IIA in particular) grows an extra dimension into a theory known as M-theory. Note that this theory has no strings, it only has other fundamental objects. Similarly, there is an F-theory that is in some sense 12D, which also describes non-perturbative physics.

So, if physics in our universe is described by this non-perturbative regime, then sure, it's 11D or so, but we do not know which parameter regime of string theory our universe is in ( yet ;) ). But it is not a willy-nilly choice.

Then, regarding effective theories versus fundamental ones. Effective theories, or models rather, are things like: the inflationary model, the cosmological constant to explain dark energy, the standard model, the minimally supersymmetric standard model, F(R) gravity, DBI gravity, and so on. The problem is that there are too many of them. Claudia de Rham gave a talk a month or so back in which she said something along the lines of (this is how I remember it) "We are quite good at excluding effective gravitational models, but we are however better at constructing new ones." We need some deeper understanding of what is allowed when it comes to model building, and even theory building. But the point is, for gravity for example, that there are several models out there that are consistent with observations, but we do not know which ones can be consistently included in a fundamental theory.

And theory gives us ideas of what to look for. In this thread "seeing extra dimensions" is discussed, but it is misrepresented a bit. There are potentially several ways we could start seeing evidence for extra dimensions, at least in principle. For example, "compactification", which means making the extra dimensions small and hidden from us, comes with the so-called "Kaluza-Klein tower" of particles, in which the particles' masses are spaced inversely to the size of the extra dimensions (small extra dimensions -> high masses). So this is one indirect way one could in principle see them (the states may be very massive, and virtually undiscoverable, but space-time warping brings down these masses... so we don't know).
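
In the simplest textbook case, a single extra dimension compactified on a circle of radius R gives (schematically, in natural units)

    m_n ~ n / R,    n = 1, 2, 3, ...

so the smaller the extra dimension, the heavier the lightest Kaluza-Klein state, which is part of why tiny extra dimensions are so hard to see at colliders.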

Some of the particle physics experiments are looking for, in a sense, "anything that deviates" from the standard model. Note that for such experiments, any fundamental theory would have the same "problem" as string theory: it must show new physics at energies higher than those already explored. LHC results are often presented as a "string theory is wrong" result, which is not true; what they rather show is how boring the universe is at those energies, independently of the theory. Hopefully theory can give predictions in other places as well (in addition to the predictions of susy, extra dimensions, etc.), like what gravitational waves can say about the fundamental nature of black holes.


If they are going to use up to 26 dimensions in their theories, of what use is a machine that only senses the 4 dimensions we normally perceive? Maybe all these missing particles are in some of the other 22? I'm sorry, but I don't think solid answers to some of the big questions are ever going to be found using these experiments.

I also question whether thinking in terms of particles is even the right paradigm. We may be just attempting to mold observations into our own incorrect model.


It's fun to speculate, but it's worth pointing out that science progresses via actual work.

If you know how to build a machine that senses more than 4 dimensions, please build it and run some experiments! If you can't, then recognize the limitations of critique. Anyone can sit back and say "hey, what if we're doing things wrong?" Scientists already spend most of their time in this area of thinking.

The operative question isn't "what use are our limited experiments?" Scientists know their experiments are limited. The real question is, "how, specifically, can we do new experiments?"

Or for theoreticians, it's not "is our model incorrect?" Scientists know the theories aren't correct... not totally correct, anyway. The real question for a theoretician is, "what new model can I propose that matches all known evidence, and also opens the door for new understanding?"


I guess I was just thinking out loud, but then I did read the article and have read many others like it over the last 30+ years.

I can see it must be fun to speculate, but also seemingly profitable for some of these guys too, eh? I wonder how much a "theoretical physicist" gets paid, anyways?


It doesn't work like that ("in some of the other 22").

One crude way to think about it is that a particle has a location in each dimension. In other words, position is a vector, whether in 3D or 26D or whatever. It's not possible to be "in other dimensions" any more than a normal point in Cartesian space can lack an X coordinate.

The popular sci-fi idea that other dimensions constitute an "elsewhere", e.g. dimensions or "planes" outside our reality, is not what is meant when particle physicists talk about 26D space.


This is what I’ve always assumed, but then I’ve heard from more than one place that some of string theory’s 26 dimensions are too small to observe, or are in between other dimensions, or other (seemingly) weird stuff like that. I obviously don’t know too much myself about physics :-), but do you know what that means?


The smallness issue is pretty straightforward. https://en.wikipedia.org/wiki/Compactification_(physics)

Assuming that the universe is closed like the surface of a sphere, heading off in any direction will eventually bring you back to where you started. Compactified dimensions are roughly like directions in which, for example, that trip is very much shorter, maybe too short to observe. It gets complicated and I'm not a physicist, but the idea that the size of the universe is different on different dimensions is the key one.

As far as in-between dimensions goes, I don't know. Maybe it was a reference to fractal dimension.


Here's a bit about Kaluza-Klein extra dimensions, as one finds in string theory. K-K theory started with just one extra dimension in an effort to unify gravitation and electromagnetism.

https://plus.maths.org/content/10-dimensions-and-more-string...

I'll try to generalize and simplify a bit:

There are other types of theory with extra spacetime dimensions, too. A physical interpretation (for many of them, anyway) of why we don't see the extra dimensions is that we can't move in them at all, or only very, very little.

Consider what you're doing right now, sitting in place at your computer. In a standard 3-coordinate system of your room, with the origin of Cartesian coordinates (x,y,z) = (0,0,0) the floor beneath the centre of your chair, and measuring distances in light-nanoseconds, your centre of mass is a couple light nanoseconds in the z direction, so let's label that (0,0,3). We can extend from Cartesian (spatial) 3-coordinates to Minkowskian (spacetime) 4-coordinates by extending (x,y,z) to (x,y,z,t). In that case, we might decide your centre of mass right now compared to an extension of our previous origin is (0,0,3,3 * 10^10), where we measure t in nanoseconds rather than light-nanoseconds[1]. If there is a fifth coordinate, and it's spacelike, we'd measure in light-nanoseconds and then you're now at (0,0,3,4 * 10^10,0), because you're still not moving in space, only in time.

If you were to raise or lower your chair, z would increase or decrease respectively, t would continue to tick along in nanoseconds, and all the other dimensions would remain 0. Likewise, if you slide your chair to the right, you'd increase the x dimension so now you could be (3,0,3,6 * 10^10,0).

You can keep moving arbitrarily within this local system of coordinates, and try as hard as you like, you would not be able to go backwards in the time coordinate, and you would not be able to move in either direction in the extra spacelike dimension.

Theories that import extra spacelike dimensions typically allow for extremely small movements through them, usually as a small oscillation, say (still using light-nanoseconds) 0 -> 10^-35 -> 0. That's so small you won't notice, and it usually happens across a tiny interval along the timelike dimension (e.g., you're only nonzero for a tiny fraction of a nanosecond). Moreover, even such a small oscillation is only allowed under extreme (usually high) energy conditions.

Kaluza-Klein's original theory represented this sort of minor deviation from zero as a loop with a circumference on the order of 10^-33 cm, compared to the line-like (uncurled/unlooped) other axes which extend in principle to infinity. [2]

There's the opposite form of inaccessibility, where everything in the universe that we can observe propagates along the extra dimension identically to everything else, so for example you can't see the difference in the "w" dimension between you and Voyager 2, even though you can certainly see differences in the other four dimensions. [3] For an example of how this could be useful, one can write down a theory in which, in such a background, a very weakly interacting particle (like a graviton or a neutrino-like particle) might propagate differently from everything else.

Finally, from various precision tests of inverse-square laws, and careful observation of the orbits of massive bodies at many different mass scales, we can place some pretty strong bounds on how accessible small and large extra dimensions are. They're basically inaccessible to humans at the present level of technology. Consequently, it is still entirely sufficient to use 3 spatial and 1 timelike dimension, and safest to assume (thanks to the weight of a lot of evidence) that that's all there is. If high-energy particle physics or the study of extreme gravitational physics provides evidence that 3+1 is insufficient, that'll be really cool for theoreticians. But that's a huuuuuge if.
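
(The inverse-square tests work roughly like this, schematically: with n extra compact dimensions of size R, the gravitational force between masses separated by r would go as

    F ~ 1 / r^(2+n)    for r << R,    returning to the usual 1/r^2 for r >> R

and torsion-balance experiments see no such deviation down to sub-millimetre separations, which is where much of the bound comes from.)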

- --

[1] ignoring issues of sign

[2] this assumes a flat spacetime highly similar to Special Relativity; importantly KK theory deliberately breaks that similarity, but this conceptual comparison of the shape of the five axes will have to do unless I can think of some better way of putting it non-mathematically

[3] ignoring general relativistic effects


Wow, it all makes sense now.... Thanks for the detailed (and accessible) explanation!


If we live in a world of three spatial dimensions, plus one of time, what use is a camera that only captures a two-dimensional image?


For some reason it's very hard to build 26-dimensional machines in a 4-dimensional universe.


[If the theory is true ...] The machine has 26 dimensions, because it is itself a 26-dimensional object. The difference is that in 3 of the dimensions its size is a few inches, in the "time" dimension its size is a few years, and in the other dimensions its size is very, very small (I don't remember the actual guess, because nobody is sure; 10^-100 inches?)

So for everyday calculations you can use the 3+1 dimensions and ignore the others. For calculations about effects on tiny elementary particles, the other dimensions are important.

Imagine that you are living on the surface of a long garden hose. Is it a 1-dimensional world or a 2-dimensional world?


Yeah, but is an "inch" even a thing in dimension #18?

And this smallness aspect wouldn't disprove what I originally theorized here anyways... these subatomic particles are fucking small as shit, aren't they?????? lol... sorry


[If the theory is true ...]

> Yeah, but is an "inch" even a thing in dimension #18?

Yes. The idea is that locally the "distance" called "s" is

-s² = -c²t² + x² + y² + z² + w² + v² + ... + q²

where w, v, ..., q are the other coordinates; these are not the official names, I just made up some names because it's difficult to use subscripts here.

This is only local, because if you move an inch in the "w" direction you will wrap around many, many times, so the distance is more complicated (like the more complicated calculation to transform latitude and longitude into the distance you must walk over the Earth's surface).

So an inch in "w" is something clear, but moving an inch in "w" will not take you very far away, because the universe is wrapped and you will only make a gazillion turns, like on a carousel.

> And this smallness aspect wouldn't disprove what I originally theorized here anyways...these subatomic particle are fucking small as shit, aren't they?????? lol....sorry

The particles are point-like; their size is "0". The problem is that you can't keep them perfectly aligned in the hyperplane where w=0, v=0, ..., q=0. When you build a machine, you somewhat fix the position of the particles in x, y and z, but you have no control over their position in w, v, ..., q. So the machine is at the location 0<x<1, 0<y<1, 0<z<1, w=whatever, v=whatever, ..., q=whatever.

[In string theory, the size is not 0 because the particles are part of a tiny string that has a tiny but not 0 size. So wait a few decades/centuries until we are sure.]
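
A toy illustration of the "carousel" point above (my own sketch; the circumference is made up and deliberately far larger than any real guess, just so the arithmetic stays well-behaved):

    def net_displacement(distance_travelled, circumference):
        # Shortest separation from the starting point after travelling along
        # a wrapped ("compactified") dimension of the given circumference.
        d = distance_travelled % circumference
        return min(d, circumference - d)

    L = 1e-7                                # made-up circumference of the wrapped dimension, in cm
    inch_in_cm = 2.54

    print(net_displacement(inch_in_cm, L))  # always below 5e-8 cm, however far you travel
    print(inch_in_cm / L)                   # ~2.5e7 laps around the "carousel"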


thanks



