Optical interferometers have resolved Betelgeuse down to 9 milliarcseconds (4 x 10^-8 radians), so this isn't even the highest resolution image yet of this star[1].
ALMA consists of 66 antennas, most of which are 12 meters in diameter. That's about 7000 square meters of receiving area.
Betelgeuse is 642 light years away, which is 6x10^18 meters. The area of a sphere with that radius is about 5x10^38 square meters; call it 10^38. So roughly 10^-34 of the power emitted from Betelgeuse ends up falling on the ALMA array.
According to Wikipedia, the luminosity of Betelgeuse is 90-150 thousand solar luminosities (one solar luminosity is about 4x10^26 watts). Let's call it 10^31 watts. So the total power received from Betelgeuse by ALMA is about a milliwatt.
But that's the total power, and the ALMA array only receives at 0.32 to 3.6 mm. To figure out what proportion of Betelgeuse's power falls in this range, we need to integrate Betelgeuse's spectrum over that band and divide by the integral over the whole spectrum. That part of the calculation is not so easy, but let's see what we can do. Let's assume that Betelgeuse has a blackbody spectrum. Its temperature is 3500 K. We can use this handy dandy blackbody spectrum calculator:
When you crunch the numbers it turns out that about 10^-7 of the total power falls in the range 0.32 to 3.6mm. So the total power received by ALMA is about 10^-10 watts.
0.32-3.6mm is in the far infrared / submillimetre range. A photon at these wavelengths has an energy of about one meV, or about 10^-22 Joules. So 10^-10 watts is about 10^12 photons per second.
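For anyone who wants to poke at these numbers, here's a quick Python sketch of the whole back-of-envelope chain. All inputs are the rounded values used above (not real ALMA parameters), so treat the outputs as order-of-magnitude only:

    import numpy as np

    h, c, k = 6.626e-34, 3.0e8, 1.381e-23   # SI constants
    sigma = 5.670e-8                         # Stefan-Boltzmann constant

    L = 1e31        # assumed luminosity of Betelgeuse, W (~10^5 solar)
    d = 6e18        # distance, m (~642 light years)
    area = 7000.0   # assumed ALMA collecting area, m^2
    T = 3500.0      # assumed photospheric temperature, K

    # Geometric dilution: fraction of the star's output that lands on ALMA.
    frac_geom = area / (4 * np.pi * d**2)
    P_total = L * frac_geom                  # ~1.5e-4 W here; rounder numbers above give ~1 mW

    # Fraction of a 3500 K blackbody's power emitted between 0.32 and 3.6 mm,
    # by integrating the Planck function B_lambda over the band.
    lam = np.linspace(0.32e-3, 3.6e-3, 200_000)
    B = (2 * h * c**2 / lam**5) / np.expm1(h * c / (lam * k * T))
    band = np.sum(B) * (lam[1] - lam[0])     # band-integrated intensity
    total = sigma * T**4 / np.pi             # full integral of B_lambda over wavelength
    frac_band = band / total                 # ~1e-7

    P_band = P_total * frac_band             # ~1e-11 to 1e-10 W
    E_photon = h * c / 1e-3                  # ~2e-22 J for a ~1 mm photon
    print(f"power on ALMA:      {P_total:.1e} W")
    print(f"fraction in band:   {frac_band:.1e}")
    print(f"photons per second: {P_band / E_photon:.1e}")

With these inputs it comes out around 10^11 photons per second, within an order of magnitude of the 10^12 figure above; the difference is just the generous roundings along the way.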
I don't know how long the exposure times are, but my guess is that they are measured in hours.
At the risk of uncovering my ancientness, I remember reading astronomy books as a kid which specified that stars are so far away that they can't appear as anything more than dots of light even when viewed through the largest telescopes. Always makes me wonder what can be achieved in the future, especially since we're probably somewhere on an exponential progress curve. Of course, assuming a lot of optimism about not cutting off the branch we're sitting on before that happens...
Regarding visible spectrum observations, I've been waiting to see if anyone can come up with a consumer-accessible instrument with high enough resolution to image all of the moon landing sites.
For as long as I can remember, the same thing has been said about the surface of the moon, which is the primary fuel for hoax narratives.
With all the buzz about high-resolution arrays being cobbled together from current-generation megapixel digital cameras, I'd love to see someone pull this off. It'd be pretty cool to know that for a budget of maybe tens of thousands of dollars, and some software skills, it'd be within the reach of hobbyists to snap some legit photos of the original moon landing artifacts as they exist.
I'm not an expert on this stuff, but my understanding is the real issue isn't the sensor, it's the lens you put in front of it. You essentially have two ways to see smaller things from a fixed vantage point--put a longer focal length in front of the existing sensor, or put a higher resolution sensor behind the existing focal length lens. The problem is that a longer focal length gets big and expensive very quickly, and existing lenses would limit the ability of a high resolution sensor. In many cases, high-end professional cameras with 36 MP or higher sensors are hampered by lenses that can't resolve that much detail.
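To put rough numbers on why optics are the bottleneck here, a sketch using the Rayleigh diffraction limit. The 550 nm wavelength and ~4 m Apollo descent stage are assumed round numbers, not measured values:

    # Rough diffraction-limit sketch for the moon-landing-site idea above.
    WAVELENGTH = 550e-9        # m, green light (assumed)
    MOON_DISTANCE = 3.84e8     # m
    FEATURE_SIZE = 4.0         # m, roughly an Apollo descent stage (assumed)

    theta = FEATURE_SIZE / MOON_DISTANCE        # required angular resolution, rad
    aperture = 1.22 * WAVELENGTH / theta        # Rayleigh criterion: theta = 1.22*lambda/D

    print(f"required resolution: {theta:.1e} rad (~{theta * 206265 * 1000:.1f} milliarcsec)")
    print(f"needed aperture:     {aperture:.0f} m across")
    # ~65 m of effective aperture -- far beyond any single consumer lens,
    # which is why synthetic apertures / interferometry come up below.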
Now, maybe in a few decades the Caltech lensless sensor will be commercially available and will work well enough that we won't have to worry about optics anymore, and it will all be silicon, but Caltech's sensor currently has something like 16 pixels total, so it has a long way to go.
Right, if you have phase information you have a lot more options for (cheaply) making a synthetic aperture that's way bigger than your possible physical aperture.
I think it was in Cixin Liu's 'The Dark Forest' where there's a giant telescope at the edge of the solar system with twelve independently floating lenses focused and corrected by tiny thrusters...
What a terrible book. Naively written, leaves the impression that the author is more showing off that he can write about every sci-fi meme in existence ("I'm a big boy now, I know about that topic too!"). Unfocused, meandering plot. Nonsensical end. I regret wasting my time with it.
No, it's totally the image; I see it too and I didn't drink any coffee recently.
It's one of those weird image effects that sometimes happen. Related: a combination of red and dark blue text on a black background tends to jump out from the screen for me, seemingly gaining a third dimension. I wonder how sensitive those effects are to things like ambient light levels and your display's color calibration. I'm also curious whether anyone has tried to explain them with reproducible steps that could be used for crafting such images on purpose.
> Related, a combination of red and dark blue text on a black background tends to jump out from the screen for me, seemingly gaining a third dimension.
Strangely, the above patented system notwithstanding, I recall this (or something similar) being marketed (with glasses too) as part of a really cheap and cheesy comic book when I was a kid (I most likely still have copies of that comic book at home), sometime in the 1980s. Unless I am mis-remembering the timeframe (possible), it was long before ChromaDepth (I also recall it being used for firework displays).
I think it's caused by a mix of two things. First, when you stare at something for a while, the details tend to fade away, and this image is very susceptible to that since it has very soft colors at the border, and the gradients run from the outside toward the inside, which makes the perceived object shrink. Second, the eye constantly makes small involuntary saccadic movements, and whenever that happens the first effect gets "reset" and the perceived image grows back to its real size.
Look at some point in the middle of it without moving your eyes or head. Then, after some 30 seconds, slowly move your head towards the screen, and you'll see the edges get stronger/fainter as you move.
Space telescopes like the James Webb don't actually match the resolution of the ground-based arrays that were used here, which combine multiple receivers spread over a distance to create a much wider "eye".
I'm hoping that some day we'll have space-based arrays for this. Imagine if the virtual "eye" on the array was as wide as the orbit of the Moon!
Since I had to look it up out of curiosity/paranoia, I figure someone else might also be interested/relieved to know:
Betelgeuse has frequently been the subject of scare stories and rumors suggesting that it will explode within a year, leading to exaggerated claims about the consequences of such an event. The timing and prevalence of these rumors have been linked to broader misconceptions of astronomy, particularly to doomsday predictions relating to the Mayan calendar. Betelgeuse is not likely to produce a gamma-ray burst and is not close enough for its x-rays, ultraviolet radiation, or ejected material to cause significant effects on Earth.
>the star where the Elder Gods came from to battle the Great Old Ones [...]. Betelgeuse is also mentioned as the homeworld of the 'Ithria, a star-faring fungoid race.
Sure - if you take Derleth's mankind-involving Manichaeism as canon. That's something I don't recommend doing, because it rather misses the point, which is that we are as ants under the careless tread of powers far older, far greater, and far stranger than anything we can possibly imagine.
What is the right way to think about this? If we observe a supernova 600LY away, do we say that event is happening "now" from our frame of reference? Or should we think of it as happening "600 years ago", and the light from the event is only now reaching us?
If you think of causality itself moving at the speed of light (which of course it does), and think in light cones rather than referring to a nonexistent universal frame of reference with an authoritative clock, the "now" language seems more appropriate. Intuitively, it also feels quite wrong — but I attribute this more to a failure of intuition than of language.
No, this is not a good way to think about relativity and its implications.
When we talk about time in the sense of things happening "now" or "in the past" or "after", we have to think about the reference frame, _not_ the location. Reference frames are inherently global, as opposed to an "event", which is a single point in space and time whose coordinates differ between inertial reference frames.
The typical metaphor is to imagine that you fill space with a three-dimensional grid of clocks that are kept a fixed distance from each other (say by rigid rods). Those clocks are not moving relative to each other, and it is trivial to synchronize them because the distance between them is fixed -- fire a light pulse to your neighbor with the current time, and your neighbor will know when you sent the signal by subtracting off the time it takes light to travel the distance. This grid, which covers all of space, represents a single reference frame.
So if Betelgeuse had in fact exploded six hundred years ago, then the clock grid in Earth's inertial frame would have recorded the event of the Battle of Orewin Bridge on Earth at the same time as the event of the beginning of the Betelgeusian supernova in the Betelgeuse system.
The complexity comes from the fact that an inertial frame travelling towards Betelgeuse from Earth at some significant fraction of the speed of light would, with its clock grid, have recorded Orewin Bridge well before the supernova's start. That's independent of the amount of time it would take for those two clocks to communicate with each other -- we can imagine a scientist who finally downloads the logs for all the clocks in a given reference frame and collates the data.
What we _can_ say, though, is that once the event of someone on Earth seeing the supernova occurs, then that event is strictly _after_ the supernova -- no inertial reference frame will ever see that event, "observation of supernova", occur before the "initiation of supernova".
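A minimal numerical sketch of that point, using a plain Lorentz boost. The numbers are assumed for illustration (Betelgeuse at 600 light years, an example frame moving at 0.5c toward it; units of years and light years with c = 1):

    from math import sqrt

    def boost_time(t, x, v):
        """Time of event (t, x) in a frame moving at speed v along +x (c = 1)."""
        gamma = 1.0 / sqrt(1.0 - v ** 2)
        return gamma * (t - v * x)

    v = 0.5   # assumed example: a frame moving at 0.5c from Earth toward Betelgeuse

    # (t, x) of each event in Earth's frame, in years and light years.
    events = {
        "Orewin Bridge (on Earth)":        (0.0, 0.0),
        "supernova starts at Betelgeuse":  (0.0, 600.0),
        "supernova seen on Earth":         (600.0, 0.0),
    }

    for name, (t, x) in events.items():
        print(f"{name:32s}  t = {t:6.0f} yr   t' = {boost_time(t, x, v):7.1f} yr")

    # The first two events swap their time order between the frames (they are
    # spacelike separated), but "supernova seen on Earth" comes after
    # "supernova starts" in every frame, because a light signal connects them.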
It is happening in our NOW and that's all that matters from a cosmic perspective, since no signal can travel faster than light. In that sense there's only the NOW, even when you're observing your friend a few meters away. We might say that physics itself travels at light speed.
Think of it this way: If we knew that star is going to go nova within the next 100 years, could we (science-fiction hat and faster-than-light travel on) send help?
If the radius of Betelgeuse is 1400 times that of our sun, its volume is 1400^3, or 2,744,000,000, times the Sun's. Humongous doesn't begin to describe just how big it is.
I wonder what it looks like up close. Betelgeuse's mean density is a few milligrams per cubic meter, and probably way less in the photosphere - comparable to the density of Earth's atmosphere at the edge of space. And yet it still gives off light. What would it look like up close? Wispy and ghostlike? Would you even notice you're inside the star?
That's only the mean density. I doubt its mass is evenly distributed. You'd probably see through the wispy outer layers and mostly see the denser interior. Much like looking at Earth from space: you mostly don't see the atmosphere, just the point where the atmosphere transitions to the solid planet.
That's amazing resolving power, even if it's been done (maybe not in the exact same way) for quite a while now. Hopefully newer telescopes like the James Webb Telescope will be able to resolve even the _planets_ around other stars, which we are already able to do with the biggest of exoplanets today (good example -> [1]).
JWST, like ground-based telescopes now, will be able to separate the light from massive Jupiter-like exoplanets from the light of their host stars. But it won't be able to see things at a resolution like this image of Betelgeuse (so no surfaces of planets... that's quite a long time away).
That image is quite stunning. As the 2008 press release [1] states, it was one of the first successes at directly imaging an exoplanet. It raised some interesting questions, such as why such a massive planet could be found so far out (330 AU!). The scientific paper for this observation can be found in [2] for those interested in more astrophysical detail.
I feel compelled to offer an astronomer's clarification though. The planet in this image is not "resolved" in the technical sense. A resolved image usually means that fine details about the object are discernible spatially. For example, unresolved images of Betelgeuse provide a point source image, without details; a resolved image of Betelgeuse allows you to find spatial features such as that enormous bubble. Another example is, say, Jupiter: by eye or with a very modest telescope, Jupiter is a (bright) point of light. But with a moderate increase in resolving power, you can see all sorts of interesting features, such as the Great Red Spot, and the various cloud layers that vary with latitude.
Individual exoplanets are simply too small to resolve, even with JWST. Even being generous - assuming that the planet is bright enough to detect and that the host star doesn't overwhelm the signal - the angular sizes of exoplanets are minuscule. Let's assume some very generous numbers: a hypothetical exoplanet ten times the diameter of Jupiter (very large), and very, very close to Earth - let's say, 10 light years for simplicity and generosity. In arcseconds, the angular diameter of such an object on the sky is about 0.003". Smaller planets at more reasonable distances are even smaller. (The angular size of an object is just small angle trigonometry: in radians, about the width of the object divided by its distance.) Currently, science-class telescopes usually achieve about 1" resolution. JWST has about 0.1" resolution [3]; an interferometer like ALMA can, at its very best, achieve maybe 0.02" [4], though interferometers (as mentioned in other answers) sacrifice some things in exchange for spatial resolution.
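A quick check of that small-angle arithmetic in Python (the "10x Jupiter at 10 light years" planet is the hypothetical case from the paragraph above, not a real system):

    # Small-angle estimate: theta ~ diameter / distance, converted to arcseconds.
    D_JUPITER = 1.43e8            # Jupiter's diameter, m
    LY = 9.46e15                  # one light year, m
    RAD_TO_ARCSEC = 206265.0

    def angular_size_arcsec(diameter_m, distance_m):
        """Small-angle approximation of an object's angular diameter."""
        return (diameter_m / distance_m) * RAD_TO_ARCSEC

    planet = angular_size_arcsec(10 * D_JUPITER, 10 * LY)
    print(f"hypothetical planet: {planet:.4f} arcsec")   # ~0.003"
    print("JWST resolution:     ~0.1 arcsec")
    print("ALMA (best):         ~0.02 arcsec")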
This isn't to say you can't detect exoplanets at all - you can, even with a ground-based telescope like Gemini - but you probably won't resolve them, at least in this generation of telescopes, including JWST. But you can do a lot without spatial resolution - for example, you don't need to resolve an object to measure its spectrum, and spectral analysis can tell you a great deal.
I think maybe your definition of "resolved" is a little skewed. It is not about the features of the object, but about the Rayleigh criterion [1][2]
So we can already (and have been able to for a long time) "resolve" things as apparently small as exoplanets, but for resolving _surface details_ we are one order of magnitude away for interferometers and two orders of magnitude away for standard single-mirror telescopes. Right?
I gently disagree that this is a skewed definition. By convention, a "resolved" image of an object implies an extremely high quality measurement. On the other hand, we can resolve the separation of the star and planet in the Gemini image, but it would be misleading to claim that this is a resolved image of the planet. It may seem like a petty distinction, but I think it is better - for clarity's sake - to reserve the term "resolved" for its most natural contextual definition. Perhaps I am oversensitive to this as many non-astronomers are often led to believe that artistic renditions of exoplanets are actual images, not conceptions.
This type of direct detection was one of the first of its kind, so I wouldn't characterize this as an old capability - 2008 is relatively recent. Telescope turnover time is very long; Gemini remains a prominent telescope for science-class observations. Additionally, most new telescope generations don't achieve an order-of-magnitude improvement in resolution, or at least, not anymore. There are a lot of serious, decadal-scale barriers to improving resolution that must be overcome.
In terms of angular resolution, the order-of-magnitude estimates are the minimum improvements, assuming that such a close and large exoplanet exists. (AFAIK, there is no such system.) In practice we will likely need even better angular resolution, as there are not many systems within 10 ly, and extremely large exoplanets are not very common (relatively speaking).
Despite its immense size, Betelgeuse can be described as a "red hot vacuum." The density of its outer atmosphere / corona is very low compared to our sun's, so it's readily distorted into irregular shapes by the star shedding gas/material.
> In this picture, ALMA observes the hot gas of the lower chromosphere of Betelgeuse at sub-millimeter wavelengths — where localised increased temperatures explain why it is not symmetric.
I'm guessing the "explain why it is not symmetric" part is related to this.
> What's with the weird shape in the bottom left? I don't suppose stars can get squished like that, so it must be some sort of optical artifact?
It's possible that it could be something physical, as some of the other commenters have mentioned. But it also could be a result of the response of the telescope. Interferometers like ALMA do not directly measure the distribution of emission on the sky. Instead they sample the Fourier transform of the sky brightness. Because there are discrete pairs of antennas, the full Fourier transform cannot be measured. When images such as this are created, the effects of the incomplete Fourier-plane sampling are deconvolved. But that process does not create a perfect recovery of the sky emission distribution. One would need to look at the raw data, but another explanation for that extension is that it is an artifact of the Fourier-plane sampling (e.g., analogous to Gibbs ringing; https://en.wikipedia.org/wiki/Gibbs_phenomenon).
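As a toy illustration of the incomplete-sampling point (plain numpy, not real interferometric calibration or deconvolution): keep only part of an image's Fourier plane and transform back, and spurious structure appears around the source.

    import numpy as np

    n = 128
    y, x = np.mgrid[:n, :n]
    sky = ((x - n / 2) ** 2 + (y - n / 2) ** 2 < 10 ** 2).astype(float)  # a uniform disc "star"

    vis = np.fft.fft2(sky)                     # full Fourier ("visibility") plane

    # Keep only a random subset of Fourier samples, mimicking discrete baselines.
    rng = np.random.default_rng(0)
    mask = rng.random(vis.shape) < 0.15
    dirty = np.fft.ifft2(vis * mask).real      # "dirty image": disc plus artifacts
    # (absolute amplitudes scale with the fraction of samples kept)

    print("peak of true sky:   ", sky.max())
    print("peak of dirty image:", round(dirty.max(), 3))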
I could be interpreting the image wrong, but the star appears to be expelling some material on the left. It's not flattened on the bottom so much as bulging in the middle.
I guess it is the "vast plume of gas" that the article mentions:
The star has been observed in many other wavelengths, particularly in the visible, infrared, and ultraviolet. Using ESO’s Very Large Telescope astronomers discovered a vast plume of gas almost as large as our Solar System. Astronomers have also found a gigantic bubble that boils away on Betelgeuse’s surface.
I also thought maybe that was some gravitational lensing effect but TFA says there's a "giant bubble" on the surface so maybe it really is shaped like that.
IIRC we're talking about it reaching apparent magnitude ~ -12, give or take a couple of magnitudes (or about as bright as the full Moon - just imagine that much light coming from an infinitely tiny dot instead). BTW, with a declination of ~ +7° Betelgeuse is very close to the celestial equator, so the supernova will be visible from anywhere on Earth, save a very small region within 7° of the South Pole.
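A rough check of that figure via the distance modulus, assuming a core-collapse supernova peaking near absolute magnitude -17 (a typical ballpark value; the real peak varies by a few magnitudes):

    from math import log10

    M_peak = -17.0   # assumed peak absolute magnitude of a core-collapse supernova
    d_pc = 200.0     # Betelgeuse's distance: ~642 light years is roughly 200 parsecs

    # Distance modulus: m = M + 5*log10(d / 10 pc)
    m_apparent = M_peak + 5 * log10(d_pc / 10.0)
    print(f"apparent magnitude at peak: {m_apparent:.1f}")   # ~ -10.5

That lands around -10.5, i.e., within the "give or take a couple of magnitudes" of the -12 quoted above.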
It's fun to ponder how it will be that bright at every point at that same distance. Imagine a sphere centered on Betelgeuse with a 600 light year radius, with Earth on the surface of this sphere. The fact that enough photons reach my eyeballs to be able to see it at night is difficult to comprehend - the amount of energy needed to sprinkle every square millimeter of a sphere that size is just unimaginable. Now make that 100,000 times brighter during a supernova event -- crazy town. I feel like imagining the energy from a star spread out on a galactic scale helps me understand and appreciate the magnitudes better than any large number of luminosities or watts or photons can.
Well, when you consider that the Sun is now about 5 thousand million years old and happily living the quiet times of its middle age, you can see how "young" Betelgeuse is.
"Millimeter" refers to the wavelength of light. "Continuum" is a shorthand that in this context refers to thermal emission.
All matter emits thermal radiation. The spectral energy distribution of this radiation is determined by Planck's law [1]. If you measure the spectrum of an object, some part of it will be from this thermal emission, which is a continuous function of wavelength/frequency. In many cases, the conditions are right for spectral lines [2] to be produced, either in emission or absorption. Because these features are centered at specific wavelengths, they are not usually thought of as "continuous" features in the spectrum. (This isn't strictly accurate, as all spectral lines suffer some broadening into extremely narrow, but still continuous, features. Additionally, there are sometimes finite-width continuous features called "bands" that arise from so many lines being present that they blend together.) Generally the continuous part of the spectrum is called "continuum" while the other parts are "lines."
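A toy model of that "continuum plus lines" picture (the line wavelength, width, and strength here are made up purely for illustration):

    import numpy as np

    h, c, k = 6.626e-34, 3.0e8, 1.381e-23
    T = 3500.0   # assumed photospheric temperature, as elsewhere in the thread

    lam = np.linspace(0.5e-6, 3.0e-6, 5000)                                # 0.5-3 micron grid
    continuum = (2 * h * c**2 / lam**5) / np.expm1(h * c / (lam * k * T))  # Planck's law (B_lambda)

    # One hypothetical emission line: a narrow Gaussian sitting on the continuum.
    line_center, line_width = 1.6e-6, 5e-9
    line = 0.5 * continuum.max() * np.exp(-0.5 * ((lam - line_center) / line_width) ** 2)

    spectrum = continuum + line
    near = np.abs(lam - line_center) < 5 * line_width
    print("spectrum/continuum at the line:", round(float((spectrum[near] / continuum[near]).max()), 2))
    print("spectrum/continuum elsewhere:  ", round(float((spectrum[~near] / continuum[~near]).max()), 2))
    # The line only changes the spectrum in a narrow window around its center;
    # everywhere else you just see the smooth thermal continuum.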
That's roughly equivalent to looking at an object 250nm wide at arm's length. A red blood cell is approximately 8000nm wide.
Crazy resolving power.