In particle physics, large quantities of data are a mixed blessing. In the most obvious case, petabytes of data mean that you can get a statistically significant result, but they also mean that analysis is complicated and time-consuming. But more than that, with accelerators like the Large Hadron Collider at CERN, not only do experiments collect tons of data, but that data can arrive from many events at once, which means that even with the most advanced trackers, particle tracks overlap. For instance, in the LHC, bunches of protons circulate in the accelerator at nearly the speed of light until opposing bunches are directed into a collision at the center of one of the various detectors. At relatively low beam intensities, most of the protons in a bunch don't actually collide, and you get just one or two proton-proton collisions per crossing. When the intensity is ramped up, however, you end up with dozens of nearly simultaneous proton-proton collisions and interactions, all of which send showers of particle debris into the surrounding detector. Pileup occurs when two or more particles hit the same part of the detector at very nearly the same time, and it somewhat complicates the event analysis.
Luckily, experiments like the LHC have highly segmented detectors that are capable of tracking individual particles over fairly long distances, and it turns out that by a process called event reconstruction, it is possible to extrapolate tracks back to the location of the collision. Since not all of the proton-proton collisions are in exactly the same location in the beamline, such an extrapolation can match each particle to the collision in which it was produced. Based on that, analysts can distinguish the individual collisions and conduct analysis for each collision separately. This is particularly useful when there's one interesting collision and 30+ boring ones, as it allows easy removal of much of the background.
Other experiments aren't quite so lucky as the LHC, though. One experiment near and dear to me is Muon g-2 (pronounced gee minus two), an experiment at Fermilab that will measure the anomalous magnetic moment of the muon with unprecedented precision. Because it has a slightly smaller budget than the giant accelerators elsewhere, its detectors primarily consist of calorimeters, which capture the decay products of the muons (positrons, in the case of positively charged muons) and measure their energies. Since most calorimeter stations have no tracking capability, they are vulnerable to pileup: two low-energy positrons hit the same calorimeter at almost exactly the same time and may erroneously be interpreted as a single higher-energy event. This wreaks all sorts of havoc with the final experimental analysis, so reducing the systematic uncertainty caused by pileup is a major goal for the experiment. This is being done in a variety of ways.
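To get a feel for what that looks like in a calorimeter signal, here's a minimal toy sketch in MATLAB (this is not g-2 code; the pulse shape, arrival times, and energies are all made up for illustration) showing how two low-energy pulses arriving a few nanoseconds apart sum into something that looks a lot like one higher-energy hit:

% Toy pileup demo (illustrative only - not the real g-2 pulse shape)
t = linspace(0, 100, 1001);                          % time in ns
pulse = @(t0, A) A * exp(-(t - t0).^2 / (2 * 4^2));  % idealized Gaussian pulse, 4 ns width
hit1 = pulse(50, 1.0);                               % first low-energy positron
hit2 = pulse(53, 1.2);                               % second positron, 3 ns later
recorded = hit1 + hit2;                              % what the calorimeter actually sees
figure; hold on;
plot(t, hit1, '--');
plot(t, hit2, '--');
plot(t, recorded, 'LineWidth', 2);
legend('positron 1', 'positron 2', 'summed signal');
xlabel('time (ns)'); ylabel('amplitude (arb. units)');
title('Two nearby hits can look like one high-energy pulse');
hold off;

The mitigation strategies below attack exactly this ambiguity, from both the hardware side and the analysis side.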
The most obvious way of reducing pileup systematic uncertainties is by increasing experimental resolution. This can be done both spatially and temporally. Spatial resolution is increased by increasing the segmentation of each calorimeter. In the previous iteration of the experiment, at Brookhaven National Lab, the calorimeters were divided into a 5 by 7 grid of sub-detectors. Each of these is capable of detecting a hit and measuring the energy a particle deposits within it, so some pileup can be identified by the spatial separation of two particles. In the upcoming iteration, the detectors remain the same size, but will be segmented into a 6 by 9 grid, which further reduces the amount of pileup by increasing the spatial resolution of the detector. There's a limit to how much you can segment the detector, though; the material has a very definite Molière radius, which characterizes the transverse size of the shower a single high-energy positron lights up in the material. If you segment the detector so that the individual components are smaller than the Molière radius, then you haven't actually gained anything: each event will light up multiple segments rather than giving you any higher resolution.
Another technique is to increase temporal resolution, which improves physicists' ability to distinguish events that occur at nearly the same time. Sampling the detector signal at a higher rate goes a long way towards reducing the systematic error caused by pileup, since two pulses that would have merged at a coarser sampling can be resolved as separate hits.
I previously mentioned that tracking is a powerful tool for identifying and analyzing pileup. Because of this, Muon g-2 is also planning to use tracking chambers in front of two of their 24 calorimeters to determine the trajectories of positrons before they hit the calorimeters. Since the whole experiment is conducted in a magnetic field, lower-energy positrons will follow curvier paths than high-energy ones. The plan here is to get information about pileup from the two stations with trackers, then apply that to the other calorimeters in the experiment.
Finally, analytical techniques are being developed to help subtract out the pileup events. Simulations can help with that, as can large-scale analysis of the data set as a whole. There are also some analysis techniques that work not by identifying and then analyzing individual events, but instead by integrating the calorimeter signal over time in order to obtain a similar result. It's a really interesting area, with (as far as I can tell) no prescribed solution, which means that people are actively working on it and coming up with novel analysis techniques.
Friday, May 31, 2013
Wednesday, May 29, 2013
Monte Carlo
Math is pretty awesome, but there are some problems that just have no closed-form solution. Similarly, there are many problems in physics that are most easily solved by numerical simulation. There are many approaches to numerical simulation. One of the easiest (quite probably the easiest, actually) is called Monte Carlo, and it operates by injecting randomness into a system, seeing how it pans out, and repeating many times to reduce statistical uncertainty. That's how I would define it, in any case - you'll get different answers if you ask different people. The presence of randomness means that simply by increasing the number of samples, you can decrease your error, which contrasts sharply with deterministic methods, for which you're pretty much stuck with whatever accuracy your scheme gives you.
Some examples of the usefulness of Monte Carlo techniques include risk analysis in investment, particle physics simulations (how do you think I heard about it?), and randomizing the parameters of a simulation in order to determine sources of error. It's a useful technique in many fields.
A simple application of Monte Carlo techniques is in numerical integration. Some functions have no friendly antiderivative, which means we're stuck with one method of numerical integration or another. I'm by no means saying that Monte Carlo is the right way to go on this one - there are plenty of deterministic methods that will also work quite well - but it's an easy example to get your head around the power of randomness in numerical simulation. And it turns out that once you get to around 20-dimensional integrals (not too uncommon in theoretical physics), Monte Carlo actually gives lower uncertainties than deterministic methods.
Here's the plan: we have a function $f(x)$ that we want to integrate from $x_0$ to $x_1$. To do so, we'll generate a lot of random points in the rectangular region from $x_0$ to $x_1$ and $y=0$ to $y_{max}$ (suppose for the sake of simplicity that $f(x)$ is positive everywhere - the method extends to negative functions as well with a little more hassle). We'll then check each point to determine whether it's under the curve. In the end, we take the ratio of the number of points under the curve to the number of points we generated, scale by the area of the rectangular region, and the result is our numerical estimate of the integral. Here's a demonstration of how it works:
function result = nIntegrate(func, minX, maxX, maxY, n)
% Use maxY instead of finding the maximum to avoid evaluating the
% function unnecessarily.
totalArea = (maxX - minX) * maxY;

% Generate and scale the random points
xs = (rand(n,1) * (maxX - minX)) + minX;
ys = rand(n,1) * maxY;
pts = [xs ys];

% Figure out which ones are under the curve
filtered = pts(ys < func(xs), :);
nUnderCurve = size(filtered, 1);   % count rows (length() misbehaves for 0 or 1 surviving points)
result = (nUnderCurve / n) * totalArea;

% Make a nice plot to go with the analysis.
figure;
hold on;
title(sprintf('Monte Carlo integration: %d points', n));
xlabel('x');
ylabel('y');
% All points
scatter(xs, ys, 1);
% The function
realX = linspace(minX, maxX, 100);
realY = func(realX);
plot(realX, realY, 'Color', 'red', 'LineWidth', 2);
% The points under the curve
scatter(filtered(:,1), filtered(:,2), 15, 'k');
hold off;
end
The function was then called like this:
nIntegrate(@(x) gaussmf(x, [.3, 0]), -1, 1, 1, 1000)

(The at sign makes an anonymous function; in this case, a function that will return the value of the Gaussian centered at zero with standard deviation 0.3 at a given value of x.)
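If you want to check the scaling for yourself, here's one quick way you might do it (a rough sketch: each call will also pop up its own scatter plot, as written above; gaussmf comes from the Fuzzy Logic Toolbox; and your exact errors will differ run to run since the points are random):

% Compare the Monte Carlo result to the exact integral at several sample sizes
trueValue = 0.3 * sqrt(2*pi) * erf(1 / (0.3 * sqrt(2)));   % exact integral of the Gaussian on [-1, 1]
ns = [100 1000 10000 100000];
errs = zeros(size(ns));
for i = 1:length(ns)
    errs(i) = abs(nIntegrate(@(x) gaussmf(x, [.3, 0]), -1, 1, 1, ns(i)) - trueValue);
end
figure;
loglog(ns, errs, 'o-');
xlabel('number of samples'); ylabel('absolute error');
title('Monte Carlo error vs. sample size (roughly 1/sqrt(N))');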
I can't quite figure out how to get MATLAB syntax highlighting in there, but you get the idea. The marvelous thing is that it takes only six lines of code to do the whole integration numerically. The rest is just plotting the results.
Furthermore, you can easily see that the statistical uncertainty decreases as you increase the number of samples:
The error in the integration decreases with an increase in the number of points.
Cute, huh? And that's barely scratched the surface of what Monte Carlo can do.
Monday, May 27, 2013
Hadrons
The stuff we tend to call matter is made primarily of protons, neutrons, and electrons. While we can describe electrons' behavior just based on the weak and electromagnetic interactions, we need to bring in another force to describe the behavior of protons and neutrons. It's called the strong nuclear force, and it is peculiar to particles called hadrons. There are two general classes of hadrons: mesons and baryons (meaning middle-weight and heavy, respectively, I believe), which differ in their quark content. Baryons, which include protons and neutrons, consist of three quarks. Mesons consist of a quark-antiquark pair.
The two hadrons we deal with most regularly are made of just up and down quarks. Unlike leptons and conglomerate hadrons, quarks can have fractional electric charge, which is what allows three quarks to combine for charges of 0, 1, or 2 times the magnitude of the electron's charge. Up quarks have +2/3 charge, while down quarks have -1/3 charge. Thus the combination $uud$ yields the positive charge of a proton, while $udd$ gives the neutral neutron. But as we've seen, there's more to life than just up and down quarks. After a while dealing with those, physicists started observing other particles that behaved strangely. They were given an attribute called strangeness, and it turns out that it's a property of a third kind of quark called the strange quark. A few years and many experiments later, physicists concluded that, like leptons, quarks come in three generations. Each generation has a +2/3 and a -1/3 charge quark, and the weak force couples these pairs, so an up quark can be converted to a down quark and a $W^+$ boson if there's enough energy involved. It can also, with much lower probability, be converted into a strange or a bottom quark. The probabilities of such transformations are given by the Cabibbo-Kobayashi-Maskawa (CKM) matrix, which is sort of like the neutrino oscillation matrix in the lepton sector. The difference is that neutrinos mix with very high probability, while quark generations mix only rarely.
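Schematically, the CKM matrix relates the down-type quark states that the weak force acts on (primed) to the mass states, and its entries are strongly peaked on the diagonal - which is just another way of saying that cross-generation transitions are rare. Roughly (these are rounded magnitudes, so treat them as a guide rather than precise values):
\[
\left(\begin{array}{c} d' \\ s' \\ b' \end{array}\right) =
\left(\begin{array}{ccc} V_{ud} & V_{us} & V_{ub} \\ V_{cd} & V_{cs} & V_{cb} \\ V_{td} & V_{ts} & V_{tb} \end{array}\right)
\left(\begin{array}{c} d \\ s \\ b \end{array}\right),
\qquad |V_{ij}| \approx
\left(\begin{array}{ccc} 0.97 & 0.23 & 0.004 \\ 0.23 & 0.97 & 0.04 \\ 0.009 & 0.04 & 1.0 \end{array}\right)
\]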
Quarks also have a funky property called color, which couples to the strong force in the same way that gravity acts on mass and electromagnetism acts on charge. The three colors are called red, green, and blue, but they have nothing to do with the colors that we see; it's just a (somewhat unfortunate) naming convention. It turns out that it's impossible to see an isolated colored object; instead, we only see colorless conglomerations. As a result, we can never detect an isolated quark. The smallest possible colorless combinations of quarks are those of three quarks (or anti-quarks), one of each color, or of a quark and an anti-quark with opposite colors. Those correspond to baryons and mesons, respectively.
The strong force is also unique in that its force carrier, the gluon, has color, and therefore feels the strong force. My understanding is that this leads to runaway strength and generally complicates the lives of theorists.
Tuesday, May 21, 2013
Leptons
For no particular reason, I felt like writing about leptons, so here you go! I suspect this will be followed up by a discussion of hadrons and possibly gauge bosons, but no guarantees! I'm also hoping to learn a bit about colloidal materials in the next few days and start writing about things other than particle physics and astrophysics.
Along with the electron and its corresponding neutrino that we all know and love, we also have antiparticles for both (unless the neutrino is a Majorana particle, which is a whole 'nother kettle of fish). These are the positron ($e^+$) and the electron antineutrino ($\overline{\nu}_e$).
In any case, the electron and the electron neutrino form the first 'generation' of the leptons. There are two more: the muon $\mu^-$ and the tau lepton $\tau^-$, along with their corresponding neutrinos. The muon has about 200 times the mass of the electron, and decays by
\[
\mu^- \to e^- + \nu_\mu + \overline{\nu}_e
\]
with a fairly reasonable lifetime of about 2.2 microseconds. This makes it an awesome tool for experimental particle physics, but that is very much a topic for another day.
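One nice way to see why that particular pair of neutrinos shows up in the decay is to track electron-type and muon-type lepton number, each of which is conserved separately here:
\[
L_\mu:\; 1 \to 0 + 1 + 0, \qquad L_e:\; 0 \to 1 + 0 + (-1)
\]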
Similarly, the tau lepton has a lifetime of around $3\times10^{-13}$ seconds and can decay to either a muon or an electron, but it also has enough mass to produce some quarks/hadrons, so there are all sorts of interesting (nonleptonic) decay modes to look out for.
So here are the particles we have so far:
\[
\begin{array}{|ccc|} \hline e^- & \mu^- & \tau^-\\ \nu_e & \nu_\mu & \nu_\tau\\ \hline \end{array}
\]
The leptons all have a quality called flavor, so they can interact by the weak force. The charged leptons can, of course, interact with other particles electromagnetically, while the neutrinos cannot. None of the leptons can interact strongly, though - that's just quarks and gluons.
Saturday, May 18, 2013
Neutron decay
In case you're still a little leery about the neutrino's existence or conservation of lepton number, take a look at the decay of a neutron:
\[
n \rightarrow p + e^- + \overline{\nu}_e
\]
If lepton number conservation weren't a requirement, we could also see the reaction
\[
n \rightarrow p + e^-
\]
These look pretty similar at first, but it turns out they're easily distinguishable. If the neutron starts at rest, then momentum conservation dictates that the momenta of the products (three in the reaction we observe, two in the second one above) sum to zero. In the second reaction, since there are only two products, they go ricocheting off in exactly opposite directions. In the first, while we are very unlikely to detect the neutrino, the electron and proton no longer have exactly opposite momenta, which is easily detectable (assuming you start with a slow enough neutron). And it turns out (physics is, after all is said and done, an empirical science) that the first reaction occurs and the second doesn't. This demonstrates the existence of the little neutral particle, and other similar experiments allow physicists to conclude that lepton number is always conserved.
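To make the momentum argument a little more concrete (a quick back-of-the-envelope for a neutron decaying at rest):
\[
\text{three-body: } \vec{p}_p + \vec{p}_e + \vec{p}_{\overline{\nu}} = 0\\
\text{two-body: } \vec{p}_p + \vec{p}_e = 0
\]
In the two-body case the proton and electron come out exactly back to back and the electron always carries the same energy; in the three-body case the invisible neutrino can take a variable share, so the electron's direction and energy vary from decay to decay. The continuous spectrum of electron energies is exactly what's observed.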
Thursday, May 16, 2013
Neutrino Astrophysics: PeV-energy neutrinos!
Well, I thought I was done with neutrino astrophysics yesterday, but then I read about the recent discovery of two extremely high-energy neutrinos by the IceCube collaboration. As I've mentioned before, IceCube is a Cerenkov detector. Its 'detector' consists of about a cubic kilometer of ice in the Antarctic, and it has 86 strings of detectors suspended in the ice. They typically detect solar and atmospheric neutrinos, but in an article submitted to Phys. Rev. Letters last month, they report the detection of two peta-electron volt neutrinos. That's $10^{15}$ electron volts! These high-energy events are very unlikely to have resulted from cosmic ray interactions with the atmosphere (though the collaboration is investigating the possibility that they resulted from the decay of charmed particles produced by high-energy cosmics), and the collaboration is fairly sure that they're actually a product of some extremely high-energy astrophysical events. In the paper, they suggest gamma ray bursts or active galactic nuclei as potential sources, and they're hopeful that further analysis will reveal more similarly energetic neutrinos or give clues to their origin.
Oh, and just as proof that physicists have a sense of humor, the detected neutrinos have been named Bert and Ernie. It's even in the graphics for the paper!
From First observation of PeV-energy neutrinos with IceCube, IceCube collaboration, 2013
The above graphic (taken from the paper) provides a visualization of the events in question. Each sphere shows the data from one photomultiplier tube embedded in the ice. The size of the sphere shows how many photoelectrons were detected, and the color shows the time at which the detection occurred, from red for the first detections to blue for the last. The energy of the event can be determined from the total number of photoelectrons detected, and the collaboration calculated energies of 1.04 and 1.14 PeV for Bert and Ernie, respectively.
For those interested, the paper in question can be found here.
Other neutrino-related posts can be found here.
Wednesday, May 15, 2013
Neutrino Astrophysics: Outstanding mysteries
As you've probably guessed by now, there's a lot we don't know about neutrinos and how they function in astrophysics. They have a lot of mysteries in store for us. Here are a few:
- Value of $\theta_{13}$ and the neutrino mass hierarchy. Remember the mixing angles from the discussion of neutrino oscillations? Well, we know two of them fairly well, but we have much less information on $\theta_{13}$, and we still don't know the sign of the mass-squared splitting involving $\nu_3$, which is what would tell us how the masses of the mass eigenstates are ordered. There are two possibilities: the normal mass hierarchy, in which $m_{\nu_1}<m_{\nu_2}<m_{\nu_3}$ (shown at left below), and the inverted mass hierarchy, in which $m_{\nu_3}<m_{\nu_1}<m_{\nu_2}$ (shown at right below).
From Nishikawa, K. 2010, Recent Status of Accelerator Neutrino Experiments
- Dirac/Majorana neutrinos. There is an outstanding question of whether the neutrino has a distinct antiparticle (a Dirac neutrino) or is its own antiparticle (a Majorana neutrino). If it is a Majorana particle, then processes such as neutrinoless double-beta decay could occur, in which the two antineutrinos from the joint beta decay of two neutrons effectively annihilate (both versions of the decay are written out just after this list). This is a highly sought-after process, and there are legions of physicists working to find or rule out such a reaction.
From wikipedia.org.
- Other neutrino sources. We've spotted neutrinos from a supernova before, but we should also be able to detect some from other astrophysical sources, like active galactic nuclei (really bright spots caused by accretion around a supermassive black hole) and gamma ray bursts (possibly bright bursts from rapidly rotating supernovae - is physics awesome, or what?). Detecting neutrinos from such events may help us understand what's going on in these high-energy environments.
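For reference, here are the two versions of double-beta decay mentioned in the second item above, written out (standard reactions, where $(A,Z)$ stands for a nucleus with mass number $A$ and charge $Z$):
\[
(A,Z) \to (A,Z+2) + 2e^- + 2\overline{\nu}_e \quad \text{(ordinary double-beta decay)}\\
(A,Z) \to (A,Z+2) + 2e^- \quad \text{(neutrinoless double-beta decay, possible only for Majorana neutrinos)}
\]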
In summary, neutrinos are pretty awesome little particles. They can be useful in understanding phenomena that light can't easily bring us information about, and they have a lot of mysteries left for us to sort out.
This is the seventh and final post in a series on neutrino astrophysics. Other neutrino-related posts can be found here.
Tuesday, May 14, 2013
Neutrino Astrophysics: Cosmic neutrino background
Once upon a time, almost 14 billion years ago, there was a Big Bang. In the milliseconds afterwards, everything in the Universe (mostly electrons, positrons, photons, and neutrinos) was in thermal equilibrium. What that means is that the reaction
\[
\nu+\overline{\nu}\leftrightarrow e^+ + e^-
\]
could occur in both directions. For the forward direction to proceed, the neutrinos have to be energetic enough to produce the two more massive particles, and the rate of the interaction depends on the temperature of the Universe. When the Universe cooled below roughly $10^{10}$ K, the reaction could no longer proceed in both directions, so the neutrinos decoupled and could fly free through the Universe without having to worry about interacting with much else.
These neutrinos play a role in things like the ratio of protons to neutrons and the frequency components of the cosmic microwave background, and we believe that they are still around to this day. They're called the cosmic neutrino background. While general relativity is needed to understand the time evolution of the neutrino background, essentially what happens is that the neutrinos cool as the Universe expands. Theorists predict the current energy of the neutrinos in this background to be around $10^{-4}$ eV. To fully appreciate how cold these guys are, let's take another look at our favorite neutrino-related graph, the one showing flux vs. energy of neutrinos produced in hydrogen fusion.
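As a rough sanity check on that number (standard cosmology figures, not from the original post): the relic neutrinos are predicted to be a bit colder than the cosmic microwave background photons,
\[
T_\nu = \left(\frac{4}{11}\right)^{1/3} T_\gamma \approx 0.71 \times 2.7\ \text{K} \approx 1.9\ \text{K},
\qquad k_B T_\nu \approx 1.7\times10^{-4}\ \text{eV},
\]
which is indeed around $10^{-4}$ eV per neutrino.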
Image from Bahcall, Solar Neutrinos. http://www.sns.ias.edu/~jnb/Papers/Popular/Wiley/paper.pdf
Note that the neutrinos we were so excited to detect with gallium detectors have energies around 0.3 mega electron volts. So the detection of the cosmic neutrino background is, for the time being, an insurmountable experimental challenge.
Even though we can't directly see the neutrino background, we have some good reasons to believe that it exists. For one thing, the presence of neutrinos affects the ratio of protons to neutrons, based on reactions like the following:
\[
n\leftrightarrow p+e^-+\overline\nu_e\\
p+e^-\leftrightarrow n + \nu_e
\]
As such, the presence of neutrinos throughout the Universe has dramatic effects on nucleosynthesis, the process by which nucleons combine into larger nuclei, and the current abundances give indirect evidence for the cosmic neutrino background.
For another thing, apparently the angular frequency components of the cosmic microwave background's temperature fluctuations are "damped" relative to predictions in the absence of the neutrino background. I believe that has to do with the way in which vibrations propagate through the very early Universe. Neutrinos provided a certain amount of diffusion, which served to make the Universe more isotropic than it would have been in their absence.
Why do we care about this? We know very little about the early stages of the Universe's evolution. Some of our most useful information comes from observations of the cosmic microwave background, which decoupled from the rest of matter when the Universe was around 380,000 years old. The neutrino background, if detected, can tell us quite a bit about the intervening time period, which gets us that much closer to understanding the highest-energy event in the history of the Universe.
This is the sixth post in a series on neutrino astrophysics. Other neutrino-related posts can be found here.
Monday, May 13, 2013
Neutrino Astrophysics: Supernova 1987A
Apart from the neutrinos from the Sun, we can also observe neutrinos from high-energy cosmic events. The best example of this is supernova 1987A, a stellar explosion in the Large Magellanic Cloud, a nearby galaxy, whose light reached us in February of 1987.
As we've seen, in the early stages of a supernova, the iron core's electron degeneracy pressure isn't enough to oppose gravitational collapse, and electron capture ensues, in which the reaction $p+e^-\rightarrow n+ \nu_e$ turns the core into an enormous atomic nucleus called a neutron star. This produces a huge flux of neutrinos. And to make matters even more interesting, as these neutrinos propagate through the incredibly dense core, they lose energy, much of which is released in the form of neutrino-antineutrino pairs. (At least, that's what I get from reading bits and pieces of the literature on the subject.) The net result is an immense flux of neutrinos heading away from the collapsing star. Outside the extremely dense core, the neutrinos can happily propagate through just about everything, whereas light from the ensuing explosion has to bounce around for quite a while before escaping. As a result, the neutrinos leave the immediate vicinity of the star long before the actual light, and carry well over 95% of the collapse's total energy.
Luckily for astrophysicists, there were several neutrino detectors in operation on February 23, 1987. These detected an enormous flux of neutrinos for a short period of time. In fact, Kamiokande-II saw such high detection rates that they were able to use more of their detector than usual, because the background was such a low fraction of events compared to normal.* Just because I found this figure, I'm going to inflict it on you as well. It's the original data from Kamiokande-II showing the huge number of events. They detected 11 events in a span of about 13 seconds, as compared with just a couple each day under normal circumstances. Another neutrino detector, IMB, near Lake Erie, detected 8 neutrinos at the same time.
Kamiokande-II spots supernova 1987A. Nhit is the number of photomultipliers that recorded an event; any more than 20 is considered a neutrino detection. M. Koshiba et al., 1988
Furthermore, these neutrinos were detected around three hours before the light from the explosion reached Earth, which confirms once again how awesome neutrinos are as astrophysical tools. It also allowed physicists to place upper limits on the mass of the neutrino, since more massive particles would have had to travel much slower than light, and would likely have been overtaken by the explosion's light on the way to Earth.
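The mass limit comes from a simple time-of-flight argument. A neutrino of mass $m$ and energy $E\gg mc^2$ traveling a distance $L$ lags a photon by roughly
\[
\Delta t \approx \frac{L}{2c}\left(\frac{mc^2}{E}\right)^2,
\]
so the fact that neutrinos with energies of order 10 MeV arrived within seconds of one another (and hours ahead of the light) after a trip of about 50 kiloparsecs limits the neutrino mass to somewhere around the tens-of-eV scale - crude by modern standards, but a nice bonus from a single burst.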
* The way that Kamiokande operates, as far as I can tell, is that it only uses the central portion of the water chamber. The outer layers are simply there to filter out background, like cosmic rays and radiation from detectors or other nearby objects. Only a small portion can be used for the real detection of neutrinos because of the incredibly low interaction rates. Any false positives would have a major impact on the resulting data.
This is the fifth post in a series on neutrino astrophysics. Other neutrino-related posts can be found here.
Sunday, May 12, 2013
Neutrino Astrophysics: Neutrino oscillations
When last we left our heroes, the Sudbury Neutrino Observatory (SNO) had concluded that while predictions for the total flux of neutrinos at the Earth were accurate, only about a third of those that reach us are electron neutrinos. This naturally raises the question: what are the rest of them? Subsequent experiments, using both astrophysical and accelerator neutrinos, concluded that they oscillated into muon- and tau-flavored neutrinos. This process is highly analogous to strangeness oscillations in the neutral kaons I've written so much about. In essence, neutrinos are produced and interact in flavor eigenstates, as the electron neutrino $\nu_e$, the muon neutrino $\nu_\mu$, and the tau neutrino $\nu_\tau$, but propagate through space in mass eigenstates creatively named $\nu_1$, $\nu_2$, and $\nu_3$. Since these particles have distinct masses, they also have distinct time evolution, and based on oscillation frequencies, we can determine the (squared) mass differences between the various eigenstates.
Unlike in the kaon system, however, in which the $K^0$ had an equal probability of being a $K_1$ compared to a $K_2$, it turns out the $\nu_e$ is much more likely to be a $\nu_1$ than any other mass eigenstate, so in order to describe the probabilities associated with converting between the two bases, we have to introduce the concept of mixing angles. For simplicity, let's just look at two of the three neutrino types in both bases: $\nu_e$ and $\nu_\mu$ for the flavor eigenstates and $\nu_1$ and $\nu_2$ for the mass eigenstates. We could choose two constants to represent the components of $\nu_e$ in the two mass eigenstates, but in order to ensure normalization, we instead use the sine and cosine of an angle, $\theta_{12}$, and end up with
\begin{align}
\nu_e &= \cos(\theta_{12})\nu_1 + \sin(\theta_{12})\nu_2\\
\nu_\mu &= -\sin(\theta_{12})\nu_1 + \cos(\theta_{12})\nu_2
\end{align}
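Because the two mass eigenstates have slightly different masses, their quantum-mechanical phases advance at different rates as the neutrino travels, and the flavor content oscillates. In this two-flavor picture, the probability that a neutrino born as a $\nu_e$ is still a $\nu_e$ after traveling a distance $L$ with energy $E$ is (in vacuum, natural units)
\[
P(\nu_e \to \nu_e) = 1 - \sin^2(2\theta_{12})\,\sin^2\!\left(\frac{\Delta m_{21}^2 L}{4E}\right),
\]
where $\Delta m_{21}^2 = m_2^2 - m_1^2$. Note that only the mass-squared difference appears, which is why oscillation experiments measure splittings rather than absolute masses.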
We can similarly convert from the mass eigenstates to the flavor eigenstates. Throwing in a third state complicates matters somewhat: we have to add two new mixing angles in order to describe the pairwise relationship between the states, and also have to add what's called a CP violating phase $\delta$. Overall, the relationship is a little messy:
\[
\left(\begin{array}{c}\nu_e\\ \nu_\mu\\ \nu_\tau \end{array}\right)= \left(\begin{array}{ccc}
c_{12}c_{13} & s_{12}c_{13} & s_{13}e^{-i\delta}\\
-s_{12}c_{23}-c_{12}s_{23}s_{13}e^{i\delta} &
c_{12}c_{23}-s_{12}s_{23}s_{13}e^{i\delta} & s_{23}c_{13}\\
s_{12}s_{23}-c_{12}c_{23}s_{13}e^{i\delta} &
-c_{12}s_{23}-s_{12}c_{23}s_{13}e^{i\delta} & c_{23}c_{13}
\end{array}\right)\left(\begin{array}{c}\nu_1\\ \nu_2\\ \nu_3 \end{array}\right)
\]
And if that weren't crazy enough, the matrix above uses shorthand: $c_{ij}=\cos(\theta_{ij})$ and $s_{ij}=\sin(\theta_{ij})$. If you set these equations in motion and let time run for a bit, you find that an initial electron neutrino, like one produced in the sun, propagates like this:
Mathematica source code from en.wikipedia.org
There are periods of time during which it is far more likely to detect this neutrino as a muon or tau neutrino than as an electron-flavored one! This explains the deficit of electron neutrinos observed from the sun, and relieved (astro)physicists of much distress.
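If you'd like to play with this yourself, here's a minimal two-flavor MATLAB sketch of the same idea (vacuum oscillations only - the real curve above involves all three states, and the mixing parameters below are approximate values, so treat this as a cartoon):

% Two-flavor electron neutrino survival probability in vacuum
sin2_2theta = 0.85;                 % approx. sin^2(2*theta_12)
dm2 = 7.5e-5;                       % approx. solar mass-squared splitting, eV^2
LoverE = linspace(0, 7e4, 2000);    % L/E in km/GeV
Pee = 1 - sin2_2theta * sin(1.27 * dm2 * LoverE).^2;   % standard 1.27 factor for these units
figure;
plot(LoverE, Pee, 'LineWidth', 2);
xlabel('L/E (km/GeV)');
ylabel('P(\nu_e \rightarrow \nu_e)');
title('Two-flavor survival probability (vacuum)');
ylim([0 1]);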
This is the fourth post in a series on neutrino astrophysics. Other neutrino-related posts can be found here.
Friday, May 10, 2013
Neutrino Astrophysics: Solar neutrino problem
The neutrino detectors discussed in the last post are all well and good, but there's a bit of a problem. It's substantial enough to have earned itself a catchy name: the solar neutrino problem. See, the theory of electroweak interactions and astrophysical models of the Sun make fairly precise predictions about the fluxes of neutrinos that should be measured by these various detectors. But when the detectors go and look for these neutrinos, they find substantially fewer - around a third to a half as many as theorists predicted. This is true all across the board, from the chemical to the Cerenkov detectors, and it caused physicists quite the headache.
Initially, it seemed like this was bad news for us. Neutrinos come to us straight from the center of the sun, as opposed to photons, which take tens of thousands to hundreds of thousands of years to bounce their way out of the center, so it seemed possible that the lack of neutrinos meant that against all odds, the sun's fusion was dying!
From Bahcall, Solar Neutrinos.
Luckily, SNO came to the rescue. As you can see at the right of the image above, the electron-neutrino reaction monitored by SNO, like all the other detectors, finds less than 30% as many neutrinos as expected. But when we look at the other reaction, the one insensitive to neutrino flavor, we find around 90% of the expected flux, within uncertainties. So it seems that the neutrinos produced in the center of the Sun, which are all electron neutrinos, somehow morph into other types as they travel to us. This phenomenon is called neutrino oscillations, and will be discussed briefly in the near future.
This is the third post in a series on neutrino astrophysics. Other neutrino-related posts can be found here.
Thursday, May 9, 2013
Neutrino Astrophysics: Detectors
Neutrino detection is a tricky enterprise in the best of cases. Since neutrinos interact only via the weak interaction, their interactions have fantastically low cross-sections, which means that detectors end up seeing only a couple of neutrinos per day in some of the better cases.
The first solar neutrino detector is typically called the Homestake or Davis experiment, and involved ${}^{37}$Cl in the form of perchloroethylene, a common cleaning chemical. The interaction monitored by the experiment was $\nu_e+{}^{37}\text{Cl}\rightarrow e^- + {}^{37}\text{Ar}$. For those of you without a periodic table handy, this is just the conversion of a neutron in chlorine to a proton in argon. The lowest-energy neutrinos that can participate in this reaction have energies of around 0.8 MeV. Luckily for science, the results of the experiment were interesting enough to warrant a new generation of detectors, based on gallium-71.
Gallium detectors monitor the reaction $\nu_e+{}^{71}\text{Ga}\rightarrow e^- + {}^{71}\text{Ge}$, but have a much lower threshold energy: just 0.2 MeV, which is low enough to detect pp chain neutrinos. Oh, and another fun fact: one of the gallium neutrino detectors contained 60 tons of gallium, at a time when the world production of gallium was just 10 tons per year! The threshold energies for these detector types are shown in the figure below. Both of these experiment types are chemically based: physicists set up the detectors, leave them alone for a couple of months, then chemically separate out the desired atoms, and somehow count them one by one. While these can detect solar neutrinos, this experimental model has some drawbacks: it can't tell you exactly when an interaction occurred, and it gives little to no information about the direction or energy of the incoming neutrino. These drawbacks led to the next generation of neutrino detection: Cerenkov detectors.
Modified image from Bahcall, Solar Neutrinos. http://www.sns.ias.edu/~jnb/Papers/Popular/Wiley/paper.pdf
The first Cerenkov detectors looked for elastic scattering between neutrinos and electrons. In such a process, the electron may be accelerated to faster than the speed of light in the detector medium (water or ice). When this occurs, Cerenkov radiation is emitted for as long as the electron outpaces light in the medium. This is the source of that beautifully toxic blue you see around nuclear waste in underwater facilities. The basic structure for a Cerenkov detector is a huge tank of water (or a cubic kilometer of Antarctic ice, in the case of IceCube) for neutrinos to interact with, surrounded by hundreds or thousands of photomultiplier tubes to detect the Cerenkov radiation. Elastic scattering of neutrinos and electrons requires fairly high-energy neutrinos (around 7 MeV), so it can't detect the pp chain neutrinos that gallium detectors can, but it has the advantages of time resolution and directional resolution, since Cerenkov radiation is produced in a cone around the accelerated electron. Note that the neutrinos that scatter with electrons are primarily electron neutrinos, though small fractions of the interactions can involve the other flavors.
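For reference, the Cerenkov light comes out on a cone whose opening angle depends only on the particle's speed and the refractive index of the medium (this is the standard textbook relation, not anything specific to a particular detector):
\[
\cos\theta_c = \frac{1}{n\beta},
\]
so in water, with $n \approx 1.33$ and $\beta \approx 1$, the cone half-angle is about $41^\circ$. Reconstructing that ring of light from the pattern of photomultiplier hits is what gives these detectors their directional information.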
Building on this idea is a fourth detector, the Sudbury Neutrino Observatory (SNO). It is a Cerenkov detector filled with heavy water, in which the ordinary hydrogen is replaced by deuterium. The beauty of this detector is that it's sensitive to two reactions: $\nu_e+D\rightarrow e^-+p+p$, which only detects electron neutrinos, and $\nu+D\rightarrow \nu'+n+p$, which can detect all three flavors of neutrinos. This ability will turn out to be very beneficial in the resolution of the solar neutrino problem.
This is the second post in a series on neutrino astrophysics.
Other neutrino-related posts can be found here.
Neutrino Astrophysics: Rationale and production
This is the first in a series of posts about neutrinos and astrophysics.
Other neutrino-related posts can be found here.
At first, neutrinos seem like puzzling particles; they interact only by the weak force, carry almost no mass, and seem insignificant to the grand scheme of things. On the contrary, they've played a fundamental role in the evolution of the Universe, and are a uniquely powerful tool in our understanding of certain astrophysical phenomena. This is precisely because of their low interaction rates; they can propagate effortlessly through optically thick material and bring us information that light cannot.
As you may know, there are three types of neutrinos, the electron, muon, and tau flavors, which correspond to specific leptons. There are a variety of neutrino sources. The nearest and most accessible is the Sun, which sends neutrinos to us at a rate of tens of billions per square centimeter per second. Then there are energetic events like supernovae and gamma ray bursts, which provide short-lived fluxes of neutrinos. There's also this thing called the cosmic neutrino background, which is analogous to the cosmic microwave background. Finally, there are atmospheric neutrinos, which are produced by cosmic ray interactions with the atmosphere.
In order to discuss neutrino detection, it is beneficial to examine the largest source of neutrinos in our neighborhood: fusion in the center of the Sun (discussed in much greater depth in the post here). The plot below shows the neutrino flux as a function of its energy as a result of various steps in hydrogen fusion.
Image from Bahcall, Solar Neutrinos. http://www.sns.ias.edu/~jnb/Papers/Popular/Wiley/paper.pdf
The solid lines show neutrino fluxes versus the energy of the produced neutrinos for various steps in the proton-proton chain. In particular, the lowest-energy neutrinos shown (the spike on the left) are produced in the first step of the pp chain, while the other reactions, like boron-8 decay and hep (helium-proton) fusion, occur in only a small fraction of solar fusion reactions. The dashed lines show predicted neutrino fluxes from the CNO cycle, which isn't terribly relevant to the Sun, since its core is too cool for the CNO cycle to contribute much.
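For concreteness, the two neutrino-producing reactions that will matter most in the next few posts are the first step of the pp chain and the decay of boron-8 (standard reactions; the endpoint energies quoted are approximate):
\[
p + p \to {}^2\text{H} + e^+ + \nu_e \quad (E_\nu \lesssim 0.42\ \text{MeV})\\
{}^8\text{B} \to {}^8\text{Be}^* + e^+ + \nu_e \quad (E_\nu \lesssim 15\ \text{MeV})
\]
The pp neutrinos are overwhelmingly the most numerous but also the least energetic, which is exactly why detector threshold energies matter so much in the next post.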
Liquid mirror telescopes
While the first telescopes used by astronomers were refractors, with lenses in a long tube, most significant telescopes nowadays are reflecting telescopes. This means they have a large paraboloid mirror that collects incoming light over a large area and reflects it all to the focal point of the paraboloid, where some sort of light-collecting device constructs the final image.
The problem with these mirrors is that they're very expensive to make. The resolution of the image is directly related to how smooth and how close to a true paraboloid the mirror is, so it's crucial to get the shape just right. In most cases, I believe the base shape is made of glass: it is cut, ground, and polished to near perfection, then coated with a thin layer of a reflective material like aluminum. Obtaining a good glass blank and polishing it takes time and costs a substantial amount; the cost of the mirror is often comparable to the cost of the rest of the telescope!
One possible alternative to the standard mirrored telescope is the liquid mirror telescope. The premise is that the surface of a spinning liquid naturally settles into a paraboloid. By using a thin layer (about a millimeter thick) of mercury, which is both reflective and a liquid of suitable viscosity, it is possible to obtain a paraboloid reflective surface for a telescope. Such telescopes were first proposed and built in the nineteenth century, but they fell out of favor as the technology for more conventional mirrors improved. More recently, Ermanno Borra, an astrophysicist based in Canada, revisited the idea and continues to explore it.
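The underlying physics is simple: a liquid rotating at angular velocity $\omega$ settles into the surface $z(r) = \omega^2 r^2/(2g)$, a paraboloid with focal length $f = g/(2\omega^2)$, so the focal length is set entirely by how fast you spin the dish. Here's a minimal sketch of the arithmetic (the 10 m focal length is just an illustrative choice, not any particular telescope's design):

```python
import math

G_ACCEL = 9.81  # m/s^2, standard gravity

def spin_rate_for_focal_length(focal_length_m: float) -> float:
    """Angular velocity (rad/s) that gives a rotating liquid the surface
    z = omega^2 r^2 / (2 g), i.e. a paraboloid with focal length
    f = g / (2 omega^2)."""
    return math.sqrt(G_ACCEL / (2.0 * focal_length_m))

omega = spin_rate_for_focal_length(10.0)  # hypothetical 10 m focal length
print(f"omega = {omega:.2f} rad/s, about {omega * 60 / (2 * math.pi):.1f} rpm")
```

A few revolutions per minute does the trick, which is part of why the approach is so cheap.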
The resolution offered by such telescopes is high, and they cost just 1-10% as much as a conventional telescope mirror. The biggest downside is that a liquid mirror telescope cannot easily be tilted to track a star, since the liquid would slosh out of the correct mirror shape. Liquid mirror telescopes are therefore pointed directly upward and used for large sky surveys, where tracking an individual star or cluster is unnecessary. They've also proven useful for atmospheric studies, and Professor Borra is exploring more viscous reflecting materials that could tolerate some degree of tilting without ruining the mirror's shape.
Monday, May 6, 2013
Backward compatibility in astrophysics
Computer science is well known for having all sorts of backward compatibility issues. The reason is that users are more likely to adopt a new program (or a new release of a program) if their existing code still runs on the new version. Maintaining that compatibility often leads to unnecessary overhead, and occasionally to somewhat unpleasant code gymnastics to ensure that older functionality keeps working in newer versions of a product.
Backward compatibility shows up in other fields, too. One great example is terminology. A researcher has an interesting idea and gives it a name, mostly just to be able to discuss it. Then the name starts to creep into the literature on the subject, since it's the easiest way to refer to the idea. Before you know it, the name is everywhere, even though the researcher never intended it to stick. In some cases, this works out fine: "quark" is as good a name as any for the subcomponent particles of hadrons. In other cases, though, a name doesn't quite fit. Color, for instance, is a three-valued "charge" that has nothing to do with visible color, and it often confuses people learning about the strong force and nuclear interactions.
There are (at least) two interesting examples of this phenomenon in astrophysics: spectral types and galaxy classifications.
Once upon a time, in the late 1800s, scientists were scrambling to classify stars by their spectra. One classification scheme (there were plenty) used the letters A through N to rank stars by the strength of their hydrogen lines, the prominent spectral lines produced by hydrogen. Later on, these classifications were reorganized to reflect the surface temperatures of stars, which provides a much more useful framework for astrophysical organization. This leaves us with the wonderful spectral sequence OBAFGKMN... not exactly alphabetical order.
From O to N-type stars, surface temperature decreases and the stars grow redder. The reason the temperature of a star isn't directly reflected in the strength of its hydrogen lines is that different temperatures produce different excitation and ionization states. If a hydrogen atom is ionized, it has no bound electron left to jump between energy levels, so it can't produce the characteristic lines; in very cool stars, on the other hand, almost no atoms have their electrons excited out of the ground state, so the visible (Balmer) hydrogen lines are weak there too, and they end up strongest at intermediate temperatures. Similarly, only singly-ionized calcium emits the characteristic K spectral line, so the K line doesn't appear in stars too cold to ionize calcium, or in stars hot enough to doubly ionize it.
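For a rough sense of the temperature ordering, here is a little lookup of representative surface-temperature ranges for the main classes (approximate textbook values, not official boundaries; the old R and N carbon-star classes sit down at the cool end, roughly overlapping M):

```python
# Approximate effective-temperature ranges in kelvin, hottest to coolest.
# Representative values only, not precise class boundaries.
SPECTRAL_TEMPERATURES_K = {
    "O": (30_000, 50_000),
    "B": (10_000, 30_000),
    "A": (7_500, 10_000),
    "F": (6_000, 7_500),
    "G": (5_200, 6_000),   # the Sun (~5,800 K) is a G star
    "K": (3_700, 5_200),
    "M": (2_400, 3_700),
}

for cls, (t_low, t_high) in SPECTRAL_TEMPERATURES_K.items():
    print(f"{cls}: {t_low:,}-{t_high:,} K")
```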
On a different note, galaxies were originally classified according to the Hubble tuning fork (the Hubble sequence). Edwin Hubble called elliptical galaxies early-type, and spirals (divided into ordinary and barred spirals) late-type. The logic was that galaxies started as elliptical blobs and evolved into spirals. As it turns out, galactic evolution tends to run the other way, with spiral galaxies merging into ellipticals in their old age. Also, we have only the fuzziest of ideas about the mechanisms that produced so many spiral galaxies: thin disks should be fairly fragile, so galaxy mergers tend to produce blobbier structures, yet astrophysicists also believe that most existing galaxies grew through mergers. Yet another mystery.
Friday, May 3, 2013
Astrophysical botany
Astrophysics is a pretty neat field. Even so, when we start discussing the really big picture - the structure of the Universe, galaxy clusters, and so on - it starts to sound a lot like botany. We know what we see, and we can classify and describe it, but not in any sort of elegant way. There's some fundamental understanding that we still lack. On the one hand, this can be a little disappointing. On the other, it means that we have things left to learn, and my guess is that we'll be figuring bits and pieces out in the next couple of decades.
Even just within our galaxy, we can't fully explain the stellar orbits we observe. Stars and gas orbiting the center of the galaxy behave as if there's much more mass present than we can see. That unseen mass is called dark matter, and it appears to make up the large majority of the total mass of the Milky Way. Astrophysicists have used the motions of stars and gas to map out the distribution of dark matter, and particle physicists are conducting all sorts of searches for it, since it doesn't emit light and, as far as we can tell, interacts with regular matter only through gravity (and possibly the weak nuclear force).
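To see why the orbits are so telling, here's a rough worked example: a star on a circular orbit of radius $r$ obeys $\frac{v^2}{r} = \frac{G M(<r)}{r^2}$, so the mass enclosed within its orbit is $M(<r) = v^2 r / G$. The measured rotation speeds stay roughly flat, around a couple hundred km/s, far beyond the visible disk, which means $M(<r)$ keeps growing in proportion to $r$. Plugging in ballpark numbers, say $v \approx 220$ km/s at $r \approx 50$ kpc, gives $M(<r) \sim 6 \times 10^{11}$ solar masses, far more than the stars and gas we can actually see out there.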
And matters only get worse as we look at larger-scale structures. The 2011 Nobel Prize in Physics was awarded to three astrophysicists "for the discovery of the accelerating expansion of the Universe through observations of distant supernovae." As far as we know, the Universe started in a big bang and has been expanding ever since. The bizarre thing is that the expansion should seemingly be slowing down, since all the matter is gravitationally attracted to all the other matter, but instead it's speeding up! Whatever is driving this accelerated expansion, we call it dark energy. It seems to behave like some sort of property of spacetime itself, but we have no idea what's really causing it.
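For the record, general relativity does quantify the puzzle: the expansion obeys the acceleration equation $\frac{\ddot a}{a} = -\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^2}\right)$, where $a$ is the scale factor, $\rho$ the density, and $p$ the pressure. Ordinary matter and radiation both make the right-hand side negative, so acceleration requires something with strongly negative pressure, $p < -\rho c^2/3$. A cosmological constant, which behaves like a fluid with $p = -\rho c^2$, does the job, but that's a description rather than an explanation.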
Thursday, May 2, 2013
Inertial and gravitational mass
An interesting quandary in general relativity is the equivalence of inertial and gravitational mass. These are two theoretically distinct concepts that turn out to be entirely identical. There are two ways of defining mass: according to its interaction with gravity, and according to the force necessary to accelerate it.
Inertial mass is the mass acted on in $\vec F=m_i\vec a$. One way to look at this is that if you have a relatively frictionless table, it takes a lot more force to get a very massive object moving than a lighter one, independent of the force of gravity.
Gravitational mass is the thing on which gravity acts (umm...duh?). The magnitude of gravitational force between two objects of mass $M$ and $m_g$ is $G\frac{Mm_g}{r^2}$. In this way, gravitational mass is the equivalent of charge for the electromagnetic interactions; more massive objects feel more gravitational force.
Intuitively, there is no reason for the gravitational mass $m_g$ and the inertial mass $m_i$ of an object to be the same. But as we all know, they are. This gives us convenient results such as an acceleration due to gravity that doesn't care about the mass of the object.
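To spell that out: near a body of mass $M$, Newton's second law gives $m_i a = \frac{G M m_g}{r^2}$, so $a = \frac{m_g}{m_i}\,\frac{GM}{r^2}$. Only if $m_g = m_i$ does the object's own mass drop out, leaving $a = GM/r^2 \approx 9.8\ \mathrm{m/s^2}$ at the Earth's surface for a feather and a bowling ball alike (air resistance aside).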
In general relativity, this equivalence is explained by the equivalence principle, which states that acceleration and gravity are locally indistinguishable.
One of the tests of general relativity is therefore checking whether these two masses are really equivalent. Tests of the equivalence principle go back a long way, starting (at least according to legend) with Galileo dropping objects of different masses off the Leaning Tower of Pisa to check that they fell in the same way. More recently, experiments have checked this equivalence far more precisely, typically using torsion balances to compare the gravitational accelerations of different materials.
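These torsion-balance results are usually quoted in terms of the Eötvös parameter, $\eta = \frac{2\,|a_1 - a_2|}{a_1 + a_2}$, which compares the accelerations of two different test materials toward the same attractor. The best such limits sit at roughly the $10^{-13}$ level, so if gravitational and inertial mass differ at all, they do so by less than about a part in ten trillion.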
Wednesday, May 1, 2013
So many kaons...
I've been polishing up a final paper on kaons all day, so I'm a little low on energy (and time) at the moment. Instead, have prettier Feynman diagrams for the neutral kaon oscillations!
Both of these are second-order weak interactions, as I've mentioned before. They show the strangeness oscillations I mentioned previously, but it turns out to be much easier to calculate how much oscillation there is by converting between strangeness and CP eigenstates. I still find the Feynman diagrams useful, though, since they give a concrete physical picture of how the strangeness of a kaon beam evolves, in addition to the somewhat abstract mathematical approach.
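For completeness, here's the math the diagrams stand in for (in one common phase convention, and ignoring CP violation so that $K_1 \approx K_S$ and $K_2 \approx K_L$): the CP eigenstates are $|K_{1,2}\rangle = \frac{1}{\sqrt{2}}\left(|K^0\rangle \mp |\bar K^0\rangle\right)$, and a beam that starts out as pure $K^0$ is detected as a $\bar K^0$ at proper time $t$ with probability $P_{K^0\to\bar K^0}(t) = \frac{1}{4}\left(e^{-\Gamma_S t} + e^{-\Gamma_L t} - 2\,e^{-(\Gamma_S+\Gamma_L)t/2}\cos(\Delta m\, t)\right)$, where $\Delta m$ is the tiny $K_L$-$K_S$ mass difference. The $\cos(\Delta m\, t)$ interference term is exactly the strangeness oscillation.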