Wednesday, July 31, 2013

Muon g-2: The muon

I've previously discussed quite a few topics in particle physics, including a brief introduction to leptons and a discussion of parity-violating weak decays. Now I'd like to go into a little more depth regarding my current favorite elementary particle: the muon.

As mentioned previously, the muon is a sort of strange second cousin of the electron. The two interact with other particles and forces very similarly, so the two major differences are the muon's mass (207 times that of the electron) and its lifetime (a mere 2.2 microseconds - yes, that's millionths of a second). Like the electron and quarks, the muon is a 'spin-one-half' particle, which for our purposes means it has a spin of 1/2 (unitless), or $\hbar/2$ with units, pointing either along or against any axis you choose to measure. But of course, the muon doesn't actually take up any space (it's a point particle), so it's not really spinning...that's an interesting quantum phenomenon that will have to await another post for a deeper explanation.

One interesting property of many subatomic particles is called the magnetic moment, generally denoted as $\mu$. Macroscopically, a loop of current has a magnetic moment, which determines the amount of torque it will feel from a magnetic field. Similarly, a muon's magnetic moment determines its behavior in a magnetic field. Just as a top spinning on a table wobbles (precesses) as it starts to slow down, a muon in a magnetic field experiences a precession of its spin direction. The magnetic moment of a particle is governed by a particle-specific constant called $g$, the g-factor (closely related to the gyromagnetic ratio - thus the 'g-2' in the experiment's name), and this determines how fast the particle's spin precesses in a magnetic field. We can talk specifically about the muon's g-factor as $g_\mu$. Another consequence of putting muons in a magnetic field is that they end up on a curved path, as charged particles in magnetic fields are wont to do. In the Muon g-2 experiment, we inject muons into a very uniform magnetic field. It's tuned just right so that the muons go around in a circle inside our storage ring, and we can (indirectly) measure the spin angle of the muon when it decays. By studying a lot of muons, we can see how the muon's spin compares to its momentum over time: the spin direction slowly pulls ahead, and after 20-30 turns it has overtaken the momentum by a full turn (we'll sanity-check that number at the end of this post). Well, that's pretty neat, but I can very clearly hear you asking "so what?"

Let me take a brief step back in time. Once upon a time, many years ago, a brilliant physicist named Paul Dirac came up with a very elegant equation describing the behavior of free spin-one-half particles like the electron and muon, which we now (very creatively) call the Dirac Equation. In this model, the value of $g$ is exactly two. Well, it turns out that nothing in particle physics is so simple, and the true value of $g$ is actually a tiny bit higher. This is a result of these crazy things called virtual particles. They're sort of like nature's own particle accelerator: pairs of particles are constantly popping into and out of existence - quarks, bosons, muons, electrons - all the time, everywhere. It turns out that while these little fellows may be short-lived, they have an effect on the value of $g$. As a result, the quantity we're really after isn't $g$ itself so much as the small deviation from Dirac's prediction, which is important enough that it gets its own name: the anomalous magnetic moment of the muon, $a_\mu = (g_\mu - 2)/2$.

The interesting thing about muons is that because of their higher mass, they're more sensitive to these virtual particles than the less massive electron, and they live long enough to be a pretty good candidate for study. Furthermore, the last time an experiment (E821 at Brookhaven National Lab) measured their precession rate and anomalous moment, what they found disagreed pretty substantially with what theory predicted at the time. In statistical parlance, there was a $3\sigma$ (three standard deviations) disagreement, which isn't enough to claim a discovery (in particle physics, we require $5\sigma$), but it's certainly enough to label the issue as interesting. If we make the same measurement again (but better, of course, thanks to improved detector technology and vastly improved beam quality at Fermilab) and that difference persists, it will be a good indication that something we don't understand is going on at the most fundamental level. That could mean the existence of new particles, and the really exciting thing about the measurement is that it could find traces of new particles that even the Large Hadron Collider in Switzerland couldn't produce! And all from a very precise measurement of a single property of a single particle. Wow - physics is awesome.
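As promised, a quick sanity check on that "20-30 turns" figure. This is my own back-of-the-envelope sketch, using two published numbers that don't appear in this post: $a_\mu \approx 0.00116$ and the "magic" Lorentz factor $\gamma \approx 29.3$ at which these storage rings operate. Relative to the momentum, the spin gains a fraction $a_\mu \gamma$ of a turn on each revolution, so it laps the momentum after $1/(a_\mu \gamma)$ turns:

```cpp
// Back-of-the-envelope check of the "20-30 turns" claim.
// Inputs are published values, not taken from this post.
#include <cstdio>

int main() {
    const double a_mu  = 0.00116;  // anomalous magnetic moment, (g-2)/2
    const double gamma = 29.3;     // "magic" Lorentz factor of the stored muons
    // The spin advances relative to the momentum by a_mu * gamma turns per orbit.
    std::printf("spin laps momentum after ~%.0f turns\n", 1.0 / (a_mu * gamma));
    return 0;
}
```

which prints "spin laps momentum after ~29 turns" - right in the quoted range.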

If you want to learn more about the Muon g-2 experiment, some of my other blog posts can be found here, or you can check out the experiment's webpage here.

Monday, July 29, 2013

Muon g-2: What's that thing for, anyway?

A natural question to ask in light of my recent post is what exactly the enormous Muon g-2 (remember, it's pronounced gee minus two) electromagnet is good for. But first, a point of clarification: the electromagnet wasn't on while it was being transported; as an electromagnet, it has to be plugged in before it becomes magnetic. And before we can really plug it in, we need to cool it down. A lot. Along with the superconducting coils, the cryostat for the experiment also couldn't be safely disassembled, so the two were transported together. With the aid of the cryostat and liquid helium, the temperature of the coils will be reduced to just 5 Kelvin (around -450 degrees Fahrenheit), a hair above absolute zero, which chills them enough to be able to superconduct; that is, conduct electricity with exactly zero resistance. It's a fascinating physical phenomenon that falls squarely in the realm of "post some other day," but suffice it to say that once we cool this thing down and plug it in (slightly more complicated than your standard wall socket, but the same general idea), it generates a pretty uniform magnetic field of around 1.5 T (tesla) inside the storage area for the muons.

As far as magnetic fields go, 1.5 T is pretty strong - the magnetic field of the Earth is on the order of a few dozen millionths of a tesla at the surface, and a standard refrigerator magnet has a field strength of a few thousandths of a tesla. That said, it is fairly similar to the magnet in an MRI, which tends to generate a magnetic field of one to three tesla. This is moderately unsurprising, as both MRIs and the g-2 ring make use of superconducting coils to produce their magnetic fields. (This, by the way, is why it is more expensive to turn an MRI off overnight than to leave it running - the coils just keep conducting in any case, and the costs and danger of releasing all that liquid helium in gas form are pretty high.)

Okay, we now have a magnetic field...so what? We rely on the fact that a magnetic field bends the path of a charged particle. In this experiment, we inject a very pure muon beam into this ring, and if we've calibrated the magnetic field just right, it bends these muons into a circular path so that they whiz around the ring in the region we want them to. Once they're injected, we essentially leave the muons alone - we're not pushing them to higher and higher energies like they do in the enormous ring at the Large Hadron Collider at CERN. Instead, we just let the muons circle around - a few thousand turns in well under a millisecond - and wait for them to decay. Thus this particular ring is classified as a storage ring rather than a cyclotron, synchrotron, or synchrocyclotron. That's the general story behind what our giant electromagnet will be used for - I'll post more about the science behind it and some of the fascinating techniques used in the next few days.
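A bonus detail for the curious: the field strength and the size of the ring together pin down the muon momentum, via the textbook relation $p = qBr$ for circular motion in a uniform field. Here's a rough sanity check; the inputs are published values for the ring (a field of about 1.45 T and an orbit radius of about 7.11 m), not numbers from this post:

```cpp
// Rough sanity check: momentum of a muon stored in the g-2 ring, from p = qBr.
// B and r are published values for the ring, not taken from this post.
#include <cstdio>

int main() {
    const double c = 2.998e8;  // speed of light, m/s
    const double B = 1.45;     // magnetic field, tesla
    const double r = 7.11;     // orbit radius, meters
    // For a charge of one e, p in eV/c is numerically B*r*c; divide by 1e9 for GeV/c.
    std::printf("p = %.2f GeV/c\n", B * r * c / 1e9);
    return 0;
}
```

That works out to about 3.09 GeV/c, which matches the "magic momentum" these experiments are designed to run at.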

If you want to check out other posts on Muon g-2, they can be found here.

Saturday, July 27, 2013

Muon g-2: The big move

If you live in or around Chicago, chances are good that you've heard about the big move of a giant electromagnet to its new home at Fermilab. The 50-foot ring was designed and constructed at Brookhaven National Lab, on Long Island, in the early 1990s for a Muon g-2 (pronounced gee minus two) experiment going on there. It took data through about 2001, after which the experiment published a bunch of results and shut down. Luckily for the new experiment at Fermilab, it was cheaper to just let it sit around than to disassemble it, so it sat in a very large garage for a little over a decade until it began its journey to the Midwest.

The vast majority of its trip was by barge - around a month spent traveling south along the Atlantic seaboard and all the way around Florida, then up a series of rivers, from the Tombigbee River and the Tennessee to the Ohio River and eventually the Mississippi. It ended its barge trip on the Illinois and eventually the Des Plaines Rivers, and it was unloaded just last weekend (July 21-22) in Lemont. From there, it went on a three-night journey to Fermilab, reaching the site successfully on Friday, July 26th. 

While it is a pretty large piece of equipment, this move would have been fairly straightforward if it hadn't been for the incredible sensitivity of the ring. It's a superconducting coil that is all one piece, so it couldn't be disassembled at all. As if that weren't enough, flexing just 3 millimeters (around a tenth of an inch) out of the plane of the ring could be enough to damage some of the superconducting coils within, which would make the whole magnet essentially an oversized doorstop. This added complication made the ring's "Big Move" an engineering marvel, and Emmert International really rose to the challenge, custom designing a fantastic (bright red) frame to support the ring and transporting it on a hydraulically balanced barge and truck.

Some of the transportation details are pretty neat, too. For instance, the truck trailer that supported the ring had a whopping 64 wheels on 16 independent axles, and those axles could be remotely controlled to help with maneuvering around trees and backing up the truck. That turned out to be very useful, as the ring's route required that the truck back up several highway ramps in order to avoid tollbooths. Oh, and one other fun fact: when it went through an open-road tolling arch, it had just six inches of clearance on either side!

If you're interested in more information about the ring's move, check out its website, complete with Twitter feed, pictures, and a map showing its route.

I'm hoping to write some more posts about Muon g-2 in the near future, in particular about what that ring is useful for and some of the cool things that could come out of its use.

If you want to check out other posts on Muon g-2, they can be found here.

Friday, July 19, 2013

Condensation clouds

"Condensation clouds" seems like a strange name for an interesting phenomenon - after all, aren't all clouds formed by condensation? Well, it turns out that condensation clouds, also known as Wilson clouds, are actually a somewhat different phenomenon than your run-of-the-mill precipitation source.

They appear in shock wave situations, like supersonic aircraft and large explosions, but only in particularly humid air (or in some underwater nuclear tests - vaporize enough water and you end up with really humid underwater bubbles). The obvious part of any shock wave is the high pressure at the 'positive phase' of the shock wave - this is what crushes buildings near a large explosion (the remnants are then blasted with winds of several hundred miles per hour in the shock's wake). But there's more to a shock wave than that. There's also a negative phase, in which the air is at much lower pressure. As the shock front expands, the positive and negative phases exchange very little air with the rest of the atmosphere. As a result, the positive phase's pressure decreases as it expands, so its destructive power diminishes with distance. The negative phase of the shock wave similarly experiences a decrease in pressure as the shock radiates outward, and because air and heat are not being readily exchanged between the negative phase and the rest of the world, this manifests as a decrease in temperature. In sufficiently humid air, that decrease in temperature is enough to cause water vapor in the negative phase of the shock wave to condense. When large explosions are involved, the result is a roughly spherical-looking cloud around the explosion that has little or nothing to do with the debris and fireball inside.
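To put a rough number on that cooling, here's a minimal sketch treating the expansion as adiabatic - the standard idealization for a parcel of air that isn't exchanging heat with its surroundings. The 15% pressure drop I plug in is invented for illustration:

```cpp
// Adiabatic cooling sketch: T2 = T1 * (P2/P1)^((gamma-1)/gamma).
// The pressure ratio below is an assumed, illustrative value.
#include <cmath>
#include <cstdio>

int main() {
    const double gamma = 1.4;    // adiabatic index of air
    const double T1    = 300.0;  // ambient temperature, kelvin (~80 F)
    const double ratio = 0.85;   // assumed pressure in the negative phase / ambient
    const double T2    = T1 * std::pow(ratio, (gamma - 1.0) / gamma);
    std::printf("temperature drops from %.0f K to %.1f K\n", T1, T2);
    return 0;
}
```

Even that modest pressure drop cools the air by around 13-14 degrees - easily enough to push humid air below its dew point.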

Similar effects can be observed when airplanes fly really fast in humid conditions or do high-acceleration maneuvers - the curves of the wings and the body of the plane cause local areas of low and high pressure, and the low-pressure areas often experience condensation. That is why you often see a conical cloud behind planes in pictures of supersonic flight; the plane is moving pretty darn fast, so there's a lot of opportunity for low-pressure areas.

As for the eventual fate of the condensation cloud, when the transient conditions that created it pass, the water re-evaporates and the cloud seems to disappear. In some cases, parts of the condensation cloud can stabilize, which is apparently the source of those rings you sometimes see around the stems of mushroom clouds.

Wednesday, July 17, 2013

Ionization-based detectors

Ionization detectors were some of the earliest detectors developed in the study of high energy physics. While they have fallen out of widespread use in favor of more recent technology such as silicon detectors, they remain a simple, easy-to-maintain, and inexpensive option for tracking charged particles. Furthermore, descendants of the original ionization detectors, like the multi-wire proportional chamber and the time projection chamber, continue to push the boundaries of particle detection and are now being used in dark matter searches and collider detectors.

The simplest ionization detector looks a bit like a straw. The straw has a wire fed through its center, and a high voltage is applied. That central wire is the anode, and conducting material along the inside of the straw itself is connected to ground and serves as the cathode. The straw is filled with gas, which is generally cycled through the straw continuously. The basic principle is that when a charged particle passes through, it ionizes some of the gas molecules inside. The applied voltage generates an electric field that causes the yanked-off electrons to accelerate towards the central wire. They hit the wire, and we measure their presence as current.
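For the curious, the field in such a straw has the standard coaxial form - this isn't spelled out in the post, but it's the textbook result for a wire of radius $a$ inside a grounded tube of radius $b$ with voltage $V$ between them:

$$E(r) = \frac{V}{r \, \ln(b/a)}$$

The $1/r$ dependence means the field becomes enormous close to the thin central wire, which is exactly what drives the avalanche behavior in several of the regions described below.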

Naturally, though, the whole business is slightly more complicated than that; there's actually a variety of regions in the behavior of the detector based on the voltage you apply to the central wire.
The various regions in ionization detector behavior. Figure from W.R. Leo, chapter 6.
Each of these regions has something different going on physically.
  • Recombination region: The electric field in the detector is low enough that ionized atoms are able to recombine (electrons find ions and re-merge) with some probability. As a result, not all the primary ionization electrons are captured, so as you increase the voltage on the wire, the number of ionization electrons you capture increases fairly dramatically.
  • Ionization chamber region: All ionized atoms remain ionized and travel to the anode/cathode. As such, there's a plateau here: increasing or decreasing the voltage applied to the central wire doesn't change how many of the ions are caught.
  • Proportionality region: The voltage in this range is high enough that it attracts those primary electrons very strongly - so strongly that they in turn ionize more atoms on their way to the central wire. This causes an avalanche, and in this region at least, the number of captured electrons is proportional to the number that were ionized in the first place, with a constant of proportionality dependent on the voltage (a numerical sketch follows this list).
  • Region of limited proportionality: Increase the voltage higher still, and the cascades of electrons develop such high charge densities near the anode that they distort the electric field, which reduces the proportionality of the detector.
  • Geiger-Müller counter: Beyond the region of limited proportionality is another plateau in the number of detected electrons. Physically, at this point, the cascades don't stop, and photons emitted by excited atoms ionize more and more atoms. These detectors end up with a self-sustaining chain reaction, so that even the slightest ionization in the gas results in exactly the same current as a massive, highly charged particle passing through. To avoid getting a constant current after just a single hit, these detectors have to contain a quenching gas, which captures and disperses the energy of those emitted photons.
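As promised, a minimal numerical sketch of the proportional-region relation: the collected charge is (gain) × (primary electrons) × (electron charge), so dividing out the voltage-dependent gain recovers the primary ionization. The gain and charge values below are invented for illustration:

```cpp
// Proportional region: collected charge = gain * primary electrons * e,
// so primary ionization = collected charge / (gain * e).
// The gain and measured charge are assumed, illustrative values.
#include <cstdio>

int main() {
    const double e_charge = 1.602e-19;  // electron charge, coulombs
    const double gain     = 1.0e4;      // assumed avalanche multiplication factor
    const double q_meas   = 1.6e-13;    // assumed collected charge, coulombs
    std::printf("primary ionization electrons: ~%.0f\n", q_meas / (gain * e_charge));
    return 0;
}
```

which works out to about 100 primary electrons for this made-up pulse.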
If you're interested in learning more about ionization detectors (as well as a variety of other interesting detectors and techniques), I highly recommend W.R. Leo's Techniques for Nuclear and Particle Physics Experiments, which gives an excellent explanation of the theory behind many detector types.

Monday, July 15, 2013

Sonic booms

Many people have heard of the sonic boom, a loud noise that happens when something (an airplane, a bullet, the tip of a bull whip) starts moving faster than the speed of sound. What a lot of people don't realize, though, is that it's not just a single boom - it keeps going for as long as the object is moving faster than Mach 1 (1 times the speed of sound). In the case of an airplane, it will pass overhead, and the shock wave that is the sonic boom will travel outward to you, the observer, at the speed of sound. That is, slower than the airplane is moving. As a result, you end up hearing the sonic boom fairly substantially after the object actually passes over you. It expands in what looks like a cone shape, like this:
The red object, moving from left to right at Mach 1.5, leaves behind a sonic boom cone. When the edge of the cone reaches an observer on the ground, they hear the boom.  
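As an aside, the opening angle of that cone follows straight from the geometry (a standard result, though not derived in this post): in the time the sound travels one unit, the object travels $M$ units, so the cone's half-angle $\theta$ satisfies

$$\sin\theta = \frac{1}{M}$$

For the Mach 1.5 object in the figure, that gives $\theta = \arcsin(1/1.5) \approx 42^\circ$; the faster the object, the narrower the cone.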
Where does this shock wave come from, anyway? An airplane in normal flight sends out vibrational waves (some in the form of sound, others at frequencies we can't hear) in all directions. These vibrations progress outwards at, you guessed it, the speed of sound.* But what happens when an object starts moving at the speed of sound? All the vibrations it's been beaming forward suddenly can't leave the vicinity of the object, so they just pile up in approximately one place, right along the object itself!
The red object, (still) moving from left to right, now at Mach 1, appears to create a massive shock wave, thanks to many, many waves piling on top of each other at the object's location. This is the Prandtl-Glauert singularity.

This hypothesis, the Prandtl-Glauert singularity (which also predicts infinite pressure at Mach 1), turns out to be not entirely true: other effects, like turbulence and viscosity, start to assert themselves, and compressible fluids under such extreme conditions are hard to describe in a precise mathematical way. In other words, physics is awesome.

* This, by the way, is the origin of the Doppler effect: when an object is moving towards you, you hear a higher-pitched sound than when it passes you and starts moving away, because the sound waves are compressed in front. That compression lessens the time between peaks of the sound's oscillation, which manifests itself as a higher-pitched noise. Conversely, once the object is past you, the wave peaks grow further apart in time, so you hear a lower-pitched sound.
An object traveling from left to right at Mach 0.7 emits sound (and other vibrations) in all directions. But it's catching up to the waves it sends forwards, and is running away from the waves it sends behind it. The Doppler effect is the result.
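For reference, the textbook formula for a source moving at speed $v$ through still air (not given in the post, but standard):

$$f_{\text{obs}} = \frac{f_{\text{source}}}{1 \mp v/c_s}$$

with the minus sign while the source approaches and the plus sign once it recedes ($c_s$ is the speed of sound). For the Mach 0.7 object in the figure, you'd hear about $1/(1-0.7) \approx 3.3$ times the emitted frequency on approach and about $1/(1+0.7) \approx 0.6$ times it after the object passes.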

Tuesday, July 9, 2013

Optimization in c++

Recently, a question arose about the most efficient way to store certain data in a program. In essence, a unique identification is needed for a detector component, which can be broken down into properties A, B, C, and D. Many of these things will need to be initialized, stored, and sorted, and we wanted to know if a c++ struct (essentially identical to a c++ class, but conventionally used for storing related information together; at least in my experience, they tend to have far fewer methods and their fields are almost entirely public) or a sort of hashed integer (multiply property A by 10000, add B times 1000, etc.) would be more efficient.
A struct can have a manually provided comparison function, making it sortable, but we were interested in how efficient that sorting is compared to sorting plain old integers, so I ran a quick little study to compare them.

The three candidate data structures are as follows (a rough sketch of each in code follows the list):
  • An integer, which is defined as $(A \times 10000) + (B \times 1000) + (C \times 100) + D$. The sorting function is a simple comparison, but is provided manually in order to ensure similarity between the trials.
  • A Specification struct, which contains fields for track number, view, plane, and wire, with a comparison function that accesses one or more of these fields in order, henceforth known as the naive struct.
  • A Specification struct with the same fields as above, but with an additional integer uniqueID field, calculated after all setting is done according to the formula above. The sorting function compares only the uniqueID. This type is referred to as the smart struct.
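Here's a rough reconstruction of the three candidates - this is my sketch based on the descriptions above, not the original study code, and the field mapping (track → A, view → B, and so on) is a guess:

```cpp
// My reconstruction of the three candidate types; not the original study code.
#include <algorithm>
#include <tuple>
#include <vector>

// Candidate 1: the hashed integer, with a manually supplied comparison so that
// all three trials pay the cost of a named comparison function.
bool lessInt(int a, int b) { return a < b; }

// Candidate 2: the "naive" struct, compared field by field in order.
struct Specification {
    int track, view, plane, wire;
};
bool lessNaive(const Specification& a, const Specification& b) {
    return std::tie(a.track, a.view, a.plane, a.wire)
         < std::tie(b.track, b.view, b.plane, b.wire);
}

// Candidate 3: the "smart" struct, which caches the hashed ID once all of the
// fields are set and compares only that.
struct SmartSpecification {
    int track, view, plane, wire, uniqueID;
    void finalize() { uniqueID = track * 10000 + view * 1000 + plane * 100 + wire; }
};
bool lessSmart(const SmartSpecification& a, const SmartSpecification& b) {
    return a.uniqueID < b.uniqueID;
}

int main() {
    std::vector<SmartSpecification> v = {{1, 2, 3, 4, 0}, {0, 9, 8, 7, 0}};
    for (auto& s : v) s.finalize();            // compute uniqueIDs after setting
    std::sort(v.begin(), v.end(), lessSmart);  // sort on the cached ID only
    return 0;
}
```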
What I investigated was the execution time to create and sort $10^7$ objects of these different data types, using c++'s clock_t type and clock() function to measure processor time spent in the program (so as to avoid influences from system interruptions).
I timed both creating and sorting the data structures with no optimization and then with gcc -O3 optimization. Results are summarized in the figures below, with discussion following each figure.
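For concreteness, here's a minimal sketch of what the timing harness might look like for the plain-integer case (the post doesn't show the original code, so this is a reconstruction); compiling with and without -O3 reproduces the two trials:

```cpp
// Minimal sketch of the timing harness, shown for the plain-integer case.
#include <algorithm>
#include <cstdio>
#include <ctime>
#include <random>
#include <vector>

int main() {
    const int N = 10000000;  // 10^7 objects, as in the study
    std::mt19937 rng(42);
    std::uniform_int_distribution<int> dist(0, 99999);

    std::clock_t t0 = std::clock();
    std::vector<int> data;
    data.reserve(N);
    for (int i = 0; i < N; ++i) data.push_back(dist(rng));  // "creation" phase

    std::clock_t t1 = std::clock();
    std::sort(data.begin(), data.end());                    // "sorting" phase
    std::clock_t t2 = std::clock();

    // clock() measures processor time, so OS interruptions mostly don't count.
    std::printf("create: %.2f s  sort: %.2f s\n",
                double(t1 - t0) / CLOCKS_PER_SEC,
                double(t2 - t1) / CLOCKS_PER_SEC);
    return 0;
}
```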

Unoptimized times to create and sort $10^7$ objects of the three types
Here, unsurprisingly, we see that the integer is faster in all respects than the structs. Since we have a total of $10^7$ objects, the total time difference boils down to around 150 nanoseconds of extra time per object. As we would expect, it takes a little longer to create the smart struct than the naive one, since we have to set up the uniqueID, but it sorts much faster. The interesting stuff starts to happen when the code is compiled with maximum (standards-compliant) optimization.

Optimized times to create and sort $10^7$ objects of the three types.
The purple/pink is an integer with its default sorting mechanism.
For one thing, it's obvious that optimization speeds up the code, especially the sorting, by a lot: it runs around a factor of four faster than in the unoptimized version. It also brings the performance of the three data structures much closer together, and for some strange reason, the smart struct appears to actually sort faster than the integers; what a mystery!
The last consideration is what I call the bare integers; that is, the same uniqueID as held by the regular integers, but with the default integer sorting mechanism rather than a custom-provided function. The running time difference between the bare and bloated integer types is on the order of 20 nanoseconds per object, which corresponds to a few dozen clock cycles on a GHz-scale processor - probably about the cost of an extra function call per comparison.

My conclusion from this study is that when compiled with optimization, structs and ints are roughly identical in terms of CPU time required to create and sort them. Since structs pretty dramatically increase readability of the code, I'm sticking with those instead of tiptoeing around a funky hashed integer.

Tuesday, July 2, 2013

Amor asteroids

The Amor asteroids are a class of asteroids that get very close to the Earth from the outside, usually without crossing the Earth's orbit. Some of these are classified as potential collision hazards, but most just keep their distance. I learned about them as a result of more playing with Mathematica, this time with its AstronomicalData. A particularly handy option for the package is the "Classes" argument (so AstronomicalData["Classes"]), which will return a list of possible classes, like InnerMainBeltAsteroid and DwarfSpheroidalGalaxy and so on. A class name can then be passed in as the argument (presumably AstronomicalData["AmorAsteroid"] in this case) to get a list of the astronomical objects that fall into that class.

In any case, here's a fun little diagram showing the orbits of various Amor asteroids. The thick black lines are Earth, Mars, and Jupiter's orbits, from smallest to largest. You can see that many of the Amor asteroids cross Mars's orbit, and a few even get as far away as Jupiter.
Orbits of the Amor asteroids. Black lines show
Earth, Mars, and Jupiter. Blue lines show the asteroid
orbits. The big yellow dot represents the Sun.
But overall, this isn't a terribly revealing diagram. A slightly more intriguing plot is shown below: eccentricity of the orbit as compared to the asteroid's semimajor axis.
Eccentricity of the Amor asteroids as compared to their
semimajor axes. Looks interesting, right?
It's interesting to see that as the asteroid's semimajor axis increases, its eccentricity does too - even though there's no obvious physical reason to expect such a correlation. So what causes this trend?

It turns out that it's actually all determined by our definition of an Amor asteroid: its perihelion (closest approach to the Sun) falls between 1.0 AU (astronomical units - the average distance from Earth to the Sun) and 1.3 AU. From the geometry of ellipses, the closest an orbit gets to one of its foci (which is the perihelion by definition) can be expressed in terms of its semimajor axis and eccentricity as $r_{min}=a(1-e)$ ($a$ is the semimajor axis, $e$ the eccentricity). Plotting this, we find that the eccentricities that we observed in such a promising-looking trend above are actually just adhering to the constraints imposed by the definition.
Perihelion of Amor asteroids compared to their
semimajor axes. Looks like that trend was just a
figment of our imaginations.
Another way of seeing this is to look at the possible values of the eccentricity for various semimajor axes:
Sure enough, all those eccentricities are based on
the definition of an Amor asteroid!
The upper bound there is $e=1-\frac{1}{a}$; that is, the highest possible eccentricity (for a given semimajor axis) the asteroid can have without crossing Earth's orbit. The lower bound is $e=1-\frac{1.3}{a}$, which is the lowest possible eccentricity it can have while still getting close enough (1.3 AU) to be considered an Amor asteroid. Mystery successfully solved.
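For completeness, both bounds drop straight out of the perihelion condition in the definition. Requiring $1.0 \le r_{min} \le 1.3$ with $r_{min} = a(1-e)$ gives

$$1.0 \le a(1-e) \le 1.3 \quad\Longrightarrow\quad 1-\frac{1.3}{a} \;\le\; e \;\le\; 1-\frac{1.0}{a}$$

which are exactly the two curves bounding the scatter of points above.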