Wednesday, June 26, 2013

Kirkwood gaps

Well, my recent post on resonance in beam dynamics got me thinking about one of my other favorite resonances: Kirkwood gaps. If you take the semimajor axes (that's half the sum of the closest and furthest distances from the Sun over the asteroid's orbit) of all the known asteroids and put them in a histogram, there's a very peculiar structure. There are noticeable gaps in the distribution, as if there are some distances at which asteroids just don't want to live. Take a look:

What could cause this? The answer, it turns out, is our solar system's most massive planet, Jupiter, which falls short of being an actual star by only a factor of about eighty in mass. You see, the big gaps correspond to asteroid orbital periods that are simple rational fractions of Jupiter's orbital period. Take the giant 3:1 gap around 2.5 AU. If some asteroid occupied that particular region, for every three of its turns around the Sun, Jupiter would go around once, which means that every three asteroid-years, Jupiter would return to the same position relative to the asteroid. This produces a resonance, and Jupiter, over the course of many, many asteroid-years, either slings the asteroid out of the solar system or adjusts its orbit so that its semimajor axis is no longer in that 'danger zone.' Similar stories apply to the three other prominent resonances.
These resonant regions are called Kirkwood gaps. It's just one of many really interesting phenomena resulting from resonances in physics.
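Kepler's third law ($T^2\propto a^3$) makes it easy to check where these resonances should sit. Here's a quick sketch in Python (the 5.204 AU figure for Jupiter's semimajor axis is just the standard reference value):

```python
# Where should a p:q resonance with Jupiter sit? Kepler's third law says
# T^2 is proportional to a^3, so if an asteroid completes p orbits for every
# q of Jupiter's, then T_ast/T_jup = q/p and a_ast = a_jup * (q/p)^(2/3).
A_JUPITER = 5.204  # Jupiter's semimajor axis, in AU

def resonance_location(p, q):
    """Semimajor axis (AU) of the p:q mean-motion resonance with Jupiter."""
    return A_JUPITER * (q / p) ** (2 / 3)

for p, q in [(3, 1), (5, 2), (7, 3), (2, 1)]:
    print(f"{p}:{q} resonance at {resonance_location(p, q):.2f} AU")
```

The 3:1 resonance lands right around 2.5 AU, in agreement with the big gap in the histogram.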

Monday, June 24, 2013

Resonance in weak focusing

Since I'm sure you're not yet sick of weak focusing, I want to write a bit about the dangers of resonance in weak focusing systems. In the case I am most familiar with, a storage ring has a (mostly) uniform vertical magnetic field and electrostatic quadrupoles to ensure vertical stability. Muons are stored for many hundreds of turns around the ring, so it's critical that the orbits be stable in the 'long' term. The problem is that it's possible to have a localized disruption in the field at just one or two points along the ring. To ensure stability in the long run, the simple betatron oscillations that I mentioned have to be at a different point in their cycle each time they pass that disruption. This goes for both the vertical (frequency $f_y=\sqrt{n}f_C$) and horizontal (frequency $f_x=\sqrt{1-n}f_C$) oscillation frequencies (remember, $f_C$ is the cyclotron frequency, the rate at which the bunches move around the ring). Basically, for the beam to stay stored, no (integer-coefficient) linear combination of these frequencies may be an integer multiple of the cyclotron frequency. If one is, then every few turns, the beam will be at the same phase of its oscillation at the location of the disruption. This is called a resonance, and it can boost the beam out of its stable orbit. As a result, care must be taken in choosing the 'tune' (primarily the magnetic field index) of the storage ring.

This choice can be illustrated with a complicated-looking plot. Below, the x-axis shows $\nu_x=\sqrt{1-n}$, and the y-axis shows $\nu_y=\sqrt{n}$. Integer combinations of $\nu_x$ and $\nu_y$ that yield integers (corresponding to frequencies that are integer multiples of the cyclotron frequency) are plotted as black lines. By the definition of $\nu_x$ and $\nu_y$, we see that $\nu_x^2+\nu_y^2=1$, which is shown on the plot as a red curve. Blue dots mark the intersections of this tune curve with the forbidden lines, and represent bad tunes, while the bright green dots show acceptable tunes, at least to the degree that we've plotted the resonance lines. Neat, eh?
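To get a feel for this, here's a little Python scan (not the real plot, just a sketch with made-up tolerance and order cutoffs) that flags field indices whose tunes land on a low-order resonance line $a\nu_x + b\nu_y = c$:

```python
import math
import itertools

def on_resonance(n, max_order=3, tol=1e-2):
    """True if the tunes for field index n satisfy a*nu_x + b*nu_y = integer
    for some small integers a, b (not both zero), to within tol."""
    nu_x, nu_y = math.sqrt(1 - n), math.sqrt(n)
    for a, b in itertools.product(range(-max_order, max_order + 1), repeat=2):
        if a == b == 0:
            continue
        value = a * nu_x + b * nu_y
        if abs(value - round(value)) < tol:
            return True
    return False

# Scan candidate field indices between 0 and 1 for low-order resonances.
bad = [round(n, 2) for n in (i / 100 for i in range(1, 100)) if on_resonance(n)]
```

For instance, $n=0.5$ puts the tunes on the coupling line $\nu_x=\nu_y$, and $n=0.2$ satisfies $\nu_x=2\nu_y$, so both get flagged.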


Thursday, June 20, 2013

Vertical focusing

So far, I've discussed what sort of a field will provide horizontal beam focusing. There are two problems left. For one thing, in my original discussion of weak focusing (link), I showed that the field index $n$ had to be less than one for horizontal weak focusing. Later on, the constraints had mysteriously tightened, and I stated that for weak focusing to occur, the field index had to satisfy $0\le n<1$. Secondly, we've seen that horizontal focusing can occur without vertical focusing in a uniform magnetic field, so we need a new constraint on the field index to ensure vertical focusing.  Well, in this case, two wrongs almost make a right, and the second of the above-mentioned problems explains the first. Here's how it works.

In order to stabilize the vertical structure of the beam, the magnetic field needs to provide a restoring force in the vertical direction, something along the lines of $F_z=-cz$. In order to produce that, the magnetic field needs a horizontal component: $B_x=-c'z$. Well, from this we know that $\frac{\partial B_x}{\partial z}=-c'$. In the current-free region around the beam, Ampère's law (one of Maxwell's famous equations) tells us $\vec\nabla\times \vec{B}=0$, so we clearly see that
\[ \frac{\partial B_x}{\partial z} = \frac{\partial B_z}{\partial x}=\frac{\partial B_z}{\partial r}=-c' \]
Observe! We have shown that for vertical focusing, we need $\frac{\partial B_z}{\partial r}$ to be negative. From our definition of the field index ($n=\frac{-\rho}{B_0}\frac{\partial B_z}{\partial x}$), then, we see that for vertical focusing to occur, the field index must be positive. Voila!
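As a sanity check, here's the Lorentz force worked out numerically for the linearized field above (a Python sketch with made-up numbers: $q=v=B_0=\rho=1$ and $n=0.2$):

```python
import numpy as np

# Linearized weak-focusing field near the ideal orbit (x = r - rho):
# B_z = B0 (1 - n x / rho), and curl-free-ness forces B_x = -(n B0 / rho) z.
q, v, B0, rho, n = 1.0, 1.0, 1.0, 1.0, 0.2

def field(x, z):
    return np.array([-(n * B0 / rho) * z, 0.0, B0 * (1 - n * x / rho)])

# A particle on the +x side of the ring moving tangentially (here the -y
# direction, chosen so that qv x B points back toward the ring's center).
velocity = np.array([0.0, -v, 0.0])

F_high = q * np.cross(velocity, field(0.0, +0.1))  # displaced above midplane
F_low = q * np.cross(velocity, field(0.0, -0.1))   # displaced below midplane
# For n > 0, the z-component of the force points back toward the midplane
# in both cases: vertical focusing, just as the derivation promised.
```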

Tuesday, June 18, 2013

Geometric focusing

A uniform magnetic field (field index 0, since the rate of change of the field is zero) provides a certain level of horizontal focus, in a phenomenon called geometric focusing.  From elementary E&M, we know that a particle in a uniform magnetic field with momentum perpendicular to the field lines will follow a perfectly circular path.  Let's examine the behavior of a nonconforming particle in the beam; call it Fred. If at some point it (he?) is in the ideal location, moving in the ideal direction, but has a lower momentum than a particle tracing out the ideal orbit (henceforth referred to as Ida), then the magnetic field will cause him to run around the ring in a smaller circle than Ida's trajectory. But after going all the way around, Fred ends up right back where he started, and while the beam may have defocused somewhat azimuthally (that is, the bunch is longer now, so it takes up a greater portion of the ring), it's once again focused horizontally. This is shown in the leftmost part of the figure below. Similarly, if Fred's momentum vector is pointed in a different direction than Ida's, he'll have a different trajectory, but that'll intersect Ida's twice, so we have geometric focusing.  Finally, just turning the previous example on its side, if Fred is slightly displaced relative to Ida, their trajectories once again meet twice in their trips around the ring. That's the premise of geometric focusing in accelerator physics; it's just a special case of horizontal weak focusing.
Some examples of geometric focusing. Black shows the ideal trajectory (Ida),
and red shows the trajectory of another particle (Fred) that differs slightly from
Ida in initial conditions (at the left of the image). In all cases, the trajectories
of Fred and Ida meet up at least once in each full revolution.

In (a), Fred starts in the same position as an ideal particle but with
lower magnitude momentum. In (b), he starts in the same position and with the same
magnitude momentum as an ideal particle, but pointed slightly outwards compared
to Ida's trajectory. And in (c), Fred has the same momentum as the ideal particle but
is slightly offset spatially.
There are a couple of problems with relying on geometric focusing, though.  For one thing, a very small deviation in the beginning can send Fred on a trajectory that is fairly far away from Ida's at certain points. In order for Fred to stay in the beam, he needs to not run into the walls, which creates a real headache for the accelerator designers. For another, a uniform magnetic field doesn't provide any vertical focusing effect; if Fred has even a tiny vertical component of his momentum, the magnetic field won't affect it, and Fred will end up moving higher and higher in the accelerator pipe until he runs into the material there and meets with an untimely end.
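Both claims are easy to check numerically. The toy Python simulation below (with all units and initial conditions invented for illustration: cyclotron frequency, speed, and ideal radius all set to 1) tracks Fred starting slightly outside the ideal orbit with a tiny vertical velocity. His horizontal distance from the center stays bounded by his initial offset, while his height just grows and grows:

```python
import math

# Uniform vertical field: the transverse velocity simply rotates at the
# cyclotron frequency (omega = 1 here), while v_z is untouched.
omega, dt = 1.0, 1e-3
x, y, z = 1.05, 0.0, 0.0         # Fred starts 5% outside the ideal radius 1
vx, vy, vz = 0.0, -1.0, 0.01     # tangential, plus a tiny vertical component

c, s = math.cos(omega * dt), math.sin(omega * dt)
max_dev = 0.0
steps = int(3 * 2 * math.pi / omega / dt)  # three full turns
for _ in range(steps):
    vx, vy = vx * c + vy * s, -vx * s + vy * c  # exact rotation of v_perp
    x, y, z = x + vx * dt, y + vy * dt, z + vz * dt
    max_dev = max(max_dev, abs(math.hypot(x, y) - 1.0))

# max_dev stays around 0.05 (geometric focusing keeps Fred near the ring),
# but z has drifted to vz * t with nothing at all pulling it back.
```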

Thursday, June 13, 2013

Quadrupoles

I mentioned last time that weak focusing is all well and good, but that in many cases, it just doesn't cut it. In such situations, experimentalists go for strong focusing, which involves electric and/or magnetic fields that are not radially symmetric, so that a particle traveling along its trajectory will see a different field as it goes along. In particular, quadrupole magnets or electrostatic quadrupoles can serve to focus a beam in one direction while defocusing it in the other. So for instance, one quadrupole magnet might focus the beam into a thin horizontal strip (vertical focusing, horizontal defocusing), and then another immediately afterwards could do the opposite. It turns out that such a setup can have a net focusing effect in both the horizontal and vertical orientations. This, like weak focusing, allows for thinner, higher-flux beams, critical for colliders and target experiments.

How do these things work? As I understand it, there's this mathematical approach to magnetic fields called the multipole expansion, in which a simple permanent magnet generates primarily a dipole term, the lowest-order term possible (the monopole term vanishes, since there are no magnetic charges). Higher-order terms tend to be smaller than the low-order ones, especially at larger distances, so they can often be ignored. In a quadrupole, though, four magnetic dipoles (either permanent magnets or electromagnets) are positioned in such a way as to cancel the dipole moment, leaving only the quadrupole moment. This generates an interesting-looking magnetic field that is the source of the curvy bits in the Fermilab logo.
Logo courtesy of fnal.gov (upper left hand corner, when I
pulled it off). The curvy bits represent the quadrupoles in
the various particle accelerators on site, while the straight
lines represent the dipoles used to bend the beams.
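Near its axis, an ideal (normal) quadrupole's field can be written as $\vec B = g(y, x, 0)$ for some gradient $g$. A quick Python check (with a made-up $g=1$) shows that this field is a legitimate vacuum field and that it pushes a particle moving along $+z$ back toward the axis in one transverse direction while pushing it away in the other:

```python
import numpy as np

g = 1.0  # quadrupole gradient, in made-up units

def B(x, y):
    """Ideal quadrupole field near the axis: B = g * (y, x, 0)."""
    return np.array([g * y, g * x, 0.0])

# Each component is linear and the cross-derivatives match (dBx/dy = dBy/dx
# = g), so both the divergence and the curl vanish: a valid vacuum field.

# Lorentz force on a unit positive charge moving in +z:
v = np.array([0.0, 0.0, 1.0])
F_x_offset = np.cross(v, B(0.1, 0.0))  # particle displaced in x
F_y_offset = np.cross(v, B(0.0, 0.1))  # particle displaced in y

# F_x_offset points back toward the axis (focusing in x), while F_y_offset
# points away from it (defocusing in y): hence the need to alternate
# quadrupole orientations to focus in both planes.
```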

Tuesday, June 11, 2013

Betatron Oscillations

This post ended up being a little longer and mathier than I'd expected, but I found out that weak focusing is really cool. Enjoy!

Theoretical physicists enjoy playing with perfect particles in a well-behaved world. Experimentalists would love it if that worked, but the real world is never so nice, so they have to deal with imperfectly calibrated beams. In particular, that means that if a particle deviates slightly from its ideal trajectory, there should be some mechanism in place to ensure that it stays close, rather than diverging away from the ideal beam location. The mechanisms that allow this to occur are called focusing, and they also serve to keep the beam narrow enough to allow precise knowledge of its structure and enhanced probabilities of interactions of opposing beams (like in the LHC, where protons are circling the ring in opposite directions and then collide head on).

For the following discussion of focusing techniques, I'll treat only circular beams/rings, as they're easiest to describe. This class includes colliders like the Tevatron and the LHC as well as, say, storage rings involved in intensity frontier experiments.

In a technique called weak focusing, a radially symmetric magnetic field is present in the region of the beam. The field gradient (both radially and vertically) means that when a particle isn't quite on the perfect trajectory, there's a restoring force. In the long run, this causes such particles to oscillate about the central orbit with a frequency determined by the magnetic field gradients in what's called betatron oscillation.

Let's take a quick look at how weak focusing gives horizontal beam stability. We'll take a beam that is ideally at radius $\rho$. Let's examine a single particle of charge $q$ that deviates slightly from this ideal radius, with a radius of $r$. For convenience in Taylor expansion, define $x=r-\rho$. In order for the beam to be stable, we want to have a restoring force; that is, there's more force on the particle if $r>\rho$ (or equivalently, if $x>0$), and less for $x<0$. The force is a result of the magnetic field at the location of the particle, and its magnitude is $F=qvB_z(r)$. Here $v$ is the velocity of the particle, and $B_z(r)$ is the magnitude of the vertical component of the magnetic field at radius $r$.  Since we're dealing with weak focusing, the magnetic field is radially symmetric, so we don't have to worry about its dependence on the azimuthal angle $\theta$.

We know the centripetal force necessary to keep a particle on a circular path of radius $r$ is $F_c=\frac{mv^2}{r}$. Note that here, $m$ isn't the rest mass of the particle; it's the effective mass accounting for relativity, $m=\gamma m_0$, where as usual, $\gamma=\frac{1}{\sqrt{1-(v/c)^2}}$. Based on this observation, we define a restoring force 
\[ F_{rest} = \frac{mv^2}{r}-qvB_z(r) \] 
Observe that since particles on the ideal orbit will happily orbit at radius $\rho$ until the end of time (or until they decay), the two terms are equal at $r=\rho$ ($x=0$), so we care only about the sign of the restoring force for small $x$ near zero. In particular, for beam stability, we want $F_{rest}$ and $x$ to have opposite signs.

Let's examine the second term first.  Taylor expanding the magnetic field about $x=0$ to first order in $x$, we see 
\[ B_z(x)\approx B_0+\frac{\partial B_z}{\partial x}x \]
where $B_0$ is the magnetic field strength at $x=0$, and the partial derivative is evaluated at $x=0$. By convention, we define the (dimensionless) field index 
\[ n=\frac{-\rho}{B_0}\frac{\partial B_z}{\partial x}, \]
 which allows us to rewrite the magnetic field strength as 
\[ B_z(x)=B_0\left(1-\frac{x}{\rho}n\right). \]

Now let's look at the first term in the restoring force definition.  By the definition of $x$, we know that $r=\rho\left(1+\frac{x}{\rho}\right)$. The binomial approximation (for $\epsilon\ll1$, $(1+\epsilon)^k\approx 1+k\epsilon+\cdots$) allows us to write the first term as 
\[ \frac{mv^2}{r}=\frac{mv^2}{\rho\left(1+\frac{x}{\rho}\right)} \approx\frac{mv^2}{\rho}\left(1-\frac{x}{\rho}\right) \]

Based on the above approximations, the restoring force becomes 
\[ F_{rest}=\frac{mv^2}{\rho}\left(1-\frac{x}{\rho}\right) -qvB_0\left(1-\frac{x}{\rho}n\right) \]
Since the magnetic field at $r=\rho$ is exactly strong enough to keep the particles in the ideal circular orbit, we know that $\frac{mv^2}{\rho}=qvB_0$, which simplifies the above expression to 
\[ F_{rest}=qvB_0\left(1-\frac{x}{\rho}\right) -qvB_0\left(1-\frac{x}{\rho}n\right)=-qvB_0\,\frac{x}{\rho}\,(1-n). \]

As we saw, for the beam to be horizontally stable, we need $F_{rest}$ and $x$ to have opposite signs, so we find the weak focusing requirement on the field gradient: $n<1$.

One more quick(ish) note. By design, we have calculated this force only to first order in $x$, and that allows us to describe the motion as simple harmonic. Recall that if $F=-kx$, then the object's equation of motion is $\ddot{x}+\frac{k}{m}x=0$, so solutions have an angular frequency of $\sqrt{\frac{k}{m}}$. Based on this, we find the (angular) frequency of these betatron oscillations to be related to the cyclotron frequency $\omega_0$, which describes the frequency of the beam's rotation around the ring. By definition, $\omega_0=v/\rho$. The betatron oscillation frequency $\omega_{CBO}$ is 
\[ \omega_{CBO} = \sqrt{\left(\frac{v}{\rho}\right)\left(\frac{B_0q}{m}\right)(1-n)} \]
Recall that we have $\frac{mv^2}{\rho}=B_0 vq$, so the two terms in parentheses are actually equal. Furthermore, they are both equal to the cyclotron frequency, so we see 
\[ \omega_{CBO} = \omega_0\sqrt{1-n} \]
The critical thing to notice here is that because $0<n<1$, betatron oscillations must be lower in frequency than the cyclotron frequency; that is, it takes more than a full turn around the ring to complete a betatron oscillation. Since a low oscillation frequency corresponds to a weak restoring force, these oscillations tend to have fairly large amplitudes.
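This result is easy to verify numerically. The Python sketch below (a toy, not any real tracking code, with units chosen so that $q=m=v=\rho=B_0=1$ and a made-up field index $n=0.2$) integrates the motion in a field $B_z \propto r^{-n}$ and measures the radial oscillation period:

```python
import math

# Field-index field: B_z(r) = B0 (rho / r)^n, in units q = m = v = rho = B0 = 1.
n = 0.2          # made-up field index for this test
dt = 1e-3

x, y = 1.02, 0.0        # start 2% outside the ideal radius
vx, vy = 0.0, -1.0      # tangential velocity (clockwise orbit)

radii, times = [], []
t = 0.0
for _ in range(40000):
    r = math.hypot(x, y)
    radii.append(r)
    times.append(t)
    # A magnetic field can't change the speed: each step just rotates the
    # velocity by the local cyclotron angle B_z(r) * dt.
    a = r ** (-n) * dt
    c, s = math.cos(a), math.sin(a)
    vx, vy = vx * c + vy * s, -vx * s + vy * c
    x, y = x + vx * dt, y + vy * dt
    t += dt

# The betatron period is the spacing between successive maxima of r(t).
peaks = [times[i] for i in range(1, len(radii) - 1)
         if radii[i - 1] < radii[i] >= radii[i + 1]]
period = (peaks[-1] - peaks[0]) / (len(peaks) - 1)
expected = 2 * math.pi / math.sqrt(1 - n)  # omega_CBO = omega_0 sqrt(1 - n)
```

The measured period agrees with $2\pi/\sqrt{1-n}$ to well under a percent.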

Weak focusing is often convenient for its simplicity, but as we've seen, it also tends to result in fairly large-amplitude oscillations. This creates a headache for the beam pipe designers, since any portion of the beam that hits the pipe is quite abruptly no longer part of the beam. As a result, most modern experiments instead use strong focusing.

Strong focusing uses magnetic or electrostatic quadrupoles to provide alternating focusing in the horizontal and vertical directions. A single quadrupole focuses in one direction and defocuses in the other, so two quadrupoles in quick succession can provide a net focusing effect both horizontally and vertically. That's a topic for another day.
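That 'net focusing both ways' claim can already be seen with thin-lens transfer matrices. In Python (with a hypothetical focal length $f=2$ m and drift length $L=0.5$ m), a focusing-then-defocusing doublet comes out net focusing in both planes:

```python
import numpy as np

def thin_lens(f):
    """Thin-lens transfer matrix acting on a (position, angle) vector."""
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

def drift(L):
    """Field-free drift of length L."""
    return np.array([[1.0, L], [0.0, 1.0]])

f, L = 2.0, 0.5  # hypothetical focal length and quadrupole spacing, meters

# Horizontal plane: focusing quad, drift, defocusing quad (matrices act
# right to left). The same magnets act with opposite signs vertically.
M_x = thin_lens(-f) @ drift(L) @ thin_lens(f)
M_y = thin_lens(f) @ drift(L) @ thin_lens(-f)

# The net inverse focal length is -M[1][0] = L / f**2 > 0 in BOTH planes,
# so the doublet focuses horizontally and vertically at once.
```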

Saturday, June 8, 2013

The Human Side

I've talked a lot about particle physics and astrophysics, but one thing I haven't discussed much is the human side of an experimental hard science. In medical research, you've heard of the double blind study, in which neither the patients nor the doctors involved know who is in the testing or control groups, to eliminate or minimize the role of the placebo effect in the study. A less-known fact is that other experiments, even those with no human subjects, still have the possibility of human bias. It's called experimenter's bias, and it usually appears in the form of an analyst who unconsciously chooses various parameters to bring the final result closer to some expectation, whether that is in agreement with or contrary to theory. For instance, in particle physics, an analysis typically entails choosing a threshold and the values of several other parameters, which can affect the final result in subtle ways. If it is clear to the experimenter what the effect of each parameter is, then the analysis can be tuned to obtain certain results.

Experimenter's bias can be counteracted by a variety of blinding techniques, very much dependent on the experiment. Some experiments, like the CMS and ATLAS searches for the Higgs, block out the region of interest and allow the analysis to develop based on the remaining data, un-blinding only when the analysis methods are finalized. Others multiply certain results by either 1 or -1 (chosen randomly experiment-wide), so that analysts fiddling with parameters can't tell whether those parameters increase or decrease the final result.

Muon g-2 (it had to come back to this experiment eventually, right?) is aiming to measure the anomalous magnetic moment of the muon, which can be found as the ratio of two values: a spin precession frequency called $\omega_a$ and the value of the magnetic field throughout the storage ring. As a result, the experiment's collaboration (that is, all the people working on it) is divided into three groups: the accelerator group, which deals with providing the muon beam to the storage ring; the field team, which tackles the magnetic field measurement; and the detector team, which deals with the detectors necessary to measure the precession frequency and monitor the beam. The final analysis will involve the measurement of $\omega_a$ and the magnetic field, and in order to ensure no bias in the analysis, these are blinded measurements. In this case, the analysis is blinded by a slight offset in the clocks provided to the two measurement teams. In the final analysis (after blinding is removed), it is crucial that all of the systems are precisely synchronized. But in the preliminary analyses, when all the parameters are still being calibrated and the analysis finalized, the clock sent to the detector team has a frequency of, for instance, 45.000xxx (where those x's are hidden values), and the clock sent to the field team has a frequency of perhaps 52.670yyy (again, the y's are hidden values), so the actual numbers obtained have no bearing on the final result. The hidden values, then, are only revealed when the analysis is finalized, shortly before the result is released. It's fascinating how such a technically complex experiment still has to account for human bias. That's science.
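In spirit, that hidden-offset blinding looks something like this deliberately oversimplified Python sketch (all the numbers here are invented):

```python
import random

# The 'true' frequency the analysts will eventually measure, in arbitrary
# made-up units. Nobody doing the analysis gets to see this directly.
TRUE_FREQUENCY = 229.0

# A trusted party picks a hidden offset (a fixed seed stands in for a
# sealed envelope) and locks it away until the analysis is frozen.
secret = random.Random(42)
hidden_offset = secret.uniform(-0.5, 0.5)

# Analysts only ever work with the blinded value, so tuning cuts and
# calibrations can't (consciously or not) steer the answer.
blinded_measurement = TRUE_FREQUENCY + hidden_offset

# Only once every parameter is finalized is the offset revealed:
unblinded = blinded_measurement - hidden_offset
```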

Wednesday, June 5, 2013

More on pileup

I've discussed the problem of pileup before, so I'd like to briefly discuss one method for getting rid of pileup in the analysis phase of an experiment.

If the hardware has sufficient time resolution to reliably separate two events with a time separation of $n$ nanoseconds, one form of analysis, called shadowing, creates a 'window' of $2n$ nanoseconds centered on the time at which the detector signal reached a maximum. The principle is that statistically, we are just as likely to see another event inside this window as in the $n$ nanoseconds on either side of the window (the "wings").
My attempt to create a schematic for the shadowing
method of pileup analysis.
Note: this isn't at all what any sort of detector pulse looks like. 
In the schematic above, the blue curve is the pulse shape received by the calorimeter (really just a rough stand-in drawn in Mathematica). The maximum is fairly obvious. The window is the light green area, and the wings are the dark green area. As I understand it, the analysis looks at all of the calorimeter signals a certain length of time after time zero in each event. By looking at the number of pulses within the wings, analysts can get a good prediction for the number of pulses within the windows, which correspond to pileup pulses that weren't separated by the temporal resolution. Repeat that analysis for many other times after time zero, and you have a profile of pileup over the course of each event.
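Here's a toy Monte Carlo of that idea in Python (a uniform random pulse train standing in for real calorimeter data, with invented rates and resolutions): for a memoryless background, the number of extra pulses landing within $\pm n$ of a trigger matches the number landing in the wings, so counting the resolvable pairs in the wings predicts the unresolvable pileup in the window.

```python
import numpy as np

rng = np.random.default_rng(0)

RATE = 0.01      # pulses per nanosecond (made-up background rate)
DURATION = 1e7   # total simulated time, in nanoseconds
N = 5.0          # the hardware can resolve pulses more than N ns apart

# A memoryless pulse train: uniform arrival times, sorted for searching.
times = np.sort(rng.uniform(0.0, DURATION, int(RATE * DURATION)))

def pairs_separated_by(lo, hi):
    """Count ordered pulse pairs whose separation dt satisfies lo < dt <= hi."""
    left = np.searchsorted(times, times + lo, side="right")
    right = np.searchsorted(times, times + hi, side="right")
    return int(np.sum(right - left))

in_window = pairs_separated_by(0.0, N)    # pileup too close to resolve
in_wings = pairs_separated_by(N, 2 * N)   # resolvable pairs in the 'wings'

# For a memoryless background, the two counts agree to within statistics,
# so the wings give an unbiased estimate of the hidden pileup.
```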

This form of analysis may not tell you which events in particular experienced pileup, but it allows analysts to determine the overall energy profile of the particles detected, which is what they cared about in the first place. All in all, a success for physicists.

Tuesday, June 4, 2013

Pion decay

As you may or may not know, positive pions decay by
\[
\pi^+ \to \mu^+ + \nu_\mu
\]
Well, this is a little puzzling. Why does it decay to a muon instead of a positron? In general, decays to less massive particles are favorable because of something called phase space (I think), so it seems very strange that the positron decay is so suppressed: 0.01% of the decays, as compared to 99.98% for the muon decay (the remaining 0.01% of decays consist of around 6 other rare processes).

The solution, it turns out, comes because of the maximal parity violation in the weak force I've mentioned before. We've seen that neutrinos are always left-handed and anti-neutrinos are always right-handed. (Remember that the handedness of a particle is a measure of the projection of its spin onto its momentum vector; if the projection points in the same direction as the particle's momentum, it's called right-handed.) Well, it turns out that in general, particles (with positive lepton number) are preferentially left-handed, while antiparticles prefer to be right-handed. The critical bit is that the more massive the particle, the less it has a preference for handedness. That's why neutrinos, which have absolutely minuscule masses, are entirely polarized according to their lepton number.

Based on this, positrons "want" to be right-handed far more than positive muons. Now, recalling that the pion has spin zero, let's look at its decay. If we start in the rest frame of the pion, then by conservation of momentum, the neutrino and positively charged muon that are produced have to be going in opposite directions with equal (magnitude) momenta. We also know that the neutrino has to be left-handed, that is, with a spin opposite (at least in projection) to its momentum. Since the original pion had a spin of zero, conservation of angular momentum tells us that the muon must also have a spin opposite to its momentum, as in the following sketch:
Pion decay in the pion's rest frame. The black arrows show
momenta, while the colored arrows show the direction of the
spin projected onto the particle's momentum (helicity).
Note that this makes the positive muon a left-handed particle, whereas we've seen that it 'wants' to be right-handed. Aha! The same logic would apply to the positron decay as well, and we've seen that less massive particles have much stronger helicity preferences than more massive ones. Thus the decay into a muon is more acceptable from a helicity perspective than the decay into a positron, so pions tend to decay into muons. Magic!
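This helicity suppression can even be made quantitative. The standard tree-level result for the ratio of the two decay rates is $\frac{\Gamma(\pi\to e\nu)}{\Gamma(\pi\to\mu\nu)} = \left(\frac{m_e}{m_\mu}\right)^2\left(\frac{m_\pi^2-m_e^2}{m_\pi^2-m_\mu^2}\right)^2$, which a few lines of Python can evaluate (masses in MeV):

```python
# Tree-level helicity-suppression ratio for charged pion decay:
#   Gamma(pi -> e nu) / Gamma(pi -> mu nu)
#     = (m_e / m_mu)^2 * ((m_pi^2 - m_e^2) / (m_pi^2 - m_mu^2))^2
m_e, m_mu, m_pi = 0.511, 105.658, 139.570  # particle masses, MeV

ratio = (m_e / m_mu) ** 2 * ((m_pi**2 - m_e**2) / (m_pi**2 - m_mu**2)) ** 2
# ratio comes out around 1.28e-4, right in line with the observed ~0.01%
# branching fraction for the positron mode.
```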

Saturday, June 1, 2013

Mathematica's WeatherData

This doesn't have much to do with physics, but I recently got the chance to play with Mathematica's WeatherData, and wanted to share some of the cool things that are possible with it. It's an awesome resource, and if you have the chance, it's fun to play with the incredible amount of data it contains. 

As an example, it's fairly easy to find how monthly temperatures in the last year compare to averages over the last several decades:

monthData = WeatherData["Chicago", "MeanTemperature", {{1900, 1, 1}, {2013, 5, 31}, "Month"}];
getInfo[month_] := Module[{mData, mAvgs, av, curr},
   (* all entries for the given calendar month *)
   mData = Select[monthData, #[[1, 2]] == month &];
   (* long-term average excludes the most recent year *)
   mAvgs = mData[[;; -2, 2]];
   av = Mean[mAvgs];
   curr = mData[[-1, 2]];
   {month, av, curr, 
    ToString[Abs[av - curr]] <> " degrees " <> 
     If[av < curr, "warmer", "colder"]}];
Table[getInfo[i], {i, 1, 12}] // MatrixForm

It's really cool to have such easy access to data about the weather. 

As an aside, the data appears to begin around 1950, and the last year containing complete data (at least according to one of the demonstrations online) is 2008. That said, I had no difficulty finding data for the last few years, so that cutoff may just reflect when the demonstration was published (2009). 

With the data, it is also fairly straightforward to find average yearly temperatures over time:

yearData = WeatherData["Chicago", "MeanTemperature", {{1952, 1, 1}, {2012, 12, 31}, "Year"}];
DateListPlot[yearData, Joined -> True]
[Plot: average yearly temperatures in Chicago, 1952-2012]
You can sort of see here how the temperature generally increases over the last several decades, but the fluctuations kind of drown it out. Instead, let's look at the average temperature each day of the year for two different decades: the 1960s and the 2000s. The data here is averaged over 10-year samples, with a smoothing length of 10 days in either direction to eliminate the essentially random day-to-day jitter.
getDayData[month_, day_, data_] := Module[{dData, dayAvgs},
  dData = Select[data, And[#[[1]][[2]] == month, #[[1]][[3]] == day] &];
  dayAvgs = Table[dData[[i]][[2]], {i, 1, Length[dData]}];
  {{0, month, day}, Mean[dayAvgs]}]; 

DaysInMonth = {31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31};

getAllDayDataRange[yearStart_, yearEnd_] := Module[{dayData},
  dayData = WeatherData["Chicago", "MeanTemperature",
    {{yearStart, 1, 1}, {yearEnd, 12, 31}, "Day"}];
  Flatten[
   Table[getDayData[m, d, dayData], {m, 1, 12}, {d, 1, DaysInMonth[[m]]}], 1]
  ];

sixtiesData = getAllDayDataRange[1960, 1969];
smoothedSixties = sixtiesData[[11 ;; Length[sixtiesData] - 10]];
smoothedSixties[[All, 2]] = MovingAverage[sixtiesData[[All, 2]], 21];

thousandsData = getAllDayDataRange[2000, 2009];
smoothedThousands = thousandsData[[11 ;; Length[thousandsData] - 10]];
smoothedThousands[[All, 2]] = MovingAverage[thousandsData[[All, 2]], 21];
[Plot: smoothed daily average temperatures in Chicago, 1960s vs. 2000s]
Now that is a striking change! 

I just wanted to share a neat resource for climate data to play with. Here I've barely scratched the surface of the information it contains; there's also information on pressure, humidity, wind, cloud conditions...it's a source of tons of data.