Tuesday, October 15, 2013

Rail guns!

A railgun is a fairly straightforward electromagnetic device that converts electrical energy into linear mechanical motion.

The general principle of a railgun is very simple: two rails have a voltage difference applied to them. The device being accelerated rests between the rails and has a conducting cross-link of some sort, called the armature, that allows a current to flow from the high-potential rail into the low-potential one. The current through the rails produces a circulating magnetic field around each rail, which results in a net magnetic field normal to the plane containing the rails and the device under acceleration. The current through the armature consists of moving charges, and these feel the magnetic field according to the Lorentz force. The result is that the armature and the affixed device experience a force parallel to the rails, and in the absence of other forces, they accelerate in that direction. It is worth noting that no matter whether the potential difference across the rails is positive or negative, the device and armature will be accelerated away from the voltage-source end of the rails: flipping the polarity reverses both the current and the magnetic field, so the force direction stays the same.

Railguns are pretty cool devices, but they tend to be very high-current and high-power constructions, which means they have to deal with a lot of heat dissipation, not to mention finding awesome power sources that can provide enormous currents. As an example, let's look at a sample system.

But first, a bit of electromagnetism for the sake of calculations:

The magnetic field at a distance $r$ from a long, straight wire carrying current $i$ is $B=\frac{\mu_0 i}{2\pi r}$, where $\mu_0$ is the magnetic constant; we'll approximate each rail as such a wire. Define a coordinate $x$ that starts at the center of one rail and is positive in the direction of the other rail. If the rails each have radius $r$ and their centers are separated by a distance $d$, then the magnetic field in the region between them is given by
\[B(x)=\frac{\mu_0 i}{2\pi}\left( \frac{1}{x}+\frac{1}{d-x} \right).\]
The force on a wire carrying current $i$ in a perpendicular magnetic field $B$ is $F=i\ell B$, where $\ell$ is the length of the wire. Integrating over the length of the armature, we therefore see that
\begin{align*}
    F &= \int_r^{d-r}i B(x)\,dx\\
    &= \frac{\mu_0 i^2 d}{2\pi}\int_r^{d-r}\frac{dx}{x(d-x)}\\
    &= \frac{\mu_0 i^2}{\pi}\ln\left( \frac{d-r}{r} \right)\\
    &\approx \frac{\mu_0 i^2}{\pi}\ln\left( \frac{d}{r} \right)
\end{align*}
where we have assumed that $r\ll d$ in the last step. Based on this, we can calculate the current necessary to generate a given force in a system:
\[
    i=\sqrt\frac{\pi F}{\mu_0 \ln(d/r)}
\]
The resulting power dissipation is given by $P=i^2 R$, where $R$ is the overall resistance of the rail-armature system; with the armature at the far end of the rails, that resistance is at most $\rho(2\ell+d)$, where $\ell$ is the length of the rails and $\rho$ is the resistance per unit length of the wire and armature. We see
\[
    P=\frac{\pi F}{\mu_0 \ln(d/r)}\rho(2\ell+d)
\]
Let's choose some arbitrary parameters for our system:
    $\ell = 40$ cm (length of the rails)
    $d = 10$ cm (distance between the rails)
    $r = 0.5$ cm (radius of the rails and armature)
    $\rho = 1.608 \Omega/km$ (resistance per length of wire)

If we want to generate a force of $F=200$ N, then the current necessary is on the order of 13 kA (eek, that's a lot of current!), and the power dissipated in the rails and armature is roughly 240 kW, a couple hundred household microwaves' worth. The upside is that with this kind of wire, it only takes about 19 V to drive these currents, so the voltage requirements are reasonable. The limiting factor in this power source will be the current draw necessary, which will be limited by any battery's internal resistance. A chief expense of this device will therefore be the use of a high-quality battery certified for absurdly high currents.
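For concreteness, here is a minimal Python sketch (my own addition, not part of the original post) that simply plugs the sample parameters into the formulas derived above; the variable names are mine.

```python
import math

mu0 = 4 * math.pi * 1e-7   # magnetic constant, T*m/A

F   = 200.0      # desired force, N
ell = 0.40       # rail length, m
d   = 0.10       # rail separation, m
r   = 0.005      # rail/armature radius, m
rho = 1.608e-3   # resistance per unit length, ohm/m (i.e. 1.608 ohm/km)

# Current needed for the requested force: i = sqrt(pi*F / (mu0 * ln(d/r)))
i = math.sqrt(math.pi * F / (mu0 * math.log(d / r)))

# Worst-case loop resistance, dissipated power, and drive voltage
R = rho * (2 * ell + d)
P = i**2 * R
V = i * R

print(f"current    ~ {i/1e3:.1f} kA")
print(f"resistance ~ {R*1e3:.2f} mOhm")
print(f"power      ~ {P/1e3:.0f} kW")
print(f"voltage    ~ {V:.0f} V")
```

Running it gives about 12.9 kA, 1.45 mOhm, 240 kW, and 19 V, which is where the numbers above come from.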

Friday, September 27, 2013

Full-wave rectifiers

I mentioned half-wave rectifiers in my last post as a pretty neat application of diodes. Unsurprisingly, it turns out we can do way better at converting alternating to direct current if we're willing to use some more diodes. Just like before, we want to input an alternating-current voltage source, which means the voltage difference between the two inputs oscillates over time. In the end, we would ideally like a completely constant voltage to be output. Well, check out this circuit:
Image adapted from electrician Joe Duncanson's highly informative blog.
(Notational note: The triangle-bar thing is the electrical symbol for a diode, which only allows current to flow in the direction the arrow points.)

As you can see, whether the top or the bottom input node is at a higher voltage, current can and will flow toward the upper output node. Then the current returning from the load through the (lower-potential) bottom output node flows through the diode diamond to whichever of the two inputs is at lower potential. In this way, we ensure that the output voltage difference (voltage of the top output minus voltage of the bottom output) is always positive. More specifically, if the diodes are ideal, then the voltage we'd expect out of this circuit would be
Input and output voltages for an ideal full rectifier.
But as usual, the real world isn't quite so friendly. In reality, each diode has a forward voltage drop (plus a bit of internal resistance), so it doesn't start letting current through until there's enough voltage across it, and whatever voltage the diodes eat never makes it to the output. As a result, the output voltage looks more like this:
Input and output voltages for a (slightly) more realistic full rectifier.
We're still not quite at the constant output voltage we set out to find, but we're a good deal closer than when we stuck with just one diode. Hopefully I can write a bit about capacitive filtering, which goes a long way towards smoothing this curve out, sometime in the near future.
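To make the comparison concrete, here's a small Python sketch (my own addition, not from the post) of an ideal bridge versus one with a forward drop in each diode; the 0.7 V figure is just an assumed typical value for silicon diodes.

```python
import numpy as np

t    = np.linspace(0.0, 2 / 60.0, 400)       # two cycles of a 60 Hz input
v_in = 5.0 * np.sin(2 * np.pi * 60 * t)      # 5 V amplitude source

# Ideal bridge: the output is just the absolute value of the input.
v_ideal = np.abs(v_in)

# More realistic bridge: the current always passes through two diodes in
# series, each dropping ~0.7 V, and the output can never go negative.
v_drop = 0.7
v_real = np.clip(np.abs(v_in) - 2 * v_drop, 0.0, None)
```

Plotting v_ideal and v_real against t reproduces the two figures above, flat spots near the zero crossings and all.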

Tuesday, September 24, 2013

Diodes

I got the chance to play with some diodes recently, and I came away with the fact that diodes are really cool. Abstractly, a diode is simply a two-terminal electronic device (it's a semiconductor component, even though it has none of the transistors or logic gates or any fun stuff like that) which prefers for electric current to pass through it in just one direction. In theory, if you apply a forward bias to an ideal diode, that is, you apply a voltage to it such that electric current "wants" to flow in the preferred direction, then the diode should behave just like a wire. But if you apply a reverse bias, trying to force electric current through the other way, then the diode opposes you with (theoretically) infinite resistance. Naturally, nothing behaves quite that nicely, and in real life, diodes break down if you apply too strong a reverse bias. (The exception is Zener diodes, which are designed to be reverse biased, and they have the cool property that when you run current through them in reverse breakdown, the voltage across the diode itself stays nearly constant - weird, right?) The idea remains, though, that you can use a diode as a sort of one-way gate in electric circuits, and this turns out to be a very handy property.

For example, one application of diodes is in rectifier circuits, which convert alternating current (AC) into direct current (DC). In other words, they can be used to allow current to flow in only one direction in a circuit. The simplest type is a half-wave rectifier, which consists of one boring diode connecting your input voltage (AC) to your output voltage (DC). In the ideal case, this results in an output voltage identical to the input, except never negative, like this:
Input and output voltages for an ideal half-wave rectifier.
But, as usual, I don't have access to the magical ideal physics stockroom, so I have to make do with real-life, physical diodes. And those come with a forward voltage drop and a little bit of internal resistance. That means it takes a non-zero positive voltage across them to actually get current to flow, and when current does flow, some of the voltage is dissipated into heat inside the diode, leaving less to appear across the output. So in reality, the output voltage from a half-wave rectifier looks more like this:
Input and output voltages for a real, physical half-wave rectifier.
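If you like seeing this numerically, here's a tiny Python sketch (my addition, not the post's) of the half-wave output; the 0.7 V forward drop is just an assumed typical silicon value.

```python
import math

def half_wave_output(v_in, v_drop=0.7):
    """One diode in series with the load: block the negative half-cycle
    and lose the forward drop on the positive one."""
    return max(v_in - v_drop, 0.0)

# A 5 V amplitude, 60 Hz sine input sampled over one cycle
for k in range(9):
    t = k / (8 * 60.0)                         # seconds
    v = 5.0 * math.sin(2 * math.pi * 60 * t)
    print(f"in = {v:+5.2f} V   out = {half_wave_output(v):4.2f} V")
```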
There's a lot more you can do with diodes, but even this simple example is pretty neat. Someday, I might write up full-wave rectifiers and capacitive filter circuits. Those are pretty cool, too. Conclusion: physics and electronics are awesome.

Sunday, September 22, 2013

Galaxy spirals

Galaxies fall into several categories, which I think I've discussed before. The simplest two are elliptical and spiral. Those classifications are fairly self-explanatory; an elliptical galaxy is a fuzzy blob of stars, typically in the form of a three-dimensional ellipsoid, while a spiral galaxy has those distinctive spiral arms we all know and love from Hubble pictures. If you want to get even more specific, there are two forms of spiral galaxy: the simple spiral galaxy and the barred spiral. The barred spiral galaxies have a distinctive-looking bar in their centers, and while they look pretty strange, we're actually living in one right now! Yep, the Milky Way is a barred spiral galaxy.

In both kinds of spirals, stars orbit roughly circularly around the center, just like our planets orbit the Sun. But because the galaxy is roughly a disk with mass, the galaxy's stars also exhibit an up-down oscillation about the plane of the disk as they orbit.
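To put a rough number on that bobbing motion, here's a back-of-the-envelope Python sketch (my own, not from the post): near the midplane of a thin disk, the vertical restoring force is approximately that of a harmonic oscillator with $\nu^2 = 4\pi G\rho$, where $\rho$ is the local mass density. The density value below is an assumed round number for the solar neighborhood, purely for illustration.

```python
import math

G    = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
msun = 1.989e30           # solar mass, kg
pc   = 3.086e16           # parsec, m
myr  = 3.156e13           # megayear, s

rho = 0.1 * msun / pc**3  # assumed local disk density, ~0.1 Msun per cubic parsec

nu = math.sqrt(4 * math.pi * G * rho)   # vertical oscillation frequency, rad/s
print(f"vertical period ~ {2 * math.pi / nu / myr:.0f} Myr")
```

With that assumed density, the bobbing period comes out to several tens of millions of years, noticeably shorter than the couple-hundred-million-year orbit around the galactic center.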

In an elliptical galaxy, on the other hand, there isn't a nice, uniform central gravitational potential, so stars, especially those far from the center, trace out bizarre three-dimensional paths. It's a really fascinating problem to analyze both the orbits of individual stars and the general classes of stellar orbits in elliptical galaxies.

Thursday, September 19, 2013

Cepheid variables!

It's been a while since I wrote anything about astrophysics, so it's high time I came back to a really cool topic: Cepheid variable stars. These provide a so-called standard candle that allows astronomers to determine extraordinarily long distances to far-away galaxies.

At its root, a Cepheid variable star is one that breathes in and out, either in its fundamental mode of oscillation (the whole star expanding and contracting spherically) or in its first overtone (still a radial in-and-out motion, but with a node partway out from the center). As it contracts, its surface temperature increases, so its luminosity increases. In most cases, these sorts of oscillations would damp out pretty quickly, but in Cepheid variables, there's a shell of partially ionized helium inside the star. As the star contracts, the internal temperature increases, and this results in extra ionization in the helium layer. As a result, the opacity of that thin shell increases, so it traps more of the radiation streaming out from the interior; the trapped energy heats the layer and raises the pressure beneath it, pushing the helium layer (and thus the rest of the star) back outwards. At this point the helium ionization decreases thanks to a decrease in temperature, the opacity falls, and the whole cycle begins again. To make a long story short, the partially ionized helium layer allows the star to expand and contract with very little damping pretty much indefinitely. Most importantly, there is a tight relationship between the period of oscillation and the star's luminosity (the period-luminosity relation), which means that by observing how long a brightening and dimming cycle takes, we know how intrinsically bright the star really is. Based on this and how bright it appears to us, we can determine its distance!
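As a toy illustration of how that distance determination works, here's a short Python sketch (my own, not from the post). The period-luminosity coefficients are illustrative placeholders rather than a quoted calibration; the step that really matters is the distance modulus at the end.

```python
import math

def cepheid_distance_pc(period_days, apparent_mag):
    # Assumed V-band period-luminosity relation: M = a*(log10(P) - 1) + b.
    # The slope and zero point here are rough, illustrative values.
    a, b = -2.8, -4.0
    absolute_mag = a * (math.log10(period_days) - 1.0) + b

    # Distance modulus: m - M = 5*log10(d / 10 pc)
    return 10.0 ** ((apparent_mag - absolute_mag) / 5.0 + 1.0)

# Example: a Cepheid with a 10-day period that appears at magnitude 20
print(f"{cepheid_distance_pc(10.0, 20.0):.2e} pc")
```

With these made-up numbers, a 10-day Cepheid appearing at magnitude 20 comes out to roughly 600 kpc away; swap in a real calibration and real photometry, and this is the game astronomers actually play.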

One drawback of these candles is that they tend to be large (apparently 4-20 solar masses), and therefore short-lived, stars. That means it's hard to find Cepheids in particularly elderly galaxies. But hey, they're one of the coolest (hottest?) standard candles in the astrophysics arsenal, so I'm not complaining!

Tuesday, September 17, 2013

The foundations of statistical mechanics

Many problems that students solve in introductory physics classes have to do with one or two interacting bodies. The two-body problem, in which two objects interact with a force dependent on the distance between them, is often exactly solvable. Take the Earth and the Moon, or the Earth and the Sun, and you can exactly solve the simple equations of motion by reducing the two-body system to an equivalent one-body system. The three-body problem, on the other hand, is intractable. Add Jupiter or another planet to the mix and suddenly all the math in the world won't help you except to numerically approximate the system's behavior. There are all sorts of approximations that can help you out, but the exact solution for arbitrary conditions is elusive. This, of course, makes analysis of complicated systems...well, complicated. Luckily for physicists, while the 3-body, 4-body, and 5-body systems are difficult or impossible mathematically, the $10^{23}$-body system is actually solvable in a statistical sense. The examination of systems with many, many constituent parts (atoms, for instance) forms the basis of statistical mechanics, an incredibly powerful framework that describes large systems and explains tons of other physics into the bargain.

Statistical mechanics is based on just two simple hypotheses. As far as I know, these two hypotheses aren't provable from more basic principles, but taking them as axioms gives all sorts of useful and accurate results. In order to discuss these hypotheses, I first need to define a couple of terms. Any large system can be described in terms of a (very large) collection of microparameters. These do things like saying exactly where each particle is and what its momentum is, or the energy level of each particle, and so on. The exact microparameters will depend on the system we're examining. In general, it's very difficult (not to mention time-consuming) to measure microparameters. Instead, we define macroparameters to describe the system as a whole. Things like temperature, pressure, or the total number of spin-up atoms are all macroparameters, and again, we tend to define different macroparameters for different systems.

With those definitions out of the way, let's take a look at the first hypothesis of statistical mechanics: the ergodicity hypothesis. It states that for any system, there is an equilibrium state, in which the macroparameters describing the system (no matter which ones we choose) are constant, and that furthermore, the values of these macroparameters are given by the unweighted statistical average over all the accessible microstates of the system (those compatible with its constraints, like the total energy).

This is a very profound statement, on a variety of levels. The existence of an equilibrium state for any old system is moderately intuitive, but it seems a little strange to have such an overarching statement of its existence. More than that, though, it gives an extraordinarily helpful way to find the equilibrium values of all the macroparameters: just take the average of all the microstates! Naturally, the math behind such averages is a little messy at times, but it's of critical importance that we can so easily find a description of equilibrium.
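Here's a toy Python sketch (my own addition) of that averaging in action: take every microstate of N particles that can each sit in the left or right half of a box, weight them all equally, and compute a macroparameter (the fraction on the left).

```python
from itertools import product

N = 12  # small enough to enumerate all 2**N microstates explicitly

# Each microstate is a tuple of 0s (right half) and 1s (left half).
microstates = list(product([0, 1], repeat=N))
fractions   = [sum(state) / N for state in microstates]

# Unweighted average over all microstates gives the equilibrium macroparameter.
mean   = sum(fractions) / len(fractions)
spread = (sum((f - mean) ** 2 for f in fractions) / len(fractions)) ** 0.5

print(f"equilibrium fraction on the left: {mean:.3f}")   # 0.500
print(f"typical fluctuation: {spread:.3f}")              # shrinks like 1/sqrt(N)
```

With $10^{23}$ particles instead of 12, that fluctuation becomes utterly negligible, which is part of why macroparameters look constant in equilibrium.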

Alright, time for one more definition: let's call any system in which the microstates change much faster than macroscopic time scales an ergodic system. In order for a system to experience this flavor of fast evolution, it is necessary to have interactions between particles in the system - imagine taking a collection of noninteracting gas particles and squishing them into a small ball at the center of a room; without interactions, the particles never exchange energy or momentum, so the system won't evolve toward anything resembling equilibrium.

The second critical hypothesis of statistical mechanics is called the relaxation hypothesis, and it states that any ergodic system always evolves towards equilibrium. This translates roughly into the second law of thermodynamics, which states that entropy always increases.

Based on these, we can derive (with sufficient approximations) things like the ideal gas law, rigorous definitions of temperature and pressure, and in general the behavior of very large systems. 

Friday, September 13, 2013

Cloud chambers!

One of the original detectors in physics was the cloud chamber. The basic idea is that you have a very pure gas with high humidity and cool it off or reduce its pressure until it's on the verge of condensing. Since you have essentially a pure gas without any impurities or dust particles, there's nothing for the water to condense on. As a result, the air in the chamber is supersaturated, and it will condense at the introduction of any small impurity. Luckily for physicists, the chamber can be tuned so that the trail of ions left by a charged particle passing through the detector is just enough to seed condensation. So a charged particle leaves a track of condensed droplets as it passes through the gas. If you're lucky enough to have a particle decay or otherwise interact inside the detector, you can see all the (charged) tracks involved in the interaction.

Once you have that set up, you can produce a nice, uniform magnetic field throughout the detector, and then a low-momentum particle will spiral in a characteristic way, while a higher-momentum particle will curve only slightly, since the radius of curvature of a track is proportional to the particle's momentum. At this point, the early physicists took pictures of the chamber for later analysis (often by human "computers"). The resulting images are entrancing and emblematic of the early days of particle physics.

A somewhat stylized cloud chamber image, courtesy of the CERN Courier.
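For a sense of the numbers, here's a small Python sketch (mine, not from the post) of the standard curvature-to-momentum conversion, $p_\perp \approx 0.3\,q\,B\,r$ in GeV/$c$ with $B$ in tesla and $r$ in meters; the example track radius and field are made-up values.

```python
def transverse_momentum_gev(radius_m, b_field_t, charge_e=1):
    """Momentum component perpendicular to B, from the track's radius of curvature."""
    return 0.3 * charge_e * b_field_t * radius_m

# Example: a singly charged track curling with a 20 cm radius in a 1.5 T field
p = transverse_momentum_gev(0.20, 1.5)
print(f"p_T ~ {p:.2f} GeV/c ({p * 1000:.0f} MeV/c)")
```

A track that curls up tightly inside the chamber is a low-momentum particle; one that crosses nearly straight is carrying far more.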

Thursday, September 12, 2013

The trouble with gluons...

If you've ever heard of Feynman diagrams, you've probably seen the simplest sort - the ones with the fewest possible interactions. What you may not have seen much of are loop diagrams - Feynman diagrams with the same starting and ending states but with extra, seemingly-extraneous interactions in the middle. The funky thing about the Feynman path integral formulation of particle physics is that in order to calculate the amplitude (related to the probability) for an interaction, you need to account for all possible diagrams with the start and end states you care about. That means, yes, all of those loops on loops on loops, which we call higher-order diagrams. Luckily, for forces like the weak force and electromagnetism, the amount that higher-order diagrams contribute to the amplitude decreases with complexity, since each extra vertex brings in another factor of a small coupling constant. As a result, it's often a reasonable approximation to use only the simplest diagram to describe an interaction, and even professionals often content themselves with fourth- or fifth-order diagram contributions.

Less fortunately, the strong nuclear force is less obliging. At the low energies where quarks bind into protons and neutrons, the strong coupling constant is of order one, so rather than higher-order diagrams contributing less and less to the total amplitude, they contribute just as much or more, and the perturbative expansion stops being useful. This trouble with gluons (those are the force carriers for the strong force) is one of a couple of traits that makes the strong nuclear force such a pain for theorists to work with.

Tuesday, September 10, 2013

An awesome tidbit about particles

A fun fact recently came up in a discussion about particle physics: the fact that elementary particles can have mass but take up no space. This seems a little weird at first. Based on our everyday experience, volume and mass seem to come hand in hand. But like so many other classical ideas, this preconception breaks down in the subatomic world.

How does that work? Well, volume and mass are actually very different concepts. Mass determines how a particle reacts to an external force, according to $F=ma$ (it also has to do with gravity based on its gravitational mass, but that's a whole separate story - one of these days I'll figure out what's up with general relativity, and in any case, subatomic particles don't interact with gravity enough to worry about), and in the world of elementary particles, a particle's mass is determined by its interaction with the Higgs field.

Volume, on the other hand, gives some measure of how much physical space an object occupies. In a particle like a proton, which has three quarks bound together in a complicated way by the strong nuclear force, that volume is determined by the quarks' interactions. It isn't the quarks that take up space, as much as the fact that they can't all occupy the same space, by the laws of quantum mechanics and inter-particle interactions. Similarly, the "space" that an atom takes up is dictated not by the space taken by protons and electrons, but by their electromagnetic attraction and the necessary space for electron orbitals and clouds. In a sense, we define the volume of an object in terms of its internal interactions. But if you take a look at a single particle like an electron, there aren't (as far as we know) any constituent parts. So what would it mean for there to be internal interactions to give it volume? It's practically nonsensical.

This is a really strange concept in the already-slightly-weird world of particle physics, but like so many such issues, it's a lot of fun to wrestle with.

Wednesday, September 4, 2013

Mushroom clouds

I suspect that if you're reading this blog right now, you've seen the fear- and awe-inspiring pictures of mushroom clouds over nuclear tests. What on earth causes those? Part of it is a result of all the dust and debris that is churned up by such an explosion. The ground underneath the explosion is partially vaporized, and the rising plume of hot air pulls in nearby air (and dust and debris) as it heads upwards. (As an aside, some fireballs can rise as fast as 300 miles per hour!) At a certain height, the outer edge of the plume cools and slows while the center keeps rising, so the cloud rolls over on itself into a toroidal convection cell, which forms the donut-shaped cap of the mushroom. Note that all this happens a while after the initial detonation, after the fireball has done some expanding and shock-wave generation of its own.

Another interesting issue is the nice circular rings that sometimes form around the stem of the mushroom cloud. These are actually partial condensation clouds, which are formed in the negative phase of the shock wave, behind the obvious high-pressure shock front. Previously, I've discussed complete condensation clouds, which appear as continuous surfaces behind, for instance, an airplane in supersonic flight. The basic idea here is the same: as the front of lower-pressure air expands outward, the air inside it expands and therefore cools, so if there's enough humidity, the water in the air condenses into a cloud. The difference is that in supersonic flight, the shock wave is localized enough that the whole lower-pressure region condenses. In atmospheric nuclear weapon detonations, though, the shock wave is large enough that the structure of the atmosphere comes into play. The rings you see, then, indicate high-humidity layers in the atmosphere, which condense in rings as the negative phase of the shock wave intersects them.

Thank you to Jonathon Vigh's thesis for much of the information contained here.