Wednesday, January 13, 2016
In a 1994 paper, Gary T. Leavens explained, in a wonderfully graspable manner, that fields in physics can actually be viewed as curried functions. If you understand what curried functions are, then you already have a good idea of what fields are—bridges like this are incredibly helpful to those of us peering in from either side of the divide (for me, from the CS side). Leavens's examples were in Scheme, whose inverted syntax might make them difficult for the unpracticed (such as myself) to follow, hence this translation (hopefully helpful to you too). My examples here are in F#, where units of measure are especially helpful in making things clear.
First, some definitions:
[<Measure>] type kg
[<Measure>] type m
[<Measure>] type s
[<Measure>] type N = kg * m / s^2

// the universal gravitational constant
let G = 6.674e-11<N * m^2 / kg^2>

let grav_force (m1:float<kg>) (r:float<m>) (m2:float<kg>) =
    if r = 0.<m> then 0.<N>
    else ((m1 * m2) * G) / (r * r)
This is a curried function: a function that takes a value and returns a function specialized to the passed-in value.
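As a minimal illustration of currying (a toy example of mine, not from the paper):
let scale (a:float) (b:float) = a * b
let double = scale 2.0   // partial application: double : float -> float
double 3.5               // 7.0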
Returning to our function grav_force: when we put it into F#, we get the type:
val grav_force : m1:float<kg> -> r:float<m> -> m2:float<kg> -> float<N>
So everything looks good: it takes a mass (such as Earth's) and returns a function that takes a distance (e.g., Earth's radius, for the surface), which itself returns a function from masses (your mass) to forces (your weight). This function is in fact the definition of a (scalar) gravitational field, but more on that later. We can specialize the function and get Earth's field.
let earth_mass = 5.96e24<kg>
let earth_rad = 6.37e6<m>
let earth_grav = grav_force earth_mass
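Asking F# for the type of this partially applied value gives something along these lines:
val earth_grav : (float<m> -> float<kg> -> float<N>)
That is, earth_grav is itself a function, awaiting a distance and then a mass.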
Similarly, we can specialize to the surface (by applying a distance to earth_grav), or we can compute the weight of an object at a particular distance by passing in both a distance and a mass. For example, noticing that N = kg m/s², we can compute Earth's acceleration at the surface simply by passing in 1 kg and Earth's radius, for a value of 9.802877992 N (which is numerically just m/s² in this case). But we can do more interesting things with curried functions. We can flip the field function to compute the gravitational force at a list of distances from Earth's surface (a bit more flexible than how physics is typically taught, isn't it?):
let flip f a b = f b a
let flipped_field = flip earth_grav 1.<kg>   // distance -> force on a 1 kg mass
[for dist in 0.0<m> .. 1e5<m> .. 1e6<m> -> earth_rad + dist] |> List.map flipped_field
The map applies our function to each distance in the list to get a list of forces. So at a distance of 1,000 km above the surface, the gravitational acceleration is ~7.32 m/s².
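As a quick spot check (these two lines are mine, not the post's):
grav_force earth_mass earth_rad 1.<kg>   // 9.802877992<N>, the surface value from earlier
flipped_field (earth_rad + 1e6<m>)       // ≈ 7.32<N>, matching the list entry at 1,000 km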
Below you can look at the gravitational acceleration on the surfaces of the Moon and Mars. The Moon has a mere acceleration of 1.62 m/s² and Mars only just over 2x that—it is for this reason that many are concerned about the health effects (on musculoskeletal integrity) of an extended mission to Mars.
Additionally, we can look at the Earth/Moon system. The Moon and the Earth each exert a force of $2 \times 10^{20}\,N$ on the other. Both are forever falling towards each other, with the caveat that, on the odd chance they should ever meet, things would not end well for either of them. A tragic love story if ever there was one.
let mass_moon = 7.3459e22<kg>
let mass_mars = 6.41693e23<kg>
let rad_moon  = 1737.5e3<m>
let rad_mars  = 3386.0e3<m>
let mean_moon_dist = 3.85e8<m>

grav_force mass_moon rad_moon 1.<kg> ,          // ~1.62<N>: lunar surface gravity on 1 kg
grav_force mass_mars rad_mars 1.<kg> ,          // ~3.73<N>: martian surface gravity on 1 kg
grav_force earth_mass mean_moon_dist mass_moon  // ~2e20<N>: the Earth/Moon force
To make things a bit more concrete than a list of numbers, I graph some commonly known locations and the gravitational acceleration felt there. Looking at these, I can't help but think of the common conception (held by myself for a long time too) that space is "out there" and far away; when really, space (LEO, at least) is literally within walking distance. Most satellites, the International Space Station included, aren't floating in space; they're still deep within Earth's gravitational well!
// altitudes above Earth's surface, in kilometers
let pois = [| "~Low Earth Orbit Min"  , 160.
              "~Low Earth Orbit Max"  , 2000.
              "GEO Synch"             , 40000.
              "GPS"                   , 20350.
              "ISS Avg"               , 382.5
              "Hubble Space Telescope", 595. |]

// convert km to m, add Earth's radius, then evaluate the field there
let gs = pois |> Array.map (snd >> ( * ) 1000.<m> >> (+) earth_rad >> flipped_field)
Array.zip pois gs
|> Array.sortBy snd
|> Array.map (fun ((l,n),g) -> [|l; string n; string g|])
|> tohtmlTable ["Location"; "Altitude (km)"; "g (m/s²)"]
"There is an art, it says, or rather, a knack to flying. The knack lies in learning how to throw yourself at the ground and miss. Pick a nice day, [The Hitchhiker's Guide to the Galaxy] suggests, and try it.
The first part is easy. All it requires is simply the ability to throw yourself forward with all your weight, and the willingness not to mind that it's going to hurt.
That is, it's going to hurt if you fail to miss the ground. Most people fail to miss the ground, and if they are really trying properly, the likelihood is that they will fail to miss it fairly hard.
Clearly, it is the second part, the missing, which presents the difficulties..."
--Douglas Adams
Even basic physics has a lot of ideas that grind against intuition. The idea that objects in motion remain in motion unless acted on by a force is one. The idea that gravity, and the concept of weight as we feel it in the everyday, is fictitious is another counter-intuitive notion. Orbits combine these two. Firstly, because of a relative lack of friction, a large horizontal velocity can last a long time (meanwhile, we are used to large velocities requiring a continuous application of force); an object can continue to miss the ground without any extra expenditure of energy. Together with the idea of weightlessness as the more natural state for two objects interacting in a gravitational field, orbits-as-falling becomes a bit easier to grasp: as you fall, you follow the curvature of the earth. It also helps to imagine it as an animation, frame by frame; stepping down the frames makes it look more like falling, almost as if we're unrolling the interaction through time.
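To make the frame-by-frame picture concrete, here is a toy sketch of mine (not from the original post): a naive semi-implicit Euler stepper in which the only thing gravity ever does is bend the velocity a little each frame. The time step and the initial sideways speed are rough guesses for a ~400 km orbit.
let orbit_frames n (dt:float<s>) =
    let mu = G * earth_mass                                // units reduce to m^3/s^2
    let rec go i (x:float<m>, y:float<m>) (vx:float<m/s>, vy:float<m/s>) acc =
        if i = n then List.rev acc
        else
            let r = sqrt (float (x * x + y * y)) * 1.<m>   // distance from Earth's center
            let ax, ay = -mu * x / (r * r * r), -mu * y / (r * r * r)
            let vx', vy' = vx + ax * dt, vy + ay * dt      // gravity bends the velocity...
            go (i + 1) (x + vx' * dt, y + vy' * dt) (vx', vy') ((x, y) :: acc)
    // start ~400 km up with a large horizontal speed, and keep missing the ground
    go 0 (earth_rad + 4e5<m>, 0.<m>) (0.<m/s>, 7.67e3<m/s>) []
Each returned (x, y) frame falls toward Earth yet never lands; set the initial speed to zero and the same stepper traces ordinary falling instead.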
At this point, I can't help but note that I've veered a bit far afield. The original intention of this piece was to connect curried functions from functional programming to fields in physics, and yet here I am talking about how falling is flying with no pesky forces in the way. Nonetheless, I hope the above has been an effective demonstration of the advantages of a computational approach to learning topics commonly thought of as challenging (there will be more such demonstrations below). In reality, much of the difficulty is incidental rather than necessary: for the student, it is grappling with inconsistent and often unmotivated notation, as well as plain general unfamiliarity or counter-intuitiveness; for the teacher, the baggage of being wedded to a centuries-old tradition of how subjects must be taught. Much pointless complexity arises from the interaction of all those variables.
Right, back on topic. The real world has more than just one dimension. Depending on whom you ask, it can be anywhere from 3 or 4 to 11 to 26 or more; most real-world (classical) problems settle on 3, however. And here, again, we see the advantage of a computational approach: it takes only a few lines to generalize our methodology to vectors (and though the code is generic over N dimensions, only for 3—or 2 in a few places—do the operations we're performing really make sense).
// helper functions (assumed; defined elsewhere in the original post)
let vec_mag (v:float<'u> []) = sqrt (v |> Array.sumBy (fun x -> float x * float x))
let cube (x:float<'u>) = x * x * x

let grav_field (m1:float<kg>) (r:float<m> []) (m2:float<kg>) =
    let mag = vec_mag r * 1.<m>
    let scale = (-G * m1 * m2) / cube mag
    r |> Array.map (( * ) scale)
The function is the same as before except that, instead of returning a single number, we now return an array of numbers representing our force vector; the distance input has likewise been replaced with a position vector. Below, we apply some masses at different locations to get force vectors. Everything works correctly.
grav_field earth_mass [|1.<m>; 0.<m>; earth_rad|] 68.0<kg> ,
grav_field earth_mass [|1.<m>; 12.<m>; earth_rad|] 1.0<kg>
The functional way of viewing fields is that they are curried functions; through partial application they can either return more specialized functions or, with enough inputs, (e.g.) a vector of forces. For example, the common static field is a function that takes an object of some appropriate type (say, coulombs or kilograms) and returns a function accepting position vectors, which itself returns a function taking objects (of the same type) and mapping them to forces. So by applying (e.g.) a mass M to the gravitational field function, we specialize to talking about the gravitational field around M. Further specifying a position vector allows us to talk about the force at that position for various other masses.
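Spelling the chain out with the vector version (the intermediate names here are mine):
let around_earth = grav_field earth_mass                     // float<m> [] -> float<kg> -> float<N> []
let at_surface   = around_earth [|0.<m>; 0.<m>; earth_rad|]  // float<kg> -> float<N> []
let your_weight  = at_surface 68.<kg>                        // a force vector, in newtons
Each application peels off one layer of the field, exactly as partial application peels off one argument.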
In short, fields are really functions. And the way they are used makes them the same as using curried functions with partial application.
In quantum mechanics, things known as observables can't be avoided. They have an exact mathematical definition as self-adjoint operators, but for our purposes we can think of them as functions. This is tricky because whereas before our field function simply took a vector of reals (such as $\mathbb{R}^3$) for position, in quantum mechanics position is an operator and so is instead something like $\mathcal{S} \rightarrow \mathbb{R}$. Thinking it through, I realized that the most sensible notion of variables-as-functions is the random variable! A quick search reveals that indeed, "real-valued quantum random variables correspond to self-adjoint operators affiliated with $\mathcal{A}$ [a von Neumann algebra of operators on a Hilbert space], as postulated in quantum mechanics". One can also apply the notion of observables to classical mechanics, and there too they are functions that smell like random variables. And so measurement can be thought of as evaluation and hence computation. Working backwards and having everything fit this way is really nice.
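To see why random variables fit (my gloss): a real-valued random variable is itself just a function from a sample space into the reals, and its distribution is the pushforward of the underlying measure: $$X : \mathcal{S} \rightarrow \mathbb{R}, \qquad \Pr[X \in B] = \mu\left(X^{-1}(B)\right)$$ Measuring an observable then looks like evaluating (sampling) such a function.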
Quantum field theory further muddies our picture because there, people talk about fields as if they were real things, and particles as excitations in those fields. If fields are actually curried functions, with specializations obtained through partial application, what does it mean for a function to be excited? However, it's worth remembering that at the bottom of it all there is a computation being carried out; a fluctuating field suggests a computation unfolding—in other words, evaluating the field gives different values over time. This suggests another curiosity: fields in physics are actually two different things. There is the field as function, and then there is the underlying computation that is eventually evaluated (or calculation, or phenomenon*, the 'real' thing).
In the classical world, measurement yields simple things like vectors; in the quantum realm we get (things whose squared magnitudes are) probability distributions. There exist computations whose outputs are also probability distributions: probabilistic programs. Putting all this together, when a physicist says fields are interacting or oscillating, we can think of a probabilistic program (over noncommuting random variables) whose inputs are curried functions and which is, in some sense, mutually recursive with other 'fields', all of which unfolds some computation over time.
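For a minimal sense of what a computation-whose-output-is-a-distribution looks like, here is a toy of mine (nothing quantum about it, just the shape of the idea):
let rnd = System.Random()
// evaluating this 'measurement' repeatedly samples from a two-point distribution
let measure_spin () = if rnd.NextDouble() < 0.5 then -1 else 1
List.init 10 (fun _ -> measure_spin ())   // ten 'measurements'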
*If, for whatever reason, you do not like the idea of a computable reality, then at the least our only interface with it (testable predictions) must be.
Looking at the types for a gravitational and an electrostatic field makes it clear that both can be represented by a single function parameterized by some constant! The types bring the similarity between the two fields into focus (more so than the similarity between the equations, IMO), because the ability to specify both fields with a single function really forces one to take notice of the equivalence.
[<FunScript.JS>]
let static_field (constant:float<'some_constant>)
                 (m1:float<'some_quantity>)
                 (r:float<m> [])
                 (m2:float<'some_quantity>) =
    let mag = vec_mag r * 1.<m>
    let scale = (constant * m1 * m2) / cube mag
    r |> Array.map (( * ) scale)
The type tells us that, essentially, a static field is 'some_constant * 'some_quantity²/m², capturing the inverse-square aspect of the relationship. We can then write the gravitational field in terms of static_field; all the types work out and the function behaves as it should:
let gravitational_field (m:float<kg>) (r:float<m> []) (m2:float<kg>) = static_field (-G) m r m2
gravitational_field earth_mass [|10.<m>; 5.<m>; earth_rad|] 1.<kg>

[<Measure>] type C
let k = 9e9<N * m^2 / C^2>
let electric_field (q:float<C>) (r:float<m> []) (q2:float<C>) = static_field k q r q2
electric_field 1.0<C> [| 1.0<m> |] 1.0<C> = [| 9e9<N> |]   // evaluates to true
G, k, log10 (float k) - log10 (float G)                    // the constants differ by ~20 orders of magnitude
We can define electric fields in the same manner. From this it's clear that mass and charge are similar abstractions. The differences between the two fields are that 1) the constants have opposite signs and differ by some 20 orders of magnitude, and 2) charges are like masses that are allowed to go negative. That opposites attract is a mere fact of arithmetic; what is more fundamental is that charges are less restricted, in that they can be negative.
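To watch the arithmetic do the attracting (a one-line check of mine):
electric_field 1.0<C> [| 1.0<m> |] (-1.0<C>)   // [| -9e9<N> |]: flipping one sign flips the force's direction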
//Electrostatic fields
[<FunScript.JS>]
module ElectrostaticFields =
    // helpers (assumed; defined elsewhere in the original post)
    let sub a b = Array.map2 (-) a b
    let sum_force_vecs fs = fs |> Array.reduce (Array.map2 (+))

    let electric_field (q:float<C>) (r:float<m> []) (q2:float<C>) = static_field k q r q2
    // force on charge q at position r due to charge q' at position r'
    let eforce_on_q (r, q) (r', q') = electric_field q (sub r r') q'
    // net force on a test charge from a set of point charges qs
    let efield_force charge qs = qs |> Array.map (eforce_on_q charge) |> sum_force_vecs
    // the field at r: force on a unit test charge, per coulomb
    let efield_sum qs r = efield_force (r, 1.0<C>) qs |> Array.map (flip (/) 1.<C>)
At this point, it's instructive to highlight an example of the unnecessary obtuseness that abounds in physics notation. Given some test charge q and a set of discrete charges, the force on q is given by: $$F(r) = q \cdot k \sum\limits_{i=1}^n q_i \frac{r-r_i}{|r-r_i|^3}$$
The problem, of course, is that the equation does not make clear that the function $F$ should take a pair of variables: the charge and its position vector. This is assumed implicitly, and is thus one more thing to be confused about. The same function is given by efield_force, where the type makes it clear that the field is parameterized by both a position vector and a charge.
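For instance, a quick usage sketch (the positions and charges are made up):
let sources = [| [|1.<m>; 0.<m>|], 2e-6<C>
                 [|0.<m>; 1.<m>|], -2e-6<C> |]
ElectrostaticFields.efield_force ([|0.<m>; 0.<m>|], 1.0<C>) sources   // net force vector on the 1 C test charge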
In this essay, I looked at the connection between fields and currying in functional languages, using the fields found in gravity and electrostatics to elucidate the concept with a couple of simple visualizations. I also traced where this road leads when followed through to quantum mechanics. I observed that in quantum field theory, particles can (loosely) be labeled as oscillations of a field, making the field concept the more primary one. Yet in what sense is a function—one only partially specialized by the application of a few parameters—fluctuating? The field concept must have a second meaning here, referring to the space over which the field function can be applied. A further complication is that the field is defined over functions, so the issue remains: in what sense is a function fluctuating? It is not clear to me, but these functions corresponding to observables are analogous to random variables, which in turn index a structure, mapping it to some measurable space. A probability distribution can be generated from them, and presumably interactions and measurements must in some sense "sample" from our "random variable".
Recalling that the fluctuations are periodic, we can dispense with particles and focus on "fields": functions whose computational representation would have (recursive, because persistent) periodic state changes, defined over an appropriate space. The field would be defined over functions that yield distributions, reminiscent of a function which indexes into a space of probabilistic programs. Virtual particles would themselves be analogized by transitory state changes (in computing terms, imagine values which do not persist once the scope of a function is exited) across message-passing persistent computations (our approximation of fields).
I've taken many liberties in drawing the analogies above; beyond the field/curry link, the speculations on the nature of quantum fields are surely not exactly correct. Still, I think they inhabit a useful plane of accuracy, somewhere between precise knowledge and lay intuition.