1. Fields as Curried Functions

Wednesday, January 13, 2016

In a 1994 paper, Gary T. Leavens explained, in a wonderfully graspable manner, that fields in physics can be viewed as curried functions. If you understand what curried functions are, then you already have a good idea of what fields are—bridges like this are incredibly helpful to those of us peering in from either side of the divide (for me, the CS side). Leavens's examples were in Scheme, whose prefix syntax might make them a bit difficult for the unpracticed (such as myself) to follow, hence my translating them (hopefully helpful to you too). My examples here are in F#, where units of measure are especially helpful in making things clear.

2. Gravity

First, some definitions:

In [3]:
[<Measure>] type kg
[<Measure>] type m
[<Measure>] type s
[<Measure>] type N = kg * m / s^2

let G = 6.674e-11<N * m^2 / kg^2>
val G ∈ float<N m²/kg²> = 6.674e-11
In [4]:
let grav_force (m1:float<kg>) (r:float<m>) (m2:float<kg>) =
    if r = 0.<m> then 0.<N> 
    else ((m1 * m2) * G) / (r * r)
val grav_force ∈ m1∈float<kg> ⟹ r∈float<m> ⟹ m2∈float<kg> ⟹ float<N>

This is a curried function: a function that takes a value and returns a new function specialized to the passed-in value.

Briefly: a curried function such as plus: (+) ∈ (int ⟹ int ⟹ int) allows us to, say, pre-apply 5 to it. As in, e.g., f = (+) 5. Then we can do f 3 = 8 or f 10 = 15 etc. Basically, with the pre-application of 5 to (+), we get back a new function: int ⟹ int, which takes a number and returns 5 + that number. Easy, yes?
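
As a throwaway sketch (this isn't one of Leavens's examples), the above in runnable F#:

```fsharp
// Partially apply 5 to the built-in (+) operator.
// (+) : int -> int -> int, so f : int -> int
let f = (+) 5

f 3   // 8
f 10  // 15
```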

Returning to our function grav_force, when we put it into F# we got the type:

val grav_force ∈ m1∈float<kg> ⟹ r∈float<m> ⟹ m2∈float<kg> ⟹ float<N>

So everything looks good: it takes a mass (such as Earth's) and returns a function that takes a distance (e.g. to the surface), which itself returns a function from masses (your mass) to forces (your weight). This function is in fact the definition of a (scalar) gravitational field, but more on that later. We can specialize the function and get Earth's field.

In [5]:
let earth_mass = 5.96e24<kg>
let earth_rad = 6.37e6<m>
let earth_grav = grav_force earth_mass
val earth_mass ∈ float<kg> = 5.96e+24
val earth_rad ∈ float<m> = 6370000.0
val earth_grav ∈ (float<m> ⟹ float<kg> ⟹ float<N>)

Similarly, we can specialize to the surface (by applying a distance to earth_grav), or we can compute the force on an object at a particular distance by passing in both a distance and a mass. For example, noticing that N = kg m/s², we can compute Earth's acceleration at the surface simply by passing in 1 kg and the Earth's radius, for a value of 9.802877992 N (which is numerically just m/s² in this case). But we can do more interesting things with curried functions. We can flip the field function to compute the gravitational force at a list of distances from Earth's surface (a bit more flexible than how physics is typically taught, isn't it):

In [6]:
let flip f a b = f b a
let flipped_field = flip earth_grav 1.<kg>
val flip ∈ f∈('a ⟹ 'b ⟹ 'c) ⟹ a∈'b ⟹ b∈'a ⟹ 'c
val flipped_field ∈ (float<m> ⟹ float<N>)
In [181]:
[for dist in 0.0<m>..1e5<m>..1e6<m> -> 6.37e6<m> + dist] |> List.map flipped_field
val it ∈ float<N> list =
  [9.802877992; 9.502194172; 9.215135446; 8.940890874; 8.678708962; 8.42789251;
   8.187793968; 7.957811259; 7.737383994; 7.525990059; 7.323142521]

The map applies our function to each distance in the list to get a list of forces. So at a distance of 1000km, the gravitational acceleration is ~7.32 m/sec².

Trivia: Mount Everest's height is 8848 meters, g = 9.776 m/sec² there—only 0.28% weaker than on the surface.
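
That trivia is easy to check with the function we already have (a quick sketch reusing grav_force, earth_mass and earth_rad from above):

```fsharp
// g atop Everest: pass a 1 kg test mass at Earth's radius plus Everest's height
let everest = 8848.<m>
grav_force earth_mass (earth_rad + everest) 1.<kg>
// ≈ 9.776<N>, i.e. g ≈ 9.776 m/s² — about 0.28% below the surface value
```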

Below you can look at the gravitational acceleration on the surfaces of the Moon and Mars. The Moon has a surface acceleration of a mere 1.62 m/s², and Mars is only just over 2x that—it is for this reason that many are concerned over the health effects (on musculoskeletal integrity) of an extended mission to Mars.

Additionally, we can look at the Earth/Moon system. The Moon and the Earth each exert a force of $2 \times10^{20}\,N$ on the other. Both are forever falling towards each other, though on the odd chance that they should ever meet, things would not end well for either of them. A tragic love story if ever there was one.

In [6]:
let mass_moon = 7.3459e22<kg>
let mass_mars = 6.41693e23<kg>
let rad_moon = 1737.5<m> * 1000.
let radius_mars = 3386.<m> * 1000.

let mean_moon_dist = 3.85e8<m>

grav_force mass_moon rad_moon    1.<kg>  , 
grav_force mass_mars radius_mars 1.<kg>  ,
grav_force earth_mass mean_moon_dist mass_moon
val mass_moon ∈ float<kg> = 7.3459e+22
val mass_mars ∈ float<kg> = 6.41693e+23
val rad_moon ∈ float<m> = 1737500.0
val radius_mars ∈ float<m> = 3386000.0
val mean_moon_dist ∈ float<m> = 385000000.0
val it ∈ float<N> ⨯ float<N> ⨯ float<N> =
  (1.623983408, 3.735421349, 1.971314948e+20)

2.1 Graphing the near Earth Gravitational Pull

To make things a bit more concrete than a list of numbers, I graph some commonly known locations and the gravitational acceleration felt there. Looking at these, I can't help but think of the common conception (held by myself for a long time too) that space is "out there" and far away; when really, space (LEO, at least) is literally within walking distance. Most satellites, the International Space Station included, aren't floating in space; they're still deep within Earth's gravitational well!

In [8]:
// altitudes are in km (typed <m> for brevity; scaled to metres in the map below)
let pois = [|"~Low Earth Orbit Min",   160.<m>  
             "~Low Earth Orbit Max",  2000.<m>  
             "GEO Synch"           , 40000.<m>
             "GPS"                 , 20350.<m>
             "ISS Avg"             ,  382.5<m>
             "Hubble Space Telescope", 595.<m>|] 

let gs = pois |> Array.map (snd >> ( * ) 1000. >> (+) earth_rad >> flipped_field) 
Array.zip pois gs 
|> Array.sortBy snd
|> Array.map (fun ((l,n),g) -> [|l; string n ; string g|])
|> tohtmlTable ["Location"; "Distance in km"; "g"]
Location                 Distance in km   g
GEO Synch                40000            0.184994267215874
GPS                      20350            0.557133861020474
~Low Earth Orbit Max     2000             5.67781902995993
Hubble Space Telescope   595              8.19955381460682
ISS Avg                  382.5            8.7237513057884
~Low Earth Orbit Min     160              9.32837721530268

2.2 Not Falling is Unnatural

"There is an art, it says, or rather, a knack to flying. The knack lies in learning how to throw yourself at the ground and miss. Pick a nice day, [The Hitchhiker's Guide to the Galaxy] suggests, and try it.

The first part is easy. All it requires is simply the ability to throw yourself forward with all your weight, and the willingness not to mind that it's going to hurt.

That is, it's going to hurt if you fail to miss the ground. Most people fail to miss the ground, and if they are really trying properly, the likelihood is that they will fail to miss it fairly hard.

Clearly, it is the second part, the missing, which presents the difficulties..."

--Douglas Adams

How can those on the ISS feel ~89% of what we get here on the surface while at the same time experiencing weightlessness? If you're like me, when you asked, you got the standard "because they're in free-fall". I find that explanation lacking; it fails to elaborate on what it is about free-fall that makes one weightless. The clearest analogy I've ever run into links free-fall to the passage quoted above from The Hitchhiker's Guide. Things in orbit around each other are among the few things able to fall and manage to miss the ground. The key to appreciating free-fall is realizing that things in orbit and everyday falling (ignoring friction and such) are the same thing. Unlike how the Guide recommends achieving flight, however, for things in orbit it is not so much that they miss the ground as that the ground keeps moving out of their way (it helps to look at it that way instead of as the symmetric situation of horizontal vs vertical velocity).

To fully understand the phenomenon, however, we need to disentangle the two meanings of weight that are typically mixed together: there is weight as the force exerted on a body participating in a gravitational field, and there is the everyday notion of weight we feel. They're related, since our felt weight comes from being party to a gravitational field while something else pushes back just as hard as the Earth is pulling.

The problem arises when we combine our notion of fall (downwards) with the idea that weight has to feel like something. But in fact, just being in a (uniform) gravitational field is not enough to "feel solid"; you need an opposing force resisting your inertia. In the everyday world, this is provided by the normal force of, say, a chair or the ground pushing back at you, resisting gravity and keeping you from falling. Things that are falling (this includes an apple, a person jumping, the Moon, or the Earth) have no 'felt' weight. The reason the ISS and other objects in orbit manage to achieve weightlessness is that the ground manages to move away before they get a chance to hit it. This is something we could all manage, without having to move at incredible horizontal velocities, if we somehow figured out how to jump and miss the ground by accident.

Even basic physics has a lot of ideas that grind against intuition. The idea that objects in motion will remain in motion unless acted on by a force is one. The idea that gravity, and the concept of weight as we feel it in the everyday, is fictitious is another counter-intuitive notion. Orbits combine these two. First, because of a relative lack of friction, a large horizontal velocity can last a long time (whereas we are used to large velocities requiring a continuous application of force); an object can continue to miss the ground without any extra expenditure of energy. Together with the idea of weightlessness as the more natural state for two objects interacting in a gravitational field, orbits-as-falling becomes a bit easier to grasp (as you fall, you follow the curvature of the Earth—it also helps to imagine it as an animation, frame by frame; going down the frames makes it look more like falling, almost like unrolling the interaction through time).

3. Vector Fields

At this point, I can't help but note that I've veered a bit far afield. The original intention of this piece was to connect curried functions from functional programming to fields in physics, and yet here I am talking about how falling is flying with no pesky forces in the way. Nonetheless, I hope the above has been an effective demonstration of the advantage of a computational approach to learning topics commonly thought of as challenging (there'll be more such demonstrations below). In reality, much of the difficulty is incidental rather than necessary: for the student, it is grappling with inconsistent and often unmotivated notation, as well as plain unfamiliarity or counter-intuitiveness; for the teacher, it is the baggage of being wedded to a centuries-old tradition of how subjects must be taught. Much pointless complexity arises from the interaction of all those variables.

Right, back on topic. The real world has more than just one dimension. Depending on whom you ask, it can be anywhere from 3 or 4 to 11 to 26, or more, or less. Most real-world (classical) problems settle on 3, however. And here again we see the advantage of a computational approach: it takes only a few lines to generalize our methodology to vectors (and though the code is general to N dimensions, only for 3—or 2 in a few places—do the operations we're performing really make sense).

Expand vector helpers
In [11]:
let grav_field (m1:float<kg>) (r:float<m> []) (m2:float<kg>) =        
    let mag = vec_mag r * 1.<m>
    let scale = (-G * m1 * m2) / cube mag
    r |> Array.map (( * ) scale)
val grav_field ∈ m1∈float<kg> ⟹ r∈float<m> [] ⟹ m2∈float<kg> ⟹ float<N> []

The function is the same as before except that, instead of returning a single number, it now returns an array of numbers representing our force vector. The input for distance has likewise been replaced with a vector. Below, we apply random masses at different locations to get force vectors. Everything works correctly.

In [89]:
grav_field earth_mass [|1.<m>; 0.<m>; earth_rad|] 68.0<kg> ,
grav_field earth_mass [|1.<m>; 12.<m>; earth_rad|] 1.0<kg>
val it ∈ float<N> [] ⨯ float<N> [] =
  ([|-0.0001046461073; 0.0; -666.5957035|],
   [|-1.538913343e-06; -1.846696011e-05; -9.802877992|])

3.1 Finally, On Currying vs Fields

The functional way of viewing fields is that they are curried functions; through partial application they can either return more specialized functions or, with enough inputs, (e.g.) a vector of forces. For example, the common static field is a function that takes an object of some appropriate type (say, coulombs or kilograms) and returns a function accepting position vectors, which itself returns a function that takes objects (of the same type) and maps them to forces. So by applying (e.g.) a mass, M, to the gravitational field function, we specialize to talking about the gravitational field around M. Further specifying a position vector allows us to talk about the force at that distance for various other masses.

In short, fields are really functions. And the way they are used makes them the same as using curried functions with partial application.
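
Spelled out with the functions from earlier, each partial application narrows the field a little further (the 70 kg test mass here is just an arbitrary choice):

```fsharp
// grav_force : float<kg> -> float<m> -> float<kg> -> float<N>
let earths_field   = grav_force earth_mass   // float<m> -> float<kg> -> float<N> : Earth's field
let at_the_surface = earths_field earth_rad  // float<kg> -> float<N>             : the field at the surface
let your_weight    = at_the_surface 70.<kg>  // ≈ 686.2<N>                        : a 70 kg person's weight
```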

3.1.1 A quick note on Observables

In quantum mechanics, things known as observables can't be avoided. They have an exact mathematical definition as self-adjoint operators, but for our purposes we can think of them as functions. This is tricky because, whereas before our field function simply took a vector of reals (such as $\mathbb{R}^3$) for position, in quantum mechanics position is an operator and so is instead something like ($\mathcal{S} \rightarrow \mathbb{R}$). Thinking it through, I realized that the most sensible notion of variables-as-functions is random variables! A quick search reveals that indeed, "real-valued quantum random variables correspond to self-adjoint operators affiliated with $\mathcal{A}$ [a von Neumann algebra of operators on a Hilbert space], as postulated in quantum mechanics". One can also apply the notion of observables to classical mechanics, and there too, they are functions that smell like random variables. And so measurement can be thought of as an evaluation, and hence computation. Working backwards and having everything fit this way is really nice.

3.1.2 Quantum Field Theory

Quantum Field Theory further muddies our picture because there, people talk about fields as if they were a real thing and particles as excitations in this field. If fields are actually curried functions with specializations obtained through partial application, what does it mean for a function to be excited? However, it's worth remembering that at the bottom of it all, there is a computation that's carried out; a fluctuating field suggests that there's a computation unfolding—in other words, evaluating the field gives different values over time. This is also suggestive of another curiosity—fields in physics are actually two different things. There is the field as function and then there is the underlying computation that is eventually evaluated (or calculation or phenomenon*, the 'real' thing).

In the classical world, measurement yields simple things like vectors, in the quantum realm we get (things whose square magnitude are) probability distributions. There exist computations whose outputs are also probability distributions: probabilistic programs. Putting all these together, when a physicist says fields are interacting or oscillating, we can think of a probabilistic program (on noncommuting random variables) whose inputs are curried functions and, in some sense, mutually recursive with other 'fields', all of which unfolds some computation (over time).

*If for whatever reason, you do not like the idea of a computable reality, at the least our only interface with it through testable predictions must be.

4. Examples by Visualizations of and Interacting with Static Fields

Expand Visualize Fields

The graph above is a visualization of a portion of the Moon's, Earth's, and Jupiter's gravitational fields. It's not the clearest, but stacked like this we can compare them by zooming in and rotating. The X axis is the distance from the surface, ranging from 0 to 100,000 km. The Y axis is masses, ranging from 10 kg to 100,000 kg. And the Z axis is the force felt, in newtons. Graphed like this, we can sort of get an idea of the shape of the different fields.

The graphing package I used (Elegans), didn't make it obvious how to label or format axes and I didn't want to waste time hunting for them. Apologies for unclear labelling.

4.1 Generalizing to Static Fields

Looking at the types of a gravitational and an electrostatic field makes it clear that both can be represented by a single function parameterized by some constant! The types bring the similarity between the two fields into focus (more so than the similarity between the equations, IMO), because the ability to specify both fields with a single function really forces one to take notice of the equivalence.

In [17]:
let static_field (constant:float<'some_constant>)
                 (m1:float<'some_quantity>)
                 (r:float<m> [])
                 (m2:float<'some_quantity>) =
    let mag = vec_mag r * 1.<m>
    let scale = (constant * m1 * m2) / cube mag
    r |> Array.map (( * ) scale)
val static_field ∈
  constant:float<'some_constant> ⟹
    m1:float<'some_quantity> ⟹
      r:float<m> [] ⟹
        m2:float<'some_quantity> ⟹
          float<'some_constant 'some_quantity²/m²> []

The type tells us that, essentially, a static field yields 'some_constant 'some_quantity²/m², capturing the inverse-square aspect of the relationship. We can then write the gravitational field in terms of static_field; all the types work out and the function behaves as it should:

In [18]:
let gravitational_field (m:float<kg>) (r:float<m> []) (m2:float<kg>) = static_field (-G) m r m2

gravitational_field earth_mass [|10.<m>; 5.<m>;earth_rad|] 1.<kg>
val gravitational_field ∈
  m∈float<kg> ⟹ r∈float<m> [] ⟹ m2∈float<kg> ⟹ float<N> []
val it ∈ float<N> [] = [|-1.538913343e-05; -7.694566713e-06; -9.802877992|]
In [19]:
[<Measure>] type C

let k = 9e9<N * m^2 / C^2>
let electric_field (q:float<C>) (r:float<m> []) (q2:float<C>) = static_field k q r q2
electric_field 1.0<C> [| 1.0<m> |] 1.0<C> = [| 9e9<N> |]
val k ∈ float<N m²/C²> = 9000000000.0
val electric_field ∈ q∈float<C> ⟹ r∈float<m> [] ⟹ q2∈float<C> ⟹ float<N> []
val it ∈ bool = true
In [70]:
G,k, log10 (float k) - log10 (float G)
val it ∈ float<N m²/kg²> ⨯ float<N m²/C²> ⨯ float =
  (6.674e-11, 9000000000.0, 20.12985631)

We can define electric fields in the same manner. From that, it's clear that mass and charge are similar abstractions. The differences between the two fields are that 1) the constants have opposite signs and differ by some 20 orders of magnitude, and 2) charges are like masses that are allowed to go negative. That opposites attract is a mere fact of arithmetic; what is more fundamental is that charges are less restricted, in that they can be negative.
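
A quick sketch of that sign behaviour, using the electric_field just defined (note that F# needs parentheses around the negative literal):

```fsharp
// Like charges: positive scale, force directed along +r (repulsion)
electric_field 1.0<C> [| 1.0<m> |] 1.0<C>      // [| 9e9<N> |]

// Unlike charges: the sign flips and the force points along -r (attraction)
electric_field 1.0<C> [| 1.0<m> |] (-1.0<C>)   // [| -9e9<N> |]
```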

In [252]:
//Electrostatic fields
module ElectrostaticFields = 
    let electric_field (q:float<C>) (r:float<m> []) (q2:float<C>) = static_field k q r q2
    let eforce_on_q (r,q) (r', q') = electric_field q (sub r r') q'
    let efield_force charge qs = qs |> Array.map (eforce_on_q charge) |> sum_force_vecs
    let efield_sum qs r = efield_force (r,1.0<C>) qs |> Array.map (flip (/) 1.<C>)
  val electric_field ∈
    q∈float<C> ⟹ r∈float<m> [] ⟹ q2∈float<C> ⟹ float<N> []
  val eforce_on_q ∈
    r∈float<m> [] ⨯ q∈float<C> ⟹ r'∈float<m> [] ⨯ q'∈float<C> ⟹ float<N> []
  val efield_force ∈
    float<m> [] ⨯ float<C> ⟹ qs∈(float<m> [] ⨯ float<C>) [] ⟹ float<N> []
  val efield_sum ∈
    qs:(float<m> [] ⨯ float<C>) [] ⟹ r:float<m> [] ⟹ float<N/C> []

At this point, it's instructive to highlight an example of the unnecessary obtuseness that abounds in physics equations. Given some test charge q at position r, and a set of discrete charges q_i at positions r_i, the force on q is given by: $$F(r) = q \cdot k \sum\limits_{i=1}^n {q_i\frac{r-r_i}{|r-r_i|^3}}$$

The problem, of course, is that the equation does not make clear that the function $F$ should take a pair of variables: the charge and its position vector. This is assumed implicitly, and is thus one more thing to be confused about. The same function is given by efield_force, where the type makes it clear that the field is parameterized by both a position vector and a charge.
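
To make that concrete, here's a hypothetical sanity check of efield_force (assuming the module's contents are in scope): by symmetry, two equal charges equidistant from a test charge should exert no net force on it. `sub` and `sum_force_vecs` come from the collapsed vector-helpers cell; minimal stand-ins are sketched here in case they aren't to hand.

```fsharp
// Assumed minimal versions of the collapsed vector helpers:
let sub (a: float<m> []) (b: float<m> []) = Array.map2 (-) a b
let sum_force_vecs (vs: float<N> [] []) = Array.reduce (Array.map2 (+)) vs

// Two equal charges placed symmetrically about the origin...
let charges = [| [|  1.<m>; 0.<m>; 0.<m> |], 1.<C>
                 [| -1.<m>; 0.<m>; 0.<m> |], 1.<C> |]

// ...exert no net force on a 1 C test charge at the origin:
efield_force ([| 0.<m>; 0.<m>; 0.<m> |], 1.<C>) charges
// [|0.0; 0.0; 0.0|]
```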


5. Conclusion

In this essay, I looked at the connection between fields and currying in functional languages, using gravitational and electrostatic fields as examples to elucidate the concept, along with a couple of simple visualizations. I also traced where this road leads when followed through to quantum mechanics. I observed that in quantum field theory, particles can be (loosely) labeled as oscillations of a field, making the field concept the more primary one. Yet in what sense is a function, one only partially specialized by the application of a few parameters, fluctuating? The field concept must have a second meaning here, referring to the space over which the field function can be applied. A further complication is that the field is defined over functions, so the issue remains: in what sense is a function fluctuating? It is not clear to me, but these functions corresponding to observables are analogous to random variables, which in turn index a structure, mapping it to some measurable space. A probability distribution can be generated from them, and presumably, interactions and measurements must in some sense "sample" from our "random variable".

Recalling that the fluctuations are periodic, we can dispense with particles to focus on "fields", a function whose computational representation would have (recursive, because it persists) periodic state changes and defined over an appropriate space. The field would be defined over functions that yield distributions, reminiscent of a function which indexes into a space of probabilistic programs. Virtual particles would themselves be analogized by transitory state changes (in computing terms, imagine values which do not persist once the scope of a function is exited) across message passing persistent computations (our approximation of fields).

I've taken many liberties in drawing the analogies above and, beyond the field/curry link, while the speculations on the nature of quantum fields are not exactly correct, I do think they inhabit a useful plane of accuracy somewhere between precise knowledge and lay intuition.