# Wandering Autumn

Exploring change and the life that comes with it

# Calculating Apparent Magnitude

November 26, 2016

Astronomy is hard. This is mostly because astronomy is applied physics, and physics is hard. Ultimately, it’s a lot of math, largely in the geometry, algebra, and trigonometry areas—with calculus getting a word in from time to time. Unfortunately, my skills in geometry and trigonometry especially are not the greatest, so it’s something I struggle with a bit.

But I also know that math is something you can figure out if you work hard enough at it. That applies when trying to solve astronomical problems, too.

Needless to say, this post will have math in it.

I am working on a conworld. This should be no surprise, given my having done it before, and my expressed interest in conlanging. One part of building a conworld is figuring out some of the astronomical data: the solar system the planet is in, the various physical parameters of the sun and planet, and so on.

This has led me down an astronomy rabbit-hole, as I start to work out the details for the world’s solar system. Where I have run into an issue is around magnitudes, and because I’ve been struggling, I thought: what better way to figure it out than attempt to put together a blog post explaining it?

The problem I am trying to solve is this: what will the other planets in the solar system look like from the surface of my planet? More specifically, how big will they appear in the sky, and how bright will they be?

Aspects of calendaring—that is, how long each planet’s year is and so on—are also on my mind, but are orthogonal to this problem.

I want to address the second problem first, because it’s the one I’ve been struggling with: how bright will the planets be?

So, let’s talk about stars. This might seem to wander a little, but I promise, I’m going to come back.

The place to start is luminosity. The luminosity of an object is the amount of energy it outputs per unit of time, so it tends to be measured in watts, which are equivalent to joules per second. This tells you in quantitative terms how “bright” something is, by letting you know how much light it gives off.1

The next thing to discuss is flux, in particular, radiative flux. This is how much of that luminosity energy is hitting a particular area, and is measured in watts per square meter.2 This is the formula:

$$f_s = \frac{L_s}{4 \pi d_s^2}$$

Where $f_s$ is the radiative flux in watts per square kilometer, $L_s$ is the luminosity of the source in watts, and $d_s$ is the distance from the source to the point of measurement in kilometers. Keep in mind that whenever distances are measured in space, they are measured from the centers of the objects, not the surfaces.

This formula is pretty easily derivable if you think about the geometry of the situation. If the energy from the sun is being spread evenly—a safe assumption—across the inside of a sphere, then that energy is split equally among the entire inner surface area of that sphere. The surface area of a sphere of radius $r$ is $4 \pi r^2$; that derivation is calculus-based and beyond the scope of this blog post. So the flux is the luminosity divided by the surface area of the sphere that’s the size of the planet’s orbit. It makes sense!

This is actually a pretty cool little formula, because it ties these three things together. It means that we can set up a detector that can measure the watts of light3 that hit it from a particular star. Given the area of that detector, we can determine the radiative flux. If we know the distance to the star, we can solve for luminosity; if we know the luminosity, we can solve for distance.

As it turns out, we can know the luminosities for stars, because luminosity is a function of a star’s temperature and size. This method of going from flux to distance is used to figure out how far away a lot of stars are from us. But that’s for another post; for now, let’s get back to luminosity.
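To make the relationship concrete, here's a small Python sketch (the function name and constant names are my own) that computes flux from luminosity and distance. Plugging in the sun's luminosity and the Earth-sun distance lands right around the measured "solar constant" of about 1361 W/m²:

```python
import math

def radiative_flux(luminosity_w, distance_m):
    """Radiative flux in W/m^2 at a given distance from a point source."""
    return luminosity_w / (4 * math.pi * distance_m ** 2)

L_SUN = 3.828e26  # luminosity of the sun, in watts
AU = 1.496e11     # Earth-sun distance, in meters

# Flux from the sun at Earth's orbit; should be close to the
# measured solar constant of about 1361 W/m^2.
print(radiative_flux(L_SUN, AU))  # ≈ 1361
```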

There are two problems with luminosity and flux. The first is that the numbers are big. The luminosity of the sun is $3.828 \times 10^{26}\text{ W}$, and there are much brighter stars out there. We can get around this by referring to things in multiples of sun luminosity units, but that’s still not necessarily useful.

The second, more important, problem is that luminosity doesn’t map well to our experience here on Earth. As humans, we work well with comparisons; we care far more about relative brightness than we do absolute brightness.

That brings us to magnitude, which is a measure of how bright a star is. There are actually two magnitudes: apparent magnitude, which is how bright it appears to be to us “on Earth”; and absolute magnitude, which is how bright it actually is. Both of these are given relative to a reference star.

Magnitudes lack units, because they are a relative measurement. As well, they are logarithmic instead of linear; so a change of 1 magnitude is multiplicative. And, for historical reasons,4 the lower the magnitude, the brighter the object is.

The absolute magnitude of a star is defined as its apparent magnitude at a distance of 10 parsecs.5 This means we need to actually discuss apparent magnitude. As noted, this is how bright an object appears to be relative to a reference point. That reference point was chosen to be the star Vega as seen from Earth, which is defined to have an apparent magnitude of 0.6

For historical reasons, the way the magnitude scale is defined is that an object of apparent magnitude 1 is 100 times brighter than an object of apparent magnitude 6. Doing a little bit of math ($100^{1/5} \approx 2.512$), this means that a change of 1 on the magnitude scale means a multiplication of about 2.512. Not the most convenient,7 but at least we can just use 2.5 for simplicity.

So how do we measure “brightness”? Luminosity! How convenient!

Because the apparent magnitude is a relative measurement, we need to measure the relative brightness. So the relative brightness of our star $s$ to the base star $b$ is:

$$\frac{L_s}{L_b}$$

Where $L_s$ is the luminosity of the star in question, and $L_b$ is the luminosity of our base reference (that is, “Vega”). So if $L_s$ is 25,000 W and $L_b$ is 10,000 W, then we have a relative brightness of 2.5.

We then need to wrap it in a logarithm, because we have a logarithmic scale:

$$\log_{10}\left(\frac{L_s}{L_b}\right) = \log_{10}(2.5) \approx 0.4$$

This leaves us with a “relative magnitude” of 0.4. We need to translate this to our scale, where each step of 1 magnitude is actually a multiple of about 2.5 and the scale runs backwards, so we multiply by what each step means:

$$m_s = -2.5 \log_{10}\left(\frac{L_s}{L_b}\right)$$

Plugging these numbers in, we get a relative magnitude of -1! Or at least, close enough given how large our measures of error probably are on those luminosity measurements.

If we know the luminosity of two stars, $L_{s1}$ and $L_{s2}$ for instance, then we can figure out the difference in their magnitudes, $m_{s1}$ and $m_{s2}$, with a little bit of math:8

$$m_{s1} - m_{s2} = -2.5 \log_{10}\left(\frac{L_{s1}}{L_{s2}}\right)$$

You’ll notice that this is the more general form of the equation derived above. After all, for the base star, $m_{s2} = 0$.
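This relation is easy to check numerically. A quick Python sketch (the function name is my own), using the 25,000 W and 10,000 W example from above:

```python
import math

def magnitude_difference(l1, l2):
    """m1 - m2 for two sources of luminosity l1 and l2 at the same distance."""
    return -2.5 * math.log10(l1 / l2)

# The worked example: a 25,000 W star against a 10,000 W reference.
print(magnitude_difference(25_000, 10_000))  # ≈ -0.995, close enough to -1
```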

This is all well and good, but if we have the luminosity of a star and its distance from us, how do we actually calculate the apparent magnitude, given that we can get from distance and luminosity to flux? That is, how does flux relate to apparent magnitude?

As it turns out, because magnitudes are ratios, luminosity is not the only quantity we can relate. We can also find a relation between the fluxes of two stars. In other words:

$$m_s = -2.5 \log_{10}\left(\frac{f_s}{f_b}\right)$$

That is, the apparent magnitude $m_s$ of a star is a function of its measured flux $f_s$ and the measured flux $f_b$ of a base star of magnitude 0. Note that the earlier relation between two arbitrary stars also holds while using fluxes instead of luminosities.

The last piece of the puzzle is that we need to be able to get from a star’s apparent magnitude to its absolute magnitude. In other words, we need to normalize what the flux would be from the star if we were 10 parsecs away, based on knowing its flux at the current distance.

We can figure out this relationship. Let’s get some variables defined: $f_{s}$ is the flux for our current apparent magnitude in watts per square kilometer; $d_s$ is our distance to the star in kilometers; $L_s$ is the luminosity of the star in watts; $m_s$ is the apparent magnitude of the star; $M_s$ is the absolute magnitude of the star; $F_s$ is the flux of the star at 10 parsecs in watts per square kilometer.

We know $m_s$, $f_s$, and $d_s$. This means:

$$f_s = \frac{L_s}{4 \pi d_s^2} \qquad F_s = \frac{L_s}{4 \pi \times 10^2}$$

So now considering the difference between apparent magnitude and absolute magnitude:

$$m_s - M_s = -2.5 \log_{10}\left(\frac{f_s}{F_s}\right) = -2.5 \log_{10}\left(\frac{10^2}{d_s^2}\right)$$

There is a subtle shift here: $d_s$ is in kilometers while the $10$ is in parsecs, but both measure distance, so we can convert one to the other. This equation is known as the distance modulus, and it allows us to get from apparent magnitude to absolute magnitude:

$$m_s - M_s = 5 \log_{10}\left(\frac{d_s}{10}\right)$$

Where $d_s$ here is given in parsecs.

Some people have an alternate way of writing this formula, derived thus:

$$m_s - M_s = 5 \log_{10}\left(\frac{d_s}{10}\right) = 5 \log_{10}(d_s) - 5 \log_{10}(10) = 5 \log_{10}(d_s) - 5$$

Which is a little easier to do mentally; I personally prefer the first version I derived.

Either way, I’ve given a derivation of it, but why does it work? I will try to explain.

When we talk about the difference of two magnitudes, we are using logarithms to assign a linear number to express an exponential relationship. A difference of 1 means a multiple of 2.5; of 5 is 100; and such. So when we say $m_1 - m_2$, what we are really saying is “what do I have to multiply the brightness of star 2 by in order to get the brightness of star 1?” It’s just wrapped in a logarithm so that instead of saying “10,000 times”, we can say “a magnitude of 10”.

So $m_s - M_s$ is asking “if I move from where I am to 10 parsecs out, how many times dimmer does the star appear?” Given that the real contributing factor to the flux at a given distance is that distance squared, we just need to compare the two distances squared, and that will tell us the ratio. The squaring goes away because of algebra around logarithms, but the 2.5 is still there to give us our scale.

Let’s check our math here, and make sure it matches up with what scientists have already figured out. Take the sun, which has an apparent magnitude of -26.74 according to Wikipedia. The sun is also 1 AU away from Earth, which is $4.848 \times 10^{-6}\text{ parsecs}$ according to Wolfram. Plugging those into our formula:

$$M_s = m_s - 5 \log_{10}\left(\frac{d_s}{10}\right) = -26.74 - 5 \log_{10}\left(\frac{4.848 \times 10^{-6}}{10}\right) \approx 4.83$$

Which is what Wikipedia also says. Hooray, math!
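The same check is a one-liner in Python (the function name is my own); plugging in the sun's apparent magnitude and its distance in parsecs recovers the absolute magnitude:

```python
import math

def absolute_magnitude(apparent_mag, distance_pc):
    """Apply the distance modulus: M = m - 5 * log10(d / 10), d in parsecs."""
    return apparent_mag - 5 * math.log10(distance_pc / 10)

AU_IN_PARSECS = 4.848e-6

# The sun: apparent magnitude -26.74 at a distance of 1 AU.
print(absolute_magnitude(-26.74, AU_IN_PARSECS))  # ≈ 4.83
```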

Now, to get at what I wanted to do in the first place: figure out the apparent magnitude of a planet. This is, of course, going to be a multi-step process. Let me lay out a roadmap. First, based on the luminosity of the sun and the reflectiveness of the planet, we can find the luminosity of the planet. Second, we can compare the luminosity of the planet to the luminosity of the sun to give us the difference in magnitudes at the same distance, which, given the sun’s absolute magnitude, can give us the planet’s absolute magnitude. Finally, we can use the distance modulus to translate that absolute magnitude to apparent magnitude, given the distance between us and that planet.

Got it? Good. Let’s start.

To get the luminosity of a planet, we need to know how much luminosity is hitting it from the source sun and bouncing off. The “bouncing off” can be handled by albedo. The albedo is basically a measurement of how much light is reflected off of a surface. It is given as a unit-less number between 0 and 1, effectively being a percentage. So, an albedo of 0.2 means that the object reflects 20% of the light that hits it.9

Once we know a planet’s albedo, recall that we already have the star’s flux, which gives us luminosity per unit of area; we just need to know the area doing the reflecting. For our purposes dealing with basic reflection, it should be safe to approximate the spherical planet as a circle, which means that:

$$L_p = a_p f_p \pi r_p^2$$

Where $L_p$ is the luminosity of the planet in watts, $a_p$ is the albedo of the planet, $f_p$ is the solar flux at the distance of the planet in watts per square kilometer, and $r_p$ is the radius of the planet in kilometers.

Just a reminder:

$$f_p = \frac{L_s}{4 \pi d_p^2}$$

Where $d_p$ is the distance from the sun to the planet.

Next up is figuring out the absolute magnitude of the planet. We know that:

$$M_p - M_s = -2.5 \log_{10}\left(\frac{L_p}{L_s}\right)$$

Which means that:

$$M_p = M_s + 2.5 \log_{10}\left(\frac{L_s}{L_p}\right)$$

And finally, distance modulus in hand, we can get the apparent magnitude:

$$m_p = M_p + 5 \log_{10}\left(\frac{d_{e-p}}{10}\right)$$

Where $M$ is absolute magnitude, $m$ is apparent magnitude, $L$ is luminosity, $_s$ is of the sun, and $_p$ is of the planet. Also, $d_{e-p}$ is the distance from us to the planet in parsecs.

This should, in theory, give us our answer. Let’s work through an example: Mars. According to Wikipedia, Mars has an albedo of 0.25, a radius of 3400 kilometers, and is about 1.5 AU away from the sun, which is 224 million kilometers. As we found above, the Sun has an absolute magnitude of 4.83. So, keeping in mind that I’m posting rounded numbers while still using fuller representations in my calculations:

$$f_p = \frac{3.828 \times 10^{26}}{4 \pi (2.24 \times 10^{8})^2} \approx 6.1 \times 10^{8}\text{ W/km}^2$$

$$L_p = 0.25 \times (6.1 \times 10^{8}) \times \pi \times 3400^2 \approx 5.5 \times 10^{15}\text{ W}$$

$$M_p = 4.83 + 2.5 \log_{10}\left(\frac{3.828 \times 10^{26}}{5.5 \times 10^{15}}\right) \approx 31.9$$

Which is extremely dim: the Hubble Space Telescope has a limit around 30–32, so it would probably not be able to see this at 10 parsecs away. That more or less meshes with expectations. So let’s get the apparent magnitude.

The key term here is the distance from the Earth to Mars. Obviously, this is variable, and in reality depends on all sorts of things related to elliptical orbits. But for our purposes, let’s assume the Earth has an orbit of 1 AU and Mars has an orbit of 1.5 AU, and that they’re at their closest point, so Earth gets a “full Mars”. That makes their distance 0.5 AU, which is $2.42 \times 10^{-6}\text{ parsecs}$. We can put that in:

$$m_p = 31.9 + 5 \log_{10}\left(\frac{2.42 \times 10^{-6}}{10}\right) \approx -1.1$$

This is squarely in the range of $+1.6$ to $-3.0$ that Wikipedia gives as the apparent magnitude, so we have success!
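The whole roadmap can be strung together in a few lines of Python. This is a sketch under the same simplifying assumptions (flat-circle reflection; the constant and function names are my own), and it lands around −1.1 for Mars at closest approach:

```python
import math

L_SUN = 3.828e26   # luminosity of the sun, in watts
M_SUN = 4.83       # absolute magnitude of the sun
AU_M = 1.496e11    # meters per AU
AU_PC = 4.848e-6   # parsecs per AU

def planet_apparent_magnitude(albedo, radius_m, orbit_m, observer_dist_pc):
    """Apparent magnitude of a planet shining by reflected sunlight."""
    flux = L_SUN / (4 * math.pi * orbit_m ** 2)           # solar flux at planet
    l_planet = albedo * flux * math.pi * radius_m ** 2    # reflected luminosity
    m_abs = M_SUN + 2.5 * math.log10(L_SUN / l_planet)    # absolute magnitude
    return m_abs + 5 * math.log10(observer_dist_pc / 10)  # distance modulus

# Mars at closest approach: albedo 0.25, radius 3400 km, orbit 1.5 AU,
# seen from 0.5 AU away.
print(planet_apparent_magnitude(0.25, 3.4e6, 1.5 * AU_M, 0.5 * AU_PC))  # ≈ -1.1
```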

As for why our attempt at calculating the brightest Mars fell pretty far short of the actual measured value, I think it’s mostly the simplification of treating the planet as a flat, uniformly reflective circle that produces the wrong number.

The reality of the situation is that planets have a phase curve, which is an empirical measurement of the brightness of the planet at various angles of reflectivity of the sun’s light. These curves can actually give us a fair amount of information about the materials doing the reflecting, but are a bit harder to just invent for conworlds. Especially as these phase curves quantify brightness as absolute magnitude vs. angle.

However, with planets, “absolute magnitude” actually means something slightly different: instead of apparent magnitude at 10 parsecs, it’s apparent magnitude at 1 AU. The difference makes a lot of sense—we interact with planets on a solar system level, and stars at an interstellar scale—but it makes it a little more confusing to work with, simply because of the mix of terminology. It also changes the distance modulus, though I choose not to re-derive it under the AU scale, mostly because it doesn’t really help answer my question; at most, it would let me verify my apparent magnitude against the absolute magnitudes in Mars’ phase curve.

The phase curve is used to determine an object’s albedo, so for my purposes, it ends up sufficing. Someone else more willing to dive even more into details certainly could, if desired, but it’s here that I’ll stop with magnitudes.

The other question I wondered about was “how big will it appear in the sky?” Thankfully, this is a much, much easier question to answer. The way we measure how big something appears—its angular diameter—in the sky is in degrees, with each degree split into 60 arcminutes, and each arcminute split into 60 arcseconds. Further splits of arcseconds are accomplished by SI prefixes, so you can have e.g. a milliarcsecond.

If you hold your hand at arm’s length and look at it against a sky backdrop with one eye closed, the width of one of your fingers is probably about one degree.

Figuring out an object’s angular diameter—or in the case of a finger, its angular width—is pretty easy. If you draw a triangle from you to the object’s center and then to the object’s edge, you can just solve the triangle. So if you know the distance from you to the object and the actual size of the object, you can just do some trigonometry:

$$\theta = 2 \arctan\left(\frac{w}{2d}\right)$$

Where $\theta$ is the angle we’re trying to get, $w$ is the actual width of the object—that is, diameter for spheres—and $d$ is the distance from us to the object. Realistically, for astronomical objects, we can just use $r$ for radius, since that’s what’s going to be in whatever we look up values in:

$$\theta = 2 \arctan\left(\frac{r}{d}\right)$$

So let’s go with the finger example. My pinky finger has a width of 1.5 cm. And trying my best with a tape measure, with my hand outstretched, it looks to be about 65 cm away from my eye. Thus:

$$\theta = 2 \arctan\left(\frac{0.75}{65}\right) \approx 1.3^\circ$$

Which is a bit bigger than I noted before, but I guess I must have big hands, or short arms relative to my finger size, or something. It’s just an approximation.

Let’s now try with an actual astronomical object: the moon. According to Wikipedia, the moon is about 375,000 km away, and has a radius of about 1700 km. I’m rounding, of course, especially with the distance, because that fluctuates; the small difference shouldn’t matter for our purposes. Anyway, let’s plug in some numbers:

$$\theta = 2 \arctan\left(\frac{1700}{375000}\right) \approx 0.52^\circ \approx 31.2\text{ arcminutes}$$

That falls into the 29.3 to 34.1 arcminutes also given by the Wikipedia article, which means that we’re probably on the right track!10

Finally, let’s also do it for Mars. We know the distance is about 0.5 AU, which is $7.48 \times 10^7\text{ km}$, and the radius is 3400 km. So, with numbers:

$$\theta = 2 \arctan\left(\frac{3400}{7.48 \times 10^{7}}\right) \approx 18.8\text{ arcseconds}$$

Which is in the 3.5 to 25 arcseconds that Wikipedia claims. Hooray!
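Both of those angular-size checks can be scripted as well. A short Python sketch (the function name is my own), with footnote 10's radians trap handled by `math.degrees`:

```python
import math

def angular_diameter_deg(radius, distance):
    """Angular diameter in degrees; radius and distance in the same units."""
    return 2 * math.degrees(math.atan(radius / distance))

# The moon: radius ~1700 km at ~375,000 km.
print(angular_diameter_deg(1700, 375_000) * 60)   # in arcminutes, ≈ 31.2

# Mars: radius 3400 km at 0.5 AU.
print(angular_diameter_deg(3400, 7.48e7) * 3600)  # in arcseconds, ≈ 18.8
```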

What is impressive about this, to me, is that if you do the math for stars, you get smaller and smaller angular diameters. Yet, we still see them as little points of light in the night sky. Gives you a sense of how bright they really are.

So, that’s that. I’ve worked out the equations so that for building a planetary system, I can now figure out what the night sky looks like—at least, from a planet’s perspective. I showed how to derive a lot of the math around magnitudes, double-checked my work, and also talked about how big that point of light really is.

It’s been fun, but I don’t have much else to say about this for now. I’m sure these equations will end up being very useful for my conworlding, so the results will show up there. Eventually.

1. You’ll notice that luminosity only applies to astronomical things; for things on Earth we use different units to measure light output. I admit to not really understanding how to get from one to the other.

2. I say “meter” because that’s how things are properly defined, but using kilometers instead doesn’t have a particular impact on the math, aside from hiding three zeroes.

3. Yes, I know it makes no sense, but you get what I’m trying to say.

4. Thanks, Hipparchus, even if Ptolemy popularized your categorization.

5. Parsec is a unit of distance, not of time, despite George Lucas’ best efforts to convince us otherwise.

6. Actually, Vega has an apparent magnitude of 0.03, so it’s a little dimmer than we thought. I can’t find a particularly good explanation for this; my guess is it’s a measurement error similar to why the circumference of the Earth is more than 40,000 kilometers even though a kilometer was originally defined as 1/40,000 of the circumference of the Earth.

7. Thanks, Pogson.

8. Written out because I’m bad at logarithmic algebra in my head.

9. In reality, planets have different levels of reflectivity depending on the angle of the light. This is called the phase curve, and is more complicated than I want to get. The albedo is calculated from the phase curve, and should keep us relatively in the right neighborhood, so we’ll stay there.

10. If you’re following along at home, please don’t make the mistake I did and have your calculator in radians mode. Also, know what arctangent is on your calculator; I was getting weird numbers because mine wanted “atan” not “arctan” for it.

❦     ❦     ❦