
Competitive Programming in Haskell: modular arithmetic, part 1

Posted on February 15, 2020

Modular arithmetic comes up a lot in computer science, and so it’s no surprise that it is featured, either explicitly or implicitly, in many competitive programming problems.

As a brief aside, to be good at competitive programming it’s not enough to have a library of code at your disposal (though it certainly helps!). You must also deeply understand the code in your library and how it works—so that you know when it is applicable, what the potential pitfalls are, how to debug when things don’t work, and how to make modifications to the code to fit some new problem. I will try to explain all the code I exhibit here—why, and not just how, it works. But you’ll also ultimately be better off if you write your own code rather than using mine! Read my explanations for ideas, and then go see if you can replicate the functionality you need.

Modular exponentiation

We start with a simple implementation of modular exponentiation, that is, computing \(b^e \pmod m\), via repeated squaring. This comes up occasionally in both number theory problems (unsurprisingly) and combinatorics problems (because such problems often ask for a very large answer to be given modulo \(10^9+7\) or some other large prime).

This works via the recurrence

\(\begin{array}{rcl}x^0 &=& 1 \\[0.5em] x^{2n} &=& (x^n)^2 \\[0.5em] x^{2n+1} &=& x \cdot (x^n)^2\end{array}\)

and using the fact that taking the remainder \(\pmod m\) commutes with multiplication.
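
In other words, \((ab) \bmod m = ((a \bmod m)(b \bmod m)) \bmod m\), so we can reduce mod \(m\) after every multiplication without changing the final result. For example, the recurrence computes \(3^{13}\) with only a handful of multiplications:

\(3^{13} = 3 \cdot (3^6)^2, \qquad 3^6 = (3^3)^2, \qquad 3^3 = 3 \cdot (3^1)^2, \qquad 3^1 = 3 \cdot (3^0)^2 = 3\)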

-- modexp b e m computes b^e mod m, using repeated squaring.
modexp :: Integer -> Integer -> Integer -> Integer
modexp _ 0 _ = 1
modexp b e m
  | even e    = (r*r) `mod` m
  | otherwise = (b*r*r) `mod` m
  where
    r = modexp b (e `div` 2) m

This could probably be slightly optimized, but it’s hardly worth it; since the number of multiplications performed is proportional to the logarithm of the exponent, it’s pretty much instantaneous for any inputs that would be used in practice.
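
For example, here is a quick check in GHCi, assuming the definition above is loaded (the second result is Fermat’s little theorem in action: since \(10^9+7\) is prime, \(2^{10^9+6} \equiv 1 \pmod{10^9+7}\)):

ghci> modexp 2 10 1000
24
ghci> modexp 2 (10^9 + 6) (10^9 + 7)
1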

However, there’s another technique, obvious in retrospect, that I have recently discovered. Many competitive programming problems ask you to compute the answer modulo some fixed number (usually a large prime). In this context, all arithmetic operations are going to be carried out modulo the same value. With Haskell’s great facilities for cheap abstraction it makes perfect sense to write something like this:

m :: Integer
m = 10^9 + 7   -- or whatever the modulus is supposed to be

-- Make a newtype for integers mod m
newtype M = M { unM :: Integer }
  deriving (Eq, Ord)
instance Show M where show = show . unM

-- Do all arithmetic operations mod m
instance Num M where
  fromInteger n = M (n `mod` m)
  (M a) + (M b) = M ((a + b) `mod` m)
  (M a) - (M b) = M ((a - b) `mod` m)
  (M a) * (M b) = M ((a * b) `mod` m)
  abs    = undefined  -- make the warnings stop
  signum = undefined

The fun thing is that now the normal exponentiation operator (^) does modular exponentiation for free! It is implemented via repeated squaring, so it’s quite efficient. You can now write all your code using the M type with normal arithmetic operations, and it will all be carried out mod m automatically.
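
For example, assuming the definitions above are in scope (note that the exponent itself remains a plain Integer, not an M; only the base and the intermediate products are reduced mod m, which is exactly what we want):

ghci> 10^9 + 8 :: M
1
ghci> 2 ^ (10^9 + 6) :: M    -- Fermat's little theorem again
1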

Here are a couple problems for you to try:

Extended gcd

Beyond modular exponentiation, the workhorse of many number theory problems is the extended Euclidean Algorithm. It not only computes the GCD \(g\) of \(a\) and \(b\), but also computes \(x\) and \(y\) such that \(ax + by = g\) (which are guaranteed to exist by Bézout’s identity).

First, let’s recall how to compute the GCD via Euclid’s Algorithm:

gcd :: Integer -> Integer -> Integer   -- note: shadows Prelude's gcd
gcd a 0 = abs a
gcd a b = gcd b (a `mod` b)
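
For example, tracing the recursion on 252 and 105:

gcd 252 105 = gcd 105 42 = gcd 42 21 = gcd 21 0 = 21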

I won’t explain how this works here; it is well covered elsewhere. But let’s think about how we would find appropriate values \(x\) and \(y\) at the same time. Suppose the recursive call gcd b (a mod b), in addition to returning the greatest common divisor \(g\), were to also return values \(x\) and \(y\) such that \(bx + (a \bmod b)y = g\). Then our goal is to find \(x'\) and \(y'\) such that \(ax' + by' = g\), which we can compute as follows:

\(\begin{array}{rcl}g &=& bx + (a \bmod b)y \\[0.5em] &=& bx + (a - b\lfloor a/b \rfloor)y \\[0.5em] &=& bx + ay - b\lfloor a/b \rfloor y = ay + b(x - \lfloor a/b \rfloor y)\end{array}\)

Hence \(x' = y\) and \(y' = x - \lfloor a/b \rfloor y\). Note the key step of writing \(a \bmod b = a - b \lfloor a/b \rfloor\): If we take the integer quotient of \(a\) divided by \(b\) and then multiply by \(b\) again, we don’t necessarily get \(a\) back exactly, but what we do get is the next smaller multiple of \(b\). Subtracting this from the original \(a\) gives \(a \bmod b\).

-- egcd a b = (g,x,y)
--   g is the gcd of a and b, and ax + by = g
egcd :: Integer -> Integer -> (Integer, Integer, Integer)
egcd a 0 = (abs a, signum a, 0)
egcd a b = (g, y, x - (a `div` b) * y)
  where
    (g,x,y) = egcd b (a `mod` b)
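
For example, \(\gcd(252, 105) = 21\), and indeed \(252 \cdot (-2) + 105 \cdot 5 = -504 + 525 = 21\):

ghci> egcd 252 105
(21,-2,5)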

Finally, egcd allows us to find modular inverses. The modular inverse of \(a \pmod m\) is a number \(b\) such that \(ab \equiv 1 \pmod m\), which will exist as long as \(\gcd(m,a) = 1\): in that case, by Bézout’s identity, there exist \(x\) and \(y\) such that \(mx + ay = 1\), and hence \(mx + ay \equiv 0 + ay \equiv ay \equiv 1 \pmod m\) (since \(mx \equiv 0 \pmod m\)). So \(y\) is the desired modular inverse of \(a\).

-- inverse m a  is the multiplicative inverse of a mod m.
inverse :: Integer -> Integer -> Integer
inverse m a = y `mod` m
  where
    (_,_,y) = egcd m a

Of course, this assumes that \(m\) and \(a\) are relatively prime; if not it will silently give a bogus answer. If you’re concerned about that you could check that the returned GCD is 1 and throw an error otherwise.
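
For example, modulo the prime \(10^9+7\), the inverse of 2 is \((10^9+8)/2 = 500000004\), which we can double-check in GHCi:

ghci> inverse (10^9 + 7) 2
500000004
ghci> (2 * 500000004) `mod` (10^9 + 7)
1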

And here are a few problems for you to try!

In part 2 I’ll consider the task of solving modular equations.