
Solving Clock Puzzles with Elementary Divisor Theory


This is an article for the 12th day of the "Haskell Advent Calendar 2025".


I titled this "Clock Puzzle," but I don't actually know the real name of this puzzle (please let me know if you do). For example, a puzzle might look like this:

Suppose there are clocks on the left, center, and right that measure 3 hours, 4 hours, and 5 hours respectively, and the clock hands are pointing to 1, 2, 3. When you press the push buttons, the hands move as follows:

  • Pressing button a moves the left hand by 2 hours and the center hand by 1 hour.
  • Pressing button b moves the center hand by 2 hours and the right hand by 3 hours.
  • Pressing button c moves the left hand by 1 hour and the right hand by 2 hours.

In this case, start from the initial state and press the buttons so that all clock hands point to 0.

In general, the number of hours each clock can measure can be the same or different, and the number of clocks and buttons doesn't have to match. Which button corresponds to which clock is also fundamentally free (of course, whether it's solvable depends on the settings).
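Before doing any algebra, the puzzle mechanics themselves are easy to simulate. The following is a minimal plain-list sketch of the example puzzle (the names `moduli`, `press`, `buttonA`, etc. are made up for illustration): a state is the triple of hours shown, and pressing a button adds its effect to each hand modulo that clock's hours.

```haskell
-- State: hours currently shown by the (left, center, right) clocks.
type ClockState = [Int]

-- Hours each clock can measure.
moduli :: [Int]
moduli = [3, 4, 5]

-- Effect of each button on (left, center, right), before taking remainders.
buttonA, buttonB, buttonC :: [Int]
buttonA = [2, 1, 0]
buttonB = [0, 2, 3]
buttonC = [1, 0, 2]

-- Pressing a button advances each hand by the button's effect, modulo that clock.
press :: [Int] -> ClockState -> ClockState
press btn st = zipWith3 (\m s d -> (s + d) `mod` m) moduli st btn

main :: IO ()
main = print (press buttonA [1, 2, 3])  -- pressing a once from the initial state
```

For instance, pressing a once from the initial state (1, 2, 3) gives (0, 3, 3): the left hand wraps around from 3 back to 0.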

I've seen such puzzles in actual games:

  • Sonic Frontiers - The Tombstone Puzzle (Spoiler Video)
  • Legend of Wand and Sword - Stela Ruins (Spoiler Video)
  • (I feel like there was something similar in FFX as well)

There are likely many others (please let me know if you know any). Long ago, there was a puzzle called Lights Out, which can be thought of as a similar puzzle where 2-hour clocks are arranged on a grid. I was also told that Rubik's Clock is a similar type of puzzle.

Although it's a bit like using a sledgehammer to crack a nut, in this article, I will solve this puzzle using Elementary Divisor Theory.

How to Solve

We will focus on the puzzle in the example above, but the following discussion can be extended directly to general cases.

First, the state of the clocks can be represented as an element of the set of integer triples:

{\mathbb Z}\times{\mathbb Z}\times{\mathbb Z} = {\mathbb Z}^3

by counting how many hours the hands have rotated from the 0 o'clock reference (clockwise is +1, counter-clockwise is -1). Furthermore, if we focus on the numbers on the dial, it can be considered an element of the direct product of cyclic groups:

{\mathbb Z}/3{\mathbb Z}\times{\mathbb Z}/4{\mathbb Z}\times{\mathbb Z}/5{\mathbb Z}

There exists a natural surjection:

\pi: {\mathbb Z}^3\to{\mathbb Z}/3{\mathbb Z}\times{\mathbb Z}/4{\mathbb Z}\times{\mathbb Z}/5{\mathbb Z}

which takes the remainder after dividing by the number of hours each clock measures. For example, the state where all hands point to 0 is represented by:

(\overline{0},\overline{0},\overline{0})\in{\mathbb Z}/3{\mathbb Z}\times{\mathbb Z}/4{\mathbb Z}\times{\mathbb Z}/5{\mathbb Z}

Suppose the initial state was:

(\overline{y_1},\overline{y_2},\overline{y_3})\in{\mathbb Z}/3{\mathbb Z}\times{\mathbb Z}/4{\mathbb Z}\times{\mathbb Z}/5{\mathbb Z}

To solve the puzzle, we need to create the inverse element (-\overline{y_1},-\overline{y_2},-\overline{y_3}) of the initial state by pressing the buttons. Since the current initial state is (\overline{1},\overline{2},\overline{3}), the inverse is (-\overline{1},-\overline{2},-\overline{3}), which is (\overline{2},\overline{2},\overline{2}). Pulling this back to the world of {\mathbb Z}^3 means we need to create any element from the set \pi^{-1}((\overline{2},\overline{2},\overline{2})).
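The inverse element (2, 2, 2) above can be computed mechanically. Haskell's `mod` already returns a non-negative remainder for a positive divisor, so negating and reducing is a one-liner (a small sketch; `targetState` is an illustrative name, not part of the main implementation):

```haskell
-- Inverse of the initial state in Z/3 x Z/4 x Z/5.
-- Haskell's `mod` takes the sign of the (positive) divisor,
-- so (negate y) `mod` n already lies in [0, n).
targetState :: [Int] -> [Int] -> [Int]
targetState initial ms = zipWith (\y n -> negate y `mod` n) initial ms

main :: IO ()
main = print (targetState [1, 2, 3] [3, 4, 5])  -- the inverse element (2,2,2)
```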

Pressing button a is represented in the world of {\mathbb Z}^3 as adding the following to the clock states (representing elements of {\mathbb Z}^3 as column vectors):

\begin{pmatrix} 2 \\ 1 \\ 0 \end{pmatrix}

Similarly, pressing button b adds:

\begin{pmatrix} 0 \\ 2 \\ 3 \end{pmatrix}

And pressing button c adds:

\begin{pmatrix} 1 \\ 0 \\ 2 \end{pmatrix}

Let's define the matrix A by arranging these column vectors:

A = \begin{pmatrix} 2 & 0 & 1 \\ 1 & 2 & 0 \\ 0 & 3 & 2 \end{pmatrix}

Considering a vector x = (a, b, c)^{\sf T} that represents the number of times buttons a, b, and c were pressed, the result of pressing the buttons is:

Ax
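As a sanity check on this matrix formulation, here is a plain-list matrix-vector product (illustrative, and separate from the typed vector-sized code later in the article). Pressing each button exactly once corresponds to x = (1, 1, 1)^{\sf T}, and the hands move by A(1, 1, 1)^{\sf T} = (3, 3, 5)^{\sf T} in {\mathbb Z}^3, i.e. by the sum of the three button effects.

```haskell
-- Plain-list matrix-vector product, used to sanity-check the Ax formulation.
mulMV :: [[Int]] -> [Int] -> [Int]
mulMV a x = map (sum . zipWith (*) x) a

-- The button matrix A from the article.
matA :: [[Int]]
matA = [ [2, 0, 1]
       , [1, 2, 0]
       , [0, 3, 2] ]

main :: IO ()
main = print (mulMV matA [1, 1, 1])  -- each button pressed once
```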

To solve the puzzle, we need to find non-negative integers a, b, c (the button-press counts), together with integers n, m, l (the numbers of full rotations of each clock), such that:

Ax\in\pi^{-1}((\overline{2},\overline{2},\overline{2}))

In other words:

\begin{pmatrix} 2 & 0 & 1 \\ 1 & 2 & 0 \\ 0 & 3 & 2 \end{pmatrix} \begin{pmatrix} a \\ b \\ c \end{pmatrix} = \begin{pmatrix} 2 \\ 2 \\ 2 \end{pmatrix} + \begin{pmatrix} 3 & 0 & 0 \\ 0 & 4 & 0 \\ 0 & 0 & 5 \end{pmatrix} \begin{pmatrix} n \\ m \\ l \end{pmatrix}

By setting:

\tilde{A} = \begin{pmatrix} 2 & 0 & 1 & 3 & 0 & 0 \\ 1 & 2 & 0 & 0 & 4 & 0 \\ 0 & 3 & 2 & 0 & 0 & 5 \end{pmatrix},\ \ w = \begin{pmatrix} a \\ b \\ c \\ -n \\ -m \\ -l \end{pmatrix},\ \ b = \begin{pmatrix} 2 \\ 2 \\ 2 \end{pmatrix}

the original equation becomes:

\tilde{A}w=b

We can see that we just need to solve this system of linear equations.

If we only wanted to solve the system of linear equations, a straightforward method would be to reduce the coefficient matrix to row echelon form using Gaussian elimination, which leads to the Hermite Normal Form. However, in this article, to investigate the structure of the puzzle in more detail, we will consider a solution using Elementary Divisor Theory, specifically the Smith Normal Form.

Elementary Divisor Theory

Elementary Divisor Theory investigates the structure of matrices over an elementary divisor ring R, and of finitely generated modules over R, via the Smith Normal Form and the elementary divisors of those matrices. Every Principal Ideal Domain (PID) is an elementary divisor ring, so the integers {\mathbb Z} are a concrete example. Since we only need the Smith Normal Form for integer matrices this time, we will proceed with the discussion over the integers {\mathbb Z}.

First, let's look at the elementary operations on integer matrices.

【Elementary Operations for Integer Matrices】
<Left Elementary Operations>
(Left 1) Swap two rows
(Left 2) Multiply a row by -1
(Left 3) Multiply a row by an integer z and add it to another row
<Right Elementary Operations>
(Right 1) Swap two columns
(Right 2) Multiply a column by -1
(Right 3) Multiply a column by an integer z and add it to another column

These elementary operations can be realized by multiplying by matrices. For example, multiplying by the 3x3 matrix

\begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix}

from the left realizes the elementary operation (Left 1) of swapping the 2nd and 3rd rows. Conversely, multiplying this matrix from the right realizes the elementary operation (Right 1) of swapping the 2nd and 3rd columns. As can be seen from the fact that it represents an exchange, this matrix is its own inverse.

Next, multiplying by

\begin{pmatrix} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{pmatrix}

from the left realizes the elementary operation (Left 2) of multiplying the 2nd row by -1. Conversely, multiplying this matrix from the right realizes the elementary operation (Right 2) of multiplying the 2nd column by -1. This matrix is also its own inverse.

Finally, multiplying by

\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & z \\ 0 & 0 & 1 \end{pmatrix}

from the left realizes the elementary operation (Left 3) of adding z times the 3rd row to the 2nd row. Conversely, multiplying this matrix from the right realizes the elementary operation (Right 3) of adding z times the 2nd column to the 3rd column. The inverse of this matrix is:

\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & -z \\ 0 & 0 & 1 \end{pmatrix}

Matrices that represent elementary operations are called elementary matrices. An invertible integer matrix—that is, an integer matrix with a determinant of \pm 1—is called a unimodular matrix. Since elementary matrices are invertible, they are unimodular matrices; conversely, any unimodular matrix can be expressed as a product of elementary matrices (we will prove this later).
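One can check directly that the three example elementary matrices above are unimodular. A quick sketch (not part of the main implementation) using the rule of Sarrus for 3x3 determinants:

```haskell
-- Determinant of a 3x3 integer matrix by the rule of Sarrus.
det3 :: [[Int]] -> Int
det3 [[a,b,c],[d,e,f],[g,h,i]] =
  a*e*i + b*f*g + c*d*h - c*e*g - b*d*i - a*f*h
det3 _ = error "det3: expected a 3x3 matrix"

swap23, negate2, shear :: [[Int]]
swap23  = [[1,0,0],[0,0,1],[0,1,0]]   -- swaps rows/columns 2 and 3
negate2 = [[1,0,0],[0,-1,0],[0,0,1]]  -- multiplies row/column 2 by -1
shear   = [[1,0,0],[0,1,7],[0,0,1]]   -- adds 7 times row 3 to row 2 (z = 7)

main :: IO ()
main = print (map det3 [swap23, negate2, shear])  -- every determinant is +1 or -1
```

The swap and negation matrices have determinant -1, and the shear has determinant 1 for any z, so all three are unimodular as claimed.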

When a matrix B is obtained by repeatedly applying elementary operations to a matrix A, we say that matrices A and B are equivalent. By definition, when matrices A and B are equivalent, there exist unimodular matrices U and V such that

B = UAV

This relationship of being equivalent is an equivalence relation.

Now that we are prepared, let's prove the existence of the Smith Normal Form.

【Existence of Smith Normal Form】
Any m \times n matrix A with integer elements is equivalent to an m \times n matrix (normal form) that has values only in its diagonal components:

\begin{pmatrix} d_1 & \\ & d_2 \\ & & \ddots \\ & & & d_r \\ & & & & 0 \\ & & & & & \ddots \\ & & & & & & 0 \\ \end{pmatrix}

d_1, \dots, d_r are non-zero integers, and d_{i-1} divides d_i (2 \leq i \leq r).

Proof
If A is the zero matrix O, then A itself is the normal form.

If we can prove the existence of a matrix D equivalent to A \neq O such that:
(1) d_{11} \neq 0
(2) d_{i1} = d_{1j} = 0 \ (i > 1, j > 1)
(3) d_{11} divides d_{ij} \ (i > 1, j > 1)
represented as the following matrix:

D = \begin{pmatrix} d_{11} & 0 & \cdots & 0\\ 0 & d_{22} & \cdots & d_{2n}\\ \vdots & \vdots & \ddots & \vdots\\ 0 & d_{m2} & \cdots & d_{mn} \end{pmatrix}

then the proof is completed by recursively repeating the same argument for the lower-right block matrix, reaching the normal form in a finite number of operations.

① If the non-zero element with the smallest absolute value in A is a_{ij}, then by considering a matrix B where the 1st and i-th rows and the 1st and j-th columns are swapped, we obtain a matrix equivalent to A that satisfies b_{11} = a_{ij}.

② If b_{11} divides all elements in the 1st row, we obtain an equivalent matrix where all (1, j) components are 0 by adding -b_{1j}/b_{11} times the 1st column to the j-th column (j > 1). Similarly, if b_{11} divides all elements in the 1st column, we obtain an equivalent matrix where all (i, 1) components are 0 by adding -b_{i1}/b_{11} times the 1st row to the i-th row (i > 1).

③ Conversely, if there is an element in the 1st row that is not divisible by b_{11}, i.e., in the (1, j) component:

b_{1j} = qb_{11} + r, \ 0 < r < |b_{11}|

we obtain an equivalent matrix where the (1, j) component becomes r by adding -q times the 1st column to the j-th column. We then treat this matrix as A and restart the discussion from ①. The same applies if there is an element in the 1st column not divisible by b_{11}. Since 0 < r < |b_{11}|, this process must stop in a finite number of iterations, eventually yielding a matrix C where the elements of the 1st row and 1st column other than the (1, 1) component are 0 (combined with ②).

④ If there exists an element c_{ij} \ (i > 1, j > 1) that is not divisible by c_{11}, treat the matrix obtained by adding the i-th row to the 1st row as B and restart the discussion from ③. Every time this discussion is repeated, |c_{11}| strictly decreases, so this process must terminate in a finite number of steps, yielding a matrix D where all components (i, j) \ (i > 1, j > 1) are divisible by the (1, 1) component.

This matrix D satisfies properties (1), (2), and (3), so the same argument can be recursively repeated thereafter.


In fact, it can also be proven that the normal form is uniquely determined up to a unit multiple (multiplication by -1) of the diagonal components[1]. This uniquely existing normal form equivalent to a matrix A is called the Smith Normal Form, and the non-zero diagonal components d_1, \dots, d_r are called the elementary divisors of the matrix A.

Consider the Smith Normal Form S of a unimodular matrix A. Since A and S are equivalent, there exist matrices U and V that can be expressed as products of elementary matrices (unimodular matrices) such that S = UAV. Since the determinant of a unimodular matrix is \pm 1:

\begin{matrix} \operatorname{det}(S) &=& \operatorname{det}(U)\operatorname{det}(A)\operatorname{det}(V) \\ &=& \pm 1\cdot\pm 1\cdot\pm 1 \\ &=& \pm 1 \end{matrix}

it follows. Since S is a diagonal matrix, all its diagonal components are \pm 1, and such a matrix can be expressed as a product of elementary matrices. Thus, by considering A = U^{-1}SV^{-1}, we can see that the unimodular matrix A itself can be expressed as a product of elementary matrices.

Implementation

Let's go back to the puzzle. While it is easy to find the Smith Normal Form by hand for the 3x3 matrix we are currently considering, it becomes difficult in general cases, so let's use a program to perform the calculations.

Haskell is used for the implementation. Thanks to type-level natural numbers, code that accidentally confuses row and column operations is caught at compile time, which is reassuring.

Declaration of pragmas and library imports
{-# LANGUAGE GHC2024 #-}
{-# LANGUAGE BlockArguments #-}

import Control.Monad (forM_, when)
import Data.Function (fix)
import Data.Maybe (fromMaybe, listToMaybe)
import Data.Proxy (Proxy(..))
import GHC.TypeLits (KnownNat, natVal)

import Control.Monad.State
import Data.Finite (Finite, packFinite)
import Data.Vector.Sized (Vector)
import qualified Data.Vector.Sized as V

First, we implement functions related to pure linear algebra. For vector and matrix types, we use Vector from vector-sized so that sizes can be handled as type-level natural numbers.

-- | Dot product
dot :: Num a => Vector n a -> Vector n a -> a
dot = (V.sum .) . V.zipWith (*)

-- | Matrix
type Matrix n m a = Vector n (Vector m a)

-- | Identity matrix
eye :: KnownNat n => Matrix n n Int
eye = V.generate \i -> V.generate \j -> if i == j then 1 else 0

-- | Matrix transposition
transpose :: KnownNat n => Matrix m n a -> Matrix n m a
transpose = sequenceA

-- | Matrix multiplication
(!*!) :: (KnownNat r, Num a) => Matrix m n a -> Matrix n r a -> Matrix m r a
a !*! b = fmap (flip fmap (transpose b) . dot) a

If you are interested in the implementation of more varied linear algebra functions, please refer to the following article:

https://zenn.dev/lotz/articles/6b0d8081ad2f8a

Next, let's implement the elementary operations of integer matrices, which are essential for calculating the Smith Normal Form. We will implement them as operations corresponding to elementary operations rather than as products of elementary matrices.

-- | Elementary operation of integer matrices (Left 1)
swapRows :: (KnownNat n, KnownNat m) => Finite n -> Finite n -> Matrix n m Int -> Matrix n m Int
swapRows i j mat
  | i == j = mat
  | otherwise = V.generate \r ->
      if r == i then mat `V.index` j
      else if r == j then mat `V.index` i
      else mat `V.index` r

-- | Elementary operation of integer matrices (Right 1)
swapCols :: (KnownNat n, KnownNat m) => Finite m -> Finite m -> Matrix n m Int -> Matrix n m Int
swapCols i j mat
  | i == j = mat
  | otherwise = flip V.map mat \row ->
      V.generate \r ->
        if r == i then row `V.index` j
        else if r == j then row `V.index` i
        else row `V.index` r

-- | Elementary operation of integer matrices (Left 2)
negateRow :: (KnownNat n, KnownNat m) => Finite n -> Matrix n m Int -> Matrix n m Int
negateRow i = V.imap (\r row -> if r == i then V.map negate row else row)

-- | Elementary operation of integer matrices (Right 2)
negateCol :: (KnownNat n, KnownNat m) => Finite m -> Matrix n m Int -> Matrix n m Int
negateCol j mat = flip V.map mat \row -> flip V.imap row (\c z -> if c == j then negate z else z) 

-- | Elementary operation of integer matrices (Left 3)
addRow :: (KnownNat n, KnownNat m) => Finite n -> Finite n -> Int -> Matrix n m Int -> Matrix n m Int
addRow i j z mat = V.generate \r ->
  if r == i
    then V.zipWith (\x y -> x + z * y) (mat `V.index` i) (mat `V.index` j)
    else mat `V.index` r

-- | Elementary operation of integer matrices (Right 3)
addCol :: (KnownNat n, KnownNat m) => Finite m -> Finite m -> Int -> Matrix n m Int -> Matrix n m Int
addCol i j z mat = flip V.map mat \row ->
    V.generate \c ->
      if c == i
        then (row `V.index` i) + z * (row `V.index` j)
        else row `V.index` c

Let's define a data structure that represents the triple (S, U, V) providing the equivalence between the matrix A and its Smith Normal Form S:

S = UAV

-- | Smith Normal Form of integer matrices
-- Represents S, U, V such that S = UAV
data SNFState n m = SNFState
  { snfS :: Matrix n m Int
  , snfU :: Matrix n n Int
  , snfV :: Matrix m m Int
  } deriving Show

Within the algorithm for calculating the Smith Normal Form, we frequently refer to and update the SNFState. Therefore, we will adopt management using the State monad and prepare necessary helper functions.

-- | Monad used for calculating the Smith Normal Form
type SNFMonad n m a = State (SNFState n m) a

getS :: SNFMonad n m (Matrix n m Int)
getS = gets snfS

getVal :: (KnownNat n, KnownNat m) => Finite n -> Finite m -> SNFMonad n m Int
getVal i j = do
  s <- getS
  pure $ (s `V.index` i) `V.index` j

modifyS :: (Matrix n m Int -> Matrix n m Int) -> SNFMonad n m ()
modifyS f = modify \s -> s { snfS = f (snfS s) }

modifyU :: (Matrix n n Int -> Matrix n n Int) -> SNFMonad n m ()
modifyU f = modify \s -> s { snfU = f (snfU s) }

modifyV :: (Matrix m m Int -> Matrix m m Int) -> SNFMonad n m ()
modifyV f = modify \s -> s { snfV = f (snfV s) }

We define elementary operations of integer matrices for SNFState. For example, multiplying matrix S by an elementary matrix P from the left results in:

PS = PUAV

Thus, an elementary operation on matrix S also simultaneously triggers an elementary operation on matrix U or V.

-- | Elementary operations of integer matrices for `SNFState`
swapRowsM :: (KnownNat n, KnownNat m) => Finite n -> Finite n -> SNFMonad n m ()
swapRowsM i j = do
  modifyS $ swapRows i j
  modifyU $ swapRows i j

swapColsM :: (KnownNat n, KnownNat m) => Finite m -> Finite m -> SNFMonad n m ()
swapColsM i j = do
  modifyS $ swapCols i j
  modifyV $ swapCols i j

negateRowM :: (KnownNat n, KnownNat m) => Finite n -> SNFMonad n m ()
negateRowM i = do
  modifyS $ negateRow i
  modifyU $ negateRow i

negateColM :: (KnownNat n, KnownNat m) => Finite m -> SNFMonad n m ()
negateColM j = do
  modifyS $ negateCol j
  modifyV $ negateCol j

addRowM :: (KnownNat n, KnownNat m) => Finite n -> Finite n -> Int -> SNFMonad n m ()
addRowM i j m = do
  modifyS $ addRow i j m
  modifyU $ addRow i j m

addColM :: (KnownNat n, KnownNat m) => Finite m -> Finite m -> Int -> SNFMonad n m ()
addColM i j m = do
  modifyS $ addCol i j m
  modifyV $ addCol i j m

Finally, let's implement the algorithm to find the Smith Normal Form. It is somewhat long, but since the proof of the existence of the Smith Normal Form is constructive and the code follows it essentially as-is, it should be easy to read alongside the proof. Because the elementary divisors are only determined up to unit multiples, a final step multiplies by -1 where necessary so that the diagonal entries are non-negative.

-- | Algorithm to find the Smith Normal Form
smithNormalForm :: forall n m. (KnownNat n, KnownNat m) => Matrix n m Int -> SNFState n m
smithNormalForm mat0 = execState algorithm initialState
  where
    initialState = SNFState mat0 eye eye
    rows = fromIntegral $ natVal (Proxy :: Proxy n)
    cols = fromIntegral $ natVal (Proxy :: Proxy m)
    limit = min rows cols

    algorithm :: SNFMonad n m ()
    algorithm = do
      forM_ [0 .. limit - 1] \k -> do
        processPivot k
        ensurePositiveDiagonal k

    toFinite :: KnownNat k => Int -> Finite k
    toFinite x = fromMaybe (error "Index out of bounds") $ packFinite (fromIntegral x)

    processPivot :: Int -> SNFMonad n m ()
    processPivot k = do
      fix \restart -> do
        -- ① Find the smallest non-zero element and move it to (k, k)
        maybeMin <- findMinNonZero k
        case maybeMin of
          Nothing -> pure () -- If the rest is a zero matrix, this process is complete
          Just (r, c) -> do
            let k_row = toFinite @n k
                k_col = toFinite @m k
            swapRowsM k_row r
            swapColsM k_col c

            -- ② Row direction
            forM_ [k + 1 .. rows - 1] \i -> do
              let i_row = toFinite @n i
              val <- getVal i_row k_col
              pivot <- getVal k_row k_col
              when (val /= 0) do
                let q = val `div` pivot
                addRowM i_row k_row (-q)
                rem' <- getVal i_row k_col
                when (rem'/= 0) do
                  swapRowsM k_row i_row
                  restart

            -- ② Column direction
            forM_ [k + 1 .. cols - 1] \j -> do
              let j_col = toFinite @m j
              val <- getVal k_row j_col
              pivot <- getVal k_row k_col
              when (val /= 0) do
                let q = val `div` pivot
                addColM j_col k_col (-q)
                rem' <- getVal k_row j_col
                when (rem' /= 0) do
                  swapColsM k_col j_col
                  restart

            -- ③
            pivot <- getVal k_row k_col
            badIndices <- findBadIndices k pivot
            case badIndices of
              Just (_, c') -> do
                addColM k_col c' 1
                restart
              Nothing -> pure ()

    -- Function to find the position of the non-zero element with the smallest absolute value in the lower-right block matrix starting from (k, k)
    findMinNonZero :: Int -> SNFMonad n m (Maybe (Finite n, Finite m))
    findMinNonZero k = do
      s <- getS
      let candidates = [ (abs val, (r, c))
                       | r <- map (toFinite @n) [k .. rows - 1]
                       , c <- map (toFinite @m) [k .. cols - 1]
                       , let val = (s `V.index` r) `V.index` c
                       , val /= 0
                       ]
      if null candidates
        then pure Nothing
        else pure $ Just (snd $ minimum candidates)

    -- Function to find the position of an element in the lower-right block matrix starting from (k, k) that is not divisible by a given integer
    findBadIndices :: Int -> Int -> SNFMonad n m (Maybe (Finite n, Finite m))
    findBadIndices k pivot = do
      s <- getS
      let indices = [ (r, c)
                    | r <- map (toFinite @n) [k + 1 .. rows - 1]
                    , c <- map (toFinite @m) [k + 1 .. cols - 1]
                    , ((s `V.index` r) `V.index` c) `mod` pivot /= 0
                    ]
      pure $ listToMaybe indices

    -- Make diagonal components positive
    ensurePositiveDiagonal :: Int -> SNFMonad n m ()
    ensurePositiveDiagonal k = do
      let k_row = toFinite @n k
          k_col = toFinite @m k
      val <- getVal k_row k_col
      when (val < 0) $ negateRowM k_row

If you are interested in loop implementations using fix, please refer to the following articles:

https://qiita.com/lotz/items/409112751a23689d7d85

https://qiita.com/lotz/items/0894079a44e87dc8b73e

How to Solve (Continued)

Now that we have implemented the function to calculate the Smith Normal Form, let's use it to solve the puzzle. We were considering the system of linear equations:

\tilde{A} = \begin{pmatrix} 2 & 0 & 1 & 3 & 0 & 0 \\ 1 & 2 & 0 & 0 & 4 & 0 \\ 0 & 3 & 2 & 0 & 0 & 5 \end{pmatrix},\ \ w = \begin{pmatrix} a \\ b \\ c \\ -n \\ -m \\ -l \end{pmatrix},\ \ b = \begin{pmatrix} 2 \\ 2 \\ 2 \end{pmatrix}

as

\tilde{A}w=b

Let's calculate the Smith Normal Form for this coefficient matrix \tilde{A}.

main :: IO ()
main = do
  let matList :: [[Int]]
      matList = [ [2, 0, 1, 3, 0, 0]
                , [1, 2, 0, 0, 4, 0]
                , [0, 3, 2, 0, 0, 5]]
      maybeMat :: Maybe (Matrix 3 6 Int)
      maybeMat = V.fromListN =<< (sequence $ map V.fromListN matList)

  case maybeMat of
    Nothing -> putStrLn "Matrix generation error"
    Just matA -> do
      putStrLn "Original Matrix A:"
      V.mapM_ print matA
      putStrLn "--------------------"

      let res = smithNormalForm matA

      putStrLn "Smith Normal Form S:"
      V.mapM_ print $ snfS res
      putStrLn "--------------------"

      putStrLn "Left Transform U:"
      V.mapM_ print $ snfU res
      putStrLn "--------------------"

      putStrLn "Right Transform V:"
      V.mapM_ print $ snfV res
      putStrLn "--------------------"

      putStrLn "Verification (U * A * V):"
      let s_calc = snfU res !*! matA !*! snfV res
      V.mapM_ print s_calc
      putStrLn $ "Match: " ++ show (snfS res == s_calc)
      putStrLn "--------------------"

Output

$ cabal run
Original Matrix A:
Vector [2,0,1,3,0,0]
Vector [1,2,0,0,4,0]
Vector [0,3,2,0,0,5]
--------------------
Smith Normal Form S:
Vector [1,0,0,0,0,0]
Vector [0,1,0,0,0,0]
Vector [0,0,1,0,0,0]
--------------------
Left Transform U:
Vector [1,0,0]
Vector [0,1,0]
Vector [-2,4,1]
--------------------
Right Transform V:
Vector [0,1,0,0,-4,-2]
Vector [0,0,0,0,0,1]
Vector [1,-2,3,-15,-40,-29]
Vector [0,0,-1,5,16,11]
Vector [0,0,0,0,1,0]
Vector [0,0,-1,6,16,11]
--------------------
Verification (U * A * V):
Vector [1,0,0,0,0,0]
Vector [0,1,0,0,0,0]
Vector [0,0,1,0,0,0]
Match: True
--------------------

From the above,

S = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ \end{pmatrix}

is the Smith Normal Form of \tilde{A}, and with

U = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ -2 & 4 & 1 \\ \end{pmatrix},\ V = \begin{pmatrix} 0 & 1 & 0 & 0 & -4 & -2 \\ 0 & 0 & 0 & 0 & 0 & 1 \\ 1 & -2 & 3 & -15 & -40 & -29 \\ 0 & 0 & -1 & 5 & 16 & 11 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & -1 & 6 & 16 & 11 \\ \end{pmatrix}

we found that the relationship

S = U\tilde{A}V

holds.

Transforming the original system of linear equations:

\begin{matrix} && \tilde{A}w &=& b \\ &\iff& U\tilde{A}VV^{-1}w &=& Ub \\ &\iff& SV^{-1}w &=& Ub \end{matrix}

and Ub is:

Ub= \begin{pmatrix} 1 & 0 & 0\\ 0 & 1 & 0\\ -2 & 4 & 1 \end{pmatrix} \begin{pmatrix} 2\\ 2\\ 2 \end{pmatrix} = \begin{pmatrix} 2 \\ 2 \\ 6 \end{pmatrix}

Since S is a diagonal matrix, an integer solution does not exist unless each diagonal component of S divides the corresponding component of Ub. In other words, you can tell whether the puzzle is solvable by whether the elementary divisors of \tilde{A} divide the components of Ub. By designing it so that all elementary divisors are 1, as in this case, the puzzle becomes solvable regardless of the initial values.
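This solvability criterion is easy to state as a predicate. With the elementary divisors on the diagonal of S and the vector Ub computed above, the puzzle is solvable exactly when every d_i divides the corresponding component (a small sketch; `solvable` is an illustrative name):

```haskell
-- Solvable iff each elementary divisor d_i divides the i-th component of Ub.
solvable :: [Int] -> [Int] -> Bool
solvable ds ub = and (zipWith (\d u -> u `mod` d == 0) ds ub)

main :: IO ()
main = print (solvable [1, 1, 1] [2, 2, 6])  -- the case in this article
```

Since all elementary divisors here are 1, `solvable` returns True for any right-hand side, matching the observation that this puzzle is solvable from every initial state.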

Continuing the calculation, from

SV^{-1}w = Ub

and the form of S (three leading 1s followed by zero columns), we get

V^{-1}w = \begin{pmatrix} 2 \\ 2 \\ 6 \\ p \\ q \\ r \end{pmatrix}

where p, q, r are arbitrary integers.

Calculating VV^{-1}w=w:

\begin{pmatrix} 0 & 1 & 0 & 0 & -4 & -2 \\ 0 & 0 & 0 & 0 & 0 & 1 \\ 1 & -2 & 3 & -15 & -40 & -29 \\ 0 & 0 & -1 & 5 & 16 & 11 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & -1 & 6 & 16 & 11 \\ \end{pmatrix} \begin{pmatrix} 2 \\ 2 \\ 6 \\ p \\ q \\ r \\ \end{pmatrix} = \begin{pmatrix} 2-4q-2r \\ r \\ 16-15p-40q-29r \\ -6+5p+16q+11r \\ q \\ -6+6p+16q+11r \\ \end{pmatrix}

Since

w = \begin{pmatrix} a \\ b \\ c \\ -n \\ -m \\ -l \end{pmatrix}

choosing integers p, q, r so that a, b, c become non-negative, for example p=1, q=0, r=0, gives a=2, b=0, c=1. Therefore, the answer to the puzzle is:

  • Press button a 2 times
  • Press button b 0 times
  • Press button c 1 time

We have solved the clock puzzle! 👏
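We can double-check the answer by simulating the button presses directly (a plain-list sketch with illustrative names): starting from (1, 2, 3) and pressing a twice and c once must bring every hand to 0.

```haskell
-- Verify the solution a=2, b=0, c=1 by direct simulation.
moduli :: [Int]
moduli = [3, 4, 5]

buttonA, buttonC :: [Int]
buttonA = [2, 1, 0]
buttonC = [1, 0, 2]

-- Pressing a button advances each hand by the button's effect, modulo that clock.
press :: [Int] -> [Int] -> [Int]
press btn st = zipWith3 (\m s d -> (s + d) `mod` m) moduli st btn

main :: IO ()
main = print (foldl (flip press) [1, 2, 3] [buttonA, buttonA, buttonC])
```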

Conclusion

In this article, we considered the specific solution for a particular puzzle, but the same method can be applied to general problem settings without any issues. While the algorithm for calculating the Smith Normal Form implemented here is a simple method based on the proof, it is also known to cause a phenomenon called "coefficient explosion," where the number of digits of the values that appear during intermediate steps grows exponentially relative to the number of digits of the elements in the input matrix. To avoid such computational difficulties, various improved algorithms, including the Kannan-Bachem algorithm, have been studied. If you encounter a puzzle that works in an interlocking way and can be viewed as a clock puzzle, please try solving it using the Smith Normal Form of a matrix!



Thank you for reading!
If you enjoyed this article, I would appreciate it if you could give it a Like ♡ ☺️
Giving a badge would also be a great encouragement for me to write the next article! 🙌

Footnotes
  1. For example, refer to "Basic Algebra B Lecture Notes (Ring Theory)".
