
# Journal of Automated Reasoning 25: 83–121, 2000.

## Vectorial Equations Solving for Mechanical Geometry Theorem Proving⋆

HONGBO LI

MMRC, Institute of Systems Science, Academia Sinica, Beijing 100080, P.R. China

Abstract. In this paper a new method is proposed for mechanical geometry theorem proving. It combines vectorial equations solving in Clifford algebra formalism with Wu's method. The proofs produced have significantly enhanced geometric meaning and fewer nongeometric nondegeneracy conditions.

Key words: mechanical theorem proving, Wu's method, Clifford algebra, vectorial equations solving, the area method.

1. The Idea
Among algebraic methods for mechanical geometry theorem proving, Wu's characteristic set method [2, 29, 34] is one of the most often used. In this method, a characteristic set is computed from a polynomial set representing the hypothesis of a geometric theorem through a sequence of pseudo-divisions. As different sequences of pseudo-divisions lead to different results, a good arrangement of pseudo-divisions can lead to a characteristic set with simpler structure and better geometric meaning, which in turn can simplify the proof of a theorem. Here, by simpler structure we mean fewer irreducible branches and nondegeneracy conditions; by better geometric meaning we mean more constraints that are geometrically invariant. Geometric invariance implies at least independence of the choice of a coordinate system.
The starting point of our discussion is how to design a good sequence of pseudo-divisions. Let us take a look at the first example. In a plane there are two lines li: ai x + bi y + ci = 0, i = 1, 2. From the polynomials

    a1 x + b1 y + c1,    (a)
    a2 x + b2 y + c2,    (b)       (1.1)

a characteristic set can be computed under the order of variables ai, bj, ck ≺ x ≺ y,⋆⋆ through the pseudo-division a1(1.1b) − a2(1.1a):

⋆ This paper is supported partially by the NSF of China.
⋆⋆ It means that one can choose any order among the ai, bj, ck as long as they precede x.

    a1 x + b1 y + c1,
    (a1 b2 − a2 b1) y + (a1 c2 − a2 c1).       (1.2)

The nondegeneracy conditions are a1 b2 − a2 b1 ≠ 0 and a1 ≠ 0. Lines l1, l2 are parallel if and only if a1 b2 − a2 b1 = 0, so this degenerate case is geometrically invariant. The degenerate case a1 = 0 happens if and only if l1 is parallel to the x-axis, and therefore is not geometrically invariant.
In fact, the second degenerate case does not appear if we use a matrix method to compute (x, y): when writing the equations of l1, l2 as

    [ a1  b1 ] [ x ]   [ −c1 ]
    [ a2  b2 ] [ y ] = [ −c2 ],

we find that the only degenerate case occurs when the determinant of the coefficient matrix, a1 b2 − a2 b1, equals zero.
The above matrix method can be realized through the following sequence of pseudo-divisions [2]: we compute a1(1.1b) − a2(1.1a) first, then compute b2(1.1a) − b1(1.1b). The result is the characteristic set

    (a1 b2 − a2 b1) x + (c1 b2 − c2 b1),    (a)
    (a1 b2 − a2 b1) y + (a1 c2 − a2 c1).    (b)       (1.3)
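These two combinations are easy to check mechanically. The following SymPy snippet (my own illustration, not part of the paper's Maple implementation) verifies that they reproduce the polynomials of (1.2) and (1.3):

```python
from sympy import symbols, expand

a1, b1, c1, a2, b2, c2, x, y = symbols('a1 b1 c1 a2 b2 c2 x y')

l1 = a1*x + b1*y + c1          # (1.1a)
l2 = a2*x + b2*y + c2          # (1.1b)

# a1*(1.1b) - a2*(1.1a) eliminates x, giving the second polynomial of (1.2)
p_y = expand(a1*l2 - a2*l1)
assert p_y == expand((a1*b2 - a2*b1)*y + (a1*c2 - a2*c1))

# b2*(1.1a) - b1*(1.1b) eliminates y, giving the first polynomial of (1.3)
p_x = expand(b2*l1 - b1*l2)
assert p_x == expand((a1*b2 - a2*b1)*x + (c1*b2 - c2*b1))
```
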
The generalization to higher dimensions is as follows: let

    a11 x1 + ⋯ + a1n xn = b1,    (i1)
    ⋯⋯⋯
    an1 x1 + ⋯ + ann xn = bn    (in)       (1.4)

be n hyperplanes in an nD space. Then the kth coordinate xk of their intersection can be computed through the sequence of pseudo-divisions

    A1k (1.4 i1) + ⋯ + Ank (1.4 in),       (1.5)

where Ajk is the cofactor (signed minor) corresponding to ajk in the coefficient matrix of (1.4).
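As a sketch of how (1.5) works (assuming SymPy, and reading Ajk as the signed minor, i.e. the cofactor), the combination collapses the n equations to a single equation in xk alone:

```python
from sympy import Matrix, symbols, expand

n, k = 3, 1                                  # solve for the coordinate x_k (0-based)
A = Matrix(n, n, symbols('a:3:3'))           # symbolic coefficient matrix (a_jk)
b = Matrix(symbols('b:3'))
xs = Matrix(symbols('x:3'))

eqs = A*xs - b                               # row j is equation (1.4 i_j)
comb = sum(A.cofactor(j, k)*eqs[j] for j in range(n))

# the combination equals det(A)*x_k minus det(A with column k replaced by b)
Ak = A.copy(); Ak[:, k] = b
assert expand(comb - (A.det()*xs[k] - Ak.det())) == 0
```
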

If we use vectors instead of coordinates to represent geometric entities, then in vector algebra, lines l1, l2 have the form

    X · A1 = −c1,
    X · A2 = −c2,       (1.6)

where X = (x, y)^T, Ai = (ai, bi)^T for i = 1, 2. Assuming that {A1, A2} form a basis of the plane, we have X = λ1 A1 + λ2 A2, where λ1, λ2 are unknown scalars. The nondegeneracy condition for this assumption is A1 ∧ A2 ≠ 0. Solving for the λ's with the matrix method, we obtain the expression for X:

    (A1² A2² − (A1 · A2)²) X = (A1 · A2 c2 − A2² c1) A1 + (A1 · A2 c1 − A1² c2) A2,       (1.7)
where Ai² represents Ai · Ai. Since

    A1² A2² − (A1 · A2)² = −(A1 ∧ A2)² = (a1 b2 − a2 b1)²,

(1.7) is more complicated than (1.3), because X is a quadratic fraction of both A1 and A2 in (1.7), while the coordinates of X are linear fractions of both coordinates of A1 and A2 in (1.3).
We can also solve (1.6) in Clifford algebra. Let a dot denote the inner product and a wedge denote the outer product. Then for vectors X, A1, A2,

    X · (A1 ∧ A2) = (X · A1) A2 − (X · A2) A1.       (1.8)

Applying the dual operator to (1.8), we obtain an expression for X:

    (A1 ∧ A2)~ X = c1 Ã2 − c2 Ã1.       (1.9)

(1.9) is exactly (1.3) when vectors are represented by their coordinates, since

    Ãi = (−bi, ai)^T,    (A1 ∧ A2)~ = det [ a1  a2 ; b1  b2 ] = a1 b2 − a2 b1.

(1.9) also has geometric meaning, as (A1 ∧ A2)~ equals the signed area of the parallelogram spanned by vectors A1, A2, and Ãi is the image of Ai after a 90° anticlockwise rotation.
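With coordinates, (1.9) can be verified directly; the snippet below (SymPy, with the dual taken as the 90° anticlockwise rotation and the signs as reconstructed here) checks that the resulting X satisfies both line equations:

```python
from sympy import symbols, simplify, Matrix

a1, b1, c1, a2, b2, c2 = symbols('a1 b1 c1 a2 b2 c2')
A1, A2 = Matrix([a1, b1]), Matrix([a2, b2])
dual = lambda v: Matrix([-v[1], v[0]])       # 90-degree anticlockwise rotation
delta = a1*b2 - a2*b1                        # (A1 ^ A2)~, the signed area

X = (c1*dual(A2) - c2*dual(A1)) / delta      # equation (1.9)
assert simplify(A1.dot(X) + c1) == 0         # a1*x + b1*y + c1 = 0
assert simplify(A2.dot(X) + c2) == 0         # a2*x + b2*y + c2 = 0
```
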
This example suggests that we can use Clifford algebra to solve vectorial equations (equations of multivectors, see later sections). By doing so we can obtain
characteristic sets with better structure and geometric meaning, at least better than
using vector algebra. Below we look at the second example.
In space there is a plane p: a1 x + b1 y + c1 z + d1 = 0 and a line l: (x − a3) : (y − b3) : (z − c3) = a2 : b2 : c2. For the polynomials

    b2 (x − a3) − a2 (y − b3),    (a)
    c2 (x − a3) − a2 (z − c3),    (b)
    c2 (y − b3) − b2 (z − c3),    (c)       (1.10)
    a1 x + b1 y + c1 z + d1,    (d)

a characteristic set under the order of variables ai, bi, ci ≺ x ≺ y ≺ z can be computed through the pseudo-division a2(1.10d) + b1(1.10a) + c1(1.10b):

    b2 (x − a3) − a2 (y − b3),
    c2 (x − a3) − a2 (z − c3),       (1.11)
    (a1 a2 + b1 b2 + c1 c2) x − b1 b2 a3 + b1 b3 a2 − c1 c2 a3 + c1 c3 a2 + d1 a2.
The nondegeneracy conditions are a1 a2 + b1 b2 + c1 c2 ≠ 0 and a2 ≠ 0. Plane p is parallel to line l if and only if a1 a2 + b1 b2 + c1 c2 = 0, so this degenerate case is geometrically invariant. The degenerate case a2 = 0 happens if and only if l is parallel to the (y, z)-plane and therefore is not geometrically invariant.
The second degenerate case occurs even when we use the matrix method to compute (x, y, z), because in

    [ b2  −a2   0  ] [ x ]   [ b2 a3 − b3 a2 ]
    [ c2   0   −a2 ] [ y ] = [ c2 a3 − c3 a2 ]
    [ a1   b1   c1 ] [ z ]   [      −d1      ],

the determinant of the coefficient matrix is a2 (a1 a2 + b1 b2 + c1 c2).
However, the second degenerate case can be avoided when we use vector algebra. Let X = (x, y, z)^T, Ai = (ai, bi, ci)^T, i = 1, 2, 3. Then the line and plane have the form

    (X − A3) ∧ A2 = 0,
    X · A1 = −d1.

From the first equation we derive X = A3 + λ A2, where λ is an unknown scalar. The nondegeneracy condition for this derivation is A2 ≠ 0. Solving for λ, we obtain the expression for X:

    (A1 · A2) X = (A1 · A2) A3 − (d1 + A1 · A3) A2.       (1.12)

The nondegeneracy conditions are A2 ≠ 0 and A1 · A2 ≠ 0. As A2 = 0 implies A1 · A2 = 0, the only degenerate case is A1 · A2 = a1 a2 + b1 b2 + c1 c2 = 0.
In coordinate form, (1.12) can be obtained through the following sequence of pseudo-divisions:

    a2 (1.10d) + b1 (1.10a) + c1 (1.10b),
    b2 (1.10d) − a1 (1.10a) + c1 (1.10c),       (1.13)
    c2 (1.10d) − a1 (1.10b) − b1 (1.10c).

Using Clifford algebra, we can obtain the same result (1.12). The equations of the line and plane are

    X ∧ A2 = A3 ∧ A2,
    X · A1 = −d1.       (1.14)

From the identity (1.8) we have

    A1 · (X ∧ A2) = (A1 · X) A2 − (A1 · A2) X,

which becomes (1.12) after we substitute (1.14) into it. This example suggests that, when using Clifford algebra to solve equations, we can obtain characteristic sets at least better than using the matrix method.
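A quick numerical check of (1.12) with random data (NumPy; my own illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
A1, A2, A3 = rng.normal(size=(3, 3))         # plane normal, line direction, line point
d1 = rng.normal()

# (A1 . A2) X = (A1 . A2) A3 - (d1 + A1 . A3) A2     -- equation (1.12)
X = A3 - (d1 + A1 @ A3) / (A1 @ A2) * A2

assert np.allclose(np.cross(X - A3, A2), 0)  # X lies on the line through A3 along A2
assert np.isclose(X @ A1, -d1)               # X lies on the plane X . A1 = -d1
```
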
Both examples show that equations solving in Clifford algebra formalism can be realized by particular sequences of pseudo-divisions and that Clifford algebra can help to obtain better characteristic sets. We could be satisfied with using Clifford algebra to devise sequences of pseudo-divisions for Wu's method, but we want to go one step further; that is, we want to use Clifford algebra to represent geometric entities and constraints directly, compute a triangular sequence in Clifford algebra formalism from the hypothesis of a geometric theorem, and prove the conclusion with the triangular sequence. This idea inspires the following program of applying Clifford algebra to theorem proving. Below we sketch an outline of the program. The details can be found in later sections.
We use multivectors in Clifford algebra to represent geometric entities, and vectorial equations to represent geometric constraints. After providing an order for the
multivector variables, we do triangulation in the set of vectorial equations representing the hypothesis of a geometric theorem. The elimination techniques include
pseudo-division, substitution, and equations solving. Then we obtain a triangular
sequence of vectorial equations. We can do pseudo-divisions and substitutions to
the conclusion, which is assumed to be one or several vectorial equations, to detect
if it can be reduced to 0 = 0. If it can, the conclusion is proved under some nondegeneracy conditions; the triangular sequence and the nondegeneracy conditions
have geometric interpretation. This is the first stage of the program.
The correctness of the proving method is guaranteed by Wu's method, because vectorial equations solving is a sequence of pseudo-divisions that in the end reduces the order of a polynomial set. The above proving method is not complete in that there are theorems that cannot be reduced to 0 = 0 by the triangular sequence. What to do next?
Let us take a look at the third example. In a plane, there is a line l: a1 x + b1 y + c1 = 0 and a circle c: x² + y² − 2a2 x − 2b2 y + c2 = 0. From the polynomials

    a1 x + b1 y + c1,       (1.15)
    x² + y² − 2a2 x − 2b2 y + c2,

a characteristic set can be computed through pseudo-divisions under the order of variables ai, bi, ci ≺ x ≺ y:

    a1 x + b1 y + c1,       (1.16)
    (a1² + b1²) y² + 2 (b1 c1 + a1 a2 b1 − a1² b2) y + (c1² + 2a1 a2 c1 + a1² c2).
The nondegeneracy conditions are a1² + b1² ≠ 0 and a1 ≠ 0. Line l is degenerate if and only if a1² + b1² = 0, so this degenerate case is geometrically invariant. The degenerate case a1 = 0 happens if and only if l is parallel to the x-axis and therefore is not geometrically invariant.
The second degenerate case does not appear when we use Clifford algebra. Let X = (x, y)^T, Ai = (ai, bi)^T for i = 1, 2. Then the line and circle have the form

    X · A1 = −c1,    (a)
    (X − A2)² = A2² − c2.    (b)       (1.17)

From (1.17a) and under the assumption that A1 ≠ 0, we obtain

    A1² X = −c1 A1 + λ Ã1,       (1.18)

where λ is an unknown parameter. Substituting (1.18) into (1.17b), we obtain

    λ² − 2 (A1 ∧ A2)~ λ + (c1² + c2 A1² + 2c1 A1 · A2) = 0.       (1.19)

Now we obtain a triangular sequence (1.18), (1.19). The order of variables becomes Ai, cj ≺ λ ≺ X. The parameter λ equals the signed area of the parallelogram spanned by vectors A1 and X. The derivation from (1.17a) to (1.18) is called parametric equations solving. This example shows that by parametric equations solving, some implicit equations can be replaced by explicit ones automatically. Equation (1.18) is more useful than (1.17a) in proving, although less satisfactory in geometric interpretation.
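The triangular sequence (1.18), (1.19) can be checked symbolically; for either root λ of (1.19), the point X given by (1.18) lies on both the line and the circle (SymPy sketch, with the signs as reconstructed here):

```python
from sympy import symbols, Matrix, solve, simplify, expand

a1, b1, c1, a2, b2, c2, lam = symbols('a1 b1 c1 a2 b2 c2 lambda')
A1, A2 = Matrix([a1, b1]), Matrix([a2, b2])
A1t = Matrix([-b1, a1])                            # A1~, rotated 90 degrees

# the quadratic (1.19) in the parameter lambda
quad = lam**2 - 2*(a1*b2 - a2*b1)*lam + (c1**2 + c2*(a1**2 + b1**2) + 2*c1*A1.dot(A2))

for root in solve(quad, lam):
    X = (-c1*A1 + root*A1t) / (a1**2 + b1**2)      # equation (1.18)
    assert simplify(A1.dot(X) + c1) == 0           # on the line
    assert simplify(expand(X.dot(X) - 2*A2.dot(X) + c2)) == 0   # on the circle
```
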
The second stage of our theorem proving program involves triangulation to the
triangular sequence obtained from the first stage by allowing parametric equations
solving. The result is called a parametric triangular sequence. It is then used to
prove the conclusion. Parametric equations solving is often used when a multivector is underdetermined by a set of linear equations (but may be determined or
overdetermined when nonlinear equations are also counted). It is also applicable to
some nonlinear equations, for example, quadratic equations representing conics.
If the parametric triangular sequence still fails to prove the conclusion, we need
to go back to the coordinate representation. This is the last stage of our program.
The advantage now is that we start from the parametric triangular sequence, instead
of the original hypothesis, and the former has been partially triangulated while
keeping good geometric interpretation. The two-stage preparation, when viewed from Wu's method, is worth all the trouble, because it contributes to making a good choice of a coordinate system and an order for the coordinates, as well as simplifying and reasoning about the original hypothesis. Another reason to use Clifford algebra stems from the fact that a theorem is often proved without using coordinates. In our experiments of proving typical theorems in Euclidean, spherical, hyperbolic, and projective geometries, more than half of the theorems do not need coordinates, while in differential geometry, we have actually never used coordinates. Our experiments also show that, the higher the dimension of the geometric space, the better the proofs produced without using coordinates.
The idea of vectorial equations solving has appeared in several previous papers by the author [21–23]. Paper [22] is on theorem proving in plane geometry; the proofs produced there are similar to those by the area method. Paper [23] is for theorems on space curves in differential geometry; the proofs produced there are similar to those used in college differential geometry textbooks. Paper [21] is for theorems on space surfaces in differential geometry; the proofs produced there are often simpler than those used in textbooks. Recently we have implemented a program for theorem proving in solid geometry. Some examples are given in later sections.
In this paper we talk about the vectorial equations solving method for nD Euclidean geometry. The method can also be applied to other geometries, for example, non-Euclidean, differential, and conformal geometries. In other geometries, new equations solving formulas need to be provided. Section 2 is on equations solving formulas in Euclidean geometry. Section 3 is on our theorem proving method. Section 4 is on implementation details.
During the preparation of this paper, the author learned that D. M. Wang has proposed to combine Clifford algebra with a Gröbner basis method [19, 27] for theorem proving. He has come up with a general framework [30], and his experiments in Euclidean geometry show that Clifford algebra combined with a Gröbner basis can produce geometrically invariant proofs similar to those by the area method [3]. Our works suggest great potential for the application of Clifford algebra in theorem proving.
Clifford algebra was originally called geometric algebra by its discoverer W. Clifford. It includes a bunch of algebraic tools in geometries: vector algebra and calculus, Grassmann–Cayley algebra, determinants, differential forms, complex numbers, quaternions, spinors, and so on, and can be applied to geometry, analysis, algebra, mechanics, and physics [8–10, 14–18, 20]. The version of Clifford algebra formulated by D. Hestenes [15, 16] has further enhanced its computational efficiency and makes Clifford algebra applicable immediately to geometries as a geometric invariant method. Some applications of this version of Clifford algebra (called geometric algebra) to geometries can be found in [6, 7, 11–13, 25, 26]. In Appendix A there is a preliminary introduction to this version.
2. Vectorial Equations Solving
In this section we present some general formulas for vectorial equations solving in
Euclidean geometry. Among various Clifford algebraic models for geometry, we
use only the Clifford model and the Grassmann model. Their descriptions can be
found in Appendix A. We always use E^n to denote an n-dimensional Euclidean space.
The following formula is applicable to both models:

Type (0). uxv − y = 0, where u, v, y ≺ x are multivectors, x is the leading basic variable, and u, v, y do not involve x.

Solution:

    x = u⁻¹ y v⁻¹.       (2.1)

Nondegeneracy conditions: u, v are invertible.

Here u, v can take scalar values; therefore four different kinds of equations are included in this type:

    uxv − y = 0,
    ux − y = 0,
    xv − y = 0,       (2.2)
    x − y = 0.

The drawback of the form (2.1) is that it does not present explicit forms for u⁻¹, v⁻¹, which exist only in some special cases. In 2D, the solution can be written as

    x = ū y v̄ / ((u ū)(v v̄));       (2.3)

the nondegeneracy conditions are that the scalars u ū, v v̄ ≠ 0. In 3D, when u, v are even or odd, the solution can be written as

    x = ū y v̄ / ((u ū)(v v̄));       (2.4)

the nondegeneracy conditions are that the scalars u ū, v v̄ ≠ 0. Here ū denotes the conjugate of u, under which u ū is indeed a scalar in these cases.
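For a concrete instance of Type (0): in 2D the even-grade elements of the Clifford algebra behave like complex numbers, with the complex conjugate playing the role of ū, so ux − y = 0 is solved exactly as in (2.3) (a minimal Python sketch):

```python
# solve u*x - y = 0 for x as x = conj(u)*y / (u*conj(u)), cf. (2.3)
u, y = 3 + 4j, 1 - 2j
x = u.conjugate() * y / (u * u.conjugate()).real
assert abs(u * x - y) < 1e-12                # x indeed satisfies u*x = y
```
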

The following are formulas for the Clifford model Gn of E^n. Let X and the Ai be vectors, the cj be scalars, the Bk be bivectors, the Cl be trivectors, the Λm be homogeneous multivectors, the Kp be pseudovectors, and the Jq be pseudoscalars. Here i, j, k, l, m, p, q are indices. Also let the Ai, cj, Bk, Cl, Λm, Kp, Jq ≺ X; they do not involve X.
Type (1, 1). X · A1 − c1 = 0, X ∧ A2 − B2 = 0.
Solution:

    X = (c1 A2 − A1 · B2) / (A1 · A2).       (2.5)

Nondegeneracy condition: A1 · A2 ≠ 0.
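Type (1, 1) can be checked numerically in 3D, where a bivector X ∧ A2 can be encoded by the axial vector X × A2, and A1 · B2 then becomes −A1 × (X × A2) (NumPy sketch, my own encoding):

```python
import numpy as np

rng = np.random.default_rng(1)
X0, A1, A2 = rng.normal(size=(3, 3))

c1 = X0 @ A1                     # the scalar X . A1
b2 = np.cross(X0, A2)            # axial vector encoding the bivector B2 = X ^ A2
A1_dot_B2 = -np.cross(A1, b2)    # A1 . B2 = (A1 . X)A2 - (A1 . A2)X

X = (c1*A2 - A1_dot_B2) / (A1 @ A2)          # equation (2.5)
assert np.allclose(X, X0)
```
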
Type (1, 2). X · A1 − c1 = 0, . . . , X · An − cn = 0.
Solution:

    X = (Σ_{i=1}^{n} (−1)^{i−1} ci (A1 ∧ ⋯ ∧ Ǎi ∧ ⋯ ∧ An)~) / (A1 ∧ ⋯ ∧ An)~,       (2.6)

where Ǎi indicates that Ai is omitted from the outer product.

Nondegeneracy condition: A1 ∧ ⋯ ∧ An ≠ 0.
Type (1, 3). X A1 B1 = 0, X A2 B2 = 0.
There are two methods to solve it. The first method is used for n 3. Let
B = A1 A2 . Then the solution is

(B1 A2 ) B B1 B A2 + B2 B A1

X=
,
(a)
(2.7)
BB

(b)
B1 A2 + B2 A1 = 0.
The nondegeneracy condition is B 6= 0. This method is often preferred to the
next one in application, but has the drawback that in (2.7), the coefficient of
X, which is B B, is quadratic with respect to both A1 and A2 .

91

## MECHANICAL GEOMETRY THEOREM PROVING

The second method is used in 3D only, but it has the advantage that in the
solution, the coefficient of X is linear with respect to both A1 and A2 . Let Y
be a vector satisfying Y A1 A2 6= 0. Then the solution is

X = (B2 Y) A1 (A2 Y) B1 ,
(a)
(A1 A2 Y)
(2.8)

(b)
B1 A2 + B2 A1 = 0.
The nondegeneracy condition is A1 A2 Y 6= 0. This method can be used
when B B is complicated after expansion and a vector Y can be found to
simplify A1 A2 Y.
When interchanging the indices 1, 2 in (2.7), (2.8) (do not forget the indices in
B!), we obtain two more solutions that are different from, although equivalent
to, (2.7), (2.8). From this aspect, type (1, 3) is an ordered pair of equations. In
application, we usually choose a solution with fewer terms: if after expansion,
B1 A2 has more terms than B2 A1 , we interchange the indices 1, 2 in (2.7),
(2.8).
Type (1, 4). X A1 c1 = 0, . . . , X An1 cn1 = 0.
Parametric solution: let K = A1 An1 . Then
X = K +

n1
X
i=1

ci
(A1 Ai An1 ) K.
KK

(2.9)

## Here is the free parameter. The nondegeneracy condition is K 6= 0.

Type (1, 5). X A1 Ar 0 = 0, where r < n, 0 is an (r + 1)-vector.
Parametric solution: let 01 = A1 Ar . Then
X=

r
X
i=1

i A i +

0 01
.
01 01

(2.10)

## Here the s are free parameters. The nondegeneracy condition is 01 6= 0.

The following are formulas for the Grassmann model Gn+1 of E n . The leading
variable X below is always assumed to be a point vector.
Type (2, 1). X K J = 0, X B C = 0.
Solution:
X=

K (C) K B + J (B)
.
((B) K)

(2.11)

92

HONGBO LI

## Type (2, 2). X K1 J1 = 0, . . . , X Kn1 Jn1 = 0.

Solution: let A = K1 Kn1 . Then
A+
X=

n1
P
i=1

(A)

(2.12)

## Nondegeneracy condition: (A) 6= 0.

Type (2, 3). X B1 C1 = 0, X B2 C2 = 0.
There are two solving methods. The first is valid for n 4, but requires that
B1 be a blade: B1 = A1 A2 . Let B = (B1 ) (B2 ); Then the solution is

## X = ((C2 ) B (B1 ) + (C1 ) B (B2 )+

+ (A1 B2 ) B A2 (A2 B2 ) B A1
((C1 ) (B2 )) B)/B B,

(a)

(2.13)

(b)

## The nondegeneracy condition is B 6= 0. Notice that in (2.13), the coefficient

of X is quadratic with respect to both B1 and B2 . When n = 4, this method is
better avoided.
The second method is valid only for n = 4, but in the solution, the coefficient
of X is linear with respect to both B1 and B2 , and inner products do not occur
in the solution. Let Y be a vector satisfying (B1 ) B2 Y 6= 0. Then the
solution is

## X = (B2 Z) ((C1 ) B1 ) + (C2 Z) (B1 ) ,

((B1 ) B2 Z)

C1 (B2 ) + C2 (B1 ) = B1 B2 .

(a)

(2.14)

(b)

## The nondegeneracy condition is (B1 ) B2 Y 6= 0.

Type (2, 3) is also an ordered pair of equations, as different orders lead to
different but equivalent solutions. In application, we usually choose a solution
with fewer terms: if after expansion, C1 (B2 ) has more terms than C2
(B1 ), we interchange the indices in (2.7), (2.8).
In implementation, the user can be required to provide the reference vector
Y from the set of basic vector variables; the default reference vector is the
lowest-ordered basic vector variable satisfying the nondegeneracy condition.
Type (2, 4). X A0 Ar 0 = 0, where r < n 1, 0 is an (r + 2)-vector.

93

## Parametric solution: let 01 = (A0 Ar ) 6= 0; Then at least one of the

(Ai ) is nonzero; assuming (A0 ) 6= 0, we have
A0 +

r
P

i (A0 Ai )

i=1

X=

(A0 )

(0) 01
.
01 01

(2.15)

Here the s are free parameters. The nondegeneracy conditions are 01 , (A0 )
6= 0.
The reader may add more formulas to the above list for better performance of
the vectorial equations solving method. Besides these general formulas, for each
specific dimension there are specific geometric entities and constraints, and correspondingly there are specific formulas to solve specific vectorial equations. We
present two examples below.
Special type (a). X2 2X O + 2A O A2 = 0, where A, O X are vectors in
the Clifford model G2 of Euclidean plane.
Parametric solution:
X=A+

2
((O A) + (O A) ).
1 + 2

(2.16)

## Here is the free parameter. The nondegeneracy condition is X 6= A.

This type of equation describes the constraint that X, A are on a circle with
center O, i.e., (X O)2 = (A O)2 . The solution provides a rational parametrization of X.
P
Special type (b) (Cartans Lemma in differential geometry [1]). ri=1 i 2i =
0, where 1 , . . . , r 21 , . . . , 2r are 1-forms.
Parametric solution:
2i =

r
X

ij j , i = 1, . . . , r,

(2.17)

j =1

## where ij = j i for 1 i, j r are free parameters. The nondegeneracy

condition is 1 r 6= 0.
Types (1, 4), (1, 5), (2, 4), (a), (b) are called parametric solvable types because free parameters appear in the solutions. The free parameters will be ordered
immediately precedent to the leading basic variable in every type. For (2.17),
1 , . . . , r 11 , . . . , rr 21 , . . . , 2r .

94

HONGBO LI

## 3. Theorem Proving via Vectorial Equations Solving

First we give some definitions. In Clifford algebra there are many operators, and
many are defined by simpler ones. For the purpose of theorem proving, it seems
that the inner, outer, geometric products and dual are the simplest operators. We
call them basic operators. By unification we mean replacing all operators with
basic ones by their definitions. By expansion we mean expanding all multilinear
operators. By explosion we mean distributing an operator among the arguments in
another operator. For example, in the last two rules of (A.1.4), the inner product is
distributed among the arguments in the outer product.
Let pol be a polynomial whose variables are multivectors, and let the leading
variable be lv. When pol involves only scalar variables, we call pol = 0 a scalar
equation, otherwise we call it a vectorial equation. A variable that is not a function
of other variables is called a basic variable. For example, given vector variables
X, Y, then X Y, X Y, XY are variables but not basic ones; X, Y are basic
variables. For pol, let the leading basic variable be X. Then X is either equal to
lv or contained in lv. By vectorial equations solving we mean solving the leading
basic variable from a set of vectorial equations that have the same leading basic
variable. If we allow free parameters to appear in the solutions, we call the process
parametric equations solving.
Let there be two equations pol1 = 0 and pol2 = 0; let the leading variable of
pol1 be lv1 . When pol1 is linear with respect to lv1 , and lv1 is part of some variable
in pol2 , we can replace every lv1 in pol2 by its expression derived from pol1 = 0,
thus change pol2 = 0 into an equation without lv1 . This is called the substitution
of pol1 into pol2 . For example, given basic vectors Y, Z X and two equations
X Y = B, X Y Z = C, the substitution of the first equation into the second
one results in B Z = C. When pol1 is nonlinear with respect to lv1 , and both
pol1 and lv1 are scalar-valued, then we can pseudo-divide pol2 with pol1 , thus
reduce the degree of lv1 in pol2 . The pseudo-substitution of pol2 by pol1 refers to
the substitution when pol1 is linear, or the pseudo-substitution when pol1 and lv1
are scalar-valued.
Given a polynomial pol and a set of polynomials pols, pol is said to be reduced
(or pseudo-reduced) with respect to pols, if it (or its leading term) is unchanged
after being pseudo-substituted by pols, that is, by every polynomial in pols. A
polynomial set pols is said to be reduced (or pseudo-reduced) if every polynomial
in it is reduced (or pseudo-reduced) with respect to the rest of pols.
Now we are ready to present the following program for theorem proving:
Input: MODEL (the Clifford algebraic model to be used, which is either Clifford
or Grassmann, and the dimension of the model), ord (the order sequence of basic
variables), hyp (the set of equalities in the hypotheses), ineq (the set of inequalities
in the hypothesis), conc (the conclusion, sometimes not available). All elements in
hyp, ineq, conc are assumed to be polynomial (in)equalities.

95

## Step 1. Algebraic transformations in hyp and conc, which include Unification,

Expansion, Explosion (UEE).
hyp := U EE(hyp),

conc := U EE(conc).

(3.1)

Step 2. Information report in hyp and conc: for each equation pol = 0, its
information for pseudo-substitution and equations solving are analyzed; then the
expression pol and the information form an object, which will replace the original equation. The information for pseudo-substitution contains at least the leading
basic variable, and the leading variable, the polynomial degree, and the leading
coefficient. The information for equations solving contains at least the leading basic
variable, the function type of the leading variable, the collected leading variable,
and the leading coefficient. For example, let B C A be basic vectors; then
for A B + 2A C, the leading basic variable is A. For pseudo-substitution, the
leading variable is A C, the polynomial degree is 1, the leading coefficient is 2.
For equations solving, the function type of the leading variable is an outer product
with grade 2, the collected leading variable is A (B + 2C), the leading coefficient
is 1.
Step 3. Sorting in the hypothesis: the expressions in hyp are sorted descendingly
for pseudo-substitution. Let pol1 , pol2 be two expressions in hyp, with leading
basic variables lbv1 , lbv2 , leading variables lv1 , lv2 , polynomial degrees d1 , d2 ,
leading coefficients lc1 , lc2 , numbers of terms ntm1 , ntm2 , respectively. Then

lbv1 lbv2 , or

H pol1 pol2 .

(3.2)

## lbv1 lbv2 , lv1 = lv2 , d1 = d2 , lc1 = lc2 , ntm1 ntm2

For vectors B1 Bn1 A, where n is the dimension of the model, we
define the following three groups of orders:

(1) A A B1 A Bn

A B1 B2 A Bn1 Bn

A B1 Bn1 A B2 Bn
(3.3)

B
,

1
n

(2) A A B1 A Bn ,

(3) A A B1 A Bn A A.
One can define any order among the three groups. In our implementation we choose
the following order: geometric products outer products inner products.
Step 4. Triangulation. No parametric equations solving is allowed in this step.
The following are the pseudo-codes for triangulate(hyp), which returns a descend-

96

HONGBO LI

## ing sequence of objects (polynomials and their information), denoted by tri-seq

(triangular sequence).
function triangulate
local tri, i;
global ord, pseudo-subst, solvable, solve;
tri := pseudo-subst (args);
for i from #(ord) to 1 by 1 do
{
if solvable(ith element in ord, tri) then
{
tri := solve(ith element in ord, tri);
tri := pseudo-subst(tri)
}
end if
}
end do;
RETURN(tri)
end function
The function pseudo-subst(a1 , . . . , am ), where the as are a descending sequence
of objects, is pseudo-substitution inside a polynomial set, which returns a descending sequence of reduced objects (reduced polynomials and their information).
function pseudo-subst
local pols, red, old, new;
global one-subst, eqn-simplify, insert;
if args[1] = 0 then
RETURN(pseudo-subst(args[2 . . 1]))
else if nargs 1 then
RETURN(args)
else
pols := [args]
end if;
do
{
old := pols[1]; red := pols[2 . . 1];
for i to #(red) do
{
new := one-subst(old, ith element in red);
if new = 0 then
RETURN(pseudo-subst(red ))
else if new 6= old then
old := eqn-simplify(new)
end if
}

## MECHANICAL GEOMETRY THEOREM PROVING

97

end do;
if old 6= pols[1] then
pols := insert (old, red)
end if
} while (old 6= pols[1])
end do;
RETURN(old, pseudo-subst(red )) end function
The function one-subst(a, b) returns the result of a pseudo-substituted by b.
In differential case, differentiation may be needed before the algebraic pseudosubstitution. The function eqn-simplify(a) removes nonzero factors from polynomial a. The function insert(a, b1 , . . . , bm ) inserts a to the bs so that the result is a
descending sequence.
The function solvable(A, a1 , . . . , am ), where A is a basic variable, the ai are a
descending sequence of objects, tests whether or not A is solvable from the ai = 0.
It returns true when a solvable type for A exists and the corresponding nondegeneracy conditions are satisfied. For example, if pol 6= 0 is a nondenegeracy condition,
then if pol is not reduced to zero by the ai , we say the nondegeneracy condition is
satisfied.
The function solve(A, a1 , . . . , am ) solves basic variable A from the descending
sequence ai = 0. It returns a descending sequence of objects containing both the
solutions and the unused objects:
function solve
local stype, ptype, A, eqns, soln, i;
global MODEL, PARASOLVE, solve-type;
stype := #(solvable types in current model);
A := args[1]; eqns := args[2 . . 1];
for i to stype do
{
soln := solve-type(MODEL, i, A, eqns);
if soln 6= eqns then RETURN(soln)
end if
}
end do;
if PARASOLVE then
{
ptype := #(parametric solvable types in current model);
for i to ptype do
{
soln := solve-type(MODEL, stype+i, A, eqns);
if soln 6= eqns then RETURN(soln)
end if
}
end do
}

98

HONGBO LI

end if;
RETURN(eqns)
end function
PARASOLVE is a global logic variable: when it is true, then parametric equations solving is allowed. The function solve-type(MODEL, k, A, a1 , . . . , am ) solves
A from the ai = 0 using the kth solvable type in MODEL. It returns a descending
sequence of solutions and unused objects. The order in which the solvable types
are arranged is very important. In our implementation, for both models we use the
order in which the solvable types are presented in Section 2. Section 4 will provide
more details on the implementation of pseudo-subst.
Summing up, after this step we have
tri-seq := triangulate(hyp).

(3.4)

## Step 5. Proving by the triangular sequence. If the conclusion is reduced to

zero by tri-seq through pseudo-substitution, the program finishes proving the theorem under some nondegeneracy conditions, which arise from eqn-simplify and
equations solving, in addition to the leading coefficients and separants in tri-seq.
Step 6. Parametric triangulation. Free parameters are allowed in equations solving:
PARASOLVE := true; partri-seq := triangulate(tri-seq).

(3.5)

## Step 7. Proving by the parametric triangular sequence. If the conclusion is

reduced to zero, the program finishes proving the theorem under some nondegeneracy conditions.
Step 8. Coordinate triangulation. Select a global coordinate system (Cartesian, affine, polar, etc.), then represent every nonscalar basic variable by its coordinates, and every nonscalar equation by a set of scalar ones. There is a nondegeneracy condition for coordinatization: for a Cartesian coordinate system, the basis vectors must be linearly independent; for an affine coordinate system, the basis point vectors must be affinely independent. Now apply Wu's method to partri-seq to compute a main characteristic set, which we call the coordinate triangular sequence.
Step 9. Proving by the coordinate triangular sequence. If the conclusion is reduced to zero, the program finishes proving the theorem under some nondegeneracy conditions; otherwise algebraic factorization may be needed to obtain irreducible characteristic sets. Using irreducible characteristic sets, we can always decide whether the theorem is true or not, according to Wu's mechanical theorem proving principle [34].
This is the end of the program. Its correctness can be derived from Wu's well-ordering principle, as vectorial equations solving can be realized by a sequence of pseudo-divisions that reduce the order of polynomials in hyp.
We illustrate the above program with examples from solid geometry. The examples have been tested by our implementation with Maple V Release 5.

EXAMPLE 1 (selected from [3]). Let ABCD be a tetrahedron. A plane that is parallel to both AB and CD cuts the tetrahedron into two parts, with P, Q, R, S as the intersections inside the rectilinear segments AD, AC, BC, BD, respectively. Let the ratio of the distances from the plane to line AB and to line CD be r > 0. Find the ratio of the volumes of the two parts.
For this example, the Grassmann model G4 for the space is appropriate, as only parallelism and concurrence relations occur in the hypothesis. The cutting plane is represented by P ∧ Q ∧ R. The hypothesis is

(A − B) ∧ P ∧ Q ∧ R = 0,              AB is parallel to plane PQR
(C − D) ∧ P ∧ Q ∧ R = 0,              CD is parallel to plane PQR
A ∧ P ∧ Q ∧ R = −r D ∧ P ∧ Q ∧ R,     the ratio of distances is r
P ∧ Q ∧ R ∧ S = 0,                    PQRS is a plane
A ∧ D ∧ P = 0,                        A, D, P are collinear
A ∧ C ∧ Q = 0,                        A, C, Q are collinear
B ∧ C ∧ R = 0,                        B, C, R are collinear
B ∧ D ∧ S = 0,                        B, D, S are collinear
A ∧ B ∧ C ∧ D ≠ 0,                    A, B, C, D are not coplanar
D ∧ P ∧ Q ∧ R ≠ 0,                    D, P, Q, R are not coplanar
r > 0.

This example does not have a conclusion part. Let s be the required ratio. It has the following expression:

s = (A ∧ P ∧ Q ∧ R + A ∧ P ∧ R ∧ S + A ∧ R ∧ B ∧ S)/(D ∧ P ∧ Q ∧ R + D ∧ P ∧ R ∧ S + D ∧ Q ∧ C ∧ R).    (3.6)

Figure 1. Example 1.

We need a triangular sequence to compute s. Below we show the triangulation step by step. We use the following order for the basic variables:

r ≺ P ≺ Q ≺ R ≺ D ≺ S ≺ C ≺ B ≺ A.
Step 1. Expansion: the brackets in the first two equations of the hypothesis are
broken up.
Step 2, 3. Reporting and ordering: the hypothesis becomes

A ∧ P ∧ Q ∧ R − B ∧ P ∧ Q ∧ R,      (a)
A ∧ P ∧ Q ∧ R + r D ∧ P ∧ Q ∧ R,    (b)
A ∧ C ∧ Q,                          (c)
A ∧ D ∧ P,                          (d)
B ∧ C ∧ R,                          (e)    (3.7)
B ∧ D ∧ S,                          (f)
C ∧ P ∧ Q ∧ R − D ∧ P ∧ Q ∧ R,      (g)
P ∧ Q ∧ R ∧ S.                      (h)
The attached information is

(3.7a): A [A ∧ P ∧ Q ∧ R, 1, 1]    [4, P ∧ Q ∧ R, B ∧ P ∧ Q ∧ R]
(3.7b): A [A ∧ P ∧ Q ∧ R, 1, 1]    [4, P ∧ Q ∧ R, −r D ∧ P ∧ Q ∧ R]
(3.7c): A [A ∧ C ∧ Q, 1, 1]        [3, C ∧ Q, 0]
(3.7d): A [A ∧ D ∧ P, 1, 1]        [3, D ∧ P, 0]
(3.7e): B [B ∧ C ∧ R, 1, 1]        [3, C ∧ R, 0]
(3.7f): B [B ∧ D ∧ S, 1, 1]        [3, D ∧ S, 0]
(3.7g): C [C ∧ P ∧ Q ∧ R, 1, 1]    [4, P ∧ Q ∧ R, D ∧ P ∧ Q ∧ R]
(3.7h): S [P ∧ Q ∧ R ∧ S, 1, 1]    [4, P ∧ Q ∧ R, 0]

Here, each line contains the expression (polynomial) index, the leading basic variable, information for pseudo-substitution, and information for equations solving.
The information for pseudo-substitution contains the leading variable, the polynomial degree, and the leading coefficient. The information for equations solving
contains the function type of the leading variable (an outer product whose grade
is specified by the number), the residual of the collected leading variable after
moving the leading basic variable to the leftmost position and then dropping it,
and the negative of the residual after dropping the collected leading term from the
expression.
Step 4. Triangulating:
Pseudo-substitution. (3.7a) is replaced by

B ∧ P ∧ Q ∧ R + r D ∧ P ∧ Q ∧ R.    (3.7i)

Equations solving for A. Type (2, 1): (3.7b), (3.7c). Solution:

(C ∧ P ∧ Q ∧ R) A − (C ∧ P ∧ Q ∧ R) Q − r(D ∧ P ∧ Q ∧ R) Q + r(D ∧ P ∧ Q ∧ R) C.    (3.7j)

The nondegeneracy condition is C ∧ P ∧ Q ∧ R ≠ 0.
Type (2, 1): (3.7b), (3.7d). Solution:

A − P − rP + rD,    (3.7k)

where a nonzero factor (D ∧ P ∧ Q ∧ R) has been removed by eqn-simplify, which is in solve-type. The nondegeneracy condition is D ∧ P ∧ Q ∧ R ≠ 0.
Pseudo-substitution. (3.7j) is replaced by

rC − rD + P + rP − Q − rQ.    (3.7l)

(3.7g) is deleted because it is changed to 0.
Now hyp is composed of (3.7k), (3.7i), (3.7e), (3.7f), (3.7l), (3.7h).
Equations solving for B. Type (2, 1): (3.7i), (3.7e). Solution:

(C ∧ P ∧ Q ∧ R) B − (C ∧ P ∧ Q ∧ R) R − r(D ∧ P ∧ Q ∧ R) R + r(D ∧ P ∧ Q ∧ R) C.    (3.7m)

Nondegeneracy condition: C ∧ P ∧ Q ∧ R ≠ 0.
Type (2, 1): (3.7i), (3.7f). Solution:

(D ∧ P ∧ Q ∧ R) B + (P ∧ Q ∧ R ∧ S) B − (D ∧ P ∧ Q ∧ R) S − (P ∧ Q ∧ R ∧ S) D − r(D ∧ P ∧ Q ∧ R) S + r(D ∧ P ∧ Q ∧ R) D.    (3.7n)

Nondegeneracy condition: D ∧ P ∧ Q ∧ R + P ∧ Q ∧ R ∧ S ≠ 0.
Pseudo-substitution. (3.7m) is replaced by

B + rD − R − rR + Q + rQ − rP − P.    (3.7o)

(3.7n) is replaced by

S − P + Q − R,    (3.7p)

where a nonzero factor r + 1 has been removed. (3.7h) is deleted because it is changed to 0.
Now hyp is composed of (3.7k), (3.7o), (3.7l), (3.7p).
Equations solving for C, S, D, R, Q, P, r. None.
The triangular sequence is

A − P − rP + rD,
B + rD − R − rR + Q + rQ − rP − P,    (3.8)
rC − rD + P + rP − Q − rQ,
S − P + Q − R.

The nondegeneracy conditions are

D ∧ P ∧ Q ∧ R, r, r + 1 ≠ 0,    (3.9)

which are all guaranteed by the original hypothesis.

Step 5. Substituting the triangular sequence into the expression for s, we obtain

s = r^2 (r + 3)/(3r + 1),    (3.10)

which is the required expression for the ratio.
In later examples we no longer present details on the information attached to each expression.
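The result (3.10) can be spot-checked with coordinates, used here only for verification (the derivation above is coordinate-free): rebuild P, Q, R, S from the triangular sequence (3.8) for an arbitrary tetrahedron and compare the volume ratio with r^2(r + 3)/(3r + 1). The helper functions are ad hoc, not part of the paper's implementation.

```python
# Numeric check of (3.10): place A, B, C, D anywhere, build P, Q, R, S from
# the triangular sequence (3.8), and compare the two-piece volume ratio with
# r^2 (r + 3) / (3r + 1).  Exact rational arithmetic avoids rounding issues.
from fractions import Fraction as F

def vol(p, q, u, w):
    """Signed volume (times 6) of the tetrahedron (p, q, u, w)."""
    a = [[q[i]-p[i] for i in range(3)],
         [u[i]-p[i] for i in range(3)],
         [w[i]-p[i] for i in range(3)]]
    return (a[0][0]*(a[1][1]*a[2][2]-a[1][2]*a[2][1])
          - a[0][1]*(a[1][0]*a[2][2]-a[1][2]*a[2][0])
          + a[0][2]*(a[1][0]*a[2][1]-a[1][1]*a[2][0]))

A, B, C, D = (0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)
r = F(2)
lin = lambda *cv: tuple(sum(c*p[i] for c, p in cv) for i in range(3))
P = lin((1/(1+r), A), (r/(1+r), D))                   # from (3.7k)
Q = lin((r/(1+r), C), (-r/(1+r), D), (1, P))          # from (3.7l)
R = lin((1/(1+r), B), (r/(1+r), D), (1, Q), (-1, P))  # from (3.7o)
S = lin((1, P), (-1, Q), (1, R))                      # from (3.7p)
num = abs(vol(A,P,Q,R)) + abs(vol(A,P,R,S)) + abs(vol(A,R,B,S))
den = abs(vol(D,P,Q,R)) + abs(vol(D,P,R,S)) + abs(vol(D,Q,C,R))
print(num / den == r**2 * (r + 3) / (3*r + 1))   # True
```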

EXAMPLE 2 (Monge's theorem, selected from [3]). The six planes passing through the midpoints of the six edges of a tetrahedron ABCD and perpendicular to the respective opposite edges have a point in common.
Let X be the common point of the three planes passing through the midpoints of AB, BC, AC, respectively. We want to prove that X is on the other three planes.
We use the Clifford model G3 for the space, as orthogonality occurs in the hypothesis. The hypothesis is


(X − (A + B)/2) · (C − D) = 0,    X − (A + B)/2 ⊥ CD
(X − (B + C)/2) · (A − D) = 0,    X − (B + C)/2 ⊥ AD
(X − (A + C)/2) · (B − D) = 0,    X − (A + C)/2 ⊥ BD
(A − D) ∧ (B − D) ∧ (C − D) ≠ 0,  A, B, C, D are not coplanar.

Figure 2. Example 2.

The conclusion is

(X − (A + D)/2) · (B − C) = 0,
(X − (B + D)/2) · (A − C) = 0,
(X − (C + D)/2) · (A − B) = 0.

The order is D ≺ A ≺ B ≺ C ≺ X. Let D be the observer (i.e., D = 0).
Step 1. Expansion: the hypothesis becomes

2A · X − A · B − A · C = 0,    (a)
2B · X − A · B − B · C = 0,    (b)    (3.11)
2C · X − A · C − B · C = 0.    (c)

Step 2, 3. Reporting and ordering: omitted.
Step 4. Triangulating: no pseudo-substitution is done; the three equations in (3.11) form exactly the solvable type (1, 2). The solution is

2X(A ∧ B ∧ C) − (A · B + A · C) B ∧ C + (A · B + B · C) A ∧ C − (A · C + B · C) A ∧ B.    (3.11d)

The nondegeneracy condition is A ∧ B ∧ C ≠ 0.
The triangular sequence is (3.11d). The nondegeneracy condition is guaranteed by the original hypothesis.
Step 5. Proving: all three equalities in the conclusion are verified trivially by substituting (3.11d) into them.
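A coordinate spot-check of this example (again, coordinates are only for verification): with D as the origin, the three hypothesis equations are an ordinary linear system in X, and the conclusion equations vanish for its solution. The point values and helpers below are arbitrary choices.

```python
# Take D = (0, 0, 0), solve 2X.A = A.B + A.C, 2X.B = A.B + B.C,
# 2X.C = A.C + B.C for X by Cramer's rule, then verify conclusions such as
# (X - (A + D)/2) . (B - C) = 0.
A, B, C = (4.0, 0.0, 1.0), (0.0, 3.0, 1.0), (1.0, 1.0, 5.0)

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def det3(m):
    return (m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
          - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
          + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]))

M = [list(A), list(B), list(C)]            # invertible iff A ^ B ^ C != 0
rhs = [(dot(A, B) + dot(A, C)) / 2,
       (dot(A, B) + dot(B, C)) / 2,
       (dot(A, C) + dot(B, C)) / 2]
d = det3(M)
X = tuple(det3([row[:k] + [rhs[i]] + row[k+1:] for i, row in enumerate(M)]) / d
          for k in range(3))

diff = lambda u, v: tuple(x - y for x, y in zip(u, v))
half = lambda u: tuple(x / 2 for x in u)
ok = all(abs(dot(diff(X, half(P)), diff(Q, R))) < 1e-9
         for P, Q, R in [(A, B, C), (B, A, C), (C, A, B)])
print(ok)   # True
```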
EXAMPLE 3. Let ABCD be a tetrahedron. There is a plane intersecting the lines AB, AC, DC, DB at M, N, E, F, respectively. If, when the plane moves, MNEF is always a parallelogram, then the center O of the parallelogram is always on a fixed straight line.
In the Grassmann model G4 for the space, the hypothesis is

M − N = F − E,        MNEF is a parallelogram
A ∧ B ∧ M = 0,        A, B, M are collinear
A ∧ C ∧ N = 0,        A, C, N are collinear
C ∧ D ∧ E = 0,        C, D, E are collinear    (3.12)
B ∧ D ∧ F = 0,        B, D, F are collinear
O = (M + E)/2,        O is the midpoint of ME
A ∧ B ∧ C ∧ D ≠ 0,    A, B, C, D are not coplanar.

The conclusion cannot be algebraized.

Figure 3. Example 3.

The order is A ≺ B ≺ C ≺ D ≺ M ≺ N ≺ E ≺ F ≺ O.

Step 1, 2, 3. Preparations for triangulation: the hypothesis becomes

2O − M − E,        (a)
B ∧ D ∧ F,         (b)
F − E − M + N,     (c)    (3.13)
C ∧ D ∧ E,         (d)
A ∧ C ∧ N,         (e)
A ∧ B ∧ M.         (f)

Step 4. Triangulation:
Pseudo-substitution. (3.13b) is replaced by

B ∧ D ∧ E + B ∧ D ∧ M − B ∧ D ∧ N.    (3.13g)

Equations solving for O, F. None.
Equations solving for E. Type (2, 3): (3.13d), (3.13g), which is an ordered pair. Solution:

E(A ∧ B ∧ C ∧ D) − D(A ∧ B ∧ C ∧ D) − D(A ∧ B ∧ D ∧ N) + D(A ∧ B ∧ D ∧ M) + C(A ∧ B ∧ D ∧ N) − C(A ∧ B ∧ D ∧ M),    (3.13h)

B ∧ C ∧ D ∧ N − B ∧ C ∧ D ∧ M,    (3.13i)

where (2.14) and the default reference vector A are used. Let K = A ∧ B ∧ C ∧ D. The nondegeneracy condition is K ≠ 0.

Pseudo-substitution. (3.13h) is replaced by

E(A ∧ B ∧ C ∧ D) − D(A ∧ B ∧ C ∧ D) − D(A ∧ B ∧ D ∧ N) + C(A ∧ B ∧ D ∧ N).    (3.13j)

(3.13a), (3.13c) are also changed by the substitution of (3.13h). The new expressions are omitted here and are referred to by (3.13a), (3.13c).
Equations solving for N. Type (2, 1): (3.13e), (3.13i). Solution:

N(A ∧ B ∧ C ∧ D) − C(A ∧ B ∧ C ∧ D) − C(B ∧ C ∧ D ∧ M) + A(B ∧ C ∧ D ∧ M).    (3.13k)

Nondegeneracy condition: K ≠ 0.
Pseudo-substitution. (3.13a), (3.13c), (3.13j) are changed by the substitution of (3.13k). The new expressions are omitted here and are referred to by (3.13a), (3.13c), (3.13j), respectively.
Equations solving for M, D, C, B, A. None.
The triangular sequence is

(3.13a), (3.13c), (3.13j), (3.13k), (3.13f).    (3.14)

The nondegeneracy condition is K ≠ 0, which is in the original hypothesis. The conclusion cannot be obtained from the triangular sequence, as M does not have an explicit expression.
Step 5. Parametric triangulation:
Pseudo-substitution. None.
Equations solving for O, F, E, N. None.
Equations solving for M. Type (2, 4): (3.13f). Solution:

M − A + λA − λB,    (3.13l)

where λ is a parameter. The nondegeneracy condition is B ∧ A ≠ 0.
Pseudo-substitution. (3.13a), (3.13c), (3.13j), (3.13k) are replaced by

2O − A − D + λA + λD − λB − λC,    (3.13m)

F − D + λD − λB,    (3.13n)

E − D + λD − λC,    (3.13o)

N − A + λA − λC.    (3.13p)

Equations solving for λ, D, C, B, A. None.
The parametric triangular sequence is

(3.13m), (3.13n), (3.13o), (3.13p), (3.13l).    (3.15)

The nondegeneracy conditions are

A ∧ B ∧ C ∧ D, B ∧ A ≠ 0,    (3.16)

which are all guaranteed by the original hypothesis.

The conclusion is obvious if we write (3.13m) as

O = (1 − λ)(A + D)/2 + λ(B + C)/2.    (3.17)

That is, O is on the line passing through the midpoint of AD and the midpoint of BC.

4. Implementation
In this section, first we present some details on our implementation of the vectorial
equations solving method with Maple V Release 5 on a Pentium Pro/200MHz PC.
Then we discuss some phenomena observed when applying the method to prove theorems.
The following are some details on the implementation:
(1) A Clifford algebra calculator is necessary to carry out the symbolic computation. The calculator should at least include the grade computation and the basic operators, such as the inner product, outer product, geometric product, and dual. The basic properties of each operator, such as multilinearity, associativity, and commutativity, should be included in the definition of the operator. In our implementation, for a multilinear operator f, the following is implemented in its definition:

f(λ1 a1, . . . , λm am) = λ1 ··· λm f(a1, . . . , am).

The following property is carried out through the function expand:

f(a1, . . . , (λ + μ)ai, . . . , am) = λ f(a1, . . . , am) + μ f(a1, . . . , am).
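The scalar-factor rule above can be mimicked in a few lines; the pair representation of arguments below is an illustrative assumption, not the paper's Maple data structure.

```python
# A minimal sketch of a multilinear operator that pulls scalar factors out
# of its arguments: f(l1*a1, ..., lm*am) = l1*...*lm * f(a1, ..., am).
# Arguments are modeled as (coefficient, symbol) pairs; "wedge" is a
# hypothetical stand-in for any multilinear product.
from functools import reduce

def wedge(*args):
    """Collect scalar factors, then apply the operator to the bare arguments."""
    coeff = reduce(lambda p, a: p * a[0], args, 1)   # product of the scalars
    symbols = tuple(a[1] for a in args)              # the bare arguments
    return (coeff, symbols)

# f(2A, 3B) = 6 f(A, B)
print(wedge((2, "A"), (3, "B")))   # (6, ('A', 'B'))
```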
(2) In pseudo-substitution, we required the output to be a sequence of reduced polynomials. We can replace this with the weaker requirement that the polynomials be pseudo-reduced; this can provide significant simplification in some cases.
(3) In our implementation, the function solvable is integrated into the function
solve, so that whenever a set of equations is detected to be solvable, the solution
and the nondegeneracy conditions are provided. The function solve-type solves an
equation set in two modes, depending on the solvable type. Given an equation
set and a solvable type, there can be more than one equation subset that belongs

to the solvable type. The first solving mode is called solve-one mode, in which
only one equation subset is used, and only one solution is produced. Obviously,
the subset that creates the simplest solution provides the best choice. The second
solving mode is called solve-all mode, in which more than one equation subset is
used, and more than one solution is produced. The subsets are selected to form a
sequence in which every subset contains an equation that is not contained in any preceding subset. The union of the selected subsets should equal the union of all
subsets that belong to the solvable type. In our implementation, we use the second
mode for types (0), (1, 1) and (2, 1), the first mode for the remaining types. This
two-mode solving technique helps to simplify triangulation.
(4) In the function eqn-simplify, we use given inequalities, factorization, and some simple reasoning to decide whether an expression is always nonzero. For example, if X ∧ Y ∧ Z is assumed to be nonzero by the given hypothesis, where X, Y, Z are vectors, then X, Y, Z, X ∧ Y, Y ∧ Z, X ∧ Z are all nonzero, and λ1 X + λ2 Y + λ3 Z is nonzero if one of the scalars λi is nonzero.
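This kind of reasoning is easy to sketch; the helper names below are hypothetical, not the paper's eqn-simplify code.

```python
# A toy sketch of the eqn-simplify reasoning: if X ^ Y ^ Z is known nonzero,
# then X, Y, Z are linearly independent, so every sub-blade is nonzero and a
# combination l1*X + l2*Y + l3*Z is nonzero whenever some li != 0.
known_nonzero_blade = ("X", "Y", "Z")   # X ^ Y ^ Z != 0 is given

def subblade_nonzero(vectors):
    """A wedge of a subset of an independent set is nonzero."""
    return set(vectors) <= set(known_nonzero_blade)

def combination_nonzero(coeffs):
    """l1*X + l2*Y + l3*Z over independent X, Y, Z is nonzero iff some li != 0."""
    return any(c != 0 for c in coeffs)

print(subblade_nonzero(("X", "Z")))    # True:  X ^ Z != 0
print(combination_nonzero((0, 5, 0)))  # True:  5*Y != 0
```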
(5) The nondegeneracy conditions that are neither leading coefficients nor separants of the triangular sequence are simplified by pseudo-substitution with respect to the triangular sequence. Then all nondegeneracy conditions are simplified by eqn-simplify.
We have implemented the vectorial equations solving method for plane geometry, solid geometry, and differential geometry and have proved some theorems
in hyperbolic geometry and spherical geometry by hand. In solid geometry, we
have applied the above-mentioned implementation to prove some typical theorems
without using coordinates. The polynomials that occur during the triangulation are
short (usually fewer than 100 terms, never more than 1,000), and the computation time ranges from several seconds to several minutes. The examples in Section 3
can illustrate how geometric meaning is kept in the triangular sequence, and how
nongeometric nondegeneracy conditions are avoided or reduced in number.
In plane geometry [22], we use Mathematica 1.2 to implement the method.
We use both the Clifford model and the Grassmann model for algebraization:
the Grassmann model is used for theorems involving intersection and parallelism;
the Clifford model is used when perpendicularity, distances, angles, or circles are
involved. For a theorem represented in the Grassmann model, after a triangular
sequence is computed, the proving by pseudo-substituting the conclusion with the
triangular sequence is similar to the proving by the signed area method, after we
add a substitution technique to our pseudo-substitution function. The technique is a
fundamental rule in the signed area method: let M be the intersection of lines AB,
CD. Then
CM/DM = S_CAB/S_DAB,    CM/CD = S_CAB/S_CADB.    (4.1)

In Clifford algebra language (see Appendix C), this rule means that for point vectors A, B, C, D, M, if M satisfies

M ∧ A ∧ B = 0,    M ∧ C ∧ D = 0,    (4.2)

then

CM/DM = (C ∧ A ∧ B)/(D ∧ A ∧ B),    CM/CD = (C ∧ A ∧ B)/((C − D) ∧ (A ∧ B)).    (4.3)

We know that (4.2) is of the solvable type (2, 2), whose solution is

M = [(A ∧ B) ∨ (C ∧ D)]/∂[(A ∧ B) ∨ (C ∧ D)] = [(C ∧ A ∧ B) D − (D ∧ A ∧ B) C]/[(C ∧ A ∧ B) − (D ∧ A ∧ B)].    (4.4)
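The explicit form of (4.4) can be checked numerically in the plane Grassmann model, where a point is the homogeneous triple (x, y, 1) and a blade such as C ∧ A ∧ B is the determinant |C; A; B|. The code is only a verification sketch; coordinates play no role in the method itself.

```python
# Numeric check of (4.4): the intersection M of lines AB and CD is
# ((C^A^B) D - (D^A^B) C) / ((C^A^B) - (D^A^B)), with points as rows
# (x, y, 1) and C^A^B the 3x3 determinant |C; A; B|.

def det3(u, v, w):
    """Determinant of the matrix with rows u, v, w: the blade u ^ v ^ w."""
    return (u[0]*(v[1]*w[2] - v[2]*w[1])
          - u[1]*(v[0]*w[2] - v[2]*w[0])
          + u[2]*(v[0]*w[1] - v[1]*w[0]))

def intersect(A, B, C, D):
    """Intersection of lines AB and CD by formula (4.4)."""
    cab, dab = det3(C, A, B), det3(D, A, B)
    s = cab - dab                  # the boundary (scalar) part; must be nonzero
    return tuple((cab*d - dab*c) / s for c, d in zip(C, D))

A, B = (0, 0, 1), (1, 0, 1)        # the x-axis
C, D = (0, 1, 1), (1, -1, 1)       # a line crossing it at (1/2, 0)
print(intersect(A, B, C, D))       # (0.5, 0.0, 1.0)
```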

Therefore, substituting the rule (4.1) into an expression is equivalent to solving M from (4.2) and then substituting the solution into the expression. Similar correspondences can be set up for other fundamental rules in the signed area method, which explains the similarity of this method to proving by a triangular sequence. However, when applying the rule (4.1) to an expression that has AM/BM or AM/AB, we have

AM/BM = S_ACD/S_BCD,    AM/AB = S_ACD/S_ACBD;    (4.5)

while when substituting (4.3) into AM/BM and AM/AB, we get two complicated expressions. Although it is true that

(C ∧ A ∧ B) D − (D ∧ A ∧ B) C = −(A ∧ C ∧ D) B + (B ∧ C ∧ D) A,

we can use it in our method only after the introduction of affine coordinates! To avoid using coordinates, we should use two different forms of the solution for M on different occasions of substitution. This is the technique we add to our pseudo-substitution function. It has greatly simplified both the triangulation and the proving procedure of our method. When proving a theorem that is represented in the Clifford model, the proving procedure often does not have a good geometric interpretation. For example, for vectors A, B, C, D, the inner product (A ∧ B) · (C ∧ D) has good geometric meaning, but not so after expansion, which is required by vectorial equations solving. This problem has to be solved in the future.
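The identity quoted above is a Grassmann–Plücker-type relation and can be verified numerically in the same homogeneous plane model; the point values below are arbitrary.

```python
# Check of (C^A^B) D - (D^A^B) C = -(A^C^D) B + (B^C^D) A, with points as
# homogeneous rows (x, y, 1) and C^A^B the determinant |C; A; B|.
def det3(u, v, w):
    return (u[0]*(v[1]*w[2]-v[2]*w[1]) - u[1]*(v[0]*w[2]-v[2]*w[0])
          + u[2]*(v[0]*w[1]-v[1]*w[0]))

A, B, C, D = (0, 0, 1), (2, 1, 1), (3, -1, 1), (1, 4, 1)
lhs = tuple(det3(C, A, B)*d - det3(D, A, B)*c for c, d in zip(C, D))
rhs = tuple(-det3(A, C, D)*b + det3(B, C, D)*a for a, b in zip(A, B))
print(lhs == rhs)   # True
```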
In space curves theory [23], we also implement the method with Mathematica
1.2. We use only the Clifford model for Euclidean space. The method, called the
Clifford algebraic reduction method in that paper, includes both differential and
algebraic reductions, and can be applied to almost any theorem on local properties
of space curves. The triangulation and proving procedure are readable, and the
proofs generated are much the same as those by the Frenet frame method in
college textbooks on differential geometry.
In space surfaces theory [21], we implement the method with Maple V.3. There
we use both differential forms and vectors for algebraic representation. The method
can be applied to most theorems on local properties of space surfaces, and the
triangulation and proving procedure are readable. One unique phenomenon is that

the proofs produced are generally shorter than those used in college textbooks.
A typical example is a much simplified proof for a theorem (a conjecture by E.
Cartan) that was first proved by S. S. Chern in 1985.
Appendices
A.

In this appendix we introduce finite-dimensional positive definite Clifford algebra and the two most often used Clifford algebra models for Euclidean geometry. The reader can find detailed introductions to Clifford algebra in [8, 15, 16, 20], among other sources.
A.1. Basic Definitions
Let R be the field of real numbers and Rn an n-dimensional real vector space. By defining an anticommutative and associative product among the vectors in Rn, the so-called outer product, or wedge product, we can generate a Grassmann algebra Gn from Rn: its elements, called multivectors, are graded from 0 up to n, that is,

x = ⟨x⟩0 + ⟨x⟩1 + ··· + ⟨x⟩n,    for any x ∈ Gn,    (A.1.1)

where ⟨x⟩i is the so-called i-vector part of x. An i-vector is by definition a linear combination of i-blades. The integer i is called the grade, or step, of an i-vector.
An i-blade, or extensor of step i [28], is defined as the wedge product of i vectors in Rn: a multivector x is an i-blade if and only if there exist i vectors A1, . . . , Ai such that

x = A1 ∧ ··· ∧ Ai.    (A.1.2)

The blade A1 ∧ ··· ∧ Ai is nonzero if and only if the Aj are linearly independent.
The grade can also take the values 0 and 1. A 0-vector is a scalar, and a 1-vector is a vector in Rn. An n-vector is also called a pseudoscalar, and an (n − 1)-vector a pseudovector. For i = 0, 1, n − 1, n, an i-vector is also an i-blade. The outer product can be extended to 0-vectors as follows: both the outer product of a scalar with a multivector and that of a multivector with a scalar are the scalar multiplication. In this way the outer product can be defined among any finite number of multivectors, and it satisfies the following anticommutativity property: for an i-vector xi and a j-vector xj,

xi ∧ xj = (−1)^(ij) xj ∧ xi.    (A.1.3)

All i-vectors in Gn form a vector space, which we denote by Gn^i; it has dimension C(n, i). The vector space Gn has dimension 2^n. A multivector x is said to be even if all its (2i + 1)-vector parts are zero, and odd if all its (2i)-vector parts are zero. It is said to be homogeneous if it is an i-vector for some 0 ≤ i ≤ n.

For two vectors in Rn, the Euclidean metric defines a commutative product, the so-called inner product, or dot product. The inner product of two vectors is a scalar. We can extend the inner product to any two multivectors in Gn as follows: for multivectors x, y, z, scalars λ, μ, and vectors Ai, Bj,

(x + y) · z = x · z + y · z,    z · (x + y) = z · x + z · y,
λ · x = 0,    x · λ = 0,
(A1 ∧ ··· ∧ Ap) · (B1 ∧ ··· ∧ Bq) = (−1)^(pq − min(p,q)) (B1 ∧ ··· ∧ Bq) · (A1 ∧ ··· ∧ Ap),
A0 · (A1 ∧ ··· ∧ Ap) = Σi=1..p (−1)^(i−1) (A0 · Ai) A1 ∧ ··· ∧ Ǎi ∧ ··· ∧ Ap,
(A1 ∧ ··· ∧ Ap) · (B1 ∧ ··· ∧ Bq) = (A1 ∧ ··· ∧ Ap−1) · (Ap · (B1 ∧ ··· ∧ Bq)), if p ≤ q.    (A.1.4)

Here Ǎi denotes that Ai does not occur in the outer product. From the last rule of the above definition, we know that for an i-vector xi and a j-vector xj, where i, j ≠ 0, xi · xj is an |i − j|-vector.
Two multivectors x, y are said to be orthogonal if x · y = 0. From linear algebra, we know that for any i linearly independent vectors A1, . . . , Ai, we can find i linearly independent and mutually orthogonal vectors

B1 = A1,
B2 = A2 + λ2,1 A1,
. . .
Bi = Ai + λi,i−1 Ai−1 + ··· + λi,1 A1

through the so-called Gram–Schmidt orthogonalization process. Therefore, for an i-blade x, there exist i mutually orthogonal vectors B1, . . . , Bi such that x = B1 ∧ ··· ∧ Bi.
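The Gram–Schmidt process just described is easy to run numerically; this is a standard textbook computation, not code from the paper.

```python
# A small numeric sketch of Gram-Schmidt: B1 = A1, and each Bk subtracts
# from Ak its components along the previous B's, so the B's are mutually
# orthogonal and span the same subspace (no normalization is done).
def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def gram_schmidt(vectors):
    """Orthogonalize a list of linearly independent vectors."""
    basis = []
    for a in vectors:
        b = list(a)
        for q in basis:
            coef = dot(a, q) / dot(q, q)     # the lambda in the text is -coef
            b = [bi - coef * qi for bi, qi in zip(b, q)]
        basis.append(b)
    return basis

B = gram_schmidt([(1.0, 1.0, 0.0), (1.0, 0.0, 1.0)])
print(dot(B[0], B[1]))   # 0.0: B1 and B2 are orthogonal
```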

The magnitude, or length, of a vector A is the scalar √(A · A). The magnitude of a scalar is defined as its absolute value. Using the inner product, we can extend the concept of magnitude to any multivector x as follows:

|x| = |⟨x⟩0| + Σi=1..n √|⟨x⟩i · ⟨x⟩i|.    (A.1.5)

In particular, for an i-vector x, |x| = √|x · x|. A pseudoscalar of magnitude 1 is called a unit pseudoscalar. For an i-blade, we can further simplify the expression of its magnitude by introducing the following operator, called the reverse operator: let λ, μ be scalars, x, y be multivectors, and the Aj be vectors. Then

λ† = λ,
(λx + μy)† = λx† + μy†,    (A.1.6)
(A1 ∧ ··· ∧ Ai)† = Ai ∧ ··· ∧ A1.

Let x = B1 ∧ ··· ∧ Bi, where the Bj are mutually orthogonal vectors. Then x† = (−1)^(i(i−1)/2) x and x · x† = B1^2 ··· Bi^2 ≥ 0. Therefore

|x| = √(x · x†).    (A.1.7)
For two vectors A, B, we can add their outer product and inner product together, and thus form a new product, which is called the Clifford product, or geometric product:

AB = A · B + A ∧ B.    (A.1.8)

The geometric product is denoted by juxtaposition, which will not be confused with the scalar multiplication, because we define both the geometric product of a scalar with a multivector and that of a multivector with a scalar to be the scalar multiplication. The geometric product can be extended to any finite number of multivectors as follows: let λ, μ be scalars, the xi be multivectors, and the Aj be mutually orthogonal vectors. Then

x1(x2 x3) = (x1 x2) x3,
x1(λx2 + μx3)x4 = λ x1 x2 x4 + μ x1 x3 x4,    (A.1.9)
Ax = A · x + A ∧ x,
(A1 ∧ ··· ∧ Aj)x = (A1 ∧ ··· ∧ Aj−1)(Aj x).
Now we are ready to provide a definition of a positive definite Clifford algebra: the Grassmann algebra Gn, when taken as a 2^n-dimensional vector space equipped with the geometric product, is called a Clifford algebra, or geometric algebra.
A.2. Basic Geometric Interpretations
Let xi be an i-blade. In this section we always assume i ≠ 0. The set of vectors {A ∈ Rn | A ∧ xi = 0} is a vector subspace. It is represented by xi uniquely up to a nonzero scalar factor; we call it the space of xi. The magnitude of xi = A1 ∧ ··· ∧ Ai equals the volume of the i-dimensional parallelepiped spanned by the vectors A1, . . . , Ai.
Let xi, yj be an i-blade and a j-blade, respectively. Then xi ∧ yj, if nonzero, is an (i + j)-blade representing the sum of the space of xi and the space of yj. Further, let xi = A1 ∧ ··· ∧ Ai, let yj be of magnitude 1, and let i ≤ j. When i = j, the inner product xi · yj is a scalar: if xi, yj are equal up to a nonzero scalar factor, then xi · yj is the signed volume of the i-dimensional parallelepiped spanned by the vectors A1, . . . , Ai; otherwise |xi · yj| equals the volume of the i-dimensional parallelepiped that is the orthogonal projection of the i-dimensional parallelepiped spanned by A1, . . . , Ai into the space of yj. When i < j, the inner product xi · yj is a (j − i)-blade representing the (j − i)-dimensional vector subspace of the space of yj that is the orthogonal complement of the orthogonal projection of the space of xi into the space of yj; |xi · yj| equals the volume of the orthogonal projection of the i-dimensional parallelepiped spanned by A1, . . . , Ai into the space of yj.
In particular, when i < j = n, xi · yj represents the orthogonal complement of the space of xi. This leads to the definition of the dual operator. The dual of a multivector x (with respect to a fixed unit pseudoscalar I) is defined by

x* = x I†.    (A.2.1)

When x is a scalar, x* is a pseudoscalar, and vice versa; when x does not have a scalar part, that is, ⟨x⟩0 = 0, then x* = x · I†. The reason we use I† instead of I is to make the dual of I equal to 1. The following is the duality principle between the inner product and the outer product: for a vector A and a multivector x that does not have a scalar part,

A · x* = (A ∧ x)*,    A ∧ x* = (A · x)*.    (A.2.2)

The dual operator also helps us to define the meet of multivectors: let x1, . . . , xp be multivectors. Then their meet x1 ∨ ··· ∨ xp is defined by

x1 ∨ ··· ∨ xp = x1 ∨ (x2 ∨ ··· ∨ xp),    (x1 ∨ x2)* = x1* ∧ x2*.    (A.2.3)

For an i-vector yi and a j-vector yj, when i + j ≥ n, yi ∨ yj is an (i + j − n)-blade representing the intersection of the space of yi and the space of yj. The meet operator is associative and satisfies

yi ∨ yj = (−1)^((n−i)(n−j)) yj ∨ yi.    (A.2.4)

In application, we use the following computation formula:

yi ∨ yj = 0, if i + j < n;    yi ∨ yj = yi* · yj, if i + j ≥ n.    (A.2.5)

The Grassmann algebra Gn, when equipped with the meet operator, is called a Grassmann–Cayley algebra, which has important applications in projective geometry and invariant theory [28, 32, 33].
For multivectors x, y that represent two geometric entities, the geometric product xy represents the complete geometric relationship of x with respect to y. Further details can be found in [15, 16].
In the end, we mention the concept of invertibility in Clifford algebra. Let x be a multivector. If there exists a multivector y such that xy = yx = 1, then x is said to be invertible, with inverse x^−1 = y. Not all nonzero multivectors are invertible. For example, in G2 a multivector x is invertible if and only if the scalar x x̄ ≠ 0, and the inverse is x̄/(x x̄). Here ¯ is the so-called main anti-automorphism in Clifford algebra [8], whose definition is the same as that of the reverse operator if we replace the last line of (A.1.6) with

(A1 ∧ ··· ∧ Ai)¯ = (−1)^i Ai ∧ ··· ∧ A1.    (A.2.6)

Another example: in G3 an even (or odd) multivector x is invertible if and only if the scalar x x̄ ≠ 0, and the inverse is x̄/(x x̄). Sometimes we write x y^−1 as x/y when x y^−1 = y^−1 x, and in particular when x and y are equal up to a nonzero scalar factor.
A.3. Two Clifford Algebraic Models for Euclidean Geometry
(1) Clifford model Gn:
This is the most often used model for Euclidean geometry. Let O be a point in En. By setting O to be the zero vector, we mean that O is taken as the starting point of all vectors in En. These vectors form the space Rn and generate the Clifford algebra Gn. If the observer is not specified, the model is independent of the choice of the observer in En and is intrinsic.
In this model, a point A is represented by the vector from O to A, which is denoted by A as well. A line AB is represented by its direction A − B and the point A, or parametrically by A + λ(B − A), where λ is the parameter. The distance between points A and B is |A − B|. An i-dimensional plane passing through the point B and parallel to the i-dimensional vector subspace spanned by vectors A1, . . . , Ai is represented by (A1 ∧ ··· ∧ Ai, B), or parametrically by B + λ1 A1 + ··· + λi Ai. A point A can also be taken as a 0-dimensional plane and represented by (1, A); the space of an i-blade xi can also be represented by (xi, 0).
For an i-dimensional plane (xi, A) and a j-dimensional plane (yj, B), where 0 ≤ i ≤ j ≤ n − 1, they represent the same plane if and only if i = j, xi and yj are equal up to a nonzero scalar factor, and (A − B) ∧ xi = 0. They are parallel if and only if the space of xi is a subspace of the space of yj and (A − B) ∧ yj ≠ 0. When they are parallel, the distance between them is |(A − B) ∧ yj|/|yj|. They are perpendicular if and only if xi · yj = 0.
(2) Grassmann model Gn+1:
This model is mainly for the study of affine geometry. In the Clifford model, the observer is inside En. What if we move it outside En? Then we need to embed En into En+1 and take En as a hyperplane of En+1. Let the distance from the origin O to En be 1, and denote by e0 the vector from O to the foot of the perpendicular drawn from O to En. By setting a point A ∈ En to be e0, the observer O is fixed. If no point in En is evaluated, the model is independent of the choice of the observer outside En and is intrinsic.
In this model, a point A is represented by the vector from O to A, which is denoted by A as well. A line AB is the intersection of the plane OAB with En and is represented by A ∧ B. An i-dimensional plane containing the points A1, . . . , Ai+1 is represented by xi+1 = A1 ∧ ··· ∧ Ai+1, if the outer product is nonzero or, equivalently, if the A's are affinely independent. The scalar |xi+1| is the volume of the i-dimensional simplex generated by these points. When i = n, xn+1* is the signed volume of the n-dimensional simplex generated by the A's.
By embedding En into En+1, the Clifford model Gn is embedded into the Grassmann model Gn+1. In the Grassmann model, the inner product has geometric meaning only when it is restricted to Gn. Because of this, we need to discern directions from points. Let

∂ = e0 · ;    (A.3.1)

then a vector A is in En, called a direction vector, if and only if ∂(A) = 0. A vector A represents a point in En, called a point vector, if and only if ∂(A) = 1. When ∂(A) ≠ 0, A represents the point A/∂(A). The operator ∂ is called the boundary operator, as it satisfies

∂^2 = 0.    (A.3.2)

For a vector A and a multivector x,

∂(A ∧ x) = ∂(A) x − A ∧ ∂(x).    (A.3.3)

For any multivector x, ∂(x) is in Gn; on the other hand, a multivector x is in Gn if and only if ∂(x) = 0. Therefore, ∂ : Gn+1 → Gn transfers the Grassmann model to the Clifford model.
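The point/direction distinction made by ∂ can be sketched in coordinates; the representation below (last coordinate = e0-component) is an illustrative assumption.

```python
# A coordinate sketch of the boundary operator in the Grassmann model:
# a vector of G_{n+1} is written (a1, ..., an, a0), with the last slot the
# e0-coordinate, so d(A) = e0 . A is just that coordinate.  Point vectors
# have d(A) = 1, direction vectors d(A) = 0.
def boundary(v):
    """d(A) = e0 . A: the e0-coordinate of the vector."""
    return v[-1]

def as_point(v):
    """When d(A) != 0, A represents the point A / d(A)."""
    d = boundary(v)
    return tuple(x / d for x in v)

P = (2.0, 4.0, 2.0)       # d(P) = 2, so P represents the point (1, 2)
u = (1.0, -1.0, 0.0)      # d(u) = 0: a direction vector, not a point
print(as_point(P))        # (1.0, 2.0, 1.0)
print(boundary(u))        # 0.0
```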
For an i-dimensional plane xi+1 and a j-dimensional plane yj+1, where i ≤ j ≤ n − 1, they represent the same plane if and only if i = j and xi+1, yj+1 are equal up to a nonzero scalar factor. They are parallel if and only if the space of ∂(xi+1) is a vector subspace of the space of yj+1. They are perpendicular if and only if ∂(xi+1) · ∂(yj+1) = 0. For pseudoscalars xn+1 and yn+1, we have xn+1/yn+1 = ∂(xn+1)/∂(yn+1). In particular, xn+1 = yn+1 if and only if ∂(xn+1) = ∂(yn+1).
B. PROOFS OF THE FORMULAS IN SECTION 2

(1) The proofs of (2.1), (2.9), (2.10) are omitted because they are trivial.
(2) Proof of (2.5). From X ∧ A2 ∧ B2 = 0 we obtain

0 = A1 · (X ∧ A2 ∧ B2) = (X · A1) A2 ∧ B2 − (A1 · A2) X ∧ B2 + (A1 · B2) X ∧ A2.    (B.1)

Substituting X · A1 = c1 into it, we get (2.5).

(3) Proof of (2.6). Since the dimension of the space is n, we have

X ∧ A1 ∧ ··· ∧ An = 0.    (B.2)

Therefore

X(A1 ∧ ··· ∧ An) = X · (A1 ∧ ··· ∧ An) = Σi=1..n (−1)^(i−1) (X · Ai) A1 ∧ ··· ∧ Ǎi ∧ ··· ∧ An.

Substituting X · Ai = ci, i = 1, . . . , n into it, we obtain (2.6).

(4) Proof of (2.7). First, since

X ∧ A1 ∧ A2 = B1 ∧ A2,    X ∧ A2 ∧ A1 = B2 ∧ A1,

(2.7b) is the compatibility condition for the original equations to have a solution.
Second, we use (2.7a) to verify the first original equation:

(B · B) X ∧ A1 = ((B1 ∧ A2) · B) ∧ A1 + (B1 · B) B
= (B1 ∧ A2) · (B ∧ A1) + (B1 · B) B
= ((A1 ∧ A2) · B) B1 − (B1 · (B ∧ A1)) ∧ A2 + (B1 · B) B
= (B · B) B1 − (B1 · B)(A1 ∧ A2) + (B1 · B) B
= (B · B) B1.

Third, when interchanging the indices 1, 2 in the right-hand side of (2.7a), B is changed to −B, and B1 ∧ A2 is changed to B2 ∧ A1 = −B1 ∧ A2, so the right-hand side of (2.7a) is unchanged. This and the first original equation conclude that X ∧ A2 = B2.
(5) Proof of (2.8). Let Z be a vector. Then the equations

X ∧ A1 − B1 = 0,    X ∧ (A2 ∧ Z) − B2 ∧ Z = 0    (B.3)

are of the solvable type (1, 1); hence X satisfies

(A1 ∧ A2 ∧ Z) X = (B2 ∧ Z) A1 − (A2 ∧ Z) B1.    (B.4)

Let E1, E2, E3 be a basis of R3. Using (X ∧ A2) ∧ Z = X ∧ (A2 ∧ Z), the equation X ∧ A2 − B2 = 0 is equivalent to the three scalar equations:

X ∧ (A2 ∧ Ei) − B2 ∧ Ei = 0,    i = 1, 2, 3.    (B.5)

Assuming A1 ∧ A2 ∧ Y ≠ 0, we can choose E1 = A1, E2 = A2, and E3 = Y. Substituting Z = A1 into (B.4), we obtain

(B2 ∧ A1) A1 = (A2 ∧ A1) B1 = (A2 ∧ B1) A1 = (A2 ∧ B1) ∧ A1;

therefore (2.8b) holds. Substituting Z = A2 into (B.4), we obtain 0 = 0. Substituting Z = Y into (B.4), we obtain (2.8a).
Actually, (2.7) can be obtained from (2.8) in the following way: from A1 ∧ A2 ≠ 0, we get B1 ∧ A2 = −B2 ∧ A1 ≠ 0. Therefore A1, A2, X, B1, B2 are all in the Clifford algebra G3 generated by the vectors in the space of B1 ∧ A2. This is still a 3D problem. Use (2.8) and choose Y = (A1 ∧ A2)*, where * denotes the dual operator in G3; we obtain (2.7).
(6) Proof of (2.11). From

∂(X ∧ B ∧ C) = B ∧ C − ∂(B) X ∧ C + ∂(C) X ∧ B = 0,    (B.6)

we obtain

0 = K · (X ∧ B ∧ C)
= K · (B ∧ ∂(C)) − (X · K) ∂(B) + (∂(B) · K) X,

from which we obtain (2.11) by substituting X · K = J into it.
(7) Proof of (2.12). First, ∂(X) = ∂(A)/∂(A) = 1. Second, for i = 1, . . . , n − 1,

= (−1)^i Ji Ki·(K1∧···∧Ǩi∧···∧Kn−1).

For 2 ≤ r ≤ n − 1, we have

∂(K1∧···∧Kr) = e0·(K1∧(K2∧···∧Kr))
= (e0·K1)∧(K2∧···∧Kr)
= (e0·K1)(K2∧···∧Kr)
= ∂(K1) K2∧···∧Kr.

Therefore,

∂(A) X·Ki = Ji ∂(K1) K2∧···∧Kn−1
= Ji ∂(K1∧···∧Kn−1)
= ∂(A) Ji.

(8) Proof of (2.13). First,

X∧(B1)∼ − (B1∧(C1)∼) = 0,
X∧(B2)∼ − (B2∧(C2)∼) = 0   (B.7)

is of the solvable type (1, 3). From (2.7) we obtain (2.13b) and

(B·B)X = ((B1∧(C1)∼)·(B2)∼)B − ((B1∧(C1)∼)·B)(B2)∼ + ((B2∧(C2)∼)·B)(B1)∼.   (B.8)

The inner products in (B.8) are geometrically meaningless; therefore we need to seek an equivalent form for it. Using (A.1.4) to distribute the inner products in the right-hand sides of (B.8) and (2.13a), we find they are equal.
(9) Proof of (2.14). Let Z be a vector. Then the equations

X∧B1 − C1 = 0,
X∧B2∧Z − C2∧Z = 0   (B.9)

are of the solvable type (2, 1); hence X satisfies

((B1)∼∧B2∧Z)∼ X = (B2∧Z)∼((C1)∼·B1) + (C2∧Z)∼(B1)∼.   (B.10)

Let E1, E2, E3, E4 be a basis of R4. Then X∧B2 − C2 = 0 is equivalent to the four pseudoscalar equations:

X∧B2∧Ei − C2∧Ei = 0,   i = 1, 2, 3, 4.   (B.11)

Assuming (B1)∼∧B2∧Y ≠ 0, we can choose E1 = (B1)∼, E2, E3 in the space (B1)∼∧B2, and E4 = Y. Substituting either Z = E2 or Z = E3 into (B.10), we obtain 0 = 0. Substituting Z = (B1)∼ into (B.10), we get

0 = (C2∧(B1)∼)∼(B1)∼ + (B2∧(B1)∼)∼((C1)∼·B1)
= (C2∧(B1)∼)∼(B1)∼ + ((C1)∼·B1)·(B2∧(B1)∼)∼
= (C2∧(B1)∼)∼(B1)∼ + (((C1)∼·B1)∧B2)∼(B1)∼
= (C2∧(B1)∼ + C1∧(B2)∼ − B1∧B2)∼(B1)∼,

which is equivalent to (2.14b). Substituting Z = Y into (B.10), we get (2.14a).

(10) Proof of (2.15). First, from (2.15), ∂(X) = 1. Second, assuming ∂(A0∧···∧Ar) ≠ 0, we let Y be a point vector in the blade A0∧···∧Ar which is orthogonal to the blade ∂(A0∧···∧Ar); write D for ∂(A0∧···∧Ar). Then

A0∧···∧Ar = Y∧D.   (B.12)

Therefore,

X∧A0∧···∧Ar = (((0)·D)/(D·D))∧(Y∧D)
= (0)(D·(Y∧D))/(D·D)
= (−1)^{r+1} (0)∧Y
= −Y∧(0)
= 0.
(11) Proof of (2.16). We can obtain (2.16) from the stereographic projection of the circle from A to l, the line passing through O and perpendicular to line OA. Let λ = tan θ, where θ is the angle from the vector O − A to the vector X − A. Let B be the intersection of line AX with l. Then

B = O + λ(O − A)∼.

Since X − A and B − A are collinear, we let X − A = μ(B − A). Then

(A − O)² = (X − O)²
= (A − O + μ(O − A) + μλ(O − A)∼)²
= ((1 + λ²)μ² − 2μ + 1)(A − O)².
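Solving the last equation gives μ = 2/(1 + λ²), so X = A + μ(B − A) lies on the circle and the angle from O − A to X − A indeed has tangent λ. A numeric check with O at the origin and A = (1, 0) (the concrete coordinates are mine):

```python
import math

lam = 0.75                                   # the value of tan(theta)
O, A = (0.0, 0.0), (1.0, 0.0)
B = (0.0, -lam)                              # B = O + lam (O - A)~, ~ a 90-degree rotation
mu = 2.0 / (1.0 + lam**2)

X = (A[0] + mu * (B[0] - A[0]), A[1] + mu * (B[1] - A[1]))

# X stays on the circle: (A - O)^2 = (X - O)^2
assert math.isclose(X[0]**2 + X[1]**2, 1.0)

# the angle from O - A to X - A has tangent lam (up to orientation of ~)
u = (O[0] - A[0], O[1] - A[1])
w = (X[0] - A[0], X[1] - A[1])
cross = u[0]*w[1] - u[1]*w[0]
dot = u[0]*w[0] + u[1]*w[1]
assert math.isclose(abs(cross / dot), lam)
```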


## C. CLIFFORD ALGEBRAIC REPRESENTATION OF THE GEOMETRIC INVARIANTS IN THE AREA METHOD

The area method [3, 4] is based on a set of geometric invariants and high-level
geometric theorems. It is composed of several submethods: the signed area and
Pythagorean difference method for plane geometry; the directed chord method
for plane geometry on circles; the signed volume method for solid geometry; the
full-angle method for plane geometry on angles; the vector algebra method; the
complex numbers method; and the argument method for Lobachevski geometry.
In this appendix we present Clifford algebraic representations for the geometric
invariants in these methods.
In the signed area and Pythagorean difference method, let AB be the signed length of the vector AB, SABC the signed area of triangle ABC, PABC its Pythagorean difference, SABCD the signed area of quadrilateral ABCD, and PABCD its Pythagorean difference. Then in the Grassmann model G3 of E2, with the notation that ∼ is used to denote the dual operator in G2, we have

AB/CD = (A∧B)∼/(C∧D)∼;
SABC = (A∧B∧C)∼/2;
SABCD = ((A − C)∧(B − D))∼/2;
PABC = 2(A − B)·(C − B);
PABCD = −2(A − C)·(B − D).

In the signed volume method, let VABCD be the signed volume of the tetrahedron ABCD; then in the Grassmann model G4 of E3,

VABCD = (A∧B∧C∧D)∼/6.
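In coordinates these invariants reduce to 2×2 and 3×3 determinants and inner products. A small sketch checking them on simple configurations (the coordinate translation is mine, reading (·)∼ as the scalar dual):

```python
import numpy as np

def cross2(u, v):                 # scalar cross product in the plane
    return float(u[0]*v[1] - u[1]*v[0])

def S(*pts):                      # signed area of a triangle or a quadrilateral
    p = [np.asarray(q, float) for q in pts]
    if len(p) == 3:
        A, B, C = p
        return cross2(B - A, C - A) / 2.0
    A, B, C, D = p
    return cross2(A - C, B - D) / 2.0

def P(A, B, C):                   # Pythagorean difference 2 (A - B) . (C - B)
    A, B, C = (np.asarray(q, float) for q in (A, B, C))
    return 2.0 * float((A - B) @ (C - B))

def V(A, B, C, D):                # signed volume of tetrahedron ABCD
    A, B, C, D = (np.asarray(q, float) for q in (A, B, C, D))
    return float(np.linalg.det(np.column_stack([B - A, C - A, D - A]))) / 6.0

assert S((0, 0), (1, 0), (0, 1)) == 0.5                 # unit right triangle
assert S((0, 0), (1, 0), (1, 1), (0, 1)) == 1.0         # unit square ABCD
assert P((0, 1), (0, 0), (1, 0)) == 0.0                 # right angle at B
assert abs(V((0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)) - 1/6) < 1e-12
```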


In the directed chord method, let ÃB be the directed chord of a chord AB in a circle with diameter δ. Then in the Clifford model G2 of E2, with the observer set to be a point J on the circle,

ÃB = δ (A∧B)∼/(|A||B|);
cos ∠(JA, JB) = A·B/(|A||B|).
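The directed-chord formula encodes the inscribed-angle theorem: a chord AB subtends at any point J of the circle an angle whose sine is the chord length divided by the diameter. A numeric check of that underlying fact (the coordinates are mine):

```python
import math

d = 2.0                                        # diameter of the unit circle
ang = lambda t: (math.cos(t), math.sin(t))
A, B, J = ang(0.3), ang(1.9), ang(4.0)         # three points on the circle

chord = math.dist(A, B)
u = (A[0] - J[0], A[1] - J[1])                 # vector J A
w = (B[0] - J[0], B[1] - J[1])                 # vector J B
sin_AJB = abs(u[0]*w[1] - u[1]*w[0]) / (math.hypot(*u) * math.hypot(*w))
assert math.isclose(chord, d * sin_AJB)
```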

In the full-angle method, since a full-angle (l1, l2) describes the geometric relationship of line l1 with respect to line l2, it can be represented by l1/l2. In the Clifford model G2 of E2,

(l1, l2) ≃ l1/l2;
(l1, l2) + (l3, l4) ≃ (l1 l3)/(l2 l4);
n(l1, l2) ≃ (l1/l2)^n;
(0) ≃ 1;
(1) ≃ I2;

where ≃ denotes equality up to a nonzero scalar factor.
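Since the even subalgebra of G2 is isomorphic to the complex numbers, the ratio l1/l2 can be sketched with complex arithmetic: represent a line by a direction vector defined up to a nonzero real factor, a full-angle by the ratio of two such directions, and addition of full-angles by multiplication of the ratios. The sketch below (mine) checks that the angle mod π is well defined and additive:

```python
import cmath, math

line = lambda theta: cmath.exp(1j * theta)          # a direction vector for the line

def full_angle(l1, l2):
    """Angle of l1 relative to l2, mod pi (the class of l1/l2 up to real factors)."""
    return cmath.phase(l1 / l2) % math.pi

l1, l2, l3, l4 = line(0.4), line(1.1), line(2.0), line(2.9)

# the representation does not depend on the chosen direction vector
assert math.isclose(full_angle(-2.5 * l1, l2), full_angle(l1, l2))

# addition of full-angles corresponds to multiplication of the ratios
s = cmath.phase((l1 / l2) * (l3 / l4)) % math.pi
assert math.isclose(s, (full_angle(l1, l2) + full_angle(l3, l4)) % math.pi)
```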
In the vector algebra method and the complex numbers method, vectors and
complex numbers are already included in Clifford algebra [15, 16].
In the argument method [4], let cosh(AB) be the hyperbolic cosine of the hyperbolic distance between two points A, B in the hyperbolic plane and sinh(AB) the signed hyperbolic sine; let SABC be the argument of triangle ABC and PABC its Pythagorean difference, and SABCD the argument of quadrilateral ABCD and PABCD its Pythagorean difference. Then in the Clifford algebraic model G2,1 [24] of the hyperbolic plane, with the notation that, when two lines AB, CD intersect at a point M, ∼ is used to denote the dual operator in the G2 generated by vectors in the tangent plane at M, we have

sinh(AB)/sinh(CD) = (A∧B)∼/(C∧D)∼;
cosh(AB) = A·B;
SABC = (A∧B∧C)∼;
SABCD = ((A − C)∧(B − D))∼;
PABCD = (A − C)·(B − D);
PABC = (A − B)·(B − C).
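The relation cosh(AB) = A·B can be sketched in the hyperboloid model of the hyperbolic plane: with the G2,1 inner product taken as ⟨x, y⟩ = −x1y1 − x2y2 + x3y3 (this sign convention is my assumption) and points represented by unit vectors on the upper hyperboloid, the inner product of two point vectors is the hyperbolic cosine of their distance:

```python
import math

def minkowski(x, y):                       # inner product of signature (2, 1), time axis last
    return -x[0]*y[0] - x[1]*y[1] + x[2]*y[2]

point = lambda t: (math.sinh(t), 0.0, math.cosh(t))   # a geodesic of points

A, B = point(0.4), point(1.7)
assert math.isclose(minkowski(A, A), 1.0)             # point vectors are unit vectors
assert math.isclose(minkowski(A, B), math.cosh(1.7 - 0.4))
```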


Acknowledgment
I thank Professor W. T. Wu, Professor M. T. Cheng, Professor H. Shi, and Professor D. M. Wang for their help in my research work. The author also appreciates invaluable advice from Professor D. Kapur. The English usage was checked by P. Reany of Arizona State University, to whom the author also owes his gratitude.
References
1. Chen, W. H.: Preliminaries of Differential Geometry, Peking Univ. Press, 1990.
2. Chou, S. C.: Mechanical Geometry Theorem Proving, D. Reidel, Dordrecht, Boston, 1988.
3. Chou, S. C., Gao, X. S. and Zhang, J. Z.: Machine Proofs in Geometry, World Scientific, Singapore, 1994.
4. Chou, S. C., Gao, X. S., Yang, L. and Zhang, J. Z.: Automated production of readable proofs for theorems in non-Euclidean geometries, in LNAI 1360, Springer, 1997, pp. 171–188.
5. Chou, S. C., Gao, X. S. and Zhang, J. Z.: Mechanical geometry theorem proving by vector calculation, in Proc. ISSAC '93, Kiev, ACM Press, 1993, pp. 284–291.
6. Corrochano, E. B. and Lasenby, J.: Object modeling and motion analysis using Clifford algebra, in R. Mohr and C. Wu (eds.), Proc. Europe-China Workshop on Geometric Modeling and Invariants for Computer Vision, Xian, China, 1995, pp. 143–149.
7. Corrochano, E. B., Buchholz, S. and Sommer, G.: Self-organizing Clifford neural network, in IEEE ICNN '96, Washington, D.C., 1996, pp. 120–125.
8. Crumeyrolle, A.: Orthogonal and Symplectic Clifford Algebras, D. Reidel, Dordrecht, 1990.
9. Delanghe, R., Sommen, F. and Soucek, V.: Clifford Algebra and Spinor-Valued Functions, D. Reidel, Dordrecht, 1992.
10. Doran, C., Hestenes, D., Sommen, F. and Acker, N. V.: Lie groups as spin groups, J. Math. Phys. 34(8) (1993), 3642–3669.
11. Havel, T.: Some examples of the use of distances as coordinates for Euclidean geometry, J. Symbolic Comput. 11 (1991), 579–593.
12. Havel, T. and Dress, A.: Distance geometry and geometric algebra, Found. Phys. 23 (1992), 1357–1374.
13. Havel, T.: Geometric algebra and Möbius sphere geometry as a basis for Euclidean invariant theory, in N. L. White (ed.), Invariant Methods in Discrete and Computational Geometry, D. Reidel, Dordrecht, 1995.
14. Hestenes, D.: Space-Time Algebra, Gordon and Breach, New York, 1966.
15. Hestenes, D. and Sobczyk, G.: Clifford Algebra to Geometric Calculus, D. Reidel, Dordrecht, Boston, 1984.
16. Hestenes, D.: New Foundations for Classical Mechanics, D. Reidel, Dordrecht, Boston, 1987.
17. Hestenes, D. and Ziegler, R.: Projective geometry with Clifford algebra, Acta Appl. Math. 23 (1991), 25–63.
18. Hestenes, D.: The design of linear algebra and geometry, Acta Appl. Math. 23 (1991), 65–93.
19. Kapur, D.: Using Gröbner bases to reason about geometry problems, J. Symbolic Comput. 2 (1986), 399–408.
20. Lawson, H. B. and Michelsohn, M. L.: Spin Geometry, Princeton, 1989.
21. Li, H.: On mechanical theorem proving in differential geometry – local theory of surfaces, Scientia Sinica Series A 40(4) (1997), 350–356.
22. Li, H. and Cheng, M.: Proving theorems in elementary geometry with Clifford algebraic method, Chinese Math. Progress 26(4) (1997), 357–371.
23. Li, H. and Cheng, M.: Clifford algebraic reduction method for automated theorem proving in differential geometry, J. Automated Reasoning 21 (1998), 1–21.
24. Li, H.: Hyperbolic geometry with Clifford algebra, Acta Appl. Math. 48(3) (1997), 317–358.
25. Mourrain, B. and Stolfi, N.: Computational symbolic geometry, in N. L. White (ed.), Invariant Methods in Discrete and Computational Geometry, D. Reidel, Dordrecht, 1995, pp. 107–139.
26. Mourrain, B. and Stolfi, N.: Applications of Clifford algebras in robotics, in J.-P. Merlet and B. Ravani (eds.), Computational Kinematics, D. Reidel, Dordrecht, 1995, pp. 141–150.
27. Stifter, S.: Geometry theorem proving in vector spaces by means of Gröbner bases, in Proc. ISSAC '93, Kiev, ACM Press, 1993, pp. 301–310.
28. Sturmfels, B.: Algorithms in Invariant Theory, Springer-Verlag, New York, 1993.
29. Wang, D. M.: Elimination procedures for mechanical theorem proving in geometry, Ann. of Math. and Artif. Intell. 13 (1995), 1–24.
30. Wang, D. M.: Clifford algebraic calculus for geometric reasoning, in LNAI 1360, Springer, 1997, pp. 115–140.
31. Wang, D. M.: A method for proving theorems in differential geometry and mechanics, J. Univ. Computer Sci. 1(9) (1995), 658–673.
32. White, N. L.: Multilinear Cayley factorization, J. Symbolic Comput. 11 (1991), 421–438.
33. Whiteley, W.: Invariant computations for analytic projective geometry, J. Symbolic Comput. 11 (1991), 549–578.
34. Wu, W. T.: Mechanical Theorem Proving in Geometries: Basic Principles (translated from the 1984 Chinese edition), Springer-Verlag, Wien, 1994.