## On the non-existence of rational first integrals for systems of linear differential equations

### Andrzej Nowicki

### N. Copernicus University, Faculty of Mathematics and Computer Science, ul. Chopina 12–18, 87–100 Toruń, Poland

### (e-mail: anow@mat.uni.torun.pl) March 11, 1994

Abstract

We present a description of all systems of linear differential equations which do not admit any rational first integral.

### 1 Introduction.

Let k[X] = k[x_{1}, . . . , x_{n}] be the polynomial ring in n variables over a field k of characteristic zero, and let k(X) = k(x_{1}, . . . , x_{n}) be the quotient field of k[X]. Assume that f = (f_{1}, . . . , f_{n}) ∈ k[X]^{n} and consider the system of polynomial ordinary differential equations

dx_{i}(t)/dt = f_{i}(x_{1}(t), . . . , x_{n}(t)), (1.1)

where i = 1, . . . , n.

This system has a clear meaning if k is a subfield of the field C of complex numbers.

When k is arbitrary, the system still has a meaning: it is well known, and easy to prove, that (1.1) has a solution in k[[t]], the ring of formal power series over k in the variable t.

An element ϕ of k[X] ∖ k (resp. of k(X) ∖ k) is said to be a polynomial (resp. rational) first integral of the system (1.1) if the following identity holds:

∑_{i=1}^{n} f_{i} ∂ϕ/∂x_{i} = 0. (1.2)
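For a concrete sanity check of identity (1.2), the following sketch (a hypothetical example, not taken from the paper; the vector field f = (y, −x) and the hand-computed gradient of ϕ = x^{2} + y^{2} are assumptions of the illustration) evaluates ∑ f_{i} ∂ϕ/∂x_{i} at random rational points with exact arithmetic:

```python
from fractions import Fraction as F
import random

# Hypothetical system with n = 2: dx/dt = y, dy/dt = -x.
# Candidate polynomial first integral: phi = x^2 + y^2.
def d_phi(x, y):
    f1, f2 = y, -x               # the vector field f = (f_1, f_2)
    dphi_dx, dphi_dy = 2*x, 2*y  # gradient of phi, computed by hand
    # identity (1.2): f_1 * dphi/dx + f_2 * dphi/dy
    return f1 * dphi_dx + f2 * dphi_dy

for _ in range(200):
    x = F(random.randint(-50, 50))
    y = F(random.randint(-50, 50))
    assert d_phi(x, y) == 0      # phi is a first integral
```

Note that the matrix of this linear system has eigenvalues ±i, which satisfy the non-trivial relation 1·i + 1·(−i) = 0 with non-negative coefficients; the existence of the first integral ϕ is therefore consistent with Theorem 1.1 below.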

We would like to describe all the sequences f ∈ k[X]^{n} such that the system (1.1) has no polynomial (or rational) first integral. The problem is known to be difficult even for n = 2.

In this paper we study the above problem in the case when f_{1}, . . . , f_{n} are homogeneous linear polynomials.

Let us assume that

f_{i} = ∑_{j=1}^{n} a_{ij}x_{j}, i = 1, . . . , n, (1.3)

where each a_{ij} belongs to k. Denote by λ_{1}, . . . , λ_{n} the n eigenvalues of the matrix [a_{ij}] in an algebraic closure of k. Moreover, let Z and Z^{+} denote respectively the ring of integers and the set of non-negative integers. Recall that elements λ_{1}, . . . , λ_{n} are said to be Z^{+}-independent (resp. Z-independent) if the only sequence (i_{1}, . . . , i_{n}) of elements of Z^{+} (resp. Z) with i_{1}λ_{1} + · · · + i_{n}λ_{n} = 0 is the zero sequence.
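The distinction between Z^{+}-independence and Z-independence can be probed mechanically for rational eigenvalues. The sketch below (a hypothetical helper, not part of the paper) searches a finite box of integer coefficients, so a `None` result only rules out small relations, not dependence itself:

```python
from itertools import product

def relation(lams, coeff_range):
    """Return a non-trivial relation sum(c_i * lam_i) == 0 with all
    coefficients drawn from coeff_range, or None if the box has none."""
    for coeffs in product(coeff_range, repeat=len(lams)):
        if any(coeffs) and sum(c * l for c, l in zip(coeffs, lams)) == 0:
            return coeffs
    return None

lams = (1, 2)
# Z-dependent: e.g. 2*1 + (-1)*2 = 0 ...
assert relation(lams, range(-5, 6)) is not None
# ... but Z+-independent: a non-trivial non-negative combination
# of positive numbers is positive, so the box search finds nothing.
assert relation(lams, range(0, 6)) is None
```

By Theorems 1.1 and 1.2, a diagonal system with these eigenvalues admits a rational first integral (for instance x_{2}/x_{1}^{2}) but no polynomial one.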

The following two theorems are the main results of the paper.

Theorem 1.1. Let f = (f_{1}, . . . , f_{n}) ∈ k[X]^{n} where f_{1}, . . . , f_{n} are linear polynomials of
the form (1.3). The following conditions are equivalent:

(1) The system (1.1) does not admit any polynomial first integral.

(2) The eigenvalues λ_{1}, . . . , λ_{n} of the matrix [a_{ij}] are Z^{+}-independent.

Theorem 1.2. Let f = (f_{1}, . . . , f_{n}) ∈ k[X]^{n} where f_{1}, . . . , f_{n} are linear polynomials of
the form (1.3). The following conditions are equivalent:

(1) The system (1.1) does not admit any rational first integral.

(2) The Jordan matrix of the matrix [a_{ij}] has one of the following two forms:

(a) the diagonal matrix

diag(λ_{1}, . . . , λ_{n}),

where the eigenvalues λ_{1}, . . . , λ_{n} are Z-independent; or

(b) the matrix which is diagonal with entries λ_{1}, . . . , λ_{n} except for a single entry 1 in position (i, i + 1),

for some i ∈ {1, . . . , n − 1}, where λ_{i} = λ_{i+1} and the eigenvalues λ_{1}, . . . , λ_{i}, λ_{i+2}, . . . , λ_{n} are Z-independent.

### 2 Derivations and rings of constants.

Throughout the paper we use the vocabulary of differential algebra (see for example [3] or [4]).

Let us assume that R is a commutative ring containing the field k and d is a k-
derivation of R, that is, d : R −→ R is a k-linear mapping such that d(ab) = ad(b) + d(a)b
for all a, b ∈ R. We denote by R^{d} the ring of constants of d, i. e., R^{d}= {a ∈ R; d(a) = 0}.

Let k ⊂ k^{0} be a field extension and let R ⊗_{k} k^{0} be the tensor product of R and k^{0}
over k. Let us consider the k^{0}-linear mapping d ⊗ 1 : R ⊗_{k} k^{0} −→ R ⊗_{k} k^{0} such that
(d ⊗ 1)(a ⊗ α) = d(a) ⊗ α, for all a ∈ R and α ∈ k^{0}. It is obvious that R ⊗k k^{0} is a
commutative ring containing k^{0} and d ⊗ 1 is a k^{0}-derivation of R ⊗_{k}k^{0}.

Proposition 2.1. The k^{0}-vector spaces R^{d}⊗_{k}k^{0} and (R ⊗_{k}k^{0})^{d⊗1} are isomorphic.

Proof. The exact sequence 0 −→ R^{d} −→^{i} R −→^{d} R of vector spaces over k (where i(x) = x, for x ∈ R^{d}) induces the exact sequence

0 −→ R^{d} ⊗_{k} k^{0} −→^{i⊗1} R ⊗_{k} k^{0} −→^{d⊗1} R ⊗_{k} k^{0}

of vector spaces over k^{0}. Thus, (R ⊗_{k} k^{0})^{d⊗1} = Ker(d ⊗ 1) = Im(i ⊗ 1) ≈ R^{d} ⊗_{k} k^{0}.

If σ : R −→ R is a k-automorphism of R and d is a k-derivation of R then the mapping σdσ^{−1} is also a k-derivation of R. Two k-derivations d and δ of R are said to be equivalent if there exists a k-automorphism σ of R such that d = σδσ^{−1}.

Recall that when R is without zero divisors, the derivation d can be extended in a
unique way to its quotient field by setting: d(a/b) = b^{−2}(d(a)b − ad(b)).

It is easy to check the following

Proposition 2.2. Let d and δ be equivalent k-derivations of R. Then R^{d} = k if and only
if R^{δ} = k. Moreover, if R is without zero divisors and R_{0} is the quotient field of R, then
R^{d}_{0} = k if and only if R^{δ}_{0} = k.

We shall use the above notations for the ring k[X] and its quotient field k(X). Let us note that a k-derivation of k[X] is completely determined by its values on the variables x_{1}, . . . , x_{n}. If f = (f_{1}, . . . , f_{n}) ∈ k[X]^{n} then there exists a unique k-derivation d of k[X] such that d(x_{i}) = f_{i} for all i = 1, . . . , n. This derivation is given by

d(ϕ) = ∑_{i=1}^{n} f_{i} ∂ϕ/∂x_{i}, (2.1)

for any ϕ ∈ k[X].

Thus, the set of all polynomial first integrals of (1.1) coincides with the set k[X]^{d} ∖ k, where

k[X]^{d} = {ϕ ∈ k[X]; d(ϕ) = 0}

and d is the k-derivation defined by (2.1). Moreover, the set of all rational first integrals of (1.1) coincides with the set k(X)^{d} ∖ k, where

k(X)^{d} = {ϕ ∈ k(X); d(ϕ) = 0}

and d is the unique extension of the k-derivation (2.1) to k(X).

The rings of constants k[X]^{d} and k(X)^{d} have been intensively studied for a long time; see for example [7], [6] and [1], where many references on this subject can be found.

Every k-derivation d of k[X] determines two subfields of k(X): the subfield k(X)^{d} and the subfield (k[X]^{d})_{0}, the quotient field of k[X]^{d}. The following example shows that these subfields may differ.

Example 2.3. Assume that n = 2 and k[X] = k[x, y]. Let d = x ∂/∂x + y ∂/∂y. Then (k[X]^{d})_{0} ≠ k(X)^{d}.

Proof. It is clear that k[X]^{d} = k. Thus (k[X]^{d})_{0} = k. However k(X)^{d} ≠ k, since x/y ∈ k(X)^{d}. Indeed:

d(x/y) = (d(x)y − xd(y))/y^{2} = (xy − xy)/y^{2} = 0/y^{2} = 0.
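The computation in Example 2.3 can also be confirmed numerically; in the sketch below (exact rational arithmetic, with the gradient of x/y entered by hand) the quotient x/y is killed by d = x ∂/∂x + y ∂/∂y at every sampled point:

```python
from fractions import Fraction as F
import random

# d = x d/dx + y d/dy applied to phi = x/y:
# d(phi) = x * (1/y) + y * (-x/y^2)
def d_x_over_y(x, y):
    return x * (F(1) / y) + y * (-x / y**2)

for _ in range(200):
    x = F(random.randint(-50, 50))
    y = F(random.randint(1, 50))   # keep y nonzero
    assert d_x_over_y(x, y) == 0
```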
Let us note the following proposition, which is easy to prove.

Proposition 2.4. Let d be a k-derivation of k[X]. If f and g are nonzero relatively prime polynomials from k[X], then d(f /g) = 0 if and only if d(f ) = pf and d(g) = pg for some p ∈ k[X].

Note also an immediate consequence of Proposition 2.1.

Proposition 2.5. Let d be a k-derivation of k[X], and let k^{0} be an overfield of k. Denote by d^{0} the k^{0}-derivation of k^{0}[X] such that d^{0}(x_{i}) = d(x_{i}) for i = 1, . . . , n. Then k[X]^{d} = k if and only if k^{0}[X]^{d}^{0} = k^{0}.

The next proposition shows that the field k(X)^{d} has a similar property.

Proposition 2.6. Let d be a k-derivation of k[X], and let k^{0} be an overfield of k. Denote by d^{0} the k^{0}-derivation of k^{0}[X] such that d^{0}(x_{i}) = d(x_{i}) for i = 1, . . . , n. Then k(X)^{d} = k if and only if k^{0}(X)^{d}^{0} = k^{0}.

For the proof of this proposition we need three lemmas.

Lemma 2.7. Assume that L(t) is the field of rational functions in one variable t over
a field L containing k. Let d be a k-derivation of L such that L^{d} = k, and let δ be the
k-derivation of L(t) such that δ | L = d and δ(t) = 0. Then L(t)^{δ} = k(t).

Proof. It is obvious that k(t) ⊆ L(t)^{δ}. Let w = f/g ∈ L(t)^{δ} with f, g ∈ L[t], and put p = deg(g). If p = 0 then it is clear that w ∈ k[t] ⊂ k(t). Let p ≥ 1. We may assume that g is monic and that p is minimal. Set g = t^{p} + b_{1}t^{p−1} + · · · + b_{p}, where b_{1}, . . . , b_{p} ∈ L.

Then we have: δ(g) = d(b_{1})t^{p−1} + · · · + d(b_{p}).

Observe that δ(g) = 0. Indeed, if δ(g) ≠ 0 then w = δ(f)/δ(g) and we have a contradiction with the minimality of p.

Thus, d(b_{1}) = · · · = d(b_{p}) = 0 and hence, g ∈ k[t] (because L^{d} = k). Consequently, δ(f) = δ(g)w = 0 · w = 0, that is, f ∈ k[t]. Therefore, w = f/g ∈ k(t).

Lemma 2.8. Let L be a field containing k and let S be a set of algebraically independent
elements over L. Let d be a k-derivation of L such that L^{d} = k. If δ is the k-derivation
of L(S) such that δ | L = d and δ(s) = 0, for every s ∈ S, then L(S)^{δ} = k(S).

Proof. It is obvious that k(S) ⊆ L(S)^{δ}. Let w ∈ L(S)^{δ}. Since w ∈ L(S), there
exists a finite subset S^{0} of S such that w ∈ L(S^{0}). Denote by δ^{0} the restriction of the
derivation δ to L(S^{0}). Then w ∈ L(S^{0})^{δ}^{0} and hence, by Lemma 2.7 and by induction,
w ∈ k(S^{0}) ⊆ k(S).

Lemma 2.9. Let F be a field containing k and let F ⊆ k^{0} be an algebraic field extension. Assume that δ is an F-derivation of F[X] and let d^{0} be the k^{0}-derivation of k^{0}[X] such that d^{0}(x_{i}) = δ(x_{i}) for i = 1, . . . , n. If F(X)^{δ} = F then k^{0}(X)^{d}^{0} = k^{0}.

Proof. Suppose that k^{0}(X)^{d}^{0} ≠ k^{0} and let w ∈ k^{0}(X)^{d}^{0} ∖ k^{0}. Since k^{0}(X) is algebraic over F(X), there exist a_{1}, . . . , a_{p} ∈ F(X) such that w^{p} + a_{1}w^{p−1} + · · · + a_{p} = 0. Let us assume that p is minimal. Then δ(a_{1})w^{p−1} + · · · + δ(a_{p}) = d^{0}(w^{p} + a_{1}w^{p−1} + · · · + a_{p}) = d^{0}(0) = 0 and hence, by the minimality of p, δ(a_{1}) = · · · = δ(a_{p}) = 0, that is, a_{1}, . . . , a_{p} ∈ F (because F(X)^{δ} = F). Thus, w is algebraic over F. This implies that w is algebraic over k^{0}. But k^{0} is algebraically closed in k^{0}(X). Hence, w ∈ k^{0} and we have a contradiction.

Proof of Proposition 2.6. If k^{0}(X)^{d}^{0} = k^{0} then it is clear that k(X)^{d}= k. Assume
now that k(X)^{d}= k. Let S be a subset of k^{0} such that the extension k ⊂ k(S) is purely
transcendental and the extension k(S) ⊂ k^{0} is algebraic. Put L = k(X), F = k(S). Then,
by Lemma 2.8, F (X)^{δ} = L(S)^{δ} = k(S) = F , where δ is the F -derivation of F [X] such
that δ(x_{i}) = d(x_{i}) for all i = 1, . . . , n. Now Lemma 2.9 implies that k^{0}(X)^{d}^{0} = k^{0}.

### 3 Linear derivations.

In the present section we recall from [5] some useful facts on linear derivations.

Let us make precise some notions. Assume that d is a k-derivation of k[X] such that

d(x_{i}) = ∑_{j=1}^{n} a_{ij}x_{j} for i = 1, . . . , n, (3.1)

where each a_{ij} belongs to k. Let λ_{1}, . . . , λ_{n} be the n eigenvalues (belonging to an algebraic closure k̄ of k) of the matrix [a_{ij}]. Moreover, let the Jordan matrix (in k̄) of the matrix [a_{ij}] be of the form:

diag(J_{m_{1}}(ρ_{1}), . . . , J_{m_{s}}(ρ_{s})), (3.2)

where s ≥ 1, m_{1} + · · · + m_{s} = n, {ρ_{1}, . . . , ρ_{s}} = {λ_{1}, . . . , λ_{n}}, and each J_{m_{i}}(ρ_{i}) is the m_{i} × m_{i} Jordan block with ρ_{i} on the diagonal, 1 on the superdiagonal and 0 elsewhere. (3.3)

The homogeneity of the polynomials d(x_{1}), . . . , d(x_{n}), together with the fact that they are all of the same degree 1, has the following consequences.

Proposition 3.1 ([5]). Let d be the k-derivation defined by (3.1).

(1) If f ∈ k[X]^{d} then all homogeneous components of f also belong to k[X]^{d}.

(2) Let f, p ∈ k[X] and assume that f ≠ 0 and d(f) = pf. Then p ∈ k and d(g) = pg for all homogeneous components g of f.

The following proposition will play an important role in our considerations.

Proposition 3.2 ([5] Lemma 2.3). Let d be the k-derivation of k[X] defined by (3.1).

Assume that f is a nonzero homogeneous polynomial from k[X] satisfying the equality d(f) = pf for some p ∈ k. Then there exist non-negative integers i_{1}, . . . , i_{n} such that

i_{1}λ_{1} + · · · + i_{n}λ_{n} = p and i_{1} + · · · + i_{n} = deg f. (3.4)

Using the above propositions one can deduce the following

Corollary 3.3. Let d be a k-derivation of k[X] defined by (3.1) with the Jordan matrix
(3.2). Assume that the elements ρ_{1}, . . . , ρ_{s} are Z-independent. Then the following three
properties hold:

(1) If f and g are nonzero homogeneous polynomials from k[X] such that d(f ) = pf and d(g) = pg for some p ∈ k, then deg f = deg g.

(2) If f is a polynomial from k[X] such that d(f ) = pf for some p ∈ k, then f is homogeneous.

(3) If d(f /g) = 0, where f and g are nonzero relatively prime polynomials from k[X], then f and g are homogeneous of the same degree.

Proof. (1). We know that {ρ_{1}, . . . , ρ_{s}} = {λ_{1}, . . . , λ_{n}}. Hence, by Proposition 3.2, there exist non-negative integers i_{1}, . . . , i_{s} and j_{1}, . . . , j_{s} such that

i_{1}ρ_{1} + · · · + i_{s}ρ_{s} = p, i_{1} + · · · + i_{s} = deg f,
j_{1}ρ_{1} + · · · + j_{s}ρ_{s} = p, j_{1} + · · · + j_{s} = deg g.

Then (i_{1} − j_{1})ρ_{1} + · · · + (i_{s} − j_{s})ρ_{s} = 0 and hence, i_{1} = j_{1}, . . . , i_{s} = j_{s} because the elements ρ_{1}, . . . , ρ_{s} are Z-independent. Therefore, deg f = i_{1} + · · · + i_{s} = j_{1} + · · · + j_{s} = deg g.

(2). Suppose that f contains two nonzero homogeneous components f_{1} and f_{2} with deg f_{1} ≠ deg f_{2}. Then d(f_{1}) = pf_{1}, d(f_{2}) = pf_{2} (by Proposition 3.1) and so, using (1), we obtain a contradiction: deg f_{1} = deg f_{2}.

(3). It is a consequence of (1), (2) and Proposition 2.4.

### 4 Proof of Theorem 1.1.

In view of Section 2 we must prove that if k is an arbitrary field of characteristic zero and d is the k-derivation of k[X] defined by (3.1), then the following two conditions are equivalent:

(1) k[X]^{d} = k;

(2) The n eigenvalues λ_{1}, . . . , λ_{n} of the matrix [a_{ij}] are Z^{+}-independent.

(2) ⇒ (1). Suppose that there exists f ∈ k[X] such that d(f) = 0 and f ∉ k. We may assume (by Proposition 3.1) that f is homogeneous. Then d(f) = 0f and hence, by Proposition 3.2, there exist i_{1}, . . . , i_{n} ∈ Z^{+} satisfying (3.4). Since i_{1}λ_{1} + · · · + i_{n}λ_{n} = 0 and λ_{1}, . . . , λ_{n} are Z^{+}-independent, i_{1} = · · · = i_{n} = 0. Therefore, deg f = i_{1} + · · · + i_{n} = 0, which contradicts the fact that f ∉ k.

(1) ⇒ (2). Assume now that k[X]^{d} = k. Let k^{0} be an algebraically closed field
containing k and let d^{0} be the k^{0}-derivation of k^{0}[X] such that d^{0}(x_{i}) = d(x_{i}) for all
i = 1, . . . , n. Then (by Proposition 2.5) k^{0}[X]^{d}^{0} = k^{0} so, we may assume that the field k
is algebraically closed. Moreover, using Proposition 2.2, we may assume that [a_{ij}] is the
Jordan matrix (3.2).

Therefore, there exists a subset {y_{1}, . . . , y_{s}} of {x_{1}, . . . , x_{n}} such that d(y_{i}) = ρ_{i}y_{i} for all i = 1, . . . , s.

Suppose now that the eigenvalues λ_{1}, . . . , λ_{n} are not Z^{+}-independent. Then there exists a nonzero sequence (j_{1}, . . . , j_{s}) of non-negative integers such that j_{1}ρ_{1} + · · · + j_{s}ρ_{s} = 0. Put f = y_{1}^{j_{1}} · · · y_{s}^{j_{s}}. Then f ∈ k[X] ∖ k and

d(f) = (j_{1}ρ_{1} + · · · + j_{s}ρ_{s})f = 0f = 0,

a contradiction. This completes the proof of Theorem 1.1.
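The construction used in the proof can be run on a small hypothetical instance: take the diagonal derivation d(x_{1}) = x_{1}, d(x_{2}) = −x_{2}, whose eigenvalues 1, −1 satisfy the Z^{+}-relation 1·1 + 1·(−1) = 0; the proof then predicts that f = x_{1}x_{2} is a polynomial first integral:

```python
from fractions import Fraction as F
import random

# Diagonal system with eigenvalues 1 and -1 (Z+-dependent).
def d_f(x1, x2):
    f1, f2 = x1, -x2           # the linear vector field
    df_dx1, df_dx2 = x2, x1    # gradient of f = x1*x2, by hand
    return f1 * df_dx1 + f2 * df_dx2

for _ in range(200):
    x1 = F(random.randint(-40, 40))
    x2 = F(random.randint(-40, 40))
    assert d_f(x1, x2) == 0    # f = x1*x2 is a polynomial first integral
```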

### 5 Proof of Theorem 1.2.

In view of Proposition 2.6 we may assume that k is an algebraically closed field of characteristic zero. Let d be a k-derivation of k[X] defined by (3.1) with the Jordan matrix (3.2). We must show that the following two conditions are equivalent:

(1) k(X)^{d}= k;

(2) The elements ρ_{1}, . . . , ρ_{s} are Z-independent and s ≥ n − 1.

In view of Proposition 2.2, without loss of generality we may assume that the matrix [a_{ij}] of the derivation d coincides with the Jordan matrix (3.2). Therefore, there exists a subset {y_{1}, . . . , y_{s}} of {x_{1}, . . . , x_{n}} such that d(y_{i}) = ρ_{i}y_{i} for all i = 1, . . . , s.

(1) ⇒ (2). Let us assume that k(X)^{d}= k.

First we shall show that ρ_{1}, . . . , ρ_{s} are Z-independent. Suppose that there exists a nonzero sequence (j_{1}, . . . , j_{s}) of integers such that j_{1}ρ_{1} + · · · + j_{s}ρ_{s} = 0. Put ϕ = y_{1}^{j_{1}} · · · y_{s}^{j_{s}}. Then we have a contradiction: ϕ ∈ k(X) ∖ k and d(ϕ) = (j_{1}ρ_{1} + · · · + j_{s}ρ_{s})ϕ = 0ϕ = 0.

Now we shall prove that s ≥ n − 1. Let us suppose that s < n − 1 and consider the numbers m_{1}, . . . , m_{s} defined in (3.2). We know that m_{1} + · · · + m_{s} = n. Therefore, we must consider only the following two cases:

Case I: There exists j ∈ {1, . . . , s} such that m_{j} ≥ 3.

In this case there exist three variables x, y, z ∈ {x_{1}, . . . , x_{n}} such that d(x) = ax, d(y) = ay + x and d(z) = az + y, where a = ρ_{j}. Put ϕ = f/g, where f = x^{2} and g = y^{2} − 2xz. Then d(f) = 2af, d(g) = 2ag and hence, d(ϕ) = 0 and ϕ ∉ k. This contradicts the fact that k(X)^{d} = k.
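The two eigenvalue equations in Case I can be verified with exact arithmetic. The sketch below (a hypothetical size-3 Jordan block, with a treated as a rational parameter) checks d(f) = 2af and d(g) = 2ag for f = x^{2} and g = y^{2} − 2xz:

```python
from fractions import Fraction as F
import random

# Case I: d(x) = a*x, d(y) = a*y + x, d(z) = a*z + y.
def check_case1(a, x, y, z):
    dx, dy, dz = a*x, a*y + x, a*z + y
    f, g = x**2, y**2 - 2*x*z
    df = 2*x*dx                    # d(f) by the product rule
    dg = 2*y*dy - 2*(dx*z + x*dz)  # d(g) by the product rule
    return df == 2*a*f and dg == 2*a*g

for _ in range(200):
    a, x, y, z = (F(random.randint(-30, 30)) for _ in range(4))
    assert check_case1(a, x, y, z)
```

Hence d(f/g) = (d(f)g − f d(g))/g^{2} = 0 wherever g ≠ 0.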

Case II: There exist i, j ∈ {1, . . . , s} such that i ≠ j, m_{i} ≥ 2 and m_{j} ≥ 2.

In this case there exist four variables x, y, z, t ∈ {x_{1}, . . . , x_{n}} such that d(x) = ax, d(y) = ay + x, d(z) = bz and d(t) = bt + z, where a = ρ_{i} and b = ρ_{j}. Then (xt − yz)/xz ∈ k(X)^{d} ∖ k.

This completes the proof of implication (1) ⇒ (2).
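Case II admits the same kind of check. The sketch below (two hypothetical size-2 blocks with rational parameters a, b) verifies that the numerator and denominator of (xt − yz)/xz are both eigenvectors of d with eigenvalue a + b, so their quotient is a rational constant:

```python
from fractions import Fraction as F
import random

# Case II: d(x) = a*x, d(y) = a*y + x, d(z) = b*z, d(t) = b*t + z.
def check_case2(a, b, x, y, z, t):
    dx, dy, dz, dt = a*x, a*y + x, b*z, b*t + z
    num, den = x*t - y*z, x*z
    dnum = dx*t + x*dt - dy*z - y*dz   # d(num) by the product rule
    dden = dx*z + x*dz                 # d(den) by the product rule
    return dnum == (a + b)*num and dden == (a + b)*den

for _ in range(200):
    vals = [F(random.randint(-30, 30)) for _ in range(6)]
    assert check_case2(*vals)
```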

For the proof of implication (2) ⇒ (1) we need two lemmas. Let us recall that if δ is a k-derivation of a ring R then δ is called locally nilpotent if for each r ∈ R there exists a natural number m such that δ^{m}(r) = 0.

Lemma 5.1 ([8]). Let R be a commutative ring without zero divisors containing k and let δ be a locally nilpotent k-derivation of R. If δ(a) = ua for some u, a ∈ R then δ(a) = 0.

Proof. If r ∈ R then we define deg_{δ}(r) as follows:

deg_{δ}(r) = l if r ≠ 0, δ^{l}(r) ≠ 0 and δ^{l+1}(r) = 0;

deg_{δ}(r) = −∞ if r = 0.

The function deg_{δ} behaves like the usual degree function over a polynomial ring (see [2]). In particular, if a, b ∈ R then deg_{δ}(ab) = deg_{δ}(a) + deg_{δ}(b).

Assume now that δ(a) = ua and suppose that δ(a) ≠ 0. Then a ≠ 0, u ≠ 0, deg_{δ}δ(a) = deg_{δ}a − 1 and we have a contradiction:

deg_{δ}a − 1 = deg_{δ}δ(a) = deg_{δ}(ua) = deg_{δ}a + deg_{δ}u ≥ deg_{δ}a.
Lemma 5.2. Let δ be a k-derivation of k[x_{1}, x_{2}] such that

δ(x_{1}) = ax_{1}, δ(x_{2}) = ax_{2} + x_{1},

where a ∈ k. If a ≠ 0 then k(x_{1}, x_{2})^{δ} = k.

Proof. Let δ_{a} and δ_{0} be the k-derivations of k[x_{1}, x_{2}] such that δ_{a}(x_{1}) = ax_{1}, δ_{a}(x_{2}) = ax_{2}, δ_{0}(x_{1}) = 0 and δ_{0}(x_{2}) = x_{1}. Then δ = δ_{a} + δ_{0}, δ_{0} is locally nilpotent, and k[x_{1}, x_{2}]^{δ_{0}} = k[x_{1}].

Suppose that f and g are nonzero relatively prime polynomials from k[x_{1}, x_{2}] such that δ(f/g) = 0. It follows from Corollary 3.3 that f and g are homogeneous of the same degree; set m = deg f = deg g. Moreover (by Proposition 2.4), δ(f) = pf and δ(g) = pg for some p ∈ k. Observe that δ_{a}(f) = maf. Indeed, if f = ∑_{i+j=m} u_{ij}x_{1}^{i}x_{2}^{j} with u_{ij} ∈ k, then

δ_{a}(f) = ∑_{i+j=m} u_{ij}δ_{a}(x_{1}^{i}x_{2}^{j}) = ∑_{i+j=m} u_{ij}ma x_{1}^{i}x_{2}^{j} = maf.

Hence,

δ_{0}(f) = δ(f) − δ_{a}(f) = pf − maf = (p − ma)f

and hence, Lemma 5.1 implies that δ_{0}(f) = 0. This means that f ∈ k[x_{1}], that is, f = ux_{1}^{m} for some u ∈ k ∖ {0}. Using the same argument we see that g = vx_{1}^{m} with v ∈ k ∖ {0}. Therefore, f/g = u/v ∈ k.
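A quick negative check of Lemma 5.2 on a hypothetical candidate: for δ(x_{1}) = ax_{1}, δ(x_{2}) = ax_{2} + x_{1}, the quotient w = x_{1}/x_{2} is not a constant, since the quotient rule gives δ(w) = −x_{1}^{2}/x_{2}^{2}:

```python
from fractions import Fraction as F
import random

# delta(x1) = a*x1, delta(x2) = a*x2 + x1, applied to w = x1/x2.
def delta_w(a, x1, x2):
    d1, d2 = a*x1, a*x2 + x1
    return (d1*x2 - x1*d2) / x2**2   # quotient rule

for _ in range(200):
    a = F(random.randint(1, 10))
    x1 = F(random.randint(1, 30))
    x2 = F(random.randint(1, 30))
    assert delta_w(a, x1, x2) == -x1**2 / x2**2 != 0
```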

Now we may continue our proof of Theorem 1.2.

(2) ⇒ (1). Assume that ρ_{1}, . . . , ρ_{s} are Z-independent and s ≥ n − 1. We must consider two cases: "s = n" and "s = n − 1".

Case (a): s = n. In this case the derivation d is diagonal; d(x_{i}) = ρ_{i}x_{i} for all i = 1, . . . , n.

Let f and g be nonzero relatively prime polynomials from k[X] such that d(f /g) = 0.

We know from Corollary 3.3 that f and g are homogeneous of the same degree. Moreover, d(f) = pf and d(g) = pg for some p ∈ k (Proposition 2.4). Since ρ_{1}, . . . , ρ_{n} are Z-independent, it is easy to see that f and g are monomials (d maps each monomial to its scalar multiple, and by Z-independence at most one exponent vector (i_{1}, . . . , i_{n}) can satisfy i_{1}ρ_{1} + · · · + i_{n}ρ_{n} = p). Put f = ax_{1}^{i_{1}} · · · x_{n}^{i_{n}} and g = bx_{1}^{j_{1}} · · · x_{n}^{j_{n}}, with a, b ∈ k ∖ {0}. Then we have:

i_{1}ρ_{1} + · · · + i_{n}ρ_{n} = p = j_{1}ρ_{1} + · · · + j_{n}ρ_{n}

and hence, (i_{1} − j_{1})ρ_{1} + · · · + (i_{n} − j_{n})ρ_{n} = 0. Thus, i_{1} = j_{1}, . . . , i_{n} = j_{n}, that is, f/g ∈ k.

Case (b): s = n − 1. Using induction on n we shall show that k(X)^{d} = k. It is obvious that n ≥ 2. If n = 2 then our assertion follows from Lemma 5.2.

Let n > 2. Without loss of generality we may assume that the matrix [a_{ij}] of d is of the block-diagonal form

diag(J_{2}(a), b_{1}, . . . , b_{r}), (5.1)

where a, b_{1}, . . . , b_{r} are Z-independent elements of k and where r = n − 2. Thus, the derivation d is of the form:

d(x_{1}) = ax_{1},
d(x_{2}) = ax_{2} + x_{1},
d(x_{3}) = b_{1}x_{3},
d(x_{4}) = b_{2}x_{4},
...
d(x_{n}) = b_{r}x_{n}.

Observe that d(k[x_{1}, . . . , x_{n−1}]) ⊆ k[x_{1}, . . . , x_{n−1}].

Put R = k[x_{1}, . . . , x_{n−1}] and let δ be the restriction of d to R. Then δ is a linear k-derivation of R. If n = 3 then δ is the derivation of Lemma 5.2. If n > 3 then the matrix of δ is of the form (5.1) with r = n − 3. Thus, by induction, k(x_{1}, . . . , x_{n−1})^{δ} = k.

Assume now that f and g are nonzero relatively prime polynomials from k[X] such
that d(f /g) = 0. Then (by Proposition 2.4) d(f ) = pf , d(g) = pg for some p ∈ k, and (by
Corollary 3.3) f , g are homogeneous of the same degree; set m = deg f = deg g. Denote
by y the variable x_{n} and put

f = Fy^{u} + F_{1}y^{u−1} + · · ·
g = Gy^{v} + G_{1}y^{v−1} + · · · ,

where u ≥ 0, v ≥ 0, and where F, G are nonzero homogeneous polynomials from R with deg F = m − u and deg G = m − v.

Comparing the coefficients of y^{u} in the equality d(f ) = pf we see that d(F ) = (p −
ub_{r})F , that is, δ(F ) = (p − ub_{r})F . Comparing the coefficients of y^{v} in the equality
d(g) = pg we have: δ(G) = (p − vb_{r})G.

Now, let us use Proposition 3.2 (for the derivation δ and the polynomials F and G). By
this proposition there exist non-negative integers j_{1}, j_{2}, i_{1}, . . . , i_{r−1} and j_{1}^{0}, j_{2}^{0}, i^{0}_{1}, . . . , i^{0}_{r−1}
such that

j_{1}a + j_{2}a + i_{1}b_{1} + · · · + i_{r−1}b_{r−1} = p − ub_{r},
j_{1}^{0}a + j_{2}^{0}a + i^{0}_{1}b_{1} + · · · + i^{0}_{r−1}b_{r−1} = p − vb_{r}.
Then

((j_{1} + j_{2}) − (j_{1}^{0} + j_{2}^{0}))a + (i_{1} − i^{0}_{1})b_{1} + · · · + (i_{r−1} − i^{0}_{r−1})b_{r−1} + (u − v)b_{r} = 0

and hence, u = v (because the elements a, b_{1}, . . . , b_{r} are Z-independent).

Thus, δ(F) = qF and δ(G) = qG, where q = p − ub_{r} = p − vb_{r}. This implies that δ(F/G) = 0. Since k(x_{1}, . . . , x_{n−1})^{δ} = k, we get F = cG for some nonzero c ∈ k.

Let us now consider the homogeneous polynomial h ∈ k[X] equal to f − cg. It is obvious that d(h) = ph. We shall show that h = 0.

Suppose that h ≠ 0. Since f and g are nonzero and relatively prime, the polynomials f and h are also nonzero and relatively prime. Repeating the above procedure for the polynomials f and h, we see that they have the same degree with respect to y. But deg_{y}h < deg_{y}f = u, because the coefficient of y^{u} in h is equal to F − cG = 0. This is a contradiction. Therefore h = 0, that is, f/g = c ∈ k. This completes the proof of Theorem 1.2.

### References

[1] A. van den Essen, Locally nilpotent derivations and their applications III, Catholic University, Nijmegen, Report 9330(1993).

[2] M. Ferrero, Y. Lequain, A. Nowicki, A note on locally nilpotent derivations, J. Pure Appl. Algebra 79 (1992), 45–50.

[3] I. Kaplansky, An Introduction to Differential Algebra, Hermann, Paris, 1976.

[4] E. R. Kolchin, Differential Algebra and Algebraic Groups, Academic Press, New York, London, 1973.

[5] J. Moulin Ollagnier, A. Nowicki, J.-M. Strelcyn, On the non-existence of constants of derivations: the proof of a theorem of Jouanolou and its development, to appear in Bull. Sci. Math.

[6] A. Nowicki, Rings and fields of constants for derivations in characteristic zero, to appear in J. Pure Appl. Algebra.

[7] A. Nowicki, M. Nagata, Rings of constants for k-derivations in k[x_{1}, . . . , x_{n}], J. Math. Kyoto Univ. 28 (1988), 111–118.

[8] R. Rentschler, Opérations du groupe additif sur le plan affine, C. R. Acad. Sci. Paris 267 (1968), 384–387.