
# On the non-existence of rational first integrals for systems of linear differential equations




(e-mail: anow@mat.uni.torun.pl)

March 11, 1994

Abstract

We present a description of all systems of linear differential equations which do not admit any rational first integral.

### 1 Introduction.

Let k[X] = k[x1, . . . , xn] be the polynomial ring in n variables over a field k of characteristic zero, and let k(X) = k(x1, . . . , xn) be the quotient field of k[X]. Assume that f = (f1, . . . , fn) ∈ k[X]n and consider a system of polynomial ordinary differential equations

dxi(t)/dt = fi(x1(t), . . . , xn(t)), (1.1)

where i = 1, . . . , n.

This system has a clear meaning if k is a subfield of the field C of complex numbers.

When k is arbitrary, the system still has a meaning: it is well known, and easy to prove, that (1.1) has a solution in k[[t]], the ring of formal power series over k in the variable t.

An element ϕ of k[X] ∖ k (resp. of k(X) ∖ k) is said to be a polynomial (resp. rational) first integral of the system (1.1) if the following identity holds

∑_{i=1}^{n} fi ∂ϕ/∂xi = 0. (1.2)

We would like to describe all the sequences f ∈ k[X]n such that the system (1.1) has no polynomial (or rational) first integral. The problem is known to be difficult even for n = 2.

In this paper we study the above problem in the case when f1, . . . , fn are homogeneous linear polynomials.


Let us assume that

fi = ∑_{j=1}^{n} aij xj, i = 1, . . . , n, (1.3)

where each aij belongs to k. Denote by λ1, . . . , λn the n eigenvalues of the matrix [aij] in an algebraic closure of k. Moreover, let Z and Z+ denote respectively the ring of integers and the set of non-negative integers.

The following two theorems are the main results of the paper.

Theorem 1.1. Let f = (f1, . . . , fn) ∈ k[X]n where f1, . . . , fn are linear polynomials of the form (1.3). The following conditions are equivalent:

(1) The system (1.1) does not admit any polynomial first integral.

(2) The eigenvalues λ1, . . . , λn of the matrix [aij] are Z+-independent.
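Condition (2) is easy to test for concrete rational eigenvalues. The following sketch (not from the paper; the function names are ours) uses the observation that a tuple of rational numbers is Z+-dependent exactly when it contains 0 or two entries of opposite sign, and cross-checks this criterion against a bounded brute-force search for a nonzero non-negative integer relation:

```python
from fractions import Fraction
from itertools import product

def zplus_independent(eigs):
    """Rational eigenvalues are Z+-dependent exactly when some eigenvalue
    is 0 or two eigenvalues have opposite signs; otherwise every nonzero
    non-negative integer combination has a strict sign and cannot vanish."""
    if any(e == 0 for e in eigs):
        return False
    return all(e > 0 for e in eigs) or all(e < 0 for e in eigs)

def zplus_independent_bruteforce(eigs, bound=6):
    """Search for a nonzero (i1,...,in) in {0,...,bound}^n with sum i_j*eig_j = 0."""
    n = len(eigs)
    for combo in product(range(bound + 1), repeat=n):
        if any(combo) and sum(i * e for i, e in zip(combo, eigs)) == 0:
            return False
    return True

examples = [
    [Fraction(1), Fraction(2), Fraction(5, 3)],   # all positive: independent
    [Fraction(1), Fraction(-1)],                  # mixed signs: 1*1 + 1*(-1) = 0
    [Fraction(0), Fraction(3)],                   # zero eigenvalue: dependent
]
for eigs in examples:
    assert zplus_independent(eigs) == zplus_independent_bruteforce(eigs)
```

For irrational eigenvalues no such sign criterion applies, and the question becomes one of linear relations over Q.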

Theorem 1.2. Let f = (f1, . . . , fn) ∈ k[X]n where f1, . . . , fn are linear polynomials of the form (1.3). The following conditions are equivalent:

(1) The system (1.1) does not admit any rational first integral.

(2) The Jordan matrix of the matrix [aij] has one of the following two forms:

(a) the diagonal matrix

diag(λ1, . . . , λn),

where the eigenvalues λ1, . . . , λn are Z-independent; or

(b) the matrix

diag(λ1, . . . , λi−1, J2(λi), λi+2, . . . , λn),

where J2(λi) is the 2 × 2 Jordan block with eigenvalue λi, for some i ∈ {1, . . . , n − 1} such that λi = λi+1 and the eigenvalues λ1, . . . , λi, λi+2, . . . , λn are Z-independent.

### 2 Derivations and rings of constants.

Throughout the paper we use the vocabulary of differential algebra (see for example [3] or [4]).

Let us assume that R is a commutative ring containing the field k and d is a k-derivation of R, that is, d : R −→ R is a k-linear mapping such that d(ab) = ad(b) + d(a)b for all a, b ∈ R. We denote by Rd the ring of constants of d, i.e., Rd = {a ∈ R; d(a) = 0}.

Let k ⊂ k0 be a field extension and let R ⊗k k0 be the tensor product of R and k0 over k. Let us consider the k0-linear mapping d ⊗ 1 : R ⊗k k0 −→ R ⊗k k0 such that (d ⊗ 1)(a ⊗ α) = d(a) ⊗ α, for all a ∈ R and α ∈ k0. It is obvious that R ⊗k k0 is a commutative ring containing k0 and d ⊗ 1 is a k0-derivation of R ⊗k k0.


Proposition 2.1. The k0-vector spaces Rd ⊗k k0 and (R ⊗k k0)d⊗1 are isomorphic.

Proof. The exact sequence 0 −→ Rd −→ R −→ R of vector spaces over k, where the first map is the inclusion i(x) = x for x ∈ Rd and the second is d, induces the exact sequence

0 −→ Rd ⊗k k0 −→ R ⊗k k0 −→ R ⊗k k0

of vector spaces over k0, with maps i ⊗ 1 and d ⊗ 1. Thus, (R ⊗k k0)d⊗1 = Ker(d ⊗ 1) = Im(i ⊗ 1) ≈ Rd ⊗k k0. □

If σ : R −→ R is a k-automorphism of R and d is a k-derivation of R, then the mapping σdσ−1 is also a k-derivation of R. Two k-derivations d and δ of R are said to be equivalent if there exists a k-automorphism σ of R such that d = σδσ−1.

Recall that when R is without zero divisors, the derivation d can be extended in a unique way to its quotient field by setting: d(a/b) = b−2(d(a)b − ad(b)).

It is easy to check the following

Proposition 2.2. Let d and δ be equivalent k-derivations of R. Then Rd = k if and only if Rδ = k. Moreover, if R is without zero divisors and R0 is the quotient field of R, then (R0)d = k if and only if (R0)δ = k. □

We shall use the above notations for the ring k[X] and its quotient field k(X). Let us note that a k-derivation of k[X] is completely defined by its values on the variables x1, . . . , xn. If f = (f1, . . . , fn) ∈ k[X]n then there exists a unique k-derivation d of k[X]

such that d(xi) = fi for all i = 1, . . . , n. This derivation is defined by

d(ϕ) = ∑_{i=1}^{n} fi ∂ϕ/∂xi, (2.1)

for any ϕ ∈ k[X].

Thus, the set of all polynomial first integrals of (1.1) coincides with the set k[X]d ∖ k, where

k[X]d = {ϕ ∈ k[X]; d(ϕ) = 0}

and d is the k-derivation defined by (2.1). Moreover, the set of all rational first integrals of (1.1) coincides with the set k(X)d ∖ k, where

k(X)d = {ϕ ∈ k(X); d(ϕ) = 0}

and d is the unique extension of the k-derivation (2.1) to k(X).

The rings of constants k[X]d and k(X)d have been intensively studied for a long time; see for example [7], [6] and [1], where many references on this subject can be found.

Every k-derivation d of k[X] determines two subfields of k(X): the subfield k(X)d and the subfield (k[X]d)0, the quotient field of k[X]d. The following example shows that these subfields can be different.

Example 2.3. Assume that n = 2 and k[X] = k[x, y]. Let d = x∂x + y∂y. Then (k[X]d)0 ≠ k(X)d.


Proof. It is clear that k[X]d = k. Thus (k[X]d)0 = k. However k(X)d ≠ k, since x/y ∈ k(X)d. Indeed:

d(x/y) = (d(x)y − xd(y))/y^2 = (xy − xy)/y^2 = 0/y^2 = 0. □

Let us note the following proposition, which is easy to prove.
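Example 2.3 can also be checked mechanically. A minimal sketch (our own encoding, not from the paper), representing polynomials in k[x, y] as dictionaries of monomials over Q: the derivation d = x∂x + y∂y multiplies each monomial by its total degree, so it kills only the constants, while the numerator d(x)y − xd(y) of d(x/y) vanishes identically:

```python
from fractions import Fraction

# Polynomials in k[x, y] stored as {(i, j): coeff} for the monomial x^i y^j.
def d(poly):
    """The derivation d = x*d/dx + y*d/dy: d(x^i y^j) = (i + j) x^i y^j."""
    return {m: (m[0] + m[1]) * c for m, c in poly.items() if (m[0] + m[1]) * c != 0}

def mul(p, q):
    out = {}
    for (i, j), a in p.items():
        for (k, l), b in q.items():
            m = (i + k, j + l)
            out[m] = out.get(m, Fraction(0)) + a * b
    return {m: c for m, c in out.items() if c != 0}

def sub(p, q):
    out = dict(p)
    for m, c in q.items():
        out[m] = out.get(m, Fraction(0)) - c
    return {m: c for m, c in out.items() if c != 0}

x = {(1, 0): Fraction(1)}
y = {(0, 1): Fraction(1)}

# d kills only the degree-0 terms, so k[x, y]^d = k.
assert d({(0, 0): Fraction(5)}) == {}
assert d(x) == x and d(y) == y

# But x/y is a rational constant: the numerator of d(x/y) is identically 0.
assert sub(mul(d(x), y), mul(x, d(y))) == {}
```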

Proposition 2.4. Let d be a k-derivation of k[X]. If f and g are nonzero relatively prime polynomials from k[X], then d(f/g) = 0 if and only if d(f) = pf and d(g) = pg for some p ∈ k[X]. □

Note also an immediate consequence of Proposition 2.1.

Proposition 2.5. Let d be a k-derivation of k[X], and let k0 be an overfield of k. Denote by d0 the k0-derivation of k0[X] such that d0(xi) = d(xi) for i = 1, . . . , n. Then k[X]d = k if and only if k0[X]d0 = k0. □

The next proposition shows that the field k(X)d has a similar property.

Proposition 2.6. Let d be a k-derivation of k[X], and let k0 be an overfield of k. Denote by d0 the k0-derivation of k0[X] such that d0(xi) = d(xi) for i = 1, . . . , n. Then k(X)d = k if and only if k0(X)d0 = k0.

For the proof of this proposition we need three lemmas.

Lemma 2.7. Assume that L(t) is the field of rational functions in one variable t over a field L containing k. Let d be a k-derivation of L such that Ld = k, and let δ be the k-derivation of L(t) such that δ | L = d and δ(t) = 0. Then L(t)δ = k(t).

Proof. It is obvious that k(t) ⊆ L(t)δ. Let w = f/g ∈ L(t)δ with f, g ∈ L[t], and put p = deg(g). If p = 0 then it is clear that w ∈ k[t] ⊂ k(t). Let p ≥ 1. We may assume that g is monic and that p is minimal. Set g = t^p + b1t^(p−1) + · · · + bp, where b1, . . . , bp ∈ L.

Then we have: δ(g) = d(b1)t^(p−1) + · · · + d(bp).

Observe that δ(g) = 0. Indeed, δ(w) = 0 gives δ(f)g = fδ(g); so if δ(g) ≠ 0 then w = δ(f)/δ(g), and deg δ(g) < p contradicts the minimality of p.

Thus, d(b1) = · · · = d(bp) = 0 and hence, g ∈ k[t] (because Ld = k). Consequently, δ(f) = wδ(g) = w · 0 = 0, so the coefficients of f lie in Ld = k, that is, f ∈ k[t]. Therefore, w = f/g ∈ k(t). □

Lemma 2.8. Let L be a field containing k and let S be a set of algebraically independent elements over L. Let d be a k-derivation of L such that Ld = k. If δ is the k-derivation of L(S) such that δ | L = d and δ(s) = 0, for every s ∈ S, then L(S)δ = k(S).

Proof. It is obvious that k(S) ⊆ L(S)δ. Let w ∈ L(S)δ. Since w ∈ L(S), there exists a finite subset S0 of S such that w ∈ L(S0). Denote by δ0 the restriction of the derivation δ to L(S0). Then w ∈ L(S0)δ0 and hence, by Lemma 2.7 and by induction, w ∈ k(S0) ⊆ k(S). □


Lemma 2.9. Let F be a field containing k and let F ⊆ k0 be an algebraic field extension.

Assume that δ is an F-derivation of F[X] and let d0 be the k0-derivation of k0[X] such that d0(xi) = δ(xi) for i = 1, . . . , n. If F(X)δ = F then k0(X)d0 = k0.

Proof. Suppose that k0(X)d0 ≠ k0 and let w ∈ k0(X)d0 ∖ k0. Since k0(X) is algebraic over F(X), there exist a1, . . . , ap ∈ F(X) such that w^p + a1w^(p−1) + · · · + ap = 0. Let us assume that p is minimal. Then δ(a1)w^(p−1) + · · · + δ(ap) = d0(w^p + a1w^(p−1) + · · · + ap) = d0(0) = 0 and hence, by the minimality of p, δ(a1) = · · · = δ(ap) = 0, that is, a1, . . . , ap ∈ F (because F(X)δ = F). Thus, w is algebraic over F. This implies that w is algebraic over k0. But k0 is algebraically closed in k0(X). Hence, w ∈ k0 and we have a contradiction. □

Proof of Proposition 2.6. If k0(X)d0 = k0 then it is clear that k(X)d = k. Assume now that k(X)d = k. Let S be a subset of k0 such that the extension k ⊂ k(S) is purely transcendental and the extension k(S) ⊂ k0 is algebraic. Put L = k(X), F = k(S); note that F(X) = k(X)(S) = L(S). Then, by Lemma 2.8, F(X)δ = L(S)δ = k(S) = F, where δ is the F-derivation of F[X] such that δ(xi) = d(xi) for all i = 1, . . . , n. Now Lemma 2.9 implies that k0(X)d0 = k0. □

### 3 Linear derivations.

In the present section we recall from [5] some useful facts on linear derivations.

Let us make precise some notions. Assume that d is a k-derivation of k[X] such that

d(xi) = ∑_{j=1}^{n} aij xj for i = 1, . . . , n, (3.1)

where each aij belongs to k. Let λ1, . . . , λn be the n eigenvalues (belonging to an algebraic closure k̄ of k) of the matrix [aij]. Moreover, let the Jordan matrix (in k̄) of the matrix [aij] be of the form:

diag(Jm1(ρ1), . . . , Jms(ρs)), (3.2)

where s ≥ 1, m1 + · · · + ms = n, {ρ1, . . . , ρs} = {λ1, . . . , λn}, and each Jmi(ρi) is the mi × mi Jordan block (3.3) with ρi on the diagonal and 1 on the superdiagonal.

The homogeneity of polynomials d(x1), . . . , d(xn) together with the fact that they are of the same degree 1 has the following consequences.

Proposition 3.1 ([5]). Let d be the k-derivation defined by (3.1).

(1) If f ∈ k[X]d then all homogeneous components of f also belong to k[X]d.

(2) Let f, p ∈ k[X] and assume that f ≠ 0 and d(f) = pf. Then p ∈ k and d(g) = pg for all homogeneous components g of f. □


The following proposition plays an important role in our considerations.

Proposition 3.2 ([5] Lemma 2.3). Let d be the k-derivation of k[X] defined by (3.1).

Assume that f is a nonzero homogeneous polynomial from k[X] satisfying the equality d(f ) = pf for some p ∈ k. Then, there exist non-negative integers i1, . . . , in such that

i1λ1 + · · · + inλn = p and i1 + · · · + in = deg f. (3.4) □

Using the above propositions one can deduce the following

Corollary 3.3. Let d be a k-derivation of k[X] defined by (3.1) with the Jordan matrix (3.2). Assume that the elements ρ1, . . . , ρs are Z-independent. Then the following three properties hold:

(1) If f and g are nonzero homogeneous polynomials from k[X] such that d(f ) = pf and d(g) = pg for some p ∈ k, then deg f = deg g.

(2) If f is a polynomial from k[X] such that d(f ) = pf for some p ∈ k, then f is homogeneous.

(3) If d(f /g) = 0, where f and g are nonzero relatively prime polynomials from k[X], then f and g are homogeneous of the same degree.

Proof. (1). We know that {ρ1, . . . , ρs} = {λ1, . . . , λn}. Hence, by Proposition 3.2, there exist non-negative integers i1, . . . , is and j1, . . . , js such that

i1ρ1 + · · · + isρs = p, i1 + · · · + is = deg f, j1ρ1 + · · · + jsρs = p, j1 + · · · + js = deg g.

Then (i1 − j1)ρ1 + · · · + (is − js)ρs = 0 and hence, i1 = j1, . . . , is = js, because the elements ρ1, . . . , ρs are Z-independent. Therefore, deg f = i1 + · · · + is = j1 + · · · + js = deg g.

(2). Suppose that f contains two nonzero homogeneous components f1 and f2 with deg f1 ≠ deg f2. Then d(f1) = pf1 and d(f2) = pf2 (by Proposition 3.1) and so, using (1), we obtain a contradiction: deg f1 = deg f2.

(3). It is a consequence of (1), (2) and Proposition 2.4. □

### 4 Proof of Theorem 1.1.

In view of Section 2 we must prove that if k is an arbitrary field of characteristic zero and d is the k-derivation of k[X] defined by (3.1), then the following two conditions are equivalent:

(1) k[X]d = k;

(2) The n eigenvalues λ1, . . . , λn of the matrix [aij] are Z+-independent.

(2) ⇒ (1). Suppose that there exists f ∈ k[X] such that d(f) = 0 and f ∉ k. We may assume (by Proposition 3.1) that f is homogeneous. Then d(f) = 0f and hence, by Proposition 3.2, there exist i1, . . . , in ∈ Z+ satisfying (3.4). Since i1λ1 + · · · + inλn = 0 and λ1, . . . , λn are Z+-independent, i1 = · · · = in = 0. Therefore, deg f = i1 + · · · + in = 0, which contradicts the fact that f ∉ k.


(1) ⇒ (2). Assume now that k[X]d = k. Let k0 be an algebraically closed field containing k and let d0 be the k0-derivation of k0[X] such that d0(xi) = d(xi) for all i = 1, . . . , n. Then (by Proposition 2.5) k0[X]d0 = k0, so we may assume that the field k is algebraically closed. Moreover, using Proposition 2.2, we may assume that [aij] is the Jordan matrix (3.2).

Therefore, there exists a subset {y1, . . . , ys} of {x1, . . . , xn} such that d(yi) = ρiyi for all i = 1, . . . , s.

Suppose now that the eigenvalues λ1, . . . , λn are not Z+-independent. Then there exists a nonzero sequence (j1, . . . , js) of non-negative integers such that j1ρ1 + · · · + jsρs = 0. Put f = y1^j1 · · · ys^js. Then f ∈ k[X] ∖ k and

d(f) = (j1ρ1 + · · · + jsρs)f = 0f = 0.

But this is a contradiction. This completes the proof of Theorem 1.1. □
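The monomial constructed in this proof is a first integral in the dynamical sense as well: its value is constant along every solution of the system. A numerical sketch (our own example, not from the paper): take the diagonal system with eigenvalues 2 and −1, which satisfy the Z+-dependence 1·2 + 2·(−1) = 0, so f = xy^2 should be constant along solutions of x′ = 2x, y′ = −y:

```python
import math

# Diagonal system dx/dt = 2x, dy/dt = -y. The eigenvalues 2 and -1 satisfy
# 1*2 + 2*(-1) = 0, so f = x * y^2 is a polynomial first integral.
def solution(t, x0=3.0, y0=5.0):
    """Explicit solution through (x0, y0) at t = 0."""
    return x0 * math.exp(2 * t), y0 * math.exp(-t)

def f(x, y):
    return x * y * y

# f is constant along the solution curve, up to floating-point rounding.
value0 = f(*solution(0.0))
for t in (0.5, 1.0, 2.0, 7.0):
    assert abs(f(*solution(t)) - value0) < 1e-9 * abs(value0)
```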

### 5 Proof of Theorem 1.2.

In view of Proposition 2.6 we may assume that k is an algebraically closed field of characteristic zero. Let d be a k-derivation of k[X] defined by (3.1) with the Jordan matrix (3.2). We must show that the following two conditions are equivalent:

(1) k(X)d = k;

(2) The elements ρ1, . . . , ρs are Z-independent and s ≥ n − 1.

In view of Proposition 2.2, without loss of generality we may assume that the matrix [aij] of the derivation d coincides with the Jordan matrix (3.2). Therefore, there exists a subset {y1, . . . , ys} of {x1, . . . , xn} such that d(yi) = ρiyi for all i = 1, . . . , s.

(1) ⇒ (2). Let us assume that k(X)d = k.

First we shall show that ρ1, . . . , ρs are Z-independent. Suppose that there exists a nonzero sequence (j1, . . . , js) of integers such that j1ρ1 + · · · + jsρs = 0. Put ϕ = y1^j1 · · · ys^js. Then we have a contradiction: ϕ ∈ k(X) ∖ k and d(ϕ) = (j1ρ1 + · · · + jsρs)ϕ = 0ϕ = 0.

Now we shall prove that s ≥ n − 1. Let us suppose that s < n − 1 and consider the numbers m1, . . . , ms defined in (3.2). We know that m1 + · · · + ms = n. Therefore, we must consider only the following two cases:

Case I: There exists j ∈ {1, . . . , s} such that mj ≥ 3.

In this case there exist three variables x, y, z ∈ {x1, . . . , xn} such that d(x) = ax, d(y) = ay + x and d(z) = az + y, where a = ρj. Put ϕ = f/g, where f = x^2 and g = y^2 − 2xz. Then d(f) = 2af, d(g) = 2ag and hence, d(ϕ) = 0 and ϕ ∉ k. This contradicts the fact that k(X)d = k.

Case II: There exist i, j ∈ {1, . . . , s} such that i ≠ j, mi ≥ 2 and mj ≥ 2.

In this case there exist four variables x, y, z, t ∈ {x1, . . . , xn} such that d(x) = ax, d(y) = ay + x, d(z) = bz and d(t) = bt + z, where a = ρi and b = ρj. Then (xt − yz)/xz ∈ k(X)d ∖ k.
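The rational first integrals exhibited in Cases I and II can be verified without symbolic algebra: by Proposition 2.4, d(f/g) = 0 amounts to the polynomial identity d(f)g = fd(g), which can be tested at exact rational sample points. A sketch (our own encoding; the helper d_caseI is hypothetical, not from the paper):

```python
from fractions import Fraction as F

# Case I:  d(x) = a x, d(y) = a y + x, d(z) = a z + y; phi = x^2 / (y^2 - 2 x z).
# Case II: d(x) = a x, d(y) = a y + x, d(z) = b z, d(t) = b t + z;
#          phi = (x t - y z) / (x z).
# We check d(f) * g == f * d(g) with exact rational arithmetic.

def d_caseI(x, y, z, a, grad):
    """Value of d(h) at (x, y, z): sum of d(x_i) times dh/dx_i, with grad
    returning the partial derivatives of h at the point."""
    gx, gy, gz = grad(x, y, z)
    return a * x * gx + (a * y + x) * gy + (a * z + y) * gz

a = F(7, 3)
for (x, y, z) in [(F(1), F(2), F(5)), (F(-3), F(1, 2), F(4))]:
    f = x * x
    g = y * y - 2 * x * z
    df = d_caseI(x, y, z, a, lambda x, y, z: (2 * x, F(0), F(0)))
    dg = d_caseI(x, y, z, a, lambda x, y, z: (-2 * z, 2 * y, -2 * x))
    assert df * g == f * dg

b = F(-5, 2)
for (x, y, z, t) in [(F(1), F(2), F(3), F(4)), (F(2), F(-1), F(5), F(1, 3))]:
    f = x * t - y * z
    g = x * z
    # d(f) and d(g) expanded by the Leibniz rule for the Case II system.
    df = a * x * t + x * (b * t + z) - (a * y + x) * z - y * (b * z)
    dg = a * x * z + x * (b * z)
    assert df * g == f * dg
```

Both checks succeed because d(f) = 2af, d(g) = 2ag in Case I and d(f) = (a + b)f, d(g) = (a + b)g in Case II.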

This completes the proof of implication (1) ⇒ (2).


For the proof of implication (2) ⇒ (1) we need two lemmas. Let us recall that if δ is a k-derivation of a ring R then δ is called locally nilpotent if for each r ∈ R there exists a natural number m such that δ^m(r) = 0.

Lemma 5.1 ([8]). Let R be a commutative ring without zero divisors containing k and let δ be a locally nilpotent k-derivation of R. If δ(a) = ua for some u, a ∈ R then δ(a) = 0.

Proof. If r ∈ R then we define degδ(r) as follows:

degδ(r) = l if r ≠ 0, δ^l(r) ≠ 0 and δ^(l+1)(r) = 0;

degδ(r) = −∞ if r = 0.

The function degδ behaves like the usual function degree over a polynomial ring (see [2]).

In particular, if a, b ∈ R then degδ(ab) = degδ(a) + degδ(b).

Assume now that δ(a) = ua and suppose that δ(a) ≠ 0. Then a ≠ 0, u ≠ 0, degδ δ(a) = degδ a − 1 and we have a contradiction:

degδ a − 1 = degδ δ(a) = degδ(ua) = degδ a + degδ u ≥ degδ a. □

Lemma 5.2. Let δ be a k-derivation of k[x1, x2] such that

δ(x1) = ax1, δ(x2) = ax2 + x1,

where a ∈ k. If a ≠ 0 then k(x1, x2)δ = k.

Proof. Let δa and δ0 be the k-derivations of k[x1, x2] such that δa(x1) = ax1, δa(x2) = ax2, δ0(x1) = 0 and δ0(x2) = x1. Then δ = δa + δ0, δ0 is locally nilpotent, and k[x1, x2]δ0 = k[x1].

Suppose that f and g are nonzero relatively prime polynomials from k[x1, x2] such that δ(f/g) = 0. It follows from Corollary 3.3 that f and g are homogeneous of the same degree; set m = deg f = deg g. Moreover (by Proposition 2.4), δ(f) = pf and δ(g) = pg for some p ∈ k. Observe that δa(f) = maf. Indeed, if f = ∑_{i+j=m} uij x1^i x2^j with uij ∈ k, then

δa(f) = ∑_{i+j=m} uij δa(x1^i x2^j) = ∑_{i+j=m} uij · ma · x1^i x2^j = maf.

Hence,

δ0(f) = δ(f) − δa(f) = pf − maf = (p − ma)f

and hence, Lemma 5.1 implies that δ0(f) = 0. This means that f ∈ k[x1], that is, f = ux1^m for some u ∈ k ∖ {0}. Using the same argument we see that g = vx1^m with v ∈ k ∖ {0}.

Therefore, f/g = u/v ∈ k. □
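The step δa(f) = maf above is Euler's identity for homogeneous polynomials, scaled by a. A quick exact check on a sample homogeneous cubic (our own encoding of polynomials in k[x1, x2] as monomial dictionaries, with k = Q):

```python
from fractions import Fraction as F

# Polynomials stored as {(i, j): coeff} for the monomial x1^i x2^j.
def x1_dx1(f):
    """x1 * df/dx1 on monomial dictionaries."""
    return {(i, j): i * c for (i, j), c in f.items()}

def x2_dx2(f):
    """x2 * df/dx2 on monomial dictionaries."""
    return {(i, j): j * c for (i, j), c in f.items()}

def delta_a(f, a):
    """delta_a = a * (x1*d/dx1 + x2*d/dx2), computed from the two partials."""
    return {m: a * (x1_dx1(f).get(m, 0) + x2_dx2(f).get(m, 0)) for m in f}

a = F(4, 7)
f = {(3, 0): F(2), (2, 1): F(-1), (0, 3): F(5)}   # homogeneous of degree m = 3

# Euler's identity scaled by a: delta_a(f) = m * a * f.
assert delta_a(f, a) == {m: 3 * a * c for m, c in f.items()}
```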

Now we may continue our proof of Theorem 1.2.

(2) ⇒ (1). Assume that ρ1, . . . , ρs are Z-independent and s ≥ n − 1. We must consider two cases: "s = n" and "s = n − 1".

Case (a): s = n. In this case the derivation d is diagonal; d(xi) = ρixi for all i = 1, . . . , n.


Let f and g be nonzero relatively prime polynomials from k[X] such that d(f /g) = 0.

We know from Corollary 3.3 that f and g are homogeneous of the same degree. Moreover, d(f) = pf and d(g) = pg for some p ∈ k (Proposition 2.4). Since ρ1, . . . , ρn are Z-independent, it is easy to see that f and g are monomials. Put f = ax1^i1 · · · xn^in and g = bx1^j1 · · · xn^jn, with a, b ∈ k ∖ {0}. Then we have:

i1ρ1 + · · · + inρn = p = j1ρ1 + · · · + jnρn

and hence, (i1 − j1)ρ1 + · · · + (in − jn)ρn = 0. Thus, i1 = j1, . . . , in = jn, that is, f/g ∈ k.

Case (b): s = n − 1. Using induction on n we shall show that k(X)d = k. It is obvious that n ≥ 2. If n = 2 then our assertion follows from Lemma 5.2.

Let n > 2. Without loss of generality we may assume that the matrix [aij] of d is the block-diagonal matrix

diag(J2(a), b1, . . . , br), (5.1)

with J2(a) the 2 × 2 Jordan block with eigenvalue a,

where a, b1, . . . , br are Z-independent elements of k and where r = n − 2. Thus, the derivation d is of the form:

d(x1) = ax1,
d(x2) = ax2 + x1,
d(x3) = b1x3,
d(x4) = b2x4,
. . .
d(xn) = brxn.

Observe that d(k[x1, . . . , xn−1]) ⊆ k[x1, . . . , xn−1].

Put R = k[x1, . . . , xn−1] and let δ be the restriction of d to R. Then δ is a linear k-derivation of R. If n = 3 then δ is the derivation from Lemma 5.2. If n > 3 then the matrix of δ is of the form (5.1) with r = n − 3. Thus, by induction, (R0)δ = k, where R0 = k(x1, . . . , xn−1) is the quotient field of R.

Assume now that f and g are nonzero relatively prime polynomials from k[X] such that d(f/g) = 0. Then (by Proposition 2.4) d(f) = pf, d(g) = pg for some p ∈ k, and (by Corollary 3.3) f, g are homogeneous of the same degree; set m = deg f = deg g. Denote by y the variable xn and put

f = Fy^u + F1y^(u−1) + · · · ,
g = Gy^v + G1y^(v−1) + · · · ,

where u ≥ 0, v ≥ 0, and where F, G are nonzero homogeneous polynomials from R with deg F = m − u and deg G = m − v.

Comparing the coefficients of y^u in the equality d(f) = pf we see that d(F) = (p − ubr)F, that is, δ(F) = (p − ubr)F. Comparing the coefficients of y^v in the equality d(g) = pg we have: δ(G) = (p − vbr)G.


Now, let us use Proposition 3.2 (for the derivation δ and the polynomials F and G). By this proposition there exist non-negative integers j1, j2, i1, . . . , ir−1 and j′1, j′2, i′1, . . . , i′r−1 such that

j1a + j2a + i1b1 + · · · + ir−1br−1 = p − ubr,
j′1a + j′2a + i′1b1 + · · · + i′r−1br−1 = p − vbr.

Then

((j1 + j2) − (j′1 + j′2))a + (i1 − i′1)b1 + · · · + (ir−1 − i′r−1)br−1 + (u − v)br = 0

and hence, u = v (because the elements a, b1, . . . , br are Z-independent).

Thus, δ(F) = qF and δ(G) = qG, where q = p − ubr = p − vbr. This implies that δ(F/G) = 0. Since (R0)δ = k, F = cG for some nonzero c ∈ k.

Let us now consider the homogeneous polynomial h ∈ k[X] equal to f − cg. It is obvious that d(h) = ph. We shall show that h = 0.

Suppose that h ≠ 0. Since f and g are nonzero and relatively prime, the polynomials f and h are also nonzero and relatively prime. Repeating the above procedure for the polynomials f and h we see that they have the same degree with respect to y. But degy h < degy f = u, because the coefficient of y^u in h is equal to F − cG = 0. This is a contradiction. Therefore h = 0, that is, f/g = c ∈ k. This completes the proof of Theorem 1.2. □

### References

[1] A. van den Essen, Locally nilpotent derivations and their applications III, Catholic University, Nijmegen, Report 9330 (1993).

[2] M. Ferrero, Y. Lequain, A. Nowicki, A note on locally nilpotent derivations, J. Pure Appl. Algebra 79 (1992), 45–50.

[3] I. Kaplansky, An Introduction to Differential Algebra, Hermann, Paris, 1976.

[4] E. R. Kolchin, Differential Algebra and Algebraic Groups, Academic Press, New York, London, 1973.

[5] J. Moulin Ollagnier, A. Nowicki, J.-M. Strelcyn, On the non-existence of constants of derivations: the proof of a theorem of Jouanolou and its development, to appear in Bull. Sci. Math.

[6] A. Nowicki, Rings and fields of constants for derivations in characteristic zero, to appear in J. Pure Appl. Algebra.

[7] A. Nowicki, M. Nagata, Rings of constants for k-derivations in k[x1, . . . , xn], J. Math. Kyoto Univ. 28 (1988), 111–118.

[8] R. Rentschler, Opérations du groupe additif sur le plan affine, C. R. Acad. Sc. Paris 267 (1968), 384–387.
