A LINEAR PROGRAMMING BASED ANALYSIS OF THE CP-RANK OF COMPLETELY POSITIVE MATRICES

YINGBO LI∗, ANTON KUMMERT∗, ANDREAS FROMMER∗∗

∗ Department of Electrical and Information Engineering, University of Wuppertal, Rainer-Gruenter-Street 21

42119 Wuppertal, Germany

e-mail: {yingbo, kummert}@uni-wuppertal.de

∗∗ Department of Mathematics, University of Wuppertal, Gauß Street 20

42119 Wuppertal, Germany

e-mail: frommer@math.uni-wuppertal.de

A real matrix A is said to be completely positive (CP) if it can be decomposed as A = BB^T, where the real matrix B has exclusively non-negative entries. Let k be the rank of A and Φ_k the least possible number of columns of the matrix B, the so-called completely positive rank (cp-rank) of A. The present work is devoted to a study of a general upper bound for the cp-rank of an arbitrary completely positive matrix A and its dependence on the ordinary rank k. This general upper bound of the cp-rank has been proved to be at most k(k + 1)/2. In a recent pioneering work of Barioli and Berman it was slightly reduced by one, which means that Φ_k ≤ k(k + 1)/2 − 1 holds for k ≥ 2. An alternative constructive proof of the same result is given in the present paper based on the properties of the simplex algorithm known from linear programming.

Our proof illuminates complete positivity from a different point of view. Discussions concerning dual cones are not needed here. In addition, the proof is of a constructive nature, i.e. starting from an arbitrary decomposition A = B_1 B_1^T (B_1 ≥ 0), a new decomposition A = B_2 B_2^T (B_2 ≥ 0) can be generated in a constructive manner, where the number of column vectors of B_2 does not exceed k(k + 1)/2 − 1. This algorithm is based mainly on the well-known techniques stemming from linear programming, where the pivot step of the simplex algorithm plays a key role.

Keywords: completely positive matrices, cp-rank, linear programming, simplex algorithm, basic feasible solution, pivot process

1. Introduction

An n × n real symmetric matrix A is called completely positive (belonging to the set CP of completely positive matrices) if an entry-wise non-negative n × m matrix B exists with A = BB^T. The product A = BB^T can alternatively be written in its rank 1 representation as A = Σ_{i=1}^m b_i b_i^T, in which each b_i ∈ R^n_+ denotes the i-th non-negative column of the matrix B. The minimum number of columns m for which such a factorization A = BB^T exists is called the cp-rank of A, and denoted by Φ_k in relation to the ordinary rank k of A.

It is to be pointed out that all vectors throughout this paper are column vectors, unless explicitly defined to be row vectors.

The non-negative decomposition of completely positive matrices is of interest in various applications including, e.g., the study of block designs arising in combinatorial analysis (Hall, 1967), problems in statistics, the theory of inequalities, energy conservation (Gray and Wilson, 1980) and models of DNA evolution (Kelly, 1994).

No definitive test is known yet to determine whether a given real symmetric matrix A is completely positive. Many studies on this problem have been performed (Berman and Plemmons, 1979; Berman and Shaked-Monderer, 2003). One sufficient condition for a non-negative symmetric matrix to be completely positive, given in (Drew et al., 1994), says that the comparison matrix of A is an M-matrix. A special case of this sufficient condition is that the matrix A is diagonally dominant (Kaykobad, 1988). A qualitative characterization of completely positive matrices using graphs is that the graph of a cp matrix contains no odd cycle of length 5 or more (Berman, 1993; Kogan and Berman, 1993).


The question of a general upper bound for the cp-rank which works for any completely positive matrix of dimension n × n was asked by Hall and Newman (1963), with the conclusion that Φ_k < 2^n. The definition of the cp-rank as above implies that Φ_k is obviously greater than or equal to the ordinary rank k of A. Hannah and Laffey (1983) derived the inequality Φ_k ≤ n(n + 1)/2 (which is independent of k, with k ≤ n). Recently, the sharpened upper bound k(k + 1)/2 for Φ_k was reduced by Barioli and Berman (2003) to k(k + 1)/2 − 1 if k ≥ 2. In (Drew et al., 1994) it was conjectured that Φ_k < n²/4. However, this conjecture was proved only for matrices with special graphs, which are bipartite with the two parts as balanced as possible, or for graphs which contain no odd cycle of length 5 or more (Drew and Johnson, 1996), or for all graphs on 5 vertices which are not a complete graph (Berman, 1993). In the present paper we give a constructive proof alternative to Barioli and Berman's result that

Φ_k ≤ k(k + 1)/2 − 1,   k ≥ 2.

The technique of the proof presented here involves the properties of the simplex algorithm known from linear programming and illuminates the problem from a different point of view. Above all, discussions concerning dual cones are not required. Linear programming is the process of minimizing (or maximizing) a linear real-valued objective function subject to a finite number of linear equality and inequality constraints. Linear programming problems are encountered in many branches of technology, science, and economics. In this work, the problem of determining an upper bound for the cp-rank of completely positive matrices is reduced to the problem of determining the maximal number of non-zero elements in the basic feasible solutions of a linear programming problem. By means of appropriate matrix operations and based on the properties of the simplex algorithm, the desired result will be achieved. Furthermore, according to the described constructive procedure, such a decomposition which satisfies the claimed bound constraint can be generated step by step from a known decomposition of an arbitrary size.

2. First Reduction Step

We start with a known rank 1 representation A = Σ_{i=1}^m b_i b_i^T of a given n × n completely positive matrix, where m is an arbitrarily large, however finite, integer. From now on, we assume that no column b_i of B is zero, since otherwise such a column could have been removed from B. After the application of the vec-operator (stacking the columns of a matrix to form a column vector) to both sides, the following expression can be obtained:

vec(A) = vec(Σ_{i=1}^m b_i b_i^T) = Σ_{i=1}^m vec(b_i b_i^T) = Σ_{i=1}^m b_i ⊗ b_i,   (1)

where ‘⊗’ denotes the Kronecker product. Consequently, we may define an n² × m matrix

C = [b_1 ⊗ b_1, b_2 ⊗ b_2, . . . , b_m ⊗ b_m]   (2)

with c_i = b_i ⊗ b_i, i = 1, . . . , m, so that (1) can be represented as

vec(A) = C [1, 1, . . . , 1]^T.   (3)

It is evident that the non-negative linear system of equations vec(A) = Cλ with λ_i ≥ 0 has at least one valid solution, namely the column vector λ_0 = [1, 1, . . . , 1]^T. In fact, from now on we may look upon Cλ = vec(A) with λ_i ≥ 0 as the linear constraints for an arbitrary linear objective function, which is a linear programming problem in standard form with all variables of λ non-negatively constrained.
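To make the construction (1)–(3) concrete, the following short Python sketch (an illustration added for this presentation, not part of the original derivation; it assumes NumPy is available) builds C from the columns of B and checks that λ_0 = [1, 1, . . . , 1]^T indeed satisfies Cλ_0 = vec(A):

    import numpy as np

    def build_C(B):
        # Columns of C are the Kronecker products b_i (x) b_i of the columns of B.
        return np.column_stack([np.kron(B[:, i], B[:, i]) for i in range(B.shape[1])])

    # Data of the worked example in Section 4.
    B = np.array([[1, 0, 0, 1, 0, 1, 1],
                  [0, 1, 0, 1, 1, 0, 2],
                  [0, 0, 1, 0, 1, 1, 3]], dtype=float)
    A = B @ B.T
    C = build_C(B)
    vecA = A.flatten(order="F")          # vec(A): stack the columns of A
    assert np.allclose(C @ np.ones(B.shape[1]), vecA)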

First of all, it is important to indicate that the feasible polyhedron of this linear programming problem is, on the one hand, non-empty due to the existence of the feasible solution λ_0 and, on the other hand, bounded due to the non-negativity of C, vec(A) and λ, and the fact that no column in B, and thus no column in C, is zero. Thus, the feasible region represents a polytope. Consequently, the available feasible solutions which satisfy these linear constraints can be expressed as convex combinations of the so-called basic feasible solutions, i.e. the extreme points (or vertices) of the polytope.

According to the properties of linear programming problems, in the basic feasible solutions of Cλ = vec(A) with λ_i ≥ 0, only those elements λ_i which correspond to a maximal set of linearly independent columns of C may be non-zero. Therefore, all of the basic feasible solutions possess at most rank(C) strictly positive elements. Because of the existence of the solution λ_0 = [1, 1, . . . , 1]^T, it can be concluded that at least one basic feasible solution with at most rank(C) strictly positive elements exists for Cλ = vec(A) with λ_i ≥ 0. Writing rank(C) = r and, if necessary, permuting the columns of B, we may assume that there is a basic feasible solution of the form λ_B = [λ_1, λ_2, . . . , λ_r, 0, . . . , 0]^T with λ_i ≥ 0 for 1 ≤ i ≤ r. It is obvious that

vec(A) = Cλ_B = Σ_{i=1}^r c_i λ_i = Σ_{i=1}^r (√λ_i b_i) ⊗ (√λ_i b_i),   (4)


which is equivalent to

A = Σ_{i=1}^r (√λ_i b_i)(√λ_i b_i)^T.   (5)

Defining B̃ = [√λ_1 b_1, √λ_2 b_2, . . . , √λ_r b_r], we have established an alternative non-negative factorization of A, namely

A = B̃B̃^T = [√λ_1 b_1, . . . , √λ_r b_r] · [√λ_1 b_1, . . . , √λ_r b_r]^T.   (6)

It should be noted that a basic feasible solution can always be generated constructively by using linear programming techniques (Glashoff and Gustafson, 1983; Karloff, 1991).
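Such a basic feasible solution can be obtained with any standard LP code. The sketch below (an added illustration using SciPy; the zero objective and the helper name are choices made here, not prescribed by the paper) solves the feasibility problem Cλ = vec(A), λ ≥ 0, and assembles B̃ from the strictly positive entries of the returned solution. The dual-simplex backend of HiGHS is requested so that the returned solution is a vertex of the feasible polytope, and hence has at most rank(C) positive entries.

    import numpy as np
    from scipy.optimize import linprog

    def first_reduction(B, tol=1e-10):
        # Given A = B B^T with B >= 0, return B_tilde with at most rank(C) columns.
        A = B @ B.T
        m = B.shape[1]
        C = np.column_stack([np.kron(B[:, i], B[:, i]) for i in range(m)])
        vecA = A.flatten(order="F")
        # Zero objective: any vertex of {lambda >= 0 : C lambda = vec(A)} will do.
        res = linprog(c=np.zeros(m), A_eq=C, b_eq=vecA,
                      bounds=[(0, None)] * m, method="highs-ds")
        lam = res.x
        keep = lam > tol
        return B[:, keep] * np.sqrt(lam[keep])   # columns sqrt(lambda_i) * b_i, cf. (6)

    B = np.array([[1, 0, 0, 1, 0, 1, 1],
                  [0, 1, 0, 1, 1, 0, 2],
                  [0, 0, 1, 0, 1, 1, 3]], dtype=float)
    B_tilde = first_reduction(B)
    assert np.allclose(B_tilde @ B_tilde.T, B @ B.T)

Note that the particular vertex found may differ from the one reported in Section 4; any basic feasible solution serves the purpose of the reduction.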

3. Upper Bounds on the CP-Rank

In this section we first clarify that the cp-rank Φ_k of A is bounded by the rank of the matrix C, which is in turn limited by k(k + 1)/2, where k = rank(A). The proof of the sharpened upper bound of Φ_k, namely k(k + 1)/2 − 1, will be provided afterwards.

Lemma 1. The cp-rank Φ_k of A ∈ CP is bounded by the rank of the matrix C, i.e. we have

Φ_k ≤ r = rank(C).   (7)

Proof. Based on the non-negative factorization of A achieved in (6), which can always be realized constructively from an arbitrary non-negative decomposition of A, and the definition of Φ_k as the least number of non-zero columns in any decomposition, it is obvious that Φ_k ≤ r = rank(C).

Our final goal is to prove that Φ_k is bounded by k(k + 1)/2 − 1. In order to keep the proof clear and progressive, we will first prove that rank(C) ≤ k(k + 1)/2.

Theorem 1. The ordinary rank of the matrix C as defined in (2) is bounded by

rank(C) ≤ k(k + 1)/2.   (8)

Proof. Since A = BB^T with rank(A) = k, it follows immediately that rank(B) = k. With respect to a geometric interpretation, this means that the dimension of the vector space generated by the b_i's is k, i.e. dim(span(b_i | 1 ≤ i ≤ m)) = k. As a consequence, there must be an isometry available which maps b_i ∈ R^n_+ onto R^k, i = 1, . . . , m. In matrix terms, this means that we may find an orthonormal matrix V (V V^T = I, where I is the identity matrix), so that V B = [F; 0], i.e. F stacked above a zero block, the dimension of F being k × m. Computationally, V and F can be obtained from the standard QR-decomposition of B, see (Golub and Van Loan, 1989). The following equality results from this isometry:

(V ⊗ V)C = (V ⊗ V)[b_1 ⊗ b_1, . . . , b_m ⊗ b_m]
= [(V b_1) ⊗ (V b_1), . . . , (V b_m) ⊗ (V b_m)]
= [[f_1; 0] ⊗ [f_1; 0], . . . , [f_m; 0] ⊗ [f_m; 0]],   (9)

where f_i denotes the i-th column vector of the matrix F, so that V b_i = [f_i; 0]. Since there are at most k(k + 1)/2 distinct non-zero row vectors in the matrix (V ⊗ V)C (only the rows indexed by pairs (p, q) with p, q ≤ k can be non-zero, and the rows belonging to (p, q) and (q, p) coincide) and V ⊗ V is orthonormal and thus non-singular, we have

r = rank(C) = rank((V ⊗ V)C) ≤ k(k + 1)/2.
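This counting argument is easy to verify numerically. The following sketch (an added illustration; it obtains the orthogonal matrix V from an SVD instead of the QR decomposition mentioned above, which is merely an implementation choice) checks rank((V ⊗ V)C) = rank(C) ≤ k(k + 1)/2 for the example data of Section 4:

    import numpy as np

    B = np.array([[1, 0, 0, 1, 0, 1, 1],
                  [0, 1, 0, 1, 1, 0, 2],
                  [0, 0, 1, 0, 1, 1, 3]], dtype=float)
    C = np.column_stack([np.kron(B[:, i], B[:, i]) for i in range(B.shape[1])])

    k = np.linalg.matrix_rank(B @ B.T)      # ordinary rank of A
    U, s, Wt = np.linalg.svd(B)             # full SVD of B
    V = U.T                                 # orthogonal; rows k+1, ..., n of V @ B vanish
    VVC = np.kron(V, V) @ C

    assert np.linalg.matrix_rank(VVC) == np.linalg.matrix_rank(C)
    assert np.linalg.matrix_rank(C) <= k * (k + 1) // 2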

Corollary 1. In view of (7) and Theorem 1, we can conclude that Φ_k is bounded by

Φ_k ≤ k(k + 1)/2.   (10)

Let us note that the above proofs are constructive: Given A = BB^T, we have shown how to obtain B̃ in A = B̃B̃^T by computing a basic feasible solution of a certain polytope.

The next step we want to take is proving the sharpened bound Φ_k ≤ k(k + 1)/2 − 1 for k ≥ 2. The corresponding proof can be divided into distinct steps.

If for a given matrix A the associated matrix C has a rank less than k(k + 1)/2, we can stop; otherwise we have rank(C) = k(k + 1)/2 and the reduction procedure must be continued. The outline of this algorithm is as follows: After extension of C by a column vector, which is the Kronecker product of an adequately selected column of A with itself, we will show that the rank of the extended matrix C is identical to its original rank. Then a pivot step known from simplex techniques will be used, so that a new non-negative decomposition A = GG^T can be created, whereby the column number of G does not exceed that of B̃. Relying on some requirements regarding G, the column number of G can be further reduced by 1 after some straightforward computations. By this last step the proof will be completed.

Theorem 2. For k ≥ 2 the cp-rank of any completely positive matrix A with its ordinary rank equal to k is bounded by

cp-rank(A) = Φ_k ≤ k(k + 1)/2 − 1.   (11)

Proof. From Theorem 1 and Corollary 1 we know that Φ_k ≤ rank(C) ≤ k(k + 1)/2. If rank(C) < k(k + 1)/2, nothing has to be proven, so from now on we assume that r = rank(C) = k(k + 1)/2.

Before the actual proof, we would like to point out a useful property of the matrix B̃ (cf. (6)) first. From the decomposition A = B̃B̃^T in (6) with B̃ = [√λ_1 b_1, √λ_2 b_2, . . . , √λ_r b_r], the fact that 2 ≤ rank(B̃) = k ≤ n and the assumption r = k(k + 1)/2, it can be concluded that at least one of the rows of the non-negative matrix B̃ has more than one non-zero entry. Otherwise, if each row of the non-negative matrix B̃ had at most one non-zero element, all columns of B̃ (which does not have zero columns) would be linearly independent, so k = r and thus Φ_k = r = k ≤ k(k + 1)/2 − 1 for k ≥ 2, and we have a contradiction to the assumption r = k(k + 1)/2. For the rest of the proof we may therefore assume that at least one row of B̃ contains at least two non-zero entries.

Now we extend vec(A) = Cλ_B in (4), which is associated with A = [a_1, a_2, . . . , a_n] and rank(C) = r = k(k + 1)/2, to

vec(A) = [C, a_j ⊗ a_j] λ_E,   (12)

with λ_E = [λ_B^T, 0]^T, where, in view of the above discussion, the integer j, 1 ≤ j ≤ n, can be chosen in such a way that the j-th row of the non-negative B̃ has at least two non-zero entries. The rank of the extended matrix [C, a_j ⊗ a_j] in (12) is unmodified compared with that of the matrix C, i.e.

rank([C, a_j ⊗ a_j]) = rank(C) = r = k(k + 1)/2,   (13)

as we show now. The orthonormal matrix V from the proof of Theorem 1 will be reused. From A = BB^T and V B = [F; 0] it follows that V A = V BB^T = [F; 0] B^T = [E; 0] and, accordingly, V a_j = [e_j; 0] with e_j as the j-th column of E, whereby the dimension of E is k × n, so that

(V ⊗ V)[C, a_j ⊗ a_j]
= (V ⊗ V)[b_1 ⊗ b_1, . . . , b_m ⊗ b_m, a_j ⊗ a_j]
= [(V b_1) ⊗ (V b_1), . . . , (V b_m) ⊗ (V b_m), (V a_j) ⊗ (V a_j)]
= [[f_1; 0] ⊗ [f_1; 0], . . . , [f_m; 0] ⊗ [f_m; 0], [e_j; 0] ⊗ [e_j; 0]].   (14)

As in the proof of Theorem 1, we have rank([C, a_j ⊗ a_j]) ≤ k(k + 1)/2. Since rank(C) was assumed to be equal to k(k + 1)/2 and for the reason that rank(C) ≤ rank([C, a_j ⊗ a_j]), we get rank([C, a_j ⊗ a_j]) = rank(C) = k(k + 1)/2.

Because all entries of the extended linear constraint system (12) are also non-negative, and there exists a feasible solution λ_E = [λ_B^T, 0]^T, the feasible region is bounded and non-empty, i.e. it is again a polytope. Consequently, basic feasible solutions of the linear programming problem (12) exist and have at most r = k(k + 1)/2 non-zero variables.

Furthermore, λ_E is a basic solution where the basis is represented exactly by the variables λ_1, . . . , λ_r. To be more specific, since rank([C, a_j ⊗ a_j]) = rank(C), any basis of the old linear system is also a basis of the extended system. (This would not be true if the rank of the extended system were increased by one when compared with that of the old system.) Suppose that the simplex tableau associated with the extended linear system is given for the basis λ_1, . . . , λ_r, i.e. λ_{m+1}, the variable associated with the column a_j ⊗ a_j of the extended system, is still a non-basic variable.

Next, the simplex algorithm will be considered, which has been the method of choice for solving linear programming problems for decades. Due to the fact that the basic feasible solutions are nothing else than extreme points of the polytope, and the optimal point of the objective function subject to the polytope is always among these vertices, the simplex algorithm can be regarded as an ordered way of scanning through such vertices. The simplex algorithm starts from an arbitrary vertex of the feasible polytope and tries to find a cheaper adjacent vertex, till ideally no neighbouring vertex with a cheaper cost is found.

The process of changing the basis to move to an adjacent vertex is called the pivot step. Basically, a pivot step takes one basic variable out of the basis and, at the same time, it shifts another, originally non-basic, variable into the basis. Hence, during a pivot step carried out on the simplex tableau defined above, the last variable of λ_E, namely λ_{m+1}, can be deliberately forced to enter the basis, for which one of the formerly basic variables, say λ_s, leaves the basis.

It is worth noticing that through the exchange of the basic variable and the corresponding modification of the basis, the maximal number of non-zero elements in the new basic feasible solution, r = k(k + 1)/2, is unchanged, and all of the elements are still non-negative.
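A minimal version of such a pivot step is sketched below in Python (added for illustration; it uses the textbook minimum-ratio test and assumes, as in the situation above, that the polytope is bounded, so that at least one component of the direction d is positive). It forces a prescribed non-basic column into the basis and returns the updated basis and basic feasible solution.

    import numpy as np

    def pivot_in(M, rhs, basis, entering, tol=1e-12):
        # One simplex pivot for M @ x = rhs, x >= 0:
        # bring column `entering` into `basis`, determine the leaving variable
        # by the minimum-ratio test, and return the new basis and solution.
        B_mat = M[:, basis]
        x_B = np.linalg.lstsq(B_mat, rhs, rcond=None)[0]            # current basic values
        d = np.linalg.lstsq(B_mat, M[:, entering], rcond=None)[0]   # direction of change
        ratios = np.where(d > tol, x_B / np.where(d > tol, d, 1.0), np.inf)
        leave = int(np.argmin(ratios))       # boundedness guarantees a finite ratio
        theta = ratios[leave]
        new_basis = list(basis)
        new_basis[leave] = entering
        x = np.zeros(M.shape[1])
        x[basis] = x_B - theta * d           # the leaving variable drops to zero
        x[entering] = theta
        return new_basis, x

Applied to the extended system (12) with the column a_j ⊗ a_j as the entering column, one such pivot yields the new basic feasible solution used below.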

Denoting by d_1, . . . , d_r the new non-negative basic variable values, we have

vec(A) = [b_1 ⊗ b_1, . . . , b_{s−1} ⊗ b_{s−1}, a_j ⊗ a_j, b_{s+1} ⊗ b_{s+1}, . . . , b_r ⊗ b_r] · [d_1, . . . , d_s, . . . , d_r]^T.   (15)

Following the same procedure as above, a new non-negative factorization of A of the form

A = [√d_1 b_1, . . . , √d_{s−1} b_{s−1}, √d_s a_j, √d_{s+1} b_{s+1}, . . . , √d_r b_r] · [√d_1 b_1, . . . , √d_{s−1} b_{s−1}, √d_s a_j, √d_{s+1} b_{s+1}, . . . , √d_r b_r]^T   (16)

is obtained. If at least one of the non-negative numbers d_l, l = 1, . . . , r, is zero, then cp-rank(A) ≤ r − 1 = k(k + 1)/2 − 1, so that the desired result is obtained. Otherwise we proceed as follows: After a reordering of the columns of the matrix [√d_1 b_1, . . . , √d_{s−1} b_{s−1}, √d_s a_j, √d_{s+1} b_{s+1}, . . . , √d_r b_r] in such a way that the column √d_s a_j enters the last position, we have the decomposition

A = GG^T = [g_1, . . . , g_{r−1}, √d_s a_j] · [g_1, . . . , g_{r−1}, √d_s a_j]^T   (17)

with g_l = [g_{1l}, . . . , g_{nl}]^T ≥ 0, l = 1, . . . , r − 1, where the g_l's denote the newly arranged non-negative columns. Based on these results, we finally can construct the non-negative decomposition

A = G̃G̃^T,   (18)

with

G̃ = [g_1, . . . , g_{r−1}] + α a_j [g_{j1}, . . . , g_{j,r−1}],

α = √[ ((1 − d_s a_{jj}) / Σ_{l=1}^{r−1} g_{jl}²)² + d_s / (Σ_{l=1}^{r−1} g_{jl}²) ] − (1 − d_s a_{jj}) / (Σ_{l=1}^{r−1} g_{jl}²).   (19)

Since d_s > 0, the scalar α is also strictly positive, i.e. α > 0, which shows that G̃ is non-negative. Furthermore, α always exists since Σ_{l=1}^{r−1} g_{jl}² ≠ 0, which is true since Σ_{l=1}^{r−1} g_{jl}² = Σ_{l=1, l≠s}^{r} d_l b_{jl}², and we have chosen j in such a manner that at least two values from among b_{jl}, l = 1, . . . , r, are non-zero. The validity of the decomposition in (18) can be simply checked by inserting (19) into (18) and using

[g_1, . . . , g_{r−1}] [g_{j1}, . . . , g_{j,r−1}]^T = (1 − d_s a_{jj}) a_j,   (20)

which follows from

a_j = [g_1, . . . , g_{r−1}, √d_s a_j] · [g_{j1}, . . . , g_{j,r−1}, √d_s a_{jj}]^T,   (21)

see (17). In view of (18) and (19), the number of columns in G̃ equals k(k + 1)/2 − 1. Hence we have proved that

cp-rank(A) = Φ_k ≤ r − 1 ≤ k(k + 1)/2 − 1.   (22)
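The final column-elimination step is straightforward to implement. The following sketch (an added illustration; the function name and the use of the Section 4 data are choices made here) computes α according to (19), forms G̃ and verifies that the product G̃G̃^T is unchanged:

    import numpy as np

    def eliminate_last_column(G, a_j, d_s, j):
        # Implements (19): absorb the last column sqrt(d_s)*a_j of G into the
        # remaining columns, using the j-th row of G (0-based index j).
        Gh = G[:, :-1]                         # g_1, ..., g_{r-1}
        row = Gh[j, :]                         # g_{j1}, ..., g_{j,r-1}
        S = np.sum(row ** 2)                   # non-zero by the choice of j
        c = (1.0 - d_s * a_j[j]) / S
        alpha = np.sqrt(c ** 2 + d_s / S) - c  # positive root of S*a^2 + 2*c*S*a - d_s = 0
        return Gh + alpha * np.outer(a_j, row)

    # Data of the worked example in Section 4: j = 2 (third row), d_s = 1/20.
    A = np.array([[4.0, 3, 4], [3, 7, 7], [4, 7, 12]])
    G = np.column_stack([
        [0, np.sqrt(0.15), 0], [0, 0, np.sqrt(0.4)],
        [np.sqrt(1.6), np.sqrt(1.6), 0], [0, np.sqrt(2.8), np.sqrt(2.8)],
        [np.sqrt(1.6), 0, np.sqrt(1.6)], np.sqrt(1 / 20) * A[:, 2]])
    G_tilde = eliminate_last_column(G, A[:, 2], 1 / 20, j=2)
    assert np.allclose(G @ G.T, A)
    assert np.allclose(G_tilde @ G_tilde.T, A)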

In order to have all reduction steps in a condensed form without being interrupted by proofs, we summarize them in the sequential algorithm shown in Fig. 1.

4. Illustrative Example

In this section the constructive algorithm is demonstrated by applying it to an illustrative example. Consider a 3 × 3 completely positive matrix

A =
    4  3  4
    3  7  7
    4  7 12 ,

whose rank also equals three. There exists a non-negative decomposition A = BB^T with

B =
    1 0 0 1 0 1 1
    0 1 0 1 1 0 2
    0 0 1 0 1 1 3 .


start: given a cp-matrix A (n × n) with rank(A) = k, let B (n × m) be any known solution to A = BB^T, B ≥ 0
if the column number m of B is less than k(k + 1)/2: we are done, exit
otherwise: generate the linear system vec(A) = Cλ, λ ≥ 0, with C defined by
    C = [b_1 ⊗ b_1, b_2 ⊗ b_2, . . . , b_m ⊗ b_m]
compute a basic feasible solution λ_B by a known linear programming method
generate B̃ by means of the non-zero elements of λ_B
if the column number r of B̃ is less than k(k + 1)/2: A = B̃B̃^T is already the desired factorisation, done and exit
otherwise: set λ_E = [λ_B^T, 0]^T, extend C by a_j ⊗ a_j with 1 ≤ j ≤ n, and generate the new linear system
    vec(A) = [C, a_j ⊗ a_j] · λ_E
apply one pivot step to bring the non-basic variable λ_{m+1} into the basis, which leads to a new basic feasible solution d
generate G by means of the non-zero elements of d
if the column number of G is less than k(k + 1)/2: A = GG^T is already the desired factorisation, done and exit
otherwise: compute G̃ according to (19); herewith we have A = G̃G̃^T and the column number of G̃ is at most k(k + 1)/2 − 1
end

Fig. 1. Summary of all algorithmic steps.

The linear system vec(A) = Cλ is given by

[4 3 4 3 7 7 4 7 12]^T =
    1 0 0 1 0 1 1
    0 0 0 1 0 0 2
    0 0 0 0 0 1 3
    0 0 0 1 0 0 2
    0 1 0 1 1 0 4
    0 0 0 0 1 0 6
    0 0 0 0 0 1 3
    0 0 0 0 1 0 6
    0 0 1 0 1 1 9
· [λ_1, λ_2, λ_3, λ_4, λ_5, λ_6, λ_7]^T .

By means of standard techniques, the basic solution

λ_B = [5/3, 5/3, 1, 2/3, 0, 1/2, 7/6]^T

can be generated. From (6) we have a new non-negative factorization, namely A = B̃B̃^T, where B̃ has only six columns,

B̃ =
    √(5/3)   0        0   √(2/3)   √(1/2)   √(7/6)
    0        √(5/3)   0   √(2/3)   0        √(14/3)
    0        0        1   0        √(1/2)   √(21/2) .

Our next step is to construct a new linear system by extending the matrix C by a_3 ⊗ a_3, which leads to

[4 3 4 3 7 7 4 7 12]^T =
    1 0 0 1 0 1 1  16
    0 0 0 1 0 0 2  28
    0 0 0 0 0 1 3  48
    0 0 0 1 0 0 2  28
    0 1 0 1 1 0 4  49
    0 0 0 0 1 0 6  84
    0 0 0 0 0 1 3  48
    0 0 0 0 1 0 6  84
    0 0 1 0 1 1 9 144
· [λ_1, λ_2, λ_3, λ_4, λ_5, λ_6, λ_7, λ_8]^T .

The Kronecker product a_3 ⊗ a_3 is used here, which fulfils the requirement that the third row of B̃ has at least two strictly positive entries. It can be verified that the rank of the new extended system matrix is the same as that of C, namely r = 6. Starting from the already available basic solution λ_E = [5/3, 5/3, 1, 2/3, 0, 1/2, 7/6, 0]^T, we can include λ_8 into the basis by applying only one pivot step, which leads to the new basic solution λ_E = [0, 0.15, 0.4, 1.6, 2.8, 1.6, 0, 0.05]^T. From this a newly arranged non-negative factorization A = GG^T results in

G =
    0        0        √(8/5)   0         √(8/5)   √(4/5)
    √(3/20)  0        √(8/5)   √(14/5)   0        √(49/20)
    0        √(2/5)   0        √(14/5)   √(8/5)   √(36/5) .

Since the matrix G still has six columns, we have to apply (19) in order to reduce the number of columns.

In our example the scalar α has a positive value of √(5/288) − 1/12, which leads to the final decomposition A = G̃G̃^T with

G̃ =
    0        0.1225   1.2649   0.3241   1.5099
    0.3873   0.2144   1.2649   2.2406   0.4288
    0        1        0        2.6458   2 .


As has been expected, the non-negative factorisation A = G̃G̃^T has at most k(k + 1)/2 − 1 = 5 columns. By equating G̃G̃^T to A, the correctness of this factorisation can be easily verified. It has thus been demonstrated that, given any known decomposition of a cp-matrix A, a decomposition with at most k(k + 1)/2 − 1 columns can be generated in a constructive manner.
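As a final numerical check (added here; it only assumes NumPy), the printed four-digit entries of G̃ reproduce A up to rounding:

    import numpy as np

    A = np.array([[4.0, 3, 4], [3, 7, 7], [4, 7, 12]])
    G_tilde = np.array([[0,      0.1225, 1.2649, 0.3241, 1.5099],
                        [0.3873, 0.2144, 1.2649, 2.2406, 0.4288],
                        [0,      1.0,    0.0,    2.6458, 2.0]])
    # The rounding of the printed digits limits the accuracy to about 1e-3.
    assert np.allclose(G_tilde @ G_tilde.T, A, atol=1e-2)

This matches the bound k(k + 1)/2 − 1 = 5 established in Theorem 2.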
