
No. 6(13) 2010

Andrzej Baniak

Department of Economics, Central European University, 1051 Budapest, Hungary, and Wrocław University of Economics, Komandorska Street 118/120, 53-345 Wrocław, Poland. E-mail: baniaka@ceu.hu

Jacek Cukrowski

UNIDO, Vienna International Centre, Wagramerstr. 5, PO Box 300, A-1400 Vienna, Austria. E-mail: J.Cukrowski@unido.org

RESOURCE ALLOCATION

TO INFORMATION PROCESSING IN A FIRM

Andrzej Baniak, Jacek Cukrowski

Abstract. This paper provides a coherent framework which allows one to understand the economics of information processing in the management of a firm. Data processing is modelled as a dynamic parallel-processing model of associative computation with an endogenous set-up cost. In such a model, the conditions for the efficient organization of data processing are defined, and the architecture of efficient structures is analyzed. It is shown that, as in computer systems, the so-called "skip-level reporting" structures are efficient. However, if the information workload of managers cannot be equalized, then the best pattern of information workload has to be determined, and the resources allocated to the managers have to be adjusted to it. The method of adjusting resources to the information workload of managers in one-shot skip-level reporting structures is presented, and an example of the organization of data processing in demand forecasting is considered.

Keywords: Internal organization of the firm, information processing, resource allocation, decentralization, hierarchy.

JEL Classification: D8, D2.

1. Introduction

In classical microeconomic theory, a firm is usually considered as a simple profit-maximizing unit. A complex organizational system, containing a number of interconnected parts, is visualized as a large "black box" transforming inputs into outputs according to a rule described by a production function. The attention of economists is focused mostly on production, and it is implicitly assumed that changes in the volume of the firm's output also affect the parts of the firm that are not directly involved in production (administration, managing and control, production planning, etc.). It should be stressed, however, that in the modern firm more than one third of the employees carry out activities that are not directly connected with the production process, such as processing and communicating information, monitoring the actions of other members of the firm, analyzing the market, planning, training employees, making decisions, and so on (Radner, 1992). All these actions (called "managing activities") are based on the processing of information and require a number of economic resources (labour, computational and telecommunication equipment, offices, etc.), which can be used in many different ways, producing better or worse results. The way in which those resources are organized affects the profitability of the firm and therefore has to be analysed from a microeconomic point of view.

The present paper focuses on data processing in the management of a firm and attempts to explore the relationship between the organizational aspects of informational processes, economic efficiency, resource allocation and the firm's profit. Information processing is modelled using a dynamic parallel-processing model in which the computational abilities of each manager are determined by the resources he uses. We introduce the concept of an "information-processing function", which describes the relationship between the resources allocated to a single manager and his computational abilities. Then we define an efficiency criterion and analyse the efficient hierarchical structures. We show that the so-called skip-level structures are efficient. On the other hand, when the information workload of the managers is not equal, the computational abilities of the managers have to be adjusted to their information workload.

Our approach is closely related to a stream of literature on organization design which draws on insights from computer science, starting with Radner (1992, 1993) and Radner and Van Zandt (1992). There exists a significant body of economic literature, based on the model of Radner, focusing on the role and the importance of the organizational aspects of informational processes in the management of the firm. Returns to scale in information processing and their implications for the firm's size are studied by Radner and Van Zandt (1992, 2001). The efficient organization of data processing is investigated by Van Zandt (1995, 1999), Bolton and Dewatripont (1994), and Prat (1997). Cukrowski (1997) studies the necessary and sufficient conditions for the decentralization of data processing. The effects of changes in information-processing technology on the efficient organization of data processing are investigated by Cukrowski and Baniak (1999). Van Zandt (1997, 1998), Orbay (2002) and Meagher et al. (2003) study the case when new data arrive at the firm before the processing of the old set is finished. The quality and speed of information processing are studied by Jehiel (1999) and Schulte and Grüner (2007). Meagher (2003) and Grüner and Schulte (2009) investigate the problem of incentives for managers in different information-processing structures.

The rest of the paper is organized as follows. In Section 2, data processing in the firm for the purpose of predicting demand is considered in order to show the trade-off between information processing in decision-making in the firm and the firm's profit. The costs of and benefits from information processing are formally defined, and the objective of the firm in data processing is specified. In Section 3, information processing in the firm is described in the conceptual framework of the dynamic parallel-processing model of associative computation with an endogenous set-up cost. The original model is extended to include the assumption that the computational abilities of the managers are determined by the technology of data processing and the resources assigned to them. In Section 4 the efficiency conditions are defined and it is shown that the so-called "skip-level reporting" structures are efficient for data processing in the firm. However, if the information workload of managers in such structures cannot be equalized, the best pattern of information workload has to be determined and the resources allocated to data processing should not be distributed equally among the managers. Section 5 illustrates the concepts presented in the paper by means of an example of the optimal organization of information processing for the purpose of predicting demand in the firm.

2. Demand forecasting in a firm

Consider a monopolistic profit-maximizing firm operating in a stochastic environment. The firm's decision about its output level is based on periodical estimations of stochastic demand Qt coming from N different sources. Demand in each individual source i is described by the stochastic process Qi,t (where i = 1, 2, ..., N, and t is an integer number), such that Qi,t = μi + Xi,t, where μi is the expected value of demand from source i, and Xi,t is the deviation from the mean, which depends on the history of the process (Xi,t can be given, for instance, by a linear first-order autoregressive process).
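To make the demand model concrete, the sketch below simulates N such sources with a common mean and an AR(1) deviation process (a minimal illustration only; the number of sources, the horizon and all parameter values are arbitrary assumptions, not taken from the paper).

```python
import numpy as np

def simulate_demand(n_sources=5, periods=200, mu=100.0, gamma=0.8,
                    omega=1.0, seed=0):
    """Simulate Q[i, t] = mu + X[i, t] with X[i, t] = gamma*X[i, t-1] + eps[i, t]."""
    rng = np.random.default_rng(seed)
    eps = rng.normal(0.0, omega, size=(n_sources, periods))
    X = np.zeros((n_sources, periods))
    for t in range(1, periods):
        X[:, t] = gamma * X[:, t - 1] + eps[:, t]
    Q = mu + X                       # demand in each individual source
    return Q, Q.sum(axis=0)          # per-source demand and total demand Q_t

Q_sources, Q_total = simulate_demand()
print(Q_total[:5])
```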

The accuracy of the estimated demand depends in a crucial way on the delay of computation. If the computation of total demand is instantaneous, then the estimation At of demand Qt in moment t is perfectly accurate, i.e.

$$A_t = Q_t = \sum_{i=1}^{N} Q_{i,t}.$$

In this case the firm produces the efficient output Q* = Qt = At and earns the maximum profit. If demand is estimated with only a small delay, then the prediction At is close to Qt, and the profit of the firm is close to its maximum. However, if the delay is substantial, then the expected absolute value of the error between real demand Qt and its prediction At is high, and the information produced is almost worthless (Radner, Van Zandt, 1992). Thus, the value of the prediction and, consequently, the value of the computational service (which is measured as the difference between the value of the decisions based on the computational service and the value of the decisions without the service provided) depend on how good the resulting prediction is compared to how good it would be without the service. It turns out that the value of the computational service V is inversely proportional to the absolute value of the prediction error E = |Qt – At|, which, under the assumption that errors in computation are not possible, is fully determined by the delay in information processing DN.

The value of the computational service can therefore be represented as a decreasing and continuous function of the delay in information processing, i.e.

$$V_{Q_{1,t}, Q_{2,t}, \ldots, Q_{N,t}}(D_N) = \Psi_{\max} - \Psi_{Q_{1,t}, Q_{2,t}, \ldots, Q_{N,t}}(D_N),$$

where $\Psi_{Q_{1,t}, Q_{2,t}, \ldots, Q_{N,t}}(D_N)$ is the loss in the firm's profit when demand is predicted with delay DN; we have

$$d\Psi_{Q_{1,t}, Q_{2,t}, \ldots, Q_{N,t}}(D_N)/dD_N \geq 0$$

and $\Psi_{Q_{1,t}, Q_{2,t}, \ldots, Q_{N,t}}(0) = 0$. The value $\Psi_{\max}$ is the maximum loss in the firm's profit:

$$\Psi_{\max} = \lim_{D_N \to \infty} \Psi_{Q_{1,t}, Q_{2,t}, \ldots, Q_{N,t}}(D_N).$$

The delay DN depends upon the resources allocated to information processing and on the way in which these resources are organized. In particular, DN depends on the architecture of the data processing structure S(L), where L denotes the number of managers in the structure, and upon the way in which data items are distributed among managers, i.e. on the vector of information workload N = (n1, n2, …, nL), such that n1 + n2 + … + nL = N, where nj denotes the number of data items assigned to manager j. Moreover, since better equipped managers process information faster, for a given structure S(L) and workload vector N the delay DS(L),N can be considered as a decreasing function of the capital kj allocated to each manager, i.e. DS(L),N(k1, …, kL). Consequently, the loss due to the prediction error should also be considered as a function of the capital k1, …, kL assigned to managers, related to the given structure S(L), workload vector N, and the stochastic processes underlying demands in their sources:

$$\Psi_{Q_{1,t}, Q_{2,t}, \ldots, Q_{N,t}}\big(D_{S(L),\mathbf{N}}(k_1, \ldots, k_L)\big).$$

Assuming that the cost of data items is small in relation to the cost of capital and labour and, consequently, can be neglected (Radner, Van Zandt, 1992), the total cost of the computational service is C(K, L) = rK + wL, where w is the price of labour (i.e. managers' wages), r is the price of capital, and K = k1 + … + kL denotes the total amount of capital allocated to data processing. Then the profit Π of the monopolistic firm can be specified as

$$\Pi = \pi_0 + V_{Q_{1,t}, Q_{2,t}, \ldots, Q_{N,t}}\big(D_{S(L),\mathbf{N}}(k_1, \ldots, k_L)\big) - C(K, L),$$

where $\pi_0 = \rho Q^* - \Psi_{\max}$ is the profit of the firm when demand for its production is not estimated (ρ denotes profit per unit of output, Q* is the optimal output, Ψmax is the maximum loss in the firm's profit);

$$V_{Q_{1,t}, Q_{2,t}, \ldots, Q_{N,t}}\big(D_{S(L),\mathbf{N}}(k_1, \ldots, k_L)\big) = \Psi_{\max} - \Psi_{Q_{1,t}, Q_{2,t}, \ldots, Q_{N,t}}\big(D_{S(L),\mathbf{N}}(k_1, \ldots, k_L)\big)$$

is the value of the computational service.

After rearrangement, the profit of the firm can be represented as

$$\Pi = \rho Q^* - \Psi_{Q_{1,t}, Q_{2,t}, \ldots, Q_{N,t}}\big(D_{S(L),\mathbf{N}}(k_1, \ldots, k_L)\big) - C(k_1, \ldots, k_L, L).$$

If the deviation from the highest profit due to non-instantaneous and costly information processing is

$$\Phi^{S(L),\mathbf{N}}_{Q_{1,t}, Q_{2,t}, \ldots, Q_{N,t}}(k_1, \ldots, k_L, L) = \Psi_{Q_{1,t}, Q_{2,t}, \ldots, Q_{N,t}}\big(D_{S(L),\mathbf{N}}(k_1, \ldots, k_L)\big) + C(k_1, \ldots, k_L, L),$$

then the profit of the firm can be written as

$$\Pi = \rho Q^* - \Phi^{S(L),\mathbf{N}}_{Q_{1,t}, Q_{2,t}, \ldots, Q_{N,t}}(k_1, \ldots, k_L, L).$$

Therefore, the profit of the firm depends on the stochastic processes underlying demand in its sources (i.e. it is random) and, consequently, the objective of the firm is to maximize its expected value. Since maximization of the expected profit is equivalent to the minimization of the expected value of the deviation from the highest profit, the objective of the firm forecasting demand for its production is to determine: (1) the number of managers involved in data processing L, (2) the architecture of the information processing structure S(L), (3) the information workload of managers N, and (4) the amount of capital kj (j = 1, …, L) assigned to each manager in the structure, which minimize the expected value of the deviation from the highest profit:

$$\mathbb{E}\left[\Phi^{S(L),\mathbf{N}}_{Q_{1,t}, Q_{2,t}, \ldots, Q_{N,t}}(k_1, \ldots, k_L, L)\right].$$


3. Information processing in a firm

To focus on data processing in demand forecasting in a firm, consider the information-processing sector in which cohorts of N data items are summarized, and assume that the information-processing system works in a one-shot regime, i.e. the delays between subsequent cohorts of data coming into the system are greater than (or at least equal to) the time of processing a single cohort (this ensures that queues of data in the information-processing structure cannot arise).

Suppose that the demand is estimated by managers (we use the term "managers" in a broad sense to describe accountants, staff, clerks, secretaries and so forth) and that each manager performs similarly to a processor in a computer system. In particular, assume that each manager has an external memory for information storage and can perform simple operations with numerical data. Each particular operation consists in retrieving a single data item from the memory and either keeping the value in the "brain" of the manager or summarizing the value with the actual contents of his "brain". The duration of any operation is assumed to be independent of the values of the data used. Moreover, for the sake of simplicity, assume that managers cannot make errors in computation and that each manager can send the result computed (the contents of his "brain") to an output or to the external memory of any other manager in zero time (since the time of data transfer is negligible compared with the time needed for the analysis and processing of large data structures).

Since in management, similarly to other parts of the firm, not only managers (i.e. labour) but also capital (embodied in computers, buildings, telecommunication channels or other equipment) is employed in the computational process, the speed of data processing by each individual manager is assumed to be a function of the capital k he uses. The relationship between the resources assigned to a manager and the number of operations he can compute in a unit of time is determined by the existing technology of information processing, and can be represented in functional form as F(k): R+ → R+, where F is a continuous, twice differentiable, increasing and strictly concave function of capital k such that F(0) = 0. By analogy to the production function, F(k) is called an "information-processing function".
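As a minimal illustration of this concept, the snippet below encodes an information-processing function of the power form F(k) = k^λ (the form used in the example of Section 5) together with the implied duration of a single operation; the particular value of λ is an arbitrary assumption.

```python
LAMBDA = 0.5   # assumed value of the exponent, 0 < lambda < 1

def F(k, lam=LAMBDA):
    """Information-processing function: number of operations a manager
    can perform per unit of time when using capital k (F is increasing,
    strictly concave, and F(0) = 0 for 0 < lam < 1)."""
    return k ** lam

def op_duration(k, lam=LAMBDA):
    """Duration of a single operation, d(k) = 1/F(k)."""
    return 1.0 / F(k, lam)

# Doubling a manager's capital less than doubles his speed (concavity).
print(F(1.0), F(2.0), op_duration(1.0), op_duration(2.0))
```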

Each manager summarises data in a serial fashion. Thus, to speed up this process, data processing can be organized in a decentralized way in a team of managers, i.e. in a decentralized information-processing structure1 (we shall call a one-manager structure centralized and a more-than-one-manager structure decentralized). Note, however, that even a few managers can be organized in many different ways, computing the result with different delays. Thus, the analysis below focuses on the efficient use of resources in data processing in the firm (in particular, on the architecture of the efficient data processing structures, the pattern of information workload of the managers, and the allocation of resources within the structure).

1 An information-processing structure can be represented as a directed graph with managers at nodes, and a directed link from one manager to another if, and only if, the first sends the results computed to the second.

4. Efficient data processing in a firm

Since the delay in data processing as well as the resources allocated to information processing (i.e. managers L and capital K) are costly for the firm, the computational process is organized in an efficient way if, for a given number of data items processed N, it is not possible to obtain the same delay in information processing using less of one input to information processing (i.e. capital or managers) and no more of the other.

Radner (1992, 1993) shows that the minimum time (number of cycles) needed to add N items of data with the help of L managers (in his terminology: processors with fixed processing power and duration of individual operations d = 1) is determined by the time of computation of CN(L) operations, where CN(L) is given as

$$C_N(L) = \lfloor N/L \rfloor + \lceil \log_2(L + N \bmod L) \rceil$$

and attained by the so-called "skip-level reporting" structures with as-equally-as-possible-loaded managers (if 1 < L < N/2),2 or by a fully centralized structure (if L = 1). In the simplest case, when all managers are identical, the duration of each individual operation can be specified as d(K/L) = 1/F(K/L), where L is the number of managers and K denotes the total amount of capital allocated to information processing. Therefore, for a given K, the minimum delay can be determined as CN(L)/F(K/L) and, consequently, skip-level reporting structures are efficient for decentralized data processing in the firm (note that a centralized structure L = 1 could be efficient as well).
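A small sketch of these quantities: it computes CN(L) as reconstructed above and the resulting delay CN(L)/F(K/L) for identical managers, assuming the power form F(k) = k^λ; the numbers N, K and λ are illustrative assumptions.

```python
import math

def C_N(N, L):
    """Minimum number of cycles needed to add N data items with L managers
    (the bound reconstructed in the text)."""
    return N // L + math.ceil(math.log2(L + N % L))

def min_delay_identical(N, L, K, lam=0.5):
    """Delay C_N(L)/F(K/L) when capital K is split equally among L identical
    managers and F(k) = k**lam (assumed form of the processing function)."""
    return C_N(N, L) / (K / L) ** lam

# For instance, N = 40 items and L = 8 managers take 8 cycles (the example of Fig. 1).
print(C_N(40, 8))
# Candidate efficient sizes L = 2**x, x = 0, 1, ..., log2(N/2):
N, K = 40, 16.0
for x in range(int(math.log2(N / 2)) + 1):
    L = 2 ** x
    print(L, C_N(N, L), round(min_delay_identical(N, L, K), 3))
```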

2 The number of managers L in any skip-level reporting structure is limited (L ≤ N/2) because at least two data items have to be assigned to each of them.


The term skip-level reporting refers to an organization where managers form a hierarchical network, defined as an inverted ranked tree (with the root at the top), in which each manager (except for the top one) sends data ("reports") to exactly one superior manager above him. An example of the skip-level reporting structure with L = 8 managers, designed for the summation of N = 40 items of data, is presented in Fig. 1a. In this process each manager receives five data items. All the managers spend periods 1 through 5 summarizing the data assigned to them. At this point, four of the managers send their totals to the other four, with each of the latter receiving one data item. This is summarized with the manager's previous total in period 6. At the end of this period, two of the managers send their partial results to the other two. These data items are summarized with the previous totals in period 7, after which one manager sends his total to the other. Finally, the result is computed in period 8. The time diagram describing this process is shown in Fig. 1b.

Fig. 1. The skip-level reporting structure (L = 8, N = 40, managers are represented as ellipses, data items are represented by octagons) (a),

and the time diagram of the computational process (b)

Since skip-level reporting structures are efficient for decentralized data processing in the firm, the number of managers L in the efficient structure S*(L) (centralized, or decentralized skip-level reporting) is always a power of 2. Consequently, the possible values of L increase quickly. On the other hand, L is bounded (L ≤ N/2). Hence, for a significantly high number of data items processed N, only structures with a few possible sizes


should be considered as possibly efficient (for example, if N = 67 000 000, then only 25 structures of different sizes could be efficient). This implies that if all managers are identical, then the efficiency frontier can be simply derived from the following optimization problem:

$$D_N(K, L) = \min_{L} \min_{K} \left\{ C_N(L)/F(K/L) \right\}, \qquad (1)$$

where $L \in L^*$ and $L^*$ is the set of possible numbers of managers in the efficient structure, i.e. $L^* = \{2^x,\ x = 0, 1, 2, \ldots, \log_2(N/2)\}$, and $K \geq 0$.

Note, however, that (1) does not always characterize the efficiency frontier if the computational abilities of managers are not the same, i.e. if the resources are not equally distributed among the managers. To clarify the statement above, consider the skip-level reporting structure with L = 4 managers (Fig. 2a) working in a one-shot regime.

Fig. 2. The skip-level reporting structure with L = 4 managers (a), the time diagram of the computational process with identical managers (b) and the time diagram of the computational process with non-identical managers (c)

Assume that the information workload of the managers is given by the vector (n1, n2, n3, n4), such that n1 + n2 + n3 + n4 = N, where nj denotes the

number of data items assigned to the manager j (j = 1, 2, 3, 4), and suppose that data items cannot be equally distributed among the managers in the structure, e.g. that n1 = N/L + 1 and n2 = n3 = n4 = N/L.

If all managers are identical, then the partial results computed by managers marked as 2 and 3 cannot be immediately used for the remaining computations (see Fig. 2b). Waiting can be eliminated from the process if the computational abilities of the managers are adjusted to the information


workload. The time diagram describing the computational process in the structure with non-identical managers is presented in Fig. 2c.

Note that waiting will be eliminated from the computational process if the following conditions are satisfied:4

n1d1 = n2d2,

(n1 + 1)d1 = (n3 + 1)d3,

n3d3 = n4d4,

where dj denotes the duration of a single operation performed by manager j

(j = 1,2,3,4). It turns out that durations of the operations performed by the top-level manager have to be such that d1 = (n2/n1)d2, d1 = [(n3 + 1)/(n1 + 1)]d3,

and d1 = (n4/n3)[(n3 + 1)/(n1 + 1)]d4. This implies that if n1 > n2 = n3 = n4

then d1 < d2, d3, d4. Consequently, d1 < d(K/L), and the total delay in

information processing (DN = (n1 + log2L)d1) is smaller than in the case

when d1 = d(K/L) and all managers have the same computational abilities.

Consider now the efficient (skip-level reporting) structure S* of the optimal size L, i.e. S*(L). Assume that the vector (n1, n2, ..., nL), such that n1 + n2 + ... + nL = N, describes the information workload of the managers enumerated according to the recursive procedure NUM(J, I)5. The algorithm of this simple procedure includes the following steps6 (a code sketch of the procedure is given after the list):

Step 1. Set the level i of the immediate subordinate manager equal to zero (i.e. set i = 0);
Step 2. Assign the number J + 2^i to the immediate subordinate manager of manager J, on level i;
Step 3. If i > 0, then call (recursively) the procedure NUM(J + 2^i, i);
Step 4. Increase the level of the immediate subordinate manager, i.e. set i = i + 1;
Step 5. If i < I (where I is the level of manager J), then execute Step 2.
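A direct transcription of this enumeration procedure into code (a sketch; the dictionary used to record each manager's level and superior is an implementation choice, not part of the paper):

```python
import math

def enumerate_structure(L):
    """Enumerate the managers of a skip-level reporting structure with
    L = 2**x managers, following the recursive procedure NUM(J, I).
    Returns {manager number: (level, superior)}; the top manager is 1."""
    top_level = int(math.log2(L))
    managers = {1: (top_level, None)}

    def NUM(J, I):
        i = 0                               # Step 1
        while i < I:                        # Step 5: repeat while i < I
            sub = J + 2 ** i                # Step 2: subordinate of J at level i
            managers[sub] = (i, J)
            if i > 0:                       # Step 3: recurse into the subtree
                NUM(sub, i)
            i += 1                          # Step 4

    NUM(1, top_level)
    return managers

# L = 4 reproduces the structure of Fig. 2a; L = 8 that of Fig. 1a.
print(enumerate_structure(4))   # {1: (2, None), 2: (0, 1), 3: (1, 1), 4: (0, 3)}
```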

Note that, for any information workload (n1, n2, ..., nL), the waiting states are eliminated from the computational process organized in the data processing structure S*(L) if

$$(n_j + z)\, d_j = (n_{j+2^z} + z)\, d_{j+2^z}, \qquad j = 2m - 1 \ (m = 1, 2, \ldots, L/2),\quad z = 0, 1, \ldots, \mathrm{level}(j) - 1,$$

where level(j) denotes the level of manager j (j = 1, 2, …, L).

4 Each condition corresponds to one communication channel in the structure (or to one arrow on the time diagram in Fig. 2c).

5 To enumerate the processing elements in the skip-level reporting structure, one has to assign the number 1 to the top-level processor and call the procedure NUM(J, I) with parameters J = 1 and I = log2 L.

6 The processing elements in the structures presented in Fig. 1a and Fig. 2a are enumerated according to this procedure.

We represent the duration of the individual operation performed by manager j (j = 1, 2, …, L) as dj(kj) = 1/F(kj), where kj = αjK/L (K and L denote, respectively, the amount of capital and the number of managers employed in data processing), and αj (j = 1, 2, …, L) are adjustment coefficients such that

$$\sum_{j=1}^{L} \alpha_j K/L = K \quad \text{or, equivalently,} \quad \sum_{j=1}^{L} \alpha_j = L.$$

Note that for any given efficient information structure S*(L) and information workload vector (n1, n2, …, nL), the adjustment coefficients α1, α2, …, αL can be derived from the following system of equations:

$$\frac{n_j + z}{F(\alpha_j K/L)} = \frac{n_{j+2^z} + z}{F(\alpha_{j+2^z} K/L)}, \qquad (2)$$

together with the normalization $\sum_{j=1}^{L} \alpha_j = L$, where j = 2m – 1 (m = 1, 2, …, L/2) and z = 0, 1, …, level(j) – 1. Note that (2) specifies L – 1 equations.
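As an illustration, the sketch below solves system (2) numerically for the four-manager structure of Fig. 2a, i.e. the three no-waiting conditions listed earlier plus the normalization, under the assumed power form F(k) = k^λ; the use of scipy's fsolve, the log-reparametrization and the parameter values are implementation choices, and the result can be checked against the closed-form solution derived in the Appendix.

```python
import numpy as np
from scipy.optimize import fsolve

K, L, lam = 8.0, 4, 0.5          # assumed capital, number of managers, exponent
n = {1: 3, 2: 3, 3: 2, 4: 2}     # workload vector (n1, n2, n3, n4), N = 10

def d(alpha):
    """Duration of one operation for a manager using capital alpha*K/L,
    assuming F(k) = k**lam."""
    return (alpha * K / L) ** (-lam)

def residuals(b):
    a1, a2, a3, a4 = np.exp(b)   # work in logs so the coefficients stay positive
    return [n[1] * d(a1) - n[2] * d(a2),              # manager 2 feeds manager 1
            (n[1] + 1) * d(a1) - (n[3] + 1) * d(a3),  # manager 3 feeds manager 1
            n[3] * d(a3) - n[4] * d(a4),              # manager 4 feeds manager 3
            a1 + a2 + a3 + a4 - L]                    # normalization: sum equals L

alpha = np.exp(fsolve(residuals, x0=np.zeros(4)))
print(np.round(alpha, 4))        # capital allocated to manager j: k_j = alpha_j*K/L
```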

Since, for a given structure S*(L) and the number of data items processed N, different vectors of information workload may lead to different values of the adjustment coefficients α1, α2, …, αL (see Appendix) and, consequently, to a different delay in data processing, the efficiency frontier in data processing in the firm is fully determined by the following expression:

$$D_N(K, L) = \min_{L} \min_{(n_1, n_2, \ldots, n_L)} \min_{K} \left\{ (n_1 + \log_2 L)/F\big(\alpha_1(n_1, n_2, \ldots, n_L)\, K/L\big) \right\},$$

where $(n_1, n_2, \ldots, n_L) \in N_{S^*(L)}$ and $N_{S^*(L)} = \{(n_1, n_2, \ldots, n_L):\ n_1 + n_2 + \cdots + n_L = N,\ |n_i - n_j| \leq 1\ (i, j = 1, 2, \ldots, L)\}$.

Finally, taking into account that for any given information structure S*(L) and information workload (n1, n2, …, nL) there exists a single vector of

adjustment coefficients (α1, α2, …, αL), so that an optimal allocation of

capital to manager j can be easily determined as kj = αjK/L (j = 1, 2, ..., L),

the objective of the firm in data processing can be represented as

$$\min_{L} \min_{(n_1, n_2, \ldots, n_L)} \min_{K} \left\{ \mathbb{E}\left[\Psi_{Q_{1,t}, Q_{2,t}, \ldots, Q_{N,t}}\big(D_{S^*(L),(n_1, n_2, \ldots, n_L)}(K)\big)\right] + rK + wL \right\},$$

where L = 2^x, x = 0, 1, 2, …, log2(N/2), (n1, n2, …, nL) ∈ NS*(L), and K ≥ 0.


5. Optimal allocation of resources to data processing for demand forecasting in a firm

To illustrate the concept of the optimal organization of demand forecasting and data processing in enterprises, consider an example of a monopolistic firm which estimates demand for its production. Assume that the technology of information processing is described by the information-processing function F(k) = k^λ, where λ (0 < λ < 1) is a constant coefficient. Moreover, suppose that the loss due to the prediction error is proportional to the square of the difference between the estimation of demand At and the real value of demand Qt in moment t, i.e. Ψ = (At – Qt)^2. Assume also that the stochastic processes generating demands Qi,t (where i = 1, 2, ..., N, and t is an integer number) are independent and identically distributed, specified as Qi,t = μ + Xi,t, where μ is the mean value of demand, and Xi,t is the difference between Qi,t and its mean, described as a first-order autoregressive process: Xi,t = γXi,t–1 + εi,t (|γ| < 1), where εi,t are independent and identically distributed Gaussian variables with mean equal to zero and variance ω^2.

The variance ξ^2 of each individual stochastic process around its mean is

$$\xi^2 = \mathbb{E}\big(X_{i,t}^2\big) = \omega^2/(1 - \gamma^2).$$

The demand estimation in moment t performed on the basis of the history of the process Xi,t up to date (t – s) is given as γ^s Xi,t–s. The expected value of the square of the error in estimation (for each individual source of demand) is

$$\mathbb{E}\left[(\gamma^s X_{i,t-s} - X_{i,t})^2\right] = (1 - \gamma^{2s})\,\xi^2.$$

If the demand coming from N data sources is estimated with lag s, then the expected value of the loss due to the prediction error equals

$$\mathbb{E}\left[\Psi_{Q_{1,t}, Q_{2,t}, \ldots, Q_{N,t}}(s)\right] = N \xi^2 (1 - \gamma^{2s}).$$
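A quick Monte Carlo check of this expression (a sketch only; the parameter values, the lag and the sample size are arbitrary assumptions):

```python
import numpy as np

N, gamma, omega, s = 4, 0.8, 1.0, 3      # assumed sources, AR(1) parameters, lag
xi2 = omega**2 / (1 - gamma**2)          # variance of each source around its mean

rng = np.random.default_rng(1)
T, burn = 200_000, 500
eps = rng.normal(0.0, omega, size=(N, T + burn))
X = np.zeros((N, T + burn))
for t in range(1, T + burn):
    X[:, t] = gamma * X[:, t - 1] + eps[:, t]
X = X[:, burn:]                          # drop the transient

# Predict the deviation of total demand at t from the state s periods earlier.
pred = gamma**s * X[:, :-s].sum(axis=0)
actual = X[:, s:].sum(axis=0)
print(np.mean((actual - pred) ** 2))     # simulated expected loss
print(N * xi2 * (1 - gamma**(2 * s)))    # N * xi^2 * (1 - gamma^(2s))
```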

If $D_{S^*(L),(n_1, n_2, \ldots, n_L)}(K)$ is the delay in information processing in an efficient structure with L managers, then the expected value of the loss due to the prediction error is

$$\mathbb{E}\left[\Psi_{Q_{1,t}, Q_{2,t}, \ldots, Q_{N,t}}\big(D_{S^*(L),(n_1, n_2, \ldots, n_L)}(K)\big)\right] = N \xi^2 \left(1 - \gamma^{2 D_{S^*(L),(n_1, n_2, \ldots, n_L)}(K)}\right).$$

The delay in information processing in the efficient structure with L managers is given as

$$D_{S^*(L),(n_1, n_2, \ldots, n_L)}(K) = (n_1 + \log_2 L)/F\big(\alpha_1(n_1, n_2, \ldots, n_L)\, K/L\big),$$

where n1 is the number of data items assigned to the top-level manager (n1 = N/L if (N mod L) = 0, or n1 = ⌊N/L⌋ + 1 otherwise), and α1(n1, n2, …, nL) is the coefficient of adjustment of the top-level manager's resources to the information workload. The expected value of the loss due to the prediction error is therefore

$$\mathbb{E}\left[\Psi_{Q_{1,t}, Q_{2,t}, \ldots, Q_{N,t}}\big(D_{S^*(L),(n_1, n_2, \ldots, n_L)}(K)\big)\right] = N \xi^2 \left(1 - \gamma^{2 (n_1 + \log_2 L)/F(\alpha_1(n_1, n_2, \ldots, n_L) K/L)}\right).$$

Finally, the optimal size of the efficient information-processing structure and the optimal allocation of resources should be derived (numerically) from the following optimization problem:

$$\min_{L} \min_{(n_1, n_2, \ldots, n_L)} \min_{K} \left\{ N \xi^2 \left(1 - \gamma^{2 (n_1 + \log_2 L)\,(\alpha_1(n_1, n_2, \ldots, n_L) K/L)^{-\lambda}}\right) + rK + wL \right\},$$

where L = 2^x, x = 0, 1, 2, ..., log2(N/2); (n1, n2, …, nL) ∈ NS*(L); K ≥ 0; and r and w denote the prices of capital and labour (managers), respectively.
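A simplified numerical sketch of this optimization (restricted, for transparency, to the case where N is divisible by L, so that all workloads are equal and all adjustment coefficients equal one, and using a plain grid search over K; every parameter value below is an illustrative assumption):

```python
import math
import numpy as np

# Assumed parameters (illustrative only)
N      = 64          # number of demand sources / data items per cohort
gamma  = 0.9         # AR(1) coefficient of each demand source
omega2 = 1.0         # variance of the innovations
lam    = 0.5         # exponent of the processing function F(k) = k**lam
r, w   = 0.05, 1.0   # prices of capital and labour

xi2 = omega2 / (1 - gamma**2)

def expected_deviation(L, K):
    """E[loss] + cost for an efficient structure with L managers, equal
    workloads n_j = N/L and equally distributed capital (alpha_j = 1)."""
    delay = (N / L + math.log2(L)) * (K / L) ** (-lam)   # (n1 + log2 L) * d(K/L)
    return N * xi2 * (1 - gamma ** (2 * delay)) + r * K + w * L

best = None
for x in range(int(math.log2(N / 2)) + 1):
    L = 2 ** x
    if N % L:                    # keep only the equal-workload case in this sketch
        continue
    for K in np.linspace(0.5, 400.0, 800):
        value = expected_deviation(L, K)
        if best is None or value < best[0]:
            best = (value, L, K)

print("minimal expected deviation %.3f with L = %d managers and K = %.2f" % best)
```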

6. Conclusions

The analysis of different aspects of information processing in the firm has appeared frequently in the economic literature. In a number of recent papers, data processing in the firm has been described in terms of a dynamic parallel processing model of associative computation which has been directly adopted from computer science literature, and, consequently, its conceptual framework differs from that which is usually used in microeconomic research. The present paper shows how information processing in a firm should be described and analyzed in a typical microeconomic setting.

The analysis focuses on numerical computations in a firm for the purpose of predicting demand. Information processing is modelled using a dynamic parallel-processing model of associative computation, extended to include the assumption that the computational abilities of each manager (the speed of computation) are determined by the resources he uses. To describe the relationship between the resources allocated to a single manager and his computational abilities, the concept of an information-processing function is introduced. For such a model, the efficiency criterion is defined and the architecture of the efficient structures is analyzed. The paper shows that in a firm, similarly to parallel computers, the so-called "skip-level reporting" structures are efficient. However, in the case when the information workload of the managers cannot be equalized, the pattern of the workload of the managers has to be selected, and the computational abilities of the managers (the resources allocated to the managers) have to be adjusted to their information workload.

One important contribution of this paper to the current research in the internal theory of the firm is the introduction of the concept of the


information-processing function to the dynamic parallel information-processing model of associative computation. This concept provides the same methodological framework for the analysis of management and production sectors of the firm, and allows one to employ the model presented for the study of more complex economic issues in which these parts of the firm have to be analyzed together.

APPENDIX

Adjusting resources to information workload in a simple one-shot skip-level reporting structure

Consider a skip-level reporting structure with L = 4 managers (as in Fig. 2a) working in the one-shot regime. Assume that cohorts of N items of data are summarized, and data items are distributed among the managers as (n1, n2, n3, n4), where n1 + n2 + n3 + n4 = N, and nj denotes the number of data items assigned to manager j (j = 1, 2, 3, 4). Suppose that the information-processing function has the form F(kj) = kj^λ, where λ (0 < λ < 1) is a constant coefficient, j = 1, 2, 3, 4.

The delay of a single operation performed by manager j is specified as dj = d(kj) = 1/F(kj) = kj^(–λ), where kj = αjK/L denotes the amount of capital allocated to manager j (j = 1, 2, 3, 4), K denotes the total amount of capital allocated to data processing, L (L = 4) is the number of managers, and αj is the coefficient of adjustment of resources to information workload, such that α1 + α2 + α3 + α4 = L (L = 4). Since the duration of a single operation dj can be represented as dj = (αjK/L)^(–λ), the coefficients αj (j = 1, 2, 3, 4) can be determined from the following system of equations:

n1α1(K/L) = n2α2(K/L), (3)

(n1 + 1)α1(K/L) = (n3 + 1)α3(K/L), (4)

n3α3(K/L) = n4α4(K/L), (5)

α1 + α2 + α3 + α4 = L. (6)

The solution to the system of equations (3)-(6) can be represented as

$$\alpha_1(n_1, n_2, n_3, n_4) = \frac{L}{1 + (n_2/n_1)^{1/\lambda} + [(n_3+1)/(n_1+1)]^{1/\lambda}\,[1 + (n_4/n_3)^{1/\lambda}]},$$

$$\alpha_2(n_1, n_2, n_3, n_4) = \frac{(n_2/n_1)^{1/\lambda}\, L}{1 + (n_2/n_1)^{1/\lambda} + [(n_3+1)/(n_1+1)]^{1/\lambda}\,[1 + (n_4/n_3)^{1/\lambda}]},$$

$$\alpha_3(n_1, n_2, n_3, n_4) = \frac{[(n_3+1)/(n_1+1)]^{1/\lambda}\, L}{1 + (n_2/n_1)^{1/\lambda} + [(n_3+1)/(n_1+1)]^{1/\lambda}\,[1 + (n_4/n_3)^{1/\lambda}]},$$

$$\alpha_4(n_1, n_2, n_3, n_4) = \frac{(n_4/n_3)^{1/\lambda}\,[(n_3+1)/(n_1+1)]^{1/\lambda}\, L}{1 + (n_2/n_1)^{1/\lambda} + [(n_3+1)/(n_1+1)]^{1/\lambda}\,[1 + (n_4/n_3)^{1/\lambda}]}.$$

Therefore the efficiency requires that in the structure S*(4), manager j has to use capital kj = αjK/L (j = 1, 2, 3, 4), where K is the total amount of

the capital allocated to data processing, L (L = 4) is the number of managers, and coefficients of adjustment of resources to information workload αj

(j = 1, 2, 3, 4) are specified above.

The example considered shows explicitly that if n1 = n2 = n3 = n4 = n = N/L

then α1 = α2 = α3 = α4, and, consequently, the resources need to be equally

distributed among the managers, and all the managers should have the same computational abilities.

Note, however, that if managers cannot be equally loaded, then for a given number of data items processed, values of αj (j = 1, 2, 3, 4) depend

on the particular distribution of data items among the managers. For example, if N = 10 and two (out of the six possible) information workload vectors are specified as (3, 3, 2, 2) and (3, 2, 3, 2), then

$$\alpha_1(3, 3, 2, 2) = \frac{L}{2\,[1 + (3/4)^{1/\lambda}]} \qquad \text{and} \qquad \alpha_1(3, 2, 3, 2) = \frac{L}{2\,[1 + (2/3)^{1/\lambda}]}.$$

Since 0 < λ < 1, α1(3, 3, 2, 2) < α1(3, 2, 3, 2). Consequently, the pattern of

resource distribution within the efficient structures depends only on the number of managers and the vector of information workload.
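A two-line numerical check of this comparison, under an arbitrary assumed value of λ:

```python
lam, L = 0.5, 4                          # assumed exponent and number of managers

def alpha1(ratio):
    """alpha1 for the two workload vectors above, which differ only in the ratio."""
    return L / (2 * (1 + ratio ** (1 / lam)))

print(alpha1(3 / 4), alpha1(2 / 3))      # alpha1(3,3,2,2) < alpha1(3,2,3,2)
```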

Finally, note that if the number of data items assigned to the manager with the lowest information workload n = N/L is big enough (and all managers are loaded as equally as possible) resources are distributed among managers almost equally. This suggests that allocation of resources according to the workload of managers is especially important when small cohorts of data are processed.

Literature

Bolton P., Dewatripont M. (1994). The firm as a communication network. The Quarterly Journal of Economics. Vol. 59. Pp. 809-840.

Cukrowski J. (1997). Parallel data processing in decision making: Necessary and sufficient conditions. Central European Journal for Operations Research and Economics. Vol. 5. No. 2. Pp. 99-110.

Cukrowski J., Baniak A. (1999). Organizational restructuring in response to changes in information-processing technology. Review of Economic Design. Vol. 4. Pp. 295-305.

Grüner H.P., Schulte E. (2009). Speed and quality of collective decision making: Incentives for information provision. Mimeo. Forthcoming in Journal of Economic Behavior and Organization.

Jehiel P. (1999). Information aggregation and communication in organizations. Management Science. Vol. 45. Pp. 659-669.

Meagher K. (2003). Generalizing incentives and loss of control in an optimal hierarchy: The role of information technology. Economics Letters. Vol. 78. No. 2. Pp. 273-280.

Meagher K., Orbay H., Van Zandt T. (2003). Hierarchy size and environmental uncertainty. In: M.R. Sertel, S. Koray (Eds.). Advances in Economic Design. Springer-Verlag.

Orbay H. (2002). Information processing hierarchies. Journal of Economic Theory. Vol. 105. Pp. 370-407.

Prat A. (1997). Hierarchies of processors with endogenous capacity. Journal of Economic Theory. Vol. 77. Pp. 214-222.

Radner R. (1992). Hierarchy: The economics of managing. Journal of Economic Literature. Vol. 30. Pp. 1382-1415.

Radner R. (1993). The organization of decentralized information processing. Econometrica. Vol. 61. Pp. 1109-1146.

Radner R., Van Zandt T. (1992). Information processing in firms and returns to scale. Annales d'Economie et de Statistique. Vol. 25/26. Pp. 265-298.

Radner R., Van Zandt T. (2001). Real-time decentralized information processing and returns to scale. Economic Theory. Vol. 17. Pp. 545-575.

Schulte E., Grüner H. (2007). Speed and quality of collective decision making: Imperfect information processing. Journal of Economic Theory. Vol. 134. Pp. 138-154.

Van Zandt T. (1995). Continuous approximations in the study of hierarchies. Rand Journal of Economics. Vol. 26. Pp. 575-590.

Van Zandt T. (1997). The scheduling and organization of periodic associative computation: Essential networks. Review of Economic Design. Vol. 3. Pp. 15-27.

Van Zandt T. (1998). The scheduling and organization of periodic associative computation: Efficient networks. Review of Economic Design. Vol. 3. Pp. 93-127.

Van Zandt T. (1999). Decentralized information processing in the theory of organizations. In: Contemporary Economic Issues. Vol. 4: Economic Design and Behavior. Ed. by Murat Sertel. MacMillan Press. London. Pp. 125-160.
