Optimal Contracting under Adverse Selection: The Implications of Mentalizing

Jonatan Lenells1, Diego Stea2, Nicolai J. Foss2

1 Department of Mathematics, KTH Royal Institute of Technology, Stockholm, Sweden; 2 Copenhagen Business School - Department of Strategic Management and Globalization, Frederiksberg, Denmark

Primary submission: 28.10.2014 | Final acceptance: 06.05.2015

Correspondence concerning this article should be addressed to: Nicolai J. Foss, Copenhagen Business School - Department of Strategic Management and Globalization, Kilevej 14, Frederiksberg 2000, Denmark. E-mail: njf.smg@cbs.dk

ABSTRACT
We study a model of adverse selection, hard and soft information, and mentalizing ability—the human capacity to represent others’ intentions, knowledge, and beliefs. By allowing for a continuous range of different information types, as well as for different means of acquiring information, we develop a model that captures how principals differentially obtain information on agents. We show that principals that combine conventional data collection techniques with mentalizing benefit from a synergistic effect that impacts both the amount of information that is accessed and the overall cost of that information. This strategy affects the properties of the optimal contract, which grows closer to the first best. This research provides insights into the implications of mentalizing for agency theory.

KEY WORDS: Adverse selection, mentalizing, hard information, soft information, contract

JEL Classification: D82; D83

1 Introduction

Agency theory posits that informational asymmetry, whether modeled as an instance of hidden action or hidden knowledge, hinders the contracting parties from obtaining the first-best outcome (Holmström, 1979; Laffont & Martimort, 2002; Ross, 1973). The theory also allows individuals to partly reduce informational barriers by (in the case of the agent) signaling or (in the case of the principal) learning the agent’s type and monitoring his effort. These activities, however, are treated in a highly stylized manner. For instance, in the standard moral hazard model, all signals on the agent’s effort can be included in the contract between the principal and the agent and are assumed to be verifiable. In fact, many signals on agents’ efforts are verifiable but some are not, and principals may rely on non-verifiable information (e.g., body language and facial expressions) to assess an agent’s effort. Similarly, in the adverse selection model, principals may rely on such soft psychological information in assessing agents’ types.

In other words, information is an essential component of agency theory, and yet it is often modeled in a way that abstracts from some potentially key features of the real world. This paper addresses exactly this problem. Specifically, information differs substantially depending on its form, and recent research has begun to capture this fact by classifying it in terms of how hard versus soft it is (Godbillon–Camus & Godlewski, 2006; Peterson, 2004).
Hard information (e.g., a person’s education level, experience, or income) is easily reduced to numbers, it can be collected in an impersonal way, and its meaning is less contingent on subjective judgements, opinions, or perceptions. On the other hand, soft information (e.g., a person’s feelings, perceptions, values, or motivations) is difficult to accurately reduce to a numeric score, and its meaning is highly dependent on the context in which it is collected and on the personal opinions and perceptions of the person collecting it.

If information differs in terms of how hard or soft it is, it is pertinent to ask whether there are ways of obtaining it that are particularly suitable, depending on the type of information. Recent convergent developments in evolutionary anthropology (Call & Tomasello, 2008), cognitive neuroscience (Gallagher & Frith, 2003), and neuroeconomics (Singer & Fehr, 2005) highlight the importance of players’ mentalizing—that is, their intersubjective understanding of preferences, intentions, knowledge, and beliefs. Information about these mental states is soft in nature and is crucially important in making sense of and predicting the behaviors of others (Singer & Fehr, 2005). Thus, mentalizing is ideally suited for the acquisition of soft information.

There is no reason to suppose that principals should not make use of mentalizing as a preferred method of inferring information about other players.

As Singer and Fehr (2005) note, however, economists take a technical shortcut by assuming a common prior distribution over agent types without considering the determinants of this distribution. In other words, agency theory does not make explicit room for mentalizing. Yet, the theory effectively, if implicitly, assumes that the principal has perfect access to and knowledge of certain mental states of the agent (Foss & Stea, 2014). For example, in the standard moral hazard model (Holmström, 1979), the principal is assumed to know the risk preferences and reservation utility of the agent.

The object of this paper is to provide insights into the implications of mentalizing for agency theory. We base our analysis on a manager-worker relationship under adverse selection (Laffont & Martimort, 2002) where we allow for a continuous range of different information types and different means of acquiring information. We obtain three main sets of results. First, we show that mentalizing can be a low-cost method of acquiring information. Second, we show that mentalizing provides access to information that may be difficult to elicit in other ways. Third, we highlight that mentalizing impacts the design of the bilateral contract that the principal and agent sign, resulting in an increase in the volume of trade achieved under asymmetric information. All in all, this research suggests that a more nuanced description of how principals differentially obtain information on agents leads to a more accurate modeling of agency relationships.

2 The basic model with an informative signal

The basic adverse selection model with an informative signal (Laffont & Martimort, 2002) includes a principal P and an agent A. The principal wants to delegate to the agent the production of $q$ units of a good. The value for the principal of these $q$ units is $S(q)$, where $S(q)$ is a strictly increasing concave function (i.e., $S'(q) > 0$ and $S''(q) < 0$ for all $q$) such that $S(0) = 0$. The cost for the agent to produce $q$ units is $C(q, \theta) = \theta q$, where $\theta > 0$ is the type of the agent. In exchange for the production of the $q$ units, the agent receives a transfer $t$ from the principal. The agent's utility is
$$U = t - \theta q,$$
while the principal's utility is
$$V = S(q) - t.$$
If the principal offers the agent a transfer $t \in \mathbb{R}$ in exchange for the production of $q > 0$ units, we say that the principal offers the agent a $(q, t)$ contract.

For simplicity, we assume that the agent can be of only two types: he is either efficient ($\theta = \underline{\theta}$) or inefficient ($\theta = \bar{\theta}$), where $\underline{\theta} < \bar{\theta}$. The cost for the agent to produce $q$ units is $\underline{\theta} q$ if he is efficient and $\bar{\theta} q$ if he is inefficient.

It is common knowledge that the probability that $\theta = \underline{\theta}$ is $\nu \in (0,1)$, while the probability that $\theta = \bar{\theta}$ is $1 - \nu$. Before the contracting process begins, the agent discovers his type, but the principal only receives a signal $\sigma$ with certain probabilistic information about $\theta$. Thus, the agent has more information than the principal (the agent has hidden knowledge). This asymmetry in information is the reason that only a second-best solution can be achieved.

For simplicity, we assume that $\sigma$ may take only two values, $\sigma_1$ and $\sigma_2$. Let the conditional probabilities of these respective realizations of the signal be
$$\mu_1 = \Pr(\sigma = \sigma_1 \mid \theta = \underline{\theta}) \geq \tfrac{1}{2} \quad \text{and} \quad \mu_2 = \Pr(\sigma = \sigma_2 \mid \theta = \bar{\theta}) \geq \tfrac{1}{2}.$$
If $\mu_1 = \mu_2 = 1/2$, the signal is uninformative. Otherwise, the signal $\sigma_1$ brings good news in the sense that it is more likely that the agent is efficient if $\sigma = \sigma_1$ than if $\sigma = \sigma_2$.

Let us consider the case where the principal offers a menu of contracts $\{(\underline{q}, \underline{t}), (\bar{q}, \bar{t})\}$, hoping that an agent of type $\underline{\theta}$ will select $(\underline{q}, \underline{t})$ and an agent of type $\bar{\theta}$ will select $(\bar{q}, \bar{t})$. The timing is as follows:

1. The agent discovers his type $\theta \in \{\underline{\theta}, \bar{\theta}\}$.
2. The principal receives the signal $\sigma \in \{\sigma_1, \sigma_2\}$.
3. The principal offers a menu of contracts.
4. The agent accepts one or none of the contracts.
5. If a contract is accepted, the contract is executed.

Before receiving the signal $\sigma$, the principal expects the agent to be efficient with probability $\nu$. After receiving the signal $\sigma$, the principal can compute an updated probability that the agent is efficient. According to Bayes' law, after receiving the signal $\sigma$ the principal expects that the agent is efficient with probability
$$\hat{\nu}_1 = \Pr(\theta = \underline{\theta} \mid \sigma = \sigma_1) = \frac{\nu \mu_1}{\nu \mu_1 + (1 - \nu)(1 - \mu_2)} \quad \text{if } \sigma = \sigma_1, \qquad (2.1)$$
$$\hat{\nu}_2 = \Pr(\theta = \underline{\theta} \mid \sigma = \sigma_2) = \frac{\nu (1 - \mu_1)}{\nu (1 - \mu_1) + (1 - \nu)\mu_2} \quad \text{if } \sigma = \sigma_2. \qquad (2.2)$$
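To make the Bayesian update in (2.1)-(2.2) concrete, the following Python sketch (an illustration added here, not part of the original paper; the numerical values are arbitrary assumptions) computes the two posterior probabilities.

```python
def posterior_efficient(nu, mu1, mu2):
    """Bayes updates (2.1)-(2.2): probability that the agent is efficient
    (theta = theta_low) after observing sigma_1 or sigma_2, respectively."""
    nu_hat_1 = nu * mu1 / (nu * mu1 + (1 - nu) * (1 - mu2))          # (2.1)
    nu_hat_2 = nu * (1 - mu1) / (nu * (1 - mu1) + (1 - nu) * mu2)    # (2.2)
    return nu_hat_1, nu_hat_2

# Illustrative values (assumptions, not taken from the paper):
# prior nu = 0.6 and signal informativeness mu1 = mu2 = 0.8.
print(posterior_efficient(0.6, 0.8, 0.8))   # sigma_1 raises, sigma_2 lowers, the prior 0.6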

2.1 Optimal contracts

The requirement that agent $\underline{\theta}$ (resp. $\bar{\theta}$) weakly prefers the contract $(\underline{q}, \underline{t})$ (resp. $(\bar{q}, \bar{t})$) leads to the following incentive constraints:
$$\underline{t} - \underline{\theta}\,\underline{q} \geq \bar{t} - \underline{\theta}\,\bar{q}, \qquad (2.3)$$
$$\bar{t} - \bar{\theta}\,\bar{q} \geq \underline{t} - \bar{\theta}\,\underline{q}. \qquad (2.4)$$
Moreover, for a menu to be accepted, the following two participation constraints must be satisfied:
$$\underline{t} - \underline{\theta}\,\underline{q} \geq 0, \qquad (2.5)$$
$$\bar{t} - \bar{\theta}\,\bar{q} \geq 0. \qquad (2.6)$$

The principal's problem consists of finding the solutions $\{(\underline{q}_j^{SB}, \underline{t}_j^{SB}), (\bar{q}_j^{SB}, \bar{t}_j^{SB})\}$, $j = 1, 2$, of the two optimization problems
$$\sup_{\{(\underline{q}, \underline{t}), (\bar{q}, \bar{t})\}} \big[ \hat{\nu}_j (S(\underline{q}) - \underline{t}) + (1 - \hat{\nu}_j)(S(\bar{q}) - \bar{t}) \big] \quad \text{subject to (2.3)-(2.6)}, \quad j = 1, 2,$$
where $j = 1$ if $\sigma = \sigma_1$ and $j = 2$ if $\sigma = \sigma_2$. The solutions are given on p. 43 of Laffont and Martimort (2002); the optimal contract $\{(\underline{q}_j^{SB}, \underline{t}_j^{SB}), (\bar{q}_j^{SB}, \bar{t}_j^{SB})\}$ that the principal should offer if he receives the information signal $\sigma_j$ is characterized by
$$S'(\underline{q}_j^{SB}) = \underline{\theta}, \qquad S'(\bar{q}_j^{SB}) = \bar{\theta} + \frac{\hat{\nu}_j}{1 - \hat{\nu}_j}\, \Delta\theta,$$
$$\underline{t}_j^{SB} = \underline{\theta}\,\underline{q}_j^{SB} + \Delta\theta\, \bar{q}_j^{SB}, \qquad \bar{t}_j^{SB} = \bar{\theta}\, \bar{q}_j^{SB}, \qquad (2.7)$$
where $\Delta\theta = \bar{\theta} - \underline{\theta}$ and $j = 1, 2$. In particular, the inefficient agent's production levels $\bar{q}_1^{SB}$ and $\bar{q}_2^{SB}$ associated with the signals $\sigma_1$ and $\sigma_2$, respectively, satisfy
$$S'(\bar{q}_1^{SB}) = \bar{\theta} + \frac{\hat{\nu}_1}{1 - \hat{\nu}_1}\, \Delta\theta = \bar{\theta} + \frac{\nu \mu_1}{(1 - \nu)(1 - \mu_2)}\, \Delta\theta,$$
$$S'(\bar{q}_2^{SB}) = \bar{\theta} + \frac{\hat{\nu}_2}{1 - \hat{\nu}_2}\, \Delta\theta = \bar{\theta} + \frac{\nu (1 - \mu_1)}{(1 - \nu)\mu_2}\, \Delta\theta. \qquad (2.8)$$

Thus, compared with the first-best contract $\{(\underline{q}^*, \underline{t}^*), (\bar{q}^*, \bar{t}^*)\}$, for which $S'(q^*) = \theta$, the optimal contract entails a downward distortion of the inefficient agent's production in the presence of imperfect information. Indeed, because $S'' < 0$, the inequality $S'(\bar{q}_j^{SB}) > S'(\bar{q}^*)$ implies that $\bar{q}_j^{SB} < \bar{q}^*$. Because
$$\frac{1 - \mu_1}{\mu_2} \leq 1 \leq \frac{\mu_1}{1 - \mu_2},$$
the downward shift is larger if $\sigma = \sigma_1$ than if $\sigma = \sigma_2$.

This shows that
$$\bar{q}_1^{SB} \leq \bar{q}^{SB} \leq \bar{q}_2^{SB} < \bar{q}^*, \qquad \underline{q}_1^{SB} = \underline{q}^{SB} = \underline{q}_2^{SB} = \underline{q}^*, \qquad (2.9)$$
where $\{(\underline{q}^{SB}, \underline{t}^{SB}), (\bar{q}^{SB}, \bar{t}^{SB})\}$ is the second-best contract offered in the absence of an informative signal.
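As an illustration of (2.7)-(2.9), the following Python sketch computes the second-best menu for the surplus function $S(q) = 2\sqrt{q}$ that the paper adopts later in Example 3.3; the posterior and type values used below are illustrative assumptions, not values taken from the paper.

```python
def second_best_contract(nu_hat, theta_low, theta_high):
    """Second-best menu (2.7) for the illustrative surplus S(q) = 2*sqrt(q),
    for which S'(q) = q**(-0.5), so S'(q) = x is solved by q = x**(-2)."""
    d_theta = theta_high - theta_low
    q_low = theta_low ** -2                       # S'(q_low) = theta_low: no distortion at the top
    q_high = (theta_high + nu_hat / (1 - nu_hat) * d_theta) ** -2   # downward-distorted output
    t_low = theta_low * q_low + d_theta * q_high  # transfer to the efficient type (includes the rent)
    t_high = theta_high * q_high                  # transfer to the inefficient type (no rent)
    return (q_low, t_low), (q_high, t_high)

# Illustrative values (assumptions): theta_low = 0.5, theta_high = 1.0, and two
# posteriors nu_hat_1 > nu_hat_2 such as those produced by (2.1)-(2.2).
for nu_hat in (0.86, 0.27):
    print(second_best_contract(nu_hat, 0.5, 1.0))
# The larger posterior (good news, sigma_1) yields the smaller q_high, consistent with (2.9).
```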

2.2 The principal’s expected utility

Thus far we have followed Laffont and Martimort (2002). We now want to change our viewpoint slightly and formulate the optimization problem in terms of the principal's overall expected utility. This provides a way for us to merge the optimization problems for $\sigma = \sigma_1$ and $\sigma = \sigma_2$ into one problem.

The principal's expected utility when offering a menu of contracts $\{(\underline{q}_j, \underline{t}_j), (\bar{q}_j, \bar{t}_j)\}_{j=1}^{2}$ fulfilling the incentive and participation constraints is
$$\mathrm{E}V_{\{(\underline{q}_j, \underline{t}_j), (\bar{q}_j, \bar{t}_j)\}} = \Pr(\sigma = \sigma_1, \theta = \underline{\theta})[S(\underline{q}_1) - \underline{t}_1] + \Pr(\sigma = \sigma_1, \theta = \bar{\theta})[S(\bar{q}_1) - \bar{t}_1]$$
$$\qquad + \Pr(\sigma = \sigma_2, \theta = \underline{\theta})[S(\underline{q}_2) - \underline{t}_2] + \Pr(\sigma = \sigma_2, \theta = \bar{\theta})[S(\bar{q}_2) - \bar{t}_2].$$

The basic optimization problem that maximizes the principal's utility is therefore
$$\sup_{\{(\underline{q}_j, \underline{t}_j), (\bar{q}_j, \bar{t}_j)\}} \mathrm{E}V_{\{(\underline{q}_j, \underline{t}_j), (\bar{q}_j, \bar{t}_j)\}} = \sup_{\{(\underline{q}_j, \underline{t}_j), (\bar{q}_j, \bar{t}_j)\}} \big\{ \nu \mu_1 [S(\underline{q}_1) - \underline{t}_1] + (1 - \nu)(1 - \mu_2)[S(\bar{q}_1) - \bar{t}_1]$$
$$\qquad + \nu (1 - \mu_1)[S(\underline{q}_2) - \underline{t}_2] + (1 - \nu)\mu_2 [S(\bar{q}_2) - \bar{t}_2] \big\}, \qquad (2.10)$$

where the contracts are subject to (2.3)-(2.6). Writing the right-hand side of (2.10) in the form
$$\big(\nu \mu_1 + (1 - \nu)(1 - \mu_2)\big) \sup_{\{(\underline{q}_1, \underline{t}_1), (\bar{q}_1, \bar{t}_1)\}} \big\{ \hat{\nu}_1 [S(\underline{q}_1) - \underline{t}_1] + (1 - \hat{\nu}_1)[S(\bar{q}_1) - \bar{t}_1] \big\}$$
$$+ \big(\nu (1 - \mu_1) + (1 - \nu)\mu_2\big) \sup_{\{(\underline{q}_2, \underline{t}_2), (\bar{q}_2, \bar{t}_2)\}} \big\{ \hat{\nu}_2 [S(\underline{q}_2) - \underline{t}_2] + (1 - \hat{\nu}_2)[S(\bar{q}_2) - \bar{t}_2] \big\},$$
we see that the solution is given by (2.7). It follows that the principal's expected utility $\mathrm{E}V$ when offering the optimal menu of contracts is

$$\mathrm{E}V = \nu \mu_1 \big[S(\underline{q}_1^{SB}) - \underline{\theta}\,\underline{q}_1^{SB} - \Delta\theta\, \bar{q}_1^{SB}\big] + (1 - \nu)(1 - \mu_2)\big[S(\bar{q}_1^{SB}) - \bar{\theta}\, \bar{q}_1^{SB}\big]$$
$$\quad + \nu (1 - \mu_1)\big[S(\underline{q}_2^{SB}) - \underline{\theta}\,\underline{q}_2^{SB} - \Delta\theta\, \bar{q}_2^{SB}\big] + (1 - \nu)\mu_2 \big[S(\bar{q}_2^{SB}) - \bar{\theta}\, \bar{q}_2^{SB}\big]$$
$$= \nu \big[S(\underline{q}^*) - \underline{\theta}\,\underline{q}^* - \mu_1 \Delta\theta\, \bar{q}_1^{SB} - (1 - \mu_1) \Delta\theta\, \bar{q}_2^{SB}\big]$$
$$\quad + (1 - \nu)\big[(1 - \mu_2) S(\bar{q}_1^{SB}) + \mu_2 S(\bar{q}_2^{SB}) - (1 - \mu_2) \bar{\theta}\, \bar{q}_1^{SB} - \mu_2 \bar{\theta}\, \bar{q}_2^{SB}\big]. \qquad (2.11)$$
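The decomposition above can be checked numerically. The Python sketch below (an added illustration with assumed parameter values, not taken from the paper) evaluates the expected utility once through (2.11) and once as the probability-weighted sum of the two per-signal problems; the two expressions coincide.

```python
import math

# Illustrative parameters (assumptions, not from the paper): theta_low = 0.5,
# theta_high = 1.0, nu = 0.6, mu1 = mu2 = 0.8, and S(q) = 2*sqrt(q) as in Example 3.3.
theta_l, theta_h, nu, mu1, mu2 = 0.5, 1.0, 0.6, 0.8, 0.8
d_theta = theta_h - theta_l
S = lambda q: 2 * math.sqrt(q)

# Posteriors (2.1)-(2.2) and the second-best outputs from (2.7)-(2.8)
nu1 = nu * mu1 / (nu * mu1 + (1 - nu) * (1 - mu2))
nu2 = nu * (1 - mu1) / (nu * (1 - mu1) + (1 - nu) * mu2)
q_star = theta_l ** -2
qbar = [(theta_h + n / (1 - n) * d_theta) ** -2 for n in (nu1, nu2)]

# Expected utility written as in (2.11)
EV = (nu * (S(q_star) - theta_l * q_star
            - mu1 * d_theta * qbar[0] - (1 - mu1) * d_theta * qbar[1])
      + (1 - nu) * ((1 - mu2) * S(qbar[0]) + mu2 * S(qbar[1])
                    - (1 - mu2) * theta_h * qbar[0] - mu2 * theta_h * qbar[1]))

# The same quantity as a probability-weighted sum of the two per-signal problems
value_j = lambda n, qb: (n * (S(q_star) - theta_l * q_star - d_theta * qb)
                         + (1 - n) * (S(qb) - theta_h * qb))
EV_check = ((nu * mu1 + (1 - nu) * (1 - mu2)) * value_j(nu1, qbar[0])
            + (nu * (1 - mu1) + (1 - nu) * mu2) * value_j(nu2, qbar[1]))
print(EV, EV_check)   # the two numbers coincide
```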

3 More information is better

Intuitively, we expect it to be advantageous for the principal to have access to additional information about the agent. In this section, we prove that this is indeed the case within the framework of the basic model of Section 2.

For simplicity, we henceforth suppose that $\mu_1 = \mu_2 = \mu$, where $\mu$ is the informativeness of the signal. Then, in view of (2.8),
$$\bar{q}_1^{SB} = S_q^{-1}(h(\mu)), \qquad \bar{q}_2^{SB} = S_q^{-1}(h(1 - \mu)),$$
where the function $h(\mu)$ is defined by
$$h(\mu) = \bar{\theta} + \frac{\nu \mu}{(1 - \nu)(1 - \mu)}\, \Delta\theta$$
and the inverse $S_q^{-1}$ of $S_q \equiv S'$ exists because of our assumption that $S_{qq} < 0$. Equation (2.11) implies that
$$\mathrm{E}V(\mu) = f(\mu) + f(1 - \mu), \qquad (3.1)$$
where the function $f(\mu)$ is defined by
$$f(\mu) = \frac{\nu}{2}\big[S(\underline{q}^*) - \underline{\theta}\,\underline{q}^*\big] - \nu \mu\, \Delta\theta\, S_q^{-1}(h(\mu)) + (1 - \nu)(1 - \mu)\big[S(S_q^{-1}(h(\mu))) - \bar{\theta}\, S_q^{-1}(h(\mu))\big]. \qquad (3.2)$$
The expected utility in the absence of an informative signal is obtained by setting $\mu = 1/2$:

$$\mathrm{E}V^{\mathrm{nosignal}} = \mathrm{E}V(\tfrac{1}{2}).$$

The following theorem expresses the fact that it is always beneficial for the principal to take additional information into account when formulating the contract. The more informative the signal $\sigma$ is, the higher is the principal's expected utility.

Theorem 3.1 The principal's expected utility function $\mathrm{E}V(\mu)$ is strictly convex and attains its minimum at $\mu = 1/2$. In particular, $\mathrm{E}V(\mu)$ is a strictly increasing function of $\mu$ for $1/2 \leq \mu \leq 1$.

Proof. We compute
$$f'(\mu) = -\nu\, \Delta\theta\, S_q^{-1}(h(\mu)) - (1 - \nu)\big[S(S_q^{-1}(h(\mu))) - \bar{\theta}\, S_q^{-1}(h(\mu))\big]$$
$$\quad - \nu \mu\, \Delta\theta\, \frac{d}{d\mu} S_q^{-1}(h(\mu)) + (1 - \nu)(1 - \mu)\big[S_q(S_q^{-1}(h(\mu))) - \bar{\theta}\big] \frac{d}{d\mu} S_q^{-1}(h(\mu)). \qquad (3.3)$$
The calculation
$$-\nu \mu\, \Delta\theta + (1 - \nu)(1 - \mu)\big[S_q(S_q^{-1}(h(\mu))) - \bar{\theta}\big] = -\nu \mu\, \Delta\theta + (1 - \nu)(1 - \mu)\,\frac{\nu \mu}{(1 - \nu)(1 - \mu)}\, \Delta\theta = 0$$
shows that the last two terms on the right-hand side of (3.3) cancel. Thus,
$$f'(\mu) = -\nu\, \Delta\theta\, S_q^{-1}(h(\mu)) - (1 - \nu)\big[S(S_q^{-1}(h(\mu))) - \bar{\theta}\, S_q^{-1}(h(\mu))\big].$$
Differentiating once more, we find
$$f''(\mu) = -\nu\, \Delta\theta\, \frac{d}{d\mu} S_q^{-1}(h(\mu)) - (1 - \nu)\big[S_q(S_q^{-1}(h(\mu))) - \bar{\theta}\big] \frac{d}{d\mu} S_q^{-1}(h(\mu))$$
$$= -\Big[\nu\, \Delta\theta + (1 - \nu)\,\frac{\nu \mu}{(1 - \nu)(1 - \mu)}\, \Delta\theta\Big] \frac{d}{d\mu} S_q^{-1}(h(\mu)) = -\frac{\nu\, \Delta\theta}{1 - \mu}\, \frac{d}{d\mu} S_q^{-1}(h(\mu)).$$
Using that
$$\frac{d}{d\mu} S_q^{-1}(h(\mu)) = \frac{h'(\mu)}{S_{qq}(S_q^{-1}(h(\mu)))} = \frac{1}{S_{qq}(S_q^{-1}(h(\mu)))}\, \frac{\nu\, \Delta\theta}{(1 - \nu)(1 - \mu)^2},$$
we obtain
$$f''(\mu) = -\frac{\nu^2 (\Delta\theta)^2}{(1 - \nu)(1 - \mu)^3}\, \frac{1}{S_{qq}(S_q^{-1}(h(\mu)))}.$$
Because $S_{qq} < 0$ by assumption, this implies that
$$f''(\mu) > 0, \qquad 0 < \mu < 1.$$
Hence,
$$(\mathrm{E}V)''(\mu) = f''(\mu) + f''(1 - \mu) > 0,$$
showing that $\mathrm{E}V(\mu)$ is indeed strictly convex. Moreover, because $(\mathrm{E}V)'(1/2) = f'(1/2) - f'(1/2) = 0$, $\mathrm{E}V(\mu)$ attains its minimum at $\mu = 1/2$. This completes the proof.

Remark 3.2 The conclusion of Theorem 3.1 is reminiscent of the conclusion of Holmström's sufficiency theorem (Holmström, 1979). The contexts of these theorems differ in that the timing and setup of the contracting process are different.

Example 3.3 Consider the special case of $S(q) = 2\sqrt{q}$. In this case,
$$S_q(q) = q^{-1/2}, \qquad S_q^{-1}(x) = x^{-2}, \qquad S_{qq}(q) = -\tfrac{1}{2}\, q^{-3/2}.$$
Moreover, $\mathrm{E}V(\mu)$ is given by (3.1)-(3.2) and
$$S_q^{-1}(h(\mu)) = \bar{q}_1^{SB} = \Big[\bar{\theta} + \frac{\nu \mu}{(1 - \nu)(1 - \mu)}\, \Delta\theta\Big]^{-2}, \qquad S_q^{-1}(\underline{\theta}) = \underline{q}^* = \underline{\theta}^{-2}.$$
Because
$$S_{qq}(S_q^{-1}(h(\mu))) = -\frac{1}{2}\Big[\bar{\theta} + \frac{\nu \mu}{(1 - \nu)(1 - \mu)}\, \Delta\theta\Big]^{3},$$
we find
$$f''(\mu) = \frac{2 \nu^2 (\Delta\theta)^2}{(1 - \nu)(1 - \mu)^3 \Big[\bar{\theta} + \dfrac{\nu \mu}{(1 - \nu)(1 - \mu)}\, \Delta\theta\Big]^{3}} > 0.$$
Hence, in accordance with Theorem 3.1, $(\mathrm{E}V)''(\mu) = f''(\mu) + f''(1 - \mu) > 0$. In Figure 1 the graph of $\mathrm{E}V(\mu)$ is shown for the following choices of the parameters:
$$\bar{\theta} = 1, \qquad \underline{\theta} = 0.5, \qquad \nu = 0.6. \qquad (3.4)$$

Figure 1. The graph of $\mathrm{E}V(\mu)$ in the case of $S(q) = 2\sqrt{q}$ and the parameter values given in (3.4).
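The behavior shown in Figure 1 can be reproduced numerically. The following Python sketch (an added illustration, not part of the original paper) evaluates $\mathrm{E}V(\mu) = f(\mu) + f(1-\mu)$ on a grid using the parameter values in (3.4) and checks that it is increasing on $[1/2, 1)$, as Theorem 3.1 requires.

```python
import numpy as np

# Parameters from (3.4) and the surplus S(q) = 2*sqrt(q) of Example 3.3.
theta_l, theta_h, nu = 0.5, 1.0, 0.6
d_theta = theta_h - theta_l

def h(mu):
    # h(mu) = theta_high + nu*mu/((1-nu)(1-mu)) * delta_theta
    return theta_h + nu * mu / ((1 - nu) * (1 - mu)) * d_theta

def f(mu):
    # f(mu) as in (3.2); for S(q) = 2*sqrt(q), S_q^{-1}(x) = x**(-2) and S(S_q^{-1}(x)) = 2/x
    q_star = theta_l ** -2                       # first-best output of the efficient type
    rent_free_surplus = nu / 2 * (2 * np.sqrt(q_star) - theta_l * q_star)
    qbar = h(mu) ** -2                           # distorted output of the inefficient type
    return (rent_free_surplus
            - nu * mu * d_theta * qbar
            + (1 - nu) * (1 - mu) * (2 / h(mu) - theta_h * qbar))

mu_grid = np.linspace(0.5, 0.99, 50)
EV = f(mu_grid) + f(1 - mu_grid)                 # equation (3.1)

# Theorem 3.1: EV is convex with its minimum at mu = 1/2, hence increasing on [1/2, 1).
assert np.all(np.diff(EV) > 0)
print(EV[0], EV[-1])                             # EV(1/2) is the no-signal benchmark
```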

4 The basic model with a costly informative signal

We saw in the preceding section that the principal always benefits from additional information when formulating the contract. Thus, if information is free, the principal will always choose to acquire maximal information. In a more realistic scenario, there is a cost associated with the information in the signal $\sigma$ (for example, the effort cost of the principal to obtain that information). In this section, we analyze the consequences of the information signal being costly.

Consider the model of Section 3 with an information signal of informativeness $\mu$, where $\mu$ ranges from $1/2$ (no additional information) to $1$ (full information), but suppose now that the information in the signal $\sigma$ is costly for the principal. More precisely, suppose the principal's utility has the form
$$V = S(q) - t - C(\mu),$$
where $C(\mu)$ is the cost of obtaining a signal of informativeness $\mu$. The principal's problem consists of solving the optimization problem (cf. (2.10))

$$\sup_{\substack{1/2 \leq \mu \leq 1 \\ \{(\underline{q}_j, \underline{t}_j), (\bar{q}_j, \bar{t}_j)\}}} \big\{ \nu \mu [S(\underline{q}_1) - \underline{t}_1] + (1 - \nu)(1 - \mu)[S(\bar{q}_1) - \bar{t}_1] + \nu (1 - \mu)[S(\underline{q}_2) - \underline{t}_2] + (1 - \nu)\mu [S(\bar{q}_2) - \bar{t}_2] - C(\mu) \big\}$$
subject to (2.3)-(2.6).

For fixed $\mu$, the solution is given by (2.7). The problem therefore reduces to maximizing the principal's expected utility

Figure 2. The graph of the function $C(\mu)$ in (4.2) for $c = 1$.

Figure 3. The graph of the principal's expected utility $\mathrm{E}V(\mu)$ given by (4.1) as a function of $\mu$ with $S(q) = 2\sqrt{q}$, the cost function $C$ given by (4.2), $c = 0.02$, and the parameter values given in (3.4).
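Because the remainder of Section 4 (including equation (4.1) and the cost function (4.2) referenced in Figures 2 and 3) is not reproduced in this excerpt, the following Python sketch only illustrates the kind of trade-off described here: it maximizes $\mathrm{E}V(\mu) - C(\mu)$ over $\mu$ using an assumed placeholder cost function. None of the functional forms or constants below should be read as the paper's own.

```python
import numpy as np

# Parameters (3.4) and S(q) = 2*sqrt(q), as in the sketch following Example 3.3.
theta_l, theta_h, nu = 0.5, 1.0, 0.6
d_theta = theta_h - theta_l
h = lambda mu: theta_h + nu * mu / ((1 - nu) * (1 - mu)) * d_theta
f = lambda mu: (nu / 2 * (1 / theta_l)                    # S(q*) - theta_l*q* = 1/theta_l here
                - nu * mu * d_theta * h(mu) ** -2
                + (1 - nu) * (1 - mu) * (2 / h(mu) - theta_h * h(mu) ** -2))
EV = lambda mu: f(mu) + f(1 - mu)                         # equation (3.1)

# Placeholder cost of informativeness: zero at mu = 1/2, increasing and convex on [1/2, 1).
# This functional form is an assumption for illustration only; it is NOT the paper's
# cost function (4.2), which is not reproduced in this excerpt.
C = lambda mu, c=0.02: c * (mu - 0.5) ** 2 / (1 - mu)

mu_grid = np.linspace(0.5, 0.99, 491)
net = EV(mu_grid) - C(mu_grid)
print(mu_grid[np.argmax(net)])   # with a costly signal the chosen informativeness is interior
```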
