Fuzzy Argumentation for Trust


Ruben Stranders, Mathijs de Weerdt, and Cees Witteveen

Delft University of Technology

R.Stranders@gmail.com, {M.M.deWeerdt, C.Witteveen}@tudelft.nl

Abstract. In an open Multi-Agent System, the goals of agents acting on behalf of their owners often conflict with each other, and therefore such agents can be unreliable or deceitful. Consequently, an agent representing a human owner needs to reason about trusting (information or services provided by) other agents. Existing algorithms to perform such reasoning mainly focus on the immediate utility of a trusting decision, but do not provide an explanation of their actions to the user. This may hinder the acceptance of agent-based technologies in sensitive applications where users need to rely on their personal agents.

In this paper, we propose a new approach to trust in Multi-Agent Systems based on argumentation that aims to expose the rationale behind such trusting decisions. Our solution features a clear separation of opponent modeling and decision making. It uses possibilistic logic to model the behavior of opponents, and we propose an extension of the argumentation framework by Amgoud and Prade [1] to translate the fuzzy rules within these models into well-supported decisions.

1 Introduction

An open Multi-Agent System (MAS) is characterized by an agent's freedom to enter and exit the system as they please, and the lack of central regulation and control of behavior. In such a MAS, agents are often not only dependent upon each other, as for example in Computer-Supported Cooperative Work (CSCW) [2], web services [3], e-Business [4,5], and Human-Computer Interaction [6], but their goals may also easily be in conflict. As a consequence, agents in such a system are not reliable or trustworthy by default, and an agent needs to take into account the trustworthiness of other agents when planning how to satisfy their owner's demands.

Several algorithms have already been devised to confront the problem of estimating trustworthiness by capturing past experiences with other agents in one or two values that can be used to estimate future behavior [7]. These algorithms, however, primarily focus on improving the immediate success of an agent. Less emphasis has been laid on discovering patterns in the behavior of other agents, or, more challengingly, on their motives and incentives (or goals). Moreover, the rationale of the decision often eludes the user: in most approaches it is 'hidden' in a large amount of numerical data, or simply incomprehensible. At any rate, these [...]


[...] personal agent to buy a painting for his collection. When an interesting painting is offered, this agent starts by estimating the value of this painting before submitting any bids. The agent retrieves this information by searching some databases and asking a number of experts. To obtain a good estimate, it then assigns weights to the various received appraisals. A more reliable and trustworthy source gets a higher weight. When the user plans to buy a very valuable painting, he is not just interested in the final estimate of this agent, or in the retrieved estimates and their weights. When so much is at stake, he wants to know where these weights come from. Why is the weight for this famous expert so low? If the agent told him that this is because this expert is known to misrepresent his estimate in cases where he is interested in buying himself, and this may be such a case, would this agent not be so much more useful?

The lack of such explanations can severely hamper the acceptance of agent-based technology, especially in areas where users rely on agents to perform sensitive tasks. Without these explanations, the user needs to have almost blind faith in his agent's ability to trust other agents. We believe that the state of the art in dealing with trust in Multi-Agent Systems has not sufficiently addressed this issue. Therefore, in our research, we explore the requirements for a new approach to trust in Multi-Agent Systems that lays more emphasis on the rationale of trusting decisions, and in this paper we work towards a proof-of-concept of such an approach.

This goal gives rise to the following conditions for such an approach: (i) a personal agent should be able to explain why certain decisions were made, and why alternatives were discarded, (ii) it should formulate these explanations in terms of the perceived behavior of other agents, and (iii) it should present a logical (symbolic) reasoning supporting its decisions by this observed behavior.

Each (personal) agent therefore has a knowledge base, capturing the behavior of other agents in rules, a set of actions or decisions it can make, and some goals to attain, which are given by the human owner. Due to the uncertainty, ambiguity, and incompleteness of information regarding trust in Multi-Agent Systems, this setting gives rise to some specific requirements of the opponent model an agent should be able to build:

1. The model should be able to represent inherently uncertain, incomplete and ambiguous knowledge about other agents, and
2. it should support an argumentation framework capable of making decisions and explaining them. This implies that it should be composed of logical rules.

Moreover, we have a strong preference for a model that is commonly used, to ensure the existence of sufficiently tested and accepted induction algorithms. We put forward such a model in Section 2, where the core idea of our approach is presented: a unique combination of a fuzzy rule opponent modeling technique and a solid framework for argumentation applied to the process of making trust [...]


[...] able to a given situation. In Section 3 we show the results of applying this model within the context of an art appraisal domain, as described in the Agent Reputation and Trust (ART) testbed [8]. The final section summarizes the benefits of an argumentation-based approach to explaining trusting decisions, discusses related work, and gives some interesting ways of extending the ideas given in this paper.

2 An architecture for fuzzy argumentation

The goal of the approach in this paper is to represent the uncertain knowledge about other agents using logical rules, and to use this knowledge to derive not only good decisions, but also an argument to support those decisions. In this section we describe the global architecture of our approach, the formal argumentation framework for making the decisions, and the opponent modeling algorithm we used in our proof of concept.

2.1 Architecture

The two main components of our framework are opponent modeling and decision making. The opponent modeling component is responsible for modeling the behavior of other agents, based on past experiences with these agents. These past experiences are stored in a transaction database. Data from the transaction database is used to identify behavioral patterns. This is done by applying a data mining algorithm. Together, the behavioral patterns form an opponent model: a description of how an opponent reacts in different situations.

The decision making component is responsible for making the actual decisions. It uses the opponent models to predict the outcomes of each available action. Using these outcomes, and the knowledge acquired by the opponent model, arguments are constructed to support (or reject) the action. These arguments explicitly refer to the outcomes in terms of the agent's goals. The more favorable the predicted outcomes are in terms of these goals, the greater the strength of the argument supporting the action. Based on this, the generated arguments can be paraphrased as: when I take decision $d$ to execute that action, the model that I have of the behavior of the other agent predicts a certain outcome, which conflicts with/attains some positive goals. Decision $d$ is therefore desirable/undesirable. Figure 1 shows the relation between the opponent modeling and the decision making component.

The final step in decision making is determining the most appropriate action and executing it. The action that is supported by the argument with the highest strength is the one that is the most prudent. When this action has been executed, the actual outcomes are observed and recorded in the transaction database.

Fig. 1. The architecture for the Modeling Agent. (The figure shows the two components: Opponent Modeling, where rule induction over the transaction database produces a model of the behavior of each agent $A_1, \ldots, A_n$, and Decision Making, where argumentation over the goals and available actions selects the action(s) pertaining to each agent.)

The symbolic method of reasoning needed in our approach operates on a different level than the simple numerical data that is observed from the environment. Moreover, we need to reason about the inherent vagueness and ambiguity of information in a trust domain. Fuzzy (possibilistic) logic [9] provides us with a way to tackle this modeling problem, because it provides a natural way of translating back and forth between logic rules on the one hand, and uncertain data on the other hand.

Several different algorithms exist to infer fuzzy rules from numerical data. Together, these rules form a fuzzy rule base that approximates the data. Many learning algorithms also assign a measure of confidence to each rule in the rule base. Usually, this measure is (inversely) proportional to the error the rule makes with respect to the past transactions.
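To make the shape of such a rule base concrete, the following minimal Python sketch shows how one fuzzy rule with a confidence value could be represented and evaluated. It is an illustration, not the paper's implementation: the triangular membership functions, the value ranges, and the dictionary representation are all assumptions.

```python
# Illustrative sketch of a fuzzy rule with a confidence value (assumed representation).

def triangular(a, b, c):
    """Return a triangular membership function peaking at b on the interval [a, c]."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        if x <= b:
            return (x - a) / (b - a)
        return (c - x) / (c - b)
    return mu

# Hypothetical fuzzy sets over the [0, 1] domains of 'certainty' and 'appraisal error'.
c1_low = triangular(-0.25, 0.0, 0.25)     # "certainty is c_1 (low)"
ae5_high = triangular(0.75, 1.0, 1.25)    # "appraisal-error is ae_5 (high)"

# A fuzzy rule with a confidence value, e.g. learned from past transactions.
rule = {
    "condition": c1_low,      # membership function of the antecedent
    "conclusion": ae5_high,   # membership function of the consequent
    "confidence": 0.0083,     # inversely related to the rule's prediction error
}

# Match strength of the rule in the current environment state (observed certainty 0.1).
observed_certainty = 0.1
match_strength = rule["condition"](observed_certainty)
print(f"match strength v(k) = {match_strength:.2f}, confidence rho = {rule['confidence']}")
```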

2.2 Argumentation

To carefully weigh the pros and cons of each decision under consideration, and to select the decision that is most likely to have acceptable consequences, we used a framework for argumentation. We consider the work by Amgoud and Prade [1,10] to be a good point of departure for such an argumentation framework. It supports reasoning under uncertainty with fuzzy logic. This framework uses the agent's knowledge base $K$, a set of its goals $G$, and a set of possible decisions (or actions) $D$.

Definition 1. An argument $A$ in favor of a decision $d$ is a triple $A = \langle S, C, d \rangle$, where:

– $S$ is the support of the argument. The support of the argument contains the knowledge from the agent's knowledge base $K$ used to predict the consequences of decision $d$.
– $C$ are the consequences of the argument. These consequences are goals reached by decision $d$, and form a subset of the goal base $G$.
– $d$ is the conclusion of the argument, and is a member of the set of all available decisions $D$. Decision $d$ is recommended by argument $A$.

Moreover, $S \cup \{d\}$ should be consistent, $S \cup \{d\} \vdash C$, $S$ should be minimal, and $C$ maximal among the sets satisfying the above conditions.

The set $\mathcal{A}$ gathers all the arguments which can be constructed from $\langle K, G, D \rangle$. The construction of these arguments is very straightforward: for each decision, the consequences are predicted using the knowledge base $K$. Next, the consequences are evaluated in terms of the agent's goals $G$. Finally, the arguments are ordered by their strength, and the decision supported by the strongest argument is selected.
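The construction loop can be sketched as follows. This is a rough illustration under assumed, simplified data structures (the rule and goal dictionaries, the averaging used as a stand-in for defuzzification, and the example numbers are not taken from the paper).

```python
# Sketch of the argument-construction loop (illustrative data structures, not the paper's code).

def build_arguments(decisions, knowledge_base, goals, state):
    """For each decision, collect matching rules (support), predict an outcome,
    and evaluate it against the goal base (consequences)."""
    arguments = []
    for d in decisions:
        support, predictions = [], []
        for rule in knowledge_base[d]:
            match = rule["match"](state)          # v_omega(k) in [0, 1]
            if match > 0.0:
                support.append((rule["name"], rule["confidence"], match))
                predictions.append(rule["predicted_error"])
        if not predictions:
            continue
        predicted_error = sum(predictions) / len(predictions)   # crude defuzzification
        consequences = [(g["name"], g["satisfaction"](predicted_error), g["preference"])
                        for g in goals]
        arguments.append({"decision": d, "support": support, "consequences": consequences})
    return arguments

# Toy usage: one rule per decision and the goal "appraisal error is acceptable".
kb = {
    "d_Honest":     [{"name": "r2", "confidence": 0.0083, "predicted_error": 0.75,
                      "match": lambda s: 1.0}],
    "d_Reciprocal": [{"name": "r5", "confidence": 0.0088, "predicted_error": 0.25,
                      "match": lambda s: 0.5}],
}
goals = [{"name": "g1", "preference": 1.0, "satisfaction": lambda err: 1.0 - err}]
print(build_arguments(["d_Honest", "d_Reciprocal"], kb, goals, state={}))
```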

This leaves open only the concept of an argument's strength. As in the original framework, we make a distinction between the Level and the Weight of an argument. The former refers to the confidence in the support $S$ of the argument, the latter to the importance of the goals in $C$. In the original framework the knowledge base $K$ consisted of elements $(k_i, \rho_i)$, where $k_i$ is a propositional formula, and $\rho_i$ can be thought of as the confidence the agent has in this rule or fact. In our framework $k_i$ is a fuzzy rule. Consequently, given an environment state $\omega$, the valuation $v_\omega$ of a fuzzy rule or fact $k_i$ is not just 0 or 1 as in the original framework, but $0 \leq v_\omega(k) \leq 1$. This means that rules can be partially applicable to the current state of the environment. We call this the match strength of a rule.

We generalize the original framework to deal with this partial applicability of knowledge. The Level of an argument $A$ depends on the strength of the weakest rule $k_j$ used in the argument:

$$Level(A) = \rho_j \cdot v_\omega(k_j) \qquad (1)$$

where $j$ (the index of the weakest rule) is obtained using the following equation:

$$j = \arg\min_i \; \rho_i \, v_\omega(k_i) \quad \text{for } \{(k_i, \rho_i) \mid (k_i, \rho_i) \in S,\ v_\omega(k_i) \neq 0\} \qquad (2)$$

This redefinition ensures that:

1. For equal confidence levels $\rho$, the knowledge with the highest match strength determines the Level of the argument. The higher the match strength, the [...]
2. [...] determines the Level of the argument. This is consistent with the argumentation framework presented in [1].
3. In boundary cases where a rule is fully matched, or not matched at all (i.e. $v_\omega(k) \in \{0, 1\}$), Equation 2 reduces to the definition of Level in the original framework.
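A direct transcription of Equations 1 and 2 into code might look as follows. This is a minimal sketch under the assumption that a support is given as a list of (confidence, match strength) pairs; it is not the authors' implementation.

```python
# Sketch of Equations 1 and 2: the Level of an argument is set by its weakest applicable rule.

def level(support):
    """support: list of (rho_i, v_i) pairs with confidence rho_i and match strength v_i."""
    applicable = [(rho, v) for rho, v in support if v != 0.0]
    if not applicable:
        return 0.0
    rho_j, v_j = min(applicable, key=lambda rv: rv[0] * rv[1])   # weakest applicable rule
    return rho_j * v_j

# Example: the support of A_Honest in Section 3.1 contains one fully matching rule
# with confidence 0.00832.
print(level([(0.00832, 1.0)]))   # 0.00832
```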

The Weight of an argument $A$ depends on the goals that can be reached. The goals are given as tuples $(g_j, \lambda_j)$ in the set $G$. Like an element from the knowledge base, a goal $g_j$ is a fuzzy rule or fact. The attached value $0 \leq \lambda_j \leq 1$ denotes the preference of the goal. The original framework did not factor in the possibility of partially satisfied goals. To deal with this, we redefine Weight as follows:

$$Weight(A) = \sum_{(g_j, \lambda_j) \in G} v_\omega(g_j) \cdot \lambda_j \qquad (3)$$

This definition ensures that the weight of the argument increases with the utility of the expected consequences of the decision. More specifically, if a goal $g$ with preference $\lambda$ is 50% true, we expect the utility to increase with $\lambda/2$. We sum over all goals of the agent to obtain the weight of the argument.
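Equation 3 can be sketched as a straightforward sum over the goal base, assuming each goal is represented by its degree of satisfaction under the predicted outcome together with its preference (again an illustrative sketch, not the paper's code).

```python
# Sketch of Equation 3: the Weight of an argument sums preference-scaled goal satisfaction.

def weight(goal_satisfaction):
    """goal_satisfaction: list of (v_g, lam) pairs, where v_g is the degree to which
    goal g is satisfied by the predicted outcome and lam is the goal's preference."""
    return sum(v_g * lam for v_g, lam in goal_satisfaction)

# Example: a single goal with preference 1 that is 25% satisfied (cf. argument A_Honest below).
print(weight([(0.25, 1.0)]))   # 0.25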

Finally, we need to compare the Weight and Level of each argument to determine which argument is the most powerful. Put differently, a preference relation among arguments is required:

Definition 2. Let $A$ and $B$ be two arguments in $\mathcal{A}$. $A$ is preferred to $B$ iff $Level(A) \cdot Weight(A) \geq Level(B) \cdot Weight(B)$.
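Combining the two measures, the strength used in Definition 2 is the product Level × Weight. A small sketch of the comparison, with assumed argument dictionaries and the numbers from Section 3.1 used only as an example:

```python
# Sketch of Definition 2: an argument is preferred if its Level * Weight is at least as large.

def strength(arg):
    return arg["level"] * arg["weight"]

def preferred(a, b):
    """Return True if argument a is preferred to argument b."""
    return strength(a) >= strength(b)

a_honest = {"level": 0.00832, "weight": 0.25}       # strength ~0.00208
a_reciprocal = {"level": 0.00438, "weight": 0.75}   # strength ~0.00329
print(preferred(a_reciprocal, a_honest))            # True
```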

2.3 Opponent modeling

For our proof of concept, we need a fuzzy (possibilistic) rule learning algorithm to build a rule base. For this, we use a simple theory revision algorithm called Fuzzy Rule Learner (FURL) [11]. Taking observations from the environment as input, FURL is capable of creating a rule base of fuzzy rules. Rules can be more or less plausible, depending on the prediction error they cause on past observations. In fact, FURL uses a Hierarchical Prioritized Structure [12] consisting of layers of rules, where each layer consists of rules that are exceptions to rules in the layer below it. However, for our application we can think of the result just as a (flat) rule base $K$ with fuzzy rules.

Each rule in our case is an implication from an observation (condition) to an expected/learned effect (conclusion). For example, a fuzzy rule like "if certainty is $c_1$ (low) then appraisal-error is $ae_5$ (high)" should be interpreted as follows: if the value for certainty is a member of the fuzzy set $c_1$, which represents all low values for certainty, then we can expect that the value for the appraisal error (of this opponent) will be a member of the fuzzy set $ae_5$, which represents high values for appraisal error. Membership of a fuzzy set is not just true or false, but [...] the certainty with which this rule is believed to be true. In our framework, this measure of confidence is obtained by calculating the inverse of the error measure produced by FURL.

3 Evaluation

As we remarked in the introduction, we are interested in the way trust in MAS can benefit from argumentation for decisions. To have a preliminary impression about the contribution of our approach to that end, we would like to investigate (i) how an agent based on our approach behaves in a simple art appraisal environment where other agents with fixed decision tactics operate, and whether it is capable of explaining its decisions, and (ii) how this agent performs with respect to these other agents in a competitive setting.

The Agent Reputation and Trust (ART) testbed provides a simple environment to do our experiments [8]. ART is becoming the de facto standard for experimenting with trust algorithms and evaluating their performance. In this environment our personal agent is put in competition with a number of other agents to estimate the value of a painting. Agents can ask each other for their opinion, and may expect an answer and a claimed certainty of this answer. The agents need to combine the opinions of others to arrive at a final appraisal of the painting. Each agent has its own area of expertise for which it can give good opinions to others. All agents compete with each other for a number of rounds (appraising different paintings) in making the best estimate, so it may be worthwhile to try to feed the other agents the wrong information at some point(s). Knowing when and whom to trust is essential to be successful in this domain.

In the scenarios that follow, we study the decision making process of our agent while in competition with two other agents: Honest and Reciprocal. Honest is an agent that always honestly tells how certain it is that it can accurately appraise a painting. If it has low expertise, it gives a low certainty, and vice versa. The behavior of Reciprocal is somewhat more complicated: the behavior of its opponent influences its behavior towards that opponent. If the opponent has been dishonest by misrepresenting its expertise, Reciprocal responds in kind: it becomes dishonest as well. However, if Reciprocal's opponent is honest, Reciprocal behaves exactly the same as Honest.

In each of the following scenarios, our agent has interacted with both agents in 200 transactions. From the observations made during these 200 transactions, we used FURL to build an opponent model. The confidence in the truth of each rule is calculated using the error measures FURL associates with each of the rules in the knowledge base. The models our agent has learned after 200 transactions are presented in Tables 1 and 2. These models contain multiple fuzzy if-then rules describing the opponent's behavior.

Using the opponent model, the agent needs to make a decision about trusting: to assign more weight to the opponent that is the more skilled in appraising the painting.

Table 1. Model of Honest's behavior after 200 interactions

     Rule                                                                     Confidence
  1  if certainty is c_0 then appraisal-error is ae_5                         0.00381
  2  if certainty is c_1 then appraisal-error is ae_4                         0.00832
  3  if certainty is c_2 then appraisal-error is ae_3                         0.00408
  4  if certainty is c_3 then appraisal-error is ae_2                         0.00847
  5  if certainty is c_4 then appraisal-error is ae_1                         0.02008
  6  if certainty is c_5 then appraisal-error is ae_0                         0.00520

Table 2. Model of Reciprocal's behavior after 200 interactions

     Rule                                                                     Confidence
  1  if certainty is c_0 then appraisal-error is ae_7                         0.09824
  2  if certainty is c_1 then appraisal-error is ae_5                         0.01450
  3  if certainty is c_2 then appraisal-error is ae_5                         0.00601
  4  if certainty is c_3 then appraisal-error is ae_3                         0.00759
  5  if certainty is c_4 then appraisal-error is ae_3                         0.00876
  6  if certainty is c_5 then appraisal-error is ae_2                         0.01042
  7  if certainty is c_6 then appraisal-error is ae_2                         0.01403
  8  if certainty is c_1 and dishonesty is d_2 then appraisal-error is ae_6   0.03902
  9  if certainty is c_1 and dishonesty is d_3 then appraisal-error is ae_6   0.03702
 10  if certainty is c_1 and dishonesty is d_4 then appraisal-error is ae_6   0.04522
 11  if certainty is c_1 and dishonesty is d_5 then appraisal-error is ae_6   0.06350
 12  if certainty is c_1 and dishonesty is d_6 then appraisal-error is ae_6   0.05282
 13  if certainty is c_2 and dishonesty is d_0 then appraisal-error is ae_0   0.03136
 14  if certainty is c_2 and dishonesty is d_0 then appraisal-error is ae_1   0.03136
 15  if certainty is c_2 and dishonesty is d_0 then appraisal-error is ae_2   0.03136
 16  if certainty is c_2 and dishonesty is d_1 then appraisal-error is ae_0   0.02665
 17  if certainty is c_2 and dishonesty is d_1 then appraisal-error is ae_1   0.02665
 18  if certainty is c_2 and dishonesty is d_2 then appraisal-error is ae_0   0.02267
 19  if certainty is c_2 and dishonesty is d_3 then appraisal-error is ae_4   0.02633
 20  if certainty is c_3 and dishonesty is d_1 then appraisal-error is ae_1   0.06546
 21  if certainty is c_3 and dishonesty is d_2 then appraisal-error is ae_0   0.01612
 22  if certainty is c_3 and dishonesty is d_2 then appraisal-error is ae_1   0.01612
 23  if certainty is c_3 and dishonesty is d_3 then appraisal-error is ae_0   0.02104
 24  if certainty is c_3 and dishonesty is d_5 then appraisal-error is ae_7   0.02960
 25  if certainty is c_3 and dishonesty is d_6 then appraisal-error is ae_0   0.04653
 26  if certainty is c_3 and dishonesty is d_6 then appraisal-error is ae_5   0.04653
 27  if certainty is c_4 and dishonesty is d_3 then appraisal-error is ae_0   0.01640
 28  if certainty is c_4 and dishonesty is d_3 then appraisal-error is ae_1   0.01640
 29  if certainty is c_4 and dishonesty is d_4 then appraisal-error is ae_0   0.02558
 30  if certainty is c_5 and dishonesty is d_4 then appraisal-error is ae_0   0.02189
 31  if certainty is c_5 and dishonesty is d_4 then appraisal-error is ae_1   0.02189
 32  if certainty is c_6 and dishonesty is d_5 then appraisal-error is ae_1   0.01295

3.1 Scenario 1: Requester Role

In this scenario our agent consults Honest and Reciprocal to appraise its own painting. For each agent, it searches for arguments to support the decision to get an opinion from both Honest and Reciprocal. The strengths of these arguments are used to determine the delegation weight, i.e. the extent to which other agents' appraisals are used.

Goals. Because it is in our agent's interest to appraise the painting as accurately as possible, it has a single goal $g_1$ = (appraisal-error is acceptable, 1). In this case, acceptable is a fuzzy set that assesses the acceptability of the expected appraisal error:

$$acceptable(x) = 1 - x \qquad (4)$$

Put differently, goal $g_1$ states that our agent favors accurate appraisals from its opponents. So, in this particular transaction, our agent tries to find out from whom it can get the most accurate appraisal.
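With the acceptable membership function of Equation 4, a defuzzified appraisal-error prediction maps directly to a degree of goal satisfaction. A small illustrative computation (the value 0.75 anticipates the defuzzified prediction for Honest discussed below):

```python
# Sketch: goal satisfaction of g_1 under Equation 4.
def acceptable(x):
    return 1.0 - x

predicted_appraisal_error = 0.75   # defuzzified output of the opponent model (see below)
print(acceptable(predicted_appraisal_error))   # 0.25 -> goal g_1 is 25% satisfied
```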

Observations. Before deciding which opponent to trust, each opponent tells how certain it is of its own expertise. Honest asserts a certainty of $c_1$, while Reciprocal replies that it can appraise the painting with a certainty between $c_4$ and $c_5$. Also, in the previous round, our agent has been somewhat dishonest towards Reciprocal (the dishonesty was a member of the fuzzy set $d_3$).

Available Decisions. As said before, in this transaction, our agent can request an appraisal from two opponents. Consequently, it must consider two possible decisions: $d_{Honest}$, i.e. accept the appraisal from Honest, or $d_{Reciprocal}$, i.e. accept the appraisal from Reciprocal. Of course, these decisions are not mutually exclusive. For example, our agent can decide to weigh the appraisals from both agents equally, resulting in a final appraisal that is the average of both agents' appraisals.

On the one hand, we expect a poor appraisal from Honest, because its certainty is quite low. On the other hand, Reciprocal's certainty is very high, but our agent has to take its own dishonesty towards Reciprocal into account. The opponent model has to decide what the effect of this will be on Reciprocal's appraisal. Using these goals, observations, and decisions, our agent generates two arguments. The first argument $A_{Honest}$ supports decision $d_{Honest}$, the second argument $A_{Reciprocal}$ supports decision $d_{Reciprocal}$.

Decision for Honest. Remember from Definition 1 that an argument consists of three parts: support, consequences and conclusion. The support of the argument is a subset of the knowledge base of the agent, and consists of the knowledge used to predict the consequences of the decision under consideration. The support of $A_{Honest}$ consists of the parts of the opponent model of Honest relevant to this particular transaction. This is summarized in Table 3.

The consequences of $A_{Honest}$ relate to the desirability of the consequences of decision $d_{Honest}$ in terms of the agent's goals. For a certainty of $c_1$, a single rule in the opponent model fires, and predicts an appraisal error of $ae_4$. Given this prediction, we can determine the utility in terms of goal $g_1$ (see Table 4(a)). When we defuzzify $ae_4$ [1], we obtain a numerical value of 0.75. Using the membership function of acceptable from Equation 4, we determine that goal $g_1$ is only 25% satisfied. From the information in Tables 3 and 4(a), we can now calculate the Level and Weight of argument $A_{Honest}$ (see Equations 1 and 3).

[1] Defuzzification is a mapping from membership of one or more fuzzy sets to the [...]

Table 3. The support for Argument A_Honest

Knowledge                                           Match   Confidence
certainty is c_1                                    100%
if certainty is c_1 then appraisal-error is ae_4    100%    0.00832
appraisal-error is ae_4                             100%

Table 4. The consequences, and the Level and Weight calculations of argument A_Honest

(a) The consequences of Argument A_Honest

Goal   Match   Preference
g_1    0.25    1

(b) The Level and Weight calculations of Argument A_Honest

Property           Calculation    Result
Level(A_Honest)    1 × 0.00832    0.00832
Weight(A_Honest)   1 × 0.25       0.25

Table 4(b) lists the steps for this calculation. Our agent can now determine the strength of the argument for Honest: $0.00832 \times 0.25 = 0.00208$ (see Definition 2).
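Tracing Table 4(b) and the strength computation with the sketches from Section 2.2 reproduces the same numbers; the snippet below is only an arithmetic check, not part of the system.

```python
# Re-checking the A_Honest numbers from Tables 3 and 4.
level_honest = 1.0 * 0.00832          # match strength 100%, confidence 0.00832
weight_honest = 1.0 * 0.25            # goal preference 1, goal g_1 satisfied for 25%
print(level_honest * weight_honest)   # ~0.00208 -> strength of the argument for Honest
```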

Decision for Reciprocal. Next, our agent performs the same steps for Reciprocal. For determining the support and consequences of argument $A_{Reciprocal}$, we follow the same procedure as above. They are summarized in Tables 5 and 6(a), respectively. This time, four rules fire based on the information Reciprocal provided. We can see that the appraisal error is expected to be somewhere between $ae_0$ and $ae_3$. After defuzzifying the output of the opponent model and using the membership function of the acceptable set, we find that goal $g_1$ is satisfied for 75%. Table 6(b) shows the calculation of the Level and Weight of this argument. Based on these measures, we now calculate the strength of the argument: $0.00438 \times 0.75 = 0.00329$.

Table 5. The support for Argument A_Reciprocal

Knowledge                                                                Match   Confidence
certainty is c_4                                                         50%
certainty is c_5                                                         50%
dishonesty is d_3                                                        40%
if certainty is c_4 then appraisal-error is ae_3                         50%     0.00876
if certainty is c_5 then appraisal-error is ae_2                         50%     0.01042
if certainty is c_4 and dishonesty is d_3 then appraisal-error is ae_0   40%     0.01640
if certainty is c_4 and dishonesty is d_3 then appraisal-error is ae_1   40%     0.01640

Table 6. The consequences, and the Level and Weight calculations of argument A_Reciprocal

(a) The consequences of Argument A_Reciprocal

Goal   Match   Preference
g_1    0.75    1

(b) The Level and Weight calculations of Argument A_Reciprocal

Property               Calculation     Result
Level(A_Reciprocal)    0.5 × 0.00876   0.00438
Weight(A_Reciprocal)   1 × 0.75        0.75

Concluding. In the final step, our agent compares the strengths of both arguments. This is done in Table 7. When normalized, the strengths of the arguments provide the appraisal weights towards both agents. As we can see, Reciprocal determines 61% of the appraisal. Apparently, our agent favors a low appraisal error, and more or less takes the reduced confidence of the knowledge of Reciprocal's behavior for granted.

Table 7. The delegation weights for Honest and Reciprocal in scenario 1.

Agent        Level     Weight   Strength   Delegation weight
Honest       0.00832   0.25     0.00208    0.39
Reciprocal   0.00438   0.75     0.00329    0.61

In this scenario, we have seen that our agent had to make a trade-off between an agent whose behavior can be reliably predicted (Honest) and an agent for which a less reliable opponent model is available, but which probably provides a more accurate appraisal (Reciprocal). The strengths of the arguments supporting both decisions reflect this trade-off. In the end, the lower predicted appraisal error for Reciprocal proved to be decisive. Consequently, our agent chose to depend most on Reciprocal for appraising its painting.
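The delegation weights in Table 7 follow from normalizing the two argument strengths. The snippet below is only a check of that arithmetic:

```python
# Normalizing the argument strengths from Table 7 into delegation weights.
s_honest = 0.00832 * 0.25        # ~0.00208
s_reciprocal = 0.00438 * 0.75    # ~0.00329
total = s_honest + s_reciprocal
print(round(s_honest / total, 2), round(s_reciprocal / total, 2))   # 0.39 0.61
```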

3.2 Scenario 2: Provider Role

In the previous scenario, we focused on the appraisals received from our agent's opponents. Now, we reverse the roles: our agent provides its opponents with advice. To this end, we add a new goal, and apply the decision making procedure to the appraisals generated by our agent, instead of its opponents. The new goal, called $g_2$, essentially encourages our agent to be as deceptive as possible towards other agents (by overstating its certainty of correctly appraising a painting). This will, however, influence the quality of the appraisals returned by Reciprocal. So, we must find a balance between achieving goal $g_1$ and goal $g_2$. In other words, deceiving other agents must not negatively influence the accuracy of appraisals received from those agents too much.

Deciding the extent of the deception towards an agent is different from deciding delegation weights in scenario 1. For one, the value of the decision variable is now not only a result of the decision making procedure, but also influences a part of the opponent model. In scenario 1, the decision variable was the delegation weight towards each agent. Now, the decision variable is dishonesty, which is part of the opponent model. Second, the decision pertains to transactions in [...] future is not yet available. In particular, the certainty asserted by an opponent in a future transaction is important for predicting the appraisal error, but is not known beforehand. Using the opponent model without a value for certainty would cause none of the rules in the rule base to fire. In this case, the opponent model does not produce a prediction for the appraisal error, rendering it essentially useless.

Our solution to this problem is to generate a set of arguments for each decision for a number of hypothetical values of certainty.[2] This way, we effectively removed the certainty variable from the opponent model, leaving the relation between dishonesty and appraisal error. Next, the Level and Weight of each of these arguments is averaged and compared to obtain an aggregated Level and Weight. The recommended decision is then calculated in the normal fashion. Of course, deciding on the amount of deception towards Honest is trivial, because Honest does not respond to the behavior of its opponents.[3] Because of this, our agent is capable of being totally dishonest with this agent, without surrendering accuracy. In what follows, we therefore illustrate this process by calculating the best level of deception towards Reciprocal.
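A sketch of this aggregation step is given below. Everything except the idea of averaging Level and Weight over hypothetical certainty values is an assumption: the helper names, the toy opponent model linking dishonesty to appraisal error, and the constant Level (taken from Table 8) are illustrative only.

```python
# Sketch: aggregating Level and Weight over hypothetical certainty values (Section 3.2).

def aggregated_argument(dishonesty, evaluate, n=100):
    """Average Level and Weight of the arguments obtained for n hypothetical
    certainty values in [0, 1]; `evaluate` returns (level, weight) for one state."""
    levels, weights = [], []
    for i in range(n):
        certainty = i / (n - 1)
        level, weight = evaluate(certainty, dishonesty)
        levels.append(level)
        weights.append(weight)
    return sum(levels) / n, sum(weights) / n

# Toy evaluation: higher dishonesty is assumed to raise Reciprocal's appraisal error.
def toy_evaluate(certainty, dishonesty):
    appraisal_error = min(1.0, 0.6 + 0.3 * dishonesty - 0.1 * certainty)
    weight = (1.0 - appraisal_error) * 1.0 + dishonesty * 0.5   # goals g_1 and g_2
    return 1.49, weight                                          # constant Level as in Table 8

for d in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(d, aggregated_argument(d, toy_evaluate))
```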

Goals. In addition to goal $g_1$ from scenario 1, goal $g_2$ = (dishonesty is deceptive, 0.5) is included in the goal base of our agent. Deceptive is a fuzzy set in the domain of dishonesty. The higher the dishonesty, the more our agent misrepresents its expertise by overstating its certainty. Note that goal $g_2$ has a lower priority than goal $g_1$.

Observations. There are no relevant observations in this particular decision making process, because it pertains to transactions in the future.

Available Decisions. We consider five different decisions: $d_A$, i.e. dishonesty is 0.0, $d_B$, i.e. dishonesty is 0.25, ..., and $d_E$, i.e. dishonesty is 1.0. Table 8 shows the arguments generated for each decision. We see that the extent of our agent's dishonesty towards Reciprocal influences the average appraisal error. Of course, due to the nature of Reciprocal, this is expected, because it punishes dishonesty by increasing its own. Consequently, when increasing dishonesty while keeping the certainty equal, the appraisal error increases.

The interesting aspect of this scenario is the trade-off between goals $g_1$ and $g_2$. Our agent has to decide what it values most: an accurate appraisal from, or its deception towards, Reciprocal. With this particular goal base and its associated priorities, we conclude from Table 8 that our agent favors the latter. Decision $d_E$ is preferred based on the fact that it has the highest weight. We also determined the influence of the importance of goal $g_2$ on the preferred decision. Figure 2 shows the weights of the arguments supporting decisions $d_A$ to $d_E$ for different priorities of goal $g_2$. Of course, as this priority decreases, goal $g_1$ becomes relatively more important. When the priority of $g_2$ drops below 0.2, the scale suddenly tips in favor of decision $d_A$. Apparently, our agent then favors accurate appraisals from Reciprocal instead of deceiving it.

Table 8. [...] towards Reciprocal

Decision   Dishonesty   Appraisal Error   Goal Satisfaction g_1   Goal Satisfaction g_2   Level   Weight
d_1        0.00         0.63              0.37                    0.00                    1.49    0.37
d_2        0.25         0.75              0.25                    0.25                    1.49    0.38
d_3        0.50         0.85              0.15                    0.50                    1.49    0.40
d_4        0.75         0.87              0.13                    0.75                    1.49    0.51
d_5        1.00         0.85              0.15                    1.00                    1.49    0.65

Fig. 2. The priority of goal $g_2$ determines whether Reciprocal is treated dishonestly by our agent. The lower the priority of this goal, the less weight is assigned to arguments supporting high levels of dishonesty. Table 8 shows detailed results when the priority of $g_2$ is 0.5. (The plot shows the Weight of the arguments for decisions $d_1$ to $d_5$ against dishonesty, for priorities of $g_2$ of 0.5, 0.4, 0.3, and 0.2.)

[2] More specifically, we generated an argument for 100 equally spaced values of 'certainty' between 0 and 1.
[3] [...]

3.3 Competition in the ART Testbed

In the previous two scenarios, our agent made decisions in the scope of a single transaction. Of course we are also interested in the behavior of our agent during multiple transactions, and in showing that our approach does not only satisfy our primary requirement (i.e. being capable of explaining trusting decisions), but that it is also capable of competing against other agents.

An important performance criterion in ART is the market-share of the agents participating in the simulation. In ART, agents do not appraise paintings for themselves, but on behalf of their clients. If an agent appraises its paintings [...]

Fig. 3. Market-shares during ART Testbed simulation with three agents. (The plot shows the market-share of Our Agent, Honest, and Reciprocal over 200 transactions.)

To calculate these market-shares, we made an assumption about how Honest and Reciprocal calculate delegation weights towards their opponents. We decided that both Honest and Reciprocal use the asserted certainty as an indicator of the expected result. This means that they do not expect their opponents to lie. The certainties received from their opponents are therefore used to weigh their influence on the final appraisal.

Figure 3 shows the results of this simulation. Our agent performs best, followed by Reciprocal and Honest. Reciprocal beats Honest, because Honest is deceived by our agent, whereas Reciprocal persuades our agent to cooperate. In this particular simulation our agent beats both agents, because it does not blindly trust the asserted certainties from its opponents. Instead, it has built an opponent model that predicts the actual appraisal error based on a number of variables. For example, it can predict the appraisal error of Reciprocal based on the deception towards it in the previous round. That way, it is more capable of deciding whom to trust, giving it a strategic edge over its competitors.

4 Discussion

In this paper we showed how arguments can be based on fuzzy rules. This generalization of Amgoud and Prade's argumentation framework [1] is able to come up with a reasoning for each of the possible decisions. We showed how the confidence and match strength of the underlying rules, and the priority of the goals, influence the decisions of our agent. Combined with a fuzzy rule learner [...]

[...] or small vectors to represent trust. For example, in FIRE [13] the quality and the reliability of past transaction results are derived and used for future decision making. An application of Dempster-Shafer theory collects evidence of trustworthiness [14], and another approach using probabilistic reciprocity captures the utility provided to and received from an agent [15], or the probability that task delegation towards an agent is successful [16]. Because of the limited amount of information present in these models, much of the information gathered during interaction with an opponent is lost. Consequently, the decision models they support are quite limited.

An example where the model of trust is more elaborate can be found in the work by Castelfranchi et al. [6,17], where trust is decomposed into distinct beliefs. Such a more complex model would open up the possibility of implementing different intervention strategies, depending on the precise composition of trust, instead of just having a binary choice: delegation or non-delegation. However, in their approach the reasons why an agent is trusted are still not very clear. An owner of an agent that uses a so-called fuzzy cognitive map is confronted with a list of specific beliefs on parts of the model of the other agent, such as the other's competence, intentions, and reliability. It is not clear where these beliefs come from, and no method is given to learn such beliefs from past interactions. For this, we need to trace back the process that established a certain decomposition of trust for a specific agent. We believe that our approach forms a good basis to include such a more elaborate model of trust, but this may require a more advanced fuzzy rule learning algorithm.

Improving the opponent modeling algorithm is one of our goals for future work. The FURL algorithm we used in our approach has a number of limitations. Most importantly, FURL is incapable of detecting relatively complex behavior. It is not able to accurately model data sets with a large number of input variables, as can be seen from the extensive experiments in our technical report [18].

In contrast to the decision model of Castelfranchi et al., the modified doxastic logic for Belief, Information acquisition, and Trust (BIT) [19] is more capable of explaining why certain facts are believed. For example, using BIT, an agent could be able to present the rationale of the decision to trust another. In terms of our aim, this is very appealing. However, due to the inherently uncertain, vague and continuous nature of observations in a Multi-Agent System, it is not trivial to translate these to BIT. In this paper we showed how to make such a translation to fuzzy logic. Modal logic has no 'native' support for directly representing such observations, but possibly the ideas of our architecture can be reproduced in the context of modal logic.

As a final note, in the current work we have only used arguments in favor of a decision. The framework, however, also allows for contra-arguments, allowing for much more complex argumentation. Maybe even more interesting would be to add support for reputation in our approach. This would involve broadening our [...]

References

1. Amgoud, L., Prade, H.: Using arguments for making decisions: a possibilistic logic approach. In: AUAI '04: Proceedings of the 20th Conference on Uncertainty in Artificial Intelligence, AUAI Press (2004) 10-17
2. Barthès, J.A., Tacla, C.: Agent-supported portals and knowledge management in complex R&D projects. In: Sixth International Conference on CSCW in Design (CSCWD'01). (2001) 287-292
3. Maximilien, E., Singh, M.: Reputation and endorsement for web services. ACM SIGecom Exchanges (2002) 24-31
4. Resnick, P., Zeckhauser, R.: Trust among strangers in internet transactions: Empirical analysis of eBay's reputation system. In: Working Paper for the NBER Workshop on Empirical Studies of Electronic Commerce. (2000)
5. Guttman, R., Moukas, A., Maes, P.: Agent-mediated electronic commerce: a survey. The Knowledge Engineering Review (1998) 147-159
6. Castelfranchi, C., Falcone, R.: Social trust: A cognitive approach. In Castelfranchi, C., Tan, Y., eds.: Trust and Deception in Virtual Societies. Kluwer Academic Publishers (2001) 55-90
7. Dash, R.K., Ramchurn, S.D., Jennings, N.R.: Trust-based mechanism design. In: Proceedings of AAMAS 2004. (2004) 748-755
8. Fullam, K., Sabater, J., Barber, K.S.: Toward a testbed for trust and reputation models. Trusting Agents for Trusting Electronic Societies (2005) 95-109
9. Zadeh, L.: Fuzzy sets. In: Fuzzy sets, fuzzy logic, and fuzzy systems: selected papers by Lotfi A. Zadeh. World Scientific Publishing Co., Inc., River Edge, NJ, USA (1996) 19-34
10. Amgoud, L., Prade, H.: Explaining qualitative decision under uncertainty by argumentation. In: Proceedings of the AAAI, AAAI Press (2006)
11. Rozich, R., Ioerger, T., Yager, R.: FURL - a theory revision approach to learning fuzzy rules. In: Proceedings of the IEEE International Conference on Fuzzy Systems. (2002) 791-796
12. Yager, R.R.: On the hierarchical structure for fuzzy modeling and control. IEEE Transactions on Fuzzy Systems 23 (1993) 1189-1197
13. Huynh, D., Jennings, N.R., Shadbolt, N.R.: Developing an integrated trust and reputation model for open multi-agent systems. In: Proceedings of the 7th International Workshop on Trust in Agent Societies. (2004) 62-77
14. Yu, B., Singh, M.P.: An evidential model of distributed reputation management. In: Proceedings of the First International Joint Conference on Autonomous Agents and Multiagent Systems, ACM Press (2002) 294-301
15. Sen, S., Dutta, P.S.: The evolution and stability of cooperative traits. In: Proceedings of the First International Joint Conference on Autonomous Agents and Multiagent Systems, ACM Press (2002) 1114-1120
16. Mui, L., Mohtashemi, M., Halberstadt, A.: Notions of reputation in multi-agent systems: A review. In: First International Conference on Autonomous Agents and MAS, Bologna, Italy (July 2002) 280-287
17. Falcone, R., Pezzulo, G., Castelfranchi, C.: A Fuzzy Approach to a Belief-Based Trust Computation. In: Trust, Reputation, and Security: Theories and Practice. Springer-Verlag (2003) 73-86
18. Stranders, R.: Argumentation based decision making for trust in multi-agent systems. Master's thesis, Delft University of Technology (2006)
19. Liau, C.J.: Belief, information acquisition, and trust in multi-agent systems: a modal logic formulation. Artificial Intelligence 149(1) (2003) 31-60
