
UNIVERSITY OF WARSAW
INSTITUTE OF PHILOSOPHY

Towards a representation-based theory of meaning

DOCTORAL DISSERTATION

Written under the supervision of prof. UW dr hab. Stanisław Krajewski

Piotr Wilkin

Warsaw, May 2012

Contents

Introduction

1 Cognitive representations
  1.1 Defining cognitive representations
  1.2 Sidenote on formalization
  1.3 Formalizing representations
    1.3.1 The effect of representing and the paradox of the elusive representation
    1.3.2 Representations formalized
    1.3.3 The definition of representations
  1.4 Philosophical discussion
    1.4.1 Adequacy
    1.4.2 Profoundness

2 Concepts, meaning and coordination
  2.1 From representations to concepts
  2.2 World-knowledge and coordination
  2.3 Representations abstracted away
  2.4 Concepts, representations and notions
  2.5 Tests introduced
  2.6 Concepts as tests
  2.7 Tests and language

3 Proper names and reference
  3.1 The mystery of proper names
  3.2 Cognitive world-maps
  3.3 The meaning of proper names
  3.4 The categorial content of names
  3.5 The solution to philosophical puzzles

4 Modalities and the cognitive structure
  4.1 The importance of cognitive structure
  4.2 Modalities and the philosophical tradition
  4.3 Analyzing possibility and necessity
  4.4 Types of modal constraints
  4.5 Modal semantics and truth conditions
  4.6 Conclusions

5 Mathematical objects and proofs
  5.1 The nature of mathematical objects
  5.2 Infinity and the grounding of axioms
  5.3 Proofs and proof theory
  5.4 Mathematical concepts
  5.5 Conclusions

6 The social aspects of language
  6.1 Sapir-Whorf's hypothesis and the linguistic reality
  6.2 Linguistic influences on cognition
  6.3 Language and attitudes
  6.4 Analyzing linguistic influence
  6.5 Conclusions

Ending notes and further research

Introduction

‘Yes, all his horses and all his men,’ Humpty Dumpty went on. ‘They’d pick me up again in a minute, THEY would! However, this conversation is going on a little too fast: let’s go back to the last remark but one.’

‘I’m afraid I can’t quite remember it,’ Alice said very politely. ‘In that case we start fresh,’ said Humpty Dumpty, ‘and it’s my turn to choose a subject.’

Lewis Carroll, Through the Looking Glass

Chasing a theory of meaning is a difficult task indeed. Many philosophers have spent their lives developing such a theory; others have spent their lives arguing that such a task is, in fact, impossible. Still others spent the first half of their lives on the former and the second half on the latter. Therefore, proposing a new approach to a theory of meaning seems a daunting task - especially considering over 100 years of advances in this very topic. The prudent approach would probably be to go down one of the many beaten paths. This is certainly a safe approach - even if you do not get far, you are still in good company. However, it does not seem to be the right one.

Most of the paths in the study of meaning were started quite some time ago - back in the early twentieth century, when logic was at its triumphant stage of development. Psychology - not so much. While the logical universe seemed to be unveiling all its secrets at the time, the land of the human psyche remained wrapped in shrouds of mystery, with the works of Freud on the one hand and the behaviorists on the other only setting the foundations for what would become one of the most rapidly developing sciences of the late twentieth century.


For a long time, we knew little to nothing about language acquisition. All we had were thought experiments - again, multiple philosophers used those to feed their theories of language and even to speak of language acquisition, but we had no empirical data to verify those theories. It was the emergence of developmental psychology as an important branch of psychological research that finally brought us new data.

As was mentioned before, creating a theory of meaning is a daunting task. Therefore, this is not a task we will perform in this work. Rather, we will try to create the foundations for such a theory. We will develop some definitions, propose a conceptual framework, then try to analyze some philosophically interesting cases with the use of those tools. The main goal is to propose a framework that, while internally consistent and applicable to multiple known philosophical conundrums, is also consistent with the psychological data we have.

We will start by attempting to properly define, formalize and discuss the concept of representations. The term has been used in the literature in many contexts, but we will use it in the narrow sense usually connected with mental or cognitive representations. Around this term we will then attempt to build a framework for talking about language. We will not treat language as an abstract formal system in its own right, but rather as an abstraction of our cognitive capability to represent reality and communicate those representations to others, as well as a means to coordinate our actions in the world. Then, we will propose a way to talk about concepts in language and about meanings as the contents of those concepts. Finally, we will attempt to examine a couple of interesting philosophical problems connected to language and see whether our approach can shed some new light on the debates.

The methodology used in our work is pragmatically-oriented conceptual analysis. The latter is the cornerstone methodology of analytic philosophy; the former points at its American flavor. The term conceptual analysis is a slight misnomer here since, while we will start with the analysis of certain concepts, most importantly the concepts of meaning, language and representation, we will need to perform quite a bit of synthesis to form a conceptual framework that encompasses those terms. The pragmatically-oriented part, on the other hand, is a key factor here - our main aim will be to provide a functional and consistent conceptual framework regarding meaning, not to capture the essence of philosophical issues, some of them millennia old. The quest for a good conceptual framework involving meaning is demanding enough as it is.

The claim that we will want to defend is that adding a cognitive level of description provides the means for properly analyzing the semantics of natural language - meaning that we will adopt a psychologistic attitude to some degree and argue that the mental states of language users (or, at least, some assumptions about the structure of those mental states) are in fact needed to explain various semantic aspects of language. To that end, we will explore the problem of proper names (together with a few philosophical puzzles connected with the topic), trying to show that it is possible to have a theory of names that is both causal and informative. Next, we will attempt to provide a proper analysis of modal terms, giving a unified semantics for various cases of necessity and providing credible truth conditions for natural-language uses of modalities while at the same time not endorsing a possible-worlds ontology. Afterwards we will discuss the status of mathematical objects, showing the possibility of unifying mathematical and non-mathematical induction and explaining the groundedness issue with axioms. Finally, we will talk about the social aspects of language, showing that with a proper framework, they can be analyzed in a bottom-up manner, without the need to adopt a holistic theory of the social world. We will argue that in all those areas, having a cognitive level of description allows us to propose a good theory, and that the very absence of this cognitive level is likely what made various problems so hard to solve earlier on.


Chapter 1

Cognitive representations

1.1 Defining cognitive representations

The concept of cognitive representation is quite troubling to philosophers and psychologists alike. First of all, two terms for representations are used: psychologists (especially of the cognitive variety) prefer the term "cognitive representation", while philosophers of mind prefer to use "mental representation". It is debatable whether the two terms are actually coreferring since, as I will try to show, there is little clarity as to what the meanings of the respective terms actually are.

The term "cognitive representation" appears in the psychological literature as early as the works of Piaget [Pia51]. For Piaget, who is considered the founding figure of modern developmental psychology, cognitive representations were a stepping stone in the process of the child's social development - they followed sensory-motor assimilation during the formation of more complex mental entities. In this sense, of the two components of the phrase "cognitive representation", more emphasis was placed on the former - the function of representation was considered primitive, and the successive, more complex forms of representing were studied.

Indeed, within contemporary developmental psychology the trend of focusing on the various types of representation - rather than representation itself - is quite strong. An explicit definition of representations is rarely given; for example, [Mar07], which deals largely with representations, states only that "a representation is an information state within the brain of the organism that contributes to adaptive behavior within a given environment". Instead of defining representations further, the work focuses on different aspects of representation. For enactivism and embodied cognition, representation is strongly tied to action, much in the spirit of Piaget's works.

On the other hand, the nativist tradition within psychology treats representations as the building blocks for a higher-order theory of concepts [Car09]. What is common to the different psychological traditions is that representation is treated as a basic concept - it is left undefined and used as a component in the theory. Again, Carey discusses various types of representations and representational systems, but the ground notion is left undefined.

One cannot even refer to a textbook approach to representations: in fact, a study of a cognitive psychology textbook ([Mar01]) shows the same methodological problem, with mentions of representations in fragments about perception, memory, imagination and concepts, a typology of representations and even the role of various theories of representation in the formation of concepts, but no definition of representations per se.

It might not be very surprising that the psychological literature does not provide a definition of representations: after all, one might consider providing proper definitions (especially of an ontological nature) to be the work of philosophers. However, the philosophical literature is also not very helpful here, for a multitude of other reasons.

First of all, it is hard to determine the relation between "cognitive representations" as understood by psychologists and "mental representations" as understood by philosophers. Psychologists use the term in relation to human cognitive processes, both of the linguistic and pre-linguistic category. On the other hand, the use of the term "mental representations" by philosophers is largely influenced by the Representational Theory of Mind, which, despite what the name suggests, is largely an abstract theory of cognitive systems, owing more to formal theories of language and the attempt to construct a theory of arbitrary systems implementing those formal languages than to low-level psychological theories of cognition. What might be symptomatic of the problem: among the multitude of bibliographical entries listed in the Stanford Encyclopedia of Philosophy under "mental representation" ([Pit08]), there is not one that refers to Piaget. The philosophical discussion has strong ties to philosophy of language and to texts on the philosophy-psychology interface, but has little to do with psychology itself.

The above would be a moot point if we could somehow use the philosophical discussion as a useful source of definitions, but unfortunately that proves not to be the case. For one, the pragmatic goals are different: we want to use representations in a theory of language as tied to overall cognition, while for example Fodor ([Fod75], [Fod08]) wants to create an abstract theory of a cognitive system that implements language as understood traditionally within philosophy of language. To underline the distinction - we want to change the philosophical picture of how language looks because we believe both that the picture faces serious internal difficulties and that psychological data undermines it from within, while Fodor and other proponents of the computational theory of mind want to construct an abstract model that saves the traditional, structured view of language shared by formal logic and analytic philosophy.

This statement might be surprising given the apparent strong ties of the computational theory of mind with artificial intelligence. Our objections might explain why we don't want to use existing formal approaches to symbol processing such as [New80], since they precisely lack the features we require (a connection to a wider cognitive framework), but those objections seem to fall short when faced with recent advances in the study of robotics. For example, Clark and Grush in [Cla99] seem to provide exactly what we want - a theory of representation within the framework of robotics, i.e. within the context of an acting individual. Why do we not want to use such an approach?

The answer is as follows: those approaches are still too narrow for our needs. In this respect, Newell's symbolic processing system from 1980 and the recent work by Clark and Grush show one quality of artificial intelligence - it still simulates only small parts of human cognition, and this scarcity of parts makes it insufficient for our needs. This is not an attempt to denigrate robotics in any way - the attempt to build an entity that possesses at least a fraction of the capabilities a human has is praiseworthy. Still, a theory of how such an entity functions is insufficient to explain how a human functions, given our claims (which will be further elaborated in the text) that language is largely obtained via social coordination mechanisms. Unless we create socially functioning robots (and a good formal framework for them), the study of robotics, or the study of mechanisms that abstract from the various functions robots are not intended to handle, will be simply insufficient for our needs.

This does not mean that we will not borrow various elements from both the psychological and the philosophical discussions about representations. However, instead of compiling a synthetic definition from the uses of the term in the philosophical and psychological literature (which might be both beyond the scope of this work and not very useful for our endeavors), we will instead try to construct a plausible definition for our needs (as stated in the introductory chapter, the plausibility of the definition will be judged based on its usefulness, as per pragmatist standards [Jam02]). To do that, we have to first decide on some background assumptions, which will be fundamental to our further theory. Those assumptions are:

• All humans possess a certain apparatus for processing input from the outside world - moreover, we can assume that at least some functions of that apparatus are shared among humankind.

• There is a common outside world that all of us process and that can be used to help define the contents of at least some of our mental structures.

• All humans possess a certain apparatus for processing internal input, both for probing their own mental states and processing internal feedback such as emotions. Unless proven otherwise, we will not assume that we are given direct access to our mental states, i.e. we will assume that introspection is as error-prone and indirect as perception.

• We are able to discern patterns in both external and internal input with a certain regularity, and structures are formed in our brain (and thus, in our mind, which is a functional abstraction of our brain) that are responsible for this discerning process.


• Our cognitive structures can be complex, that is, built upon simpler structures. This implies that there must also be basic cognitive structures, although producing their list is a job for cognitive and developmental psychology.

The first two assumptions are very natural - the first being a methodological assumption underlying the possibility of psychology as a science in general, the second a general anti-skeptical assumption of a pragmatic nature. It is worth mentioning here that we are well aware that skeptical arguments are far from being considered dismissed in the philosophical literature (see e.g. [Ung75]); however, the discussion of skeptical and anti-skeptical arguments is beyond the scope of this work. Therefore, we do indeed make the second claim as a full-fledged assumption rather than an established fact.

The third assumption is certainly more problematic. Admittedly, there are many philosophical positions that defend the view that introspection grants direct access to our mental states ([Sch10]). On the other hand, there is a growing amount of data, both empirical and theoretical, suggesting that such an approach to introspection is implausible (for one discussion of the objections, see [Sch08]). We believe that, in light of all the evidence, assuming that introspection is subject to similar criteria as our extraspective cognition is the rational assumption to make.

The fourth assumption accounts for the existence of an intuitive mechanism of induction as observed by Hume ([Hum48]), coupled with a very weak assumption about the inner workings of our brain.

Finally, the fifth assumption is again problematic, as, under a certain understanding, it seems to imply a nativist approach to human cognition. While this is not necessarily a bad thing, as the nativist approach to developmental data can result in some interesting theories about cognition (see e.g. [Car09], [Blo04]), we do not want this work to be considered as taking a stance in the nativism debate. Therefore, we will adopt an understanding of this assumption which does not imply that the basic cognitive structures are inborn - instead, we will settle on the weaker assumption that there are certain basic cognitive structures common to all (or almost all) humans, regardless of whether the source of this regularity lies in genetic or developmental factors.

The above list of assumptions allows us to produce the following preliminary definition of cognitive representations:

A cognitive representation is a structure in our mind which is responsible for consistently recognizing a pattern in our internal or external input.

Note that this definition cannot be deemed satisfactory from a theoretical perspective - there are too many terms here that have to be considered as basic, among them "structure", "consistently recognizing" and "pattern". Also, with this definition, the term "representation" is something of a misnomer, as the definition itself does not entail that a representation actually represents anything. Indeed, if we imagine a "brain-in-a-vat" type situation ([Put81]), where our brain is connected to machinery that simulates internal and external inputs, we will form cognitive representations, but not of any external or internal world (although one might argue that we will then have cognitive representations of the algorithms which the machinery uses to feed us the data). Further in this chapter, we will attempt to provide a more refined, proper definition of representations.

1.2 Sidenote on formalization

Throughout the text, I will be attempting to formalize the different notions that I introduce. During the formalization, I will be making a few assumptions, which I will now enumerate and justify.

First of all, I do not assume any specific formal system in which the formalizations are made (which might be considered a shortcut for saying I actually assume an arbitrary-order logic). This might be a surprising claim for many who are acquainted with the logical literature, but it is actually a common practice within mathematical texts. The reason for this decision is that restricting oneself to a specific framework limits expressivity and requires one to focus on the details of various constructions more than on the formalization itself, while the properties of the framework are important mainly when dealing with metamathematical properties such as the existence or length of proofs, or with properties of infinite objects, which we will usually not do here. Since I do not use the formalisms to talk about infinities, nor do I rely on any types of foundational arguments (of which the most common are probably the various diagonal-type arguments involving computability or cardinality), I do not feel the need to be very rigid in establishing a logical system.

I am fully aware of the objections raised by various philosophers (most notably Quine [Qui70]) to the effect that logical systems higher than first-order logic are inadequate as tools for describing the world. However, since my text is neither ontological nor logical in nature, responding to those claims is beyond the scope of my study, and the only answer I can offer in limited space is that Quine's criticism is far from universally considered valid and there are in fact serious ontological projects (one example being that of Edward Zalta [Zal97]) which freely utilize higher-order logic.

Now we will present some preferences with respect to the formalizations used in this study, which are mostly of a purely aesthetic nature. I usually assume that I work within a typed environment, i.e. an environment in which each object has a distinct type associated with it. I assume that higher-order function types can be freely constructed without any restrictions. Also, we will sometimes use lambda notation, with the typical semantics ([Bar85]) for beta reductions being:

(λx.M)N →_β M[x := N]

for example:

(λx.x + 6)5 →_β (x + 6)[x := 5] = 5 + 6

We will use lambda notation simply as a way of realizing functional abstraction - we will not rely on the computational properties of the lambda calculus; notably, we will try to avoid composing two lambda abstractions with each other without first demonstrating that this does not lead to an infinite computation.
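To make this concrete, here is a minimal runnable sketch in Haskell (the choice of language, and the naive substitution that ignores variable capture, are assumptions of this illustration, not part of the formal apparatus of the text); it shows beta reduction as substitution on a term structure:

    -- A toy lambda term with naive substitution; "naive" means we ignore
    -- variable capture, which is harmless for simple closed examples.
    data Term = Var String | Lam String Term | App Term Term deriving Show

    -- subst x n m implements M[x := N] from the beta rule above.
    subst :: String -> Term -> Term -> Term
    subst x n (Var y)   = if x == y then n else Var y
    subst x n (Lam y b) = if x == y then Lam y b else Lam y (subst x n b)
    subst x n (App f a) = App (subst x n f) (subst x n a)

    -- One beta-reduction step at the top of a term: (λx.M)N → M[x := N].
    beta :: Term -> Term
    beta (App (Lam x m) n) = subst x n m
    beta t                 = t

    main :: IO ()
    main = print (beta (App (Lam "x" (Var "x")) (Var "y")))  -- Var "y"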


1.3 Formalizing representations

First, a question that might be considered important: why start with formalization? Our answer is that formalization is an important dialectic process - when one formalizes a concept, the concept's bounds become clear, and thus new areas for analysis open up which might be omitted when using a purely verbal description. In other words, formalizations are quite unforgiving - they don't give the author the luxury of waving away certain problems with verbal tricks.

Therefore, this section deals not only with formalizing representations, but also with problems arising during the conceptual analysis of the concept of representation which the formalization outlines. As such, the chapter need not be interpreted as containing definitions only - in fact, much of its contents are devoted to building a toolset to be utilized in further research, not necessarily within the scope of this study.

To formalize representations, we first have to decide what the most important aspect of our formalization is. The apparent choice seems to be between analyzing representations in an internal or an external manner. An internal formalization would focus on the features of the representations themselves and their internal structure, while an external formalization would describe the role of representations in a wider system. Since, as we mentioned before, this is a philosophical text, we do not want to study the internal structure of representations; therefore, the external view must be preferred. More importantly, we want to analyze representations as involved in language acquisition and language use - we are interested in the functionality of representations. Therefore, with our formalizations we will focus on the functional side of representations, namely, what inputs they process and how the processing of those inputs affects our cognitive states.

We will borrow a page from the book of a semantic approach to natural language called dynamic semantics ([Gro91]), which in turn borrows heavily from the semantics of programming languages. In those semantic approaches, the basic theoretical object is not a static entity but a transformation: something that modifies a given state into another state (for a textbook approach to denotational semantics, see e.g. [Gor87]).

To put it formally, if we have a domain of states S, transformations are functions S → S. If we want to mimic the behavior of a single, isolated statement, we apply our transformation to a given initial state.

This approach has been used for analyzing natural language semantics because it provides an elegant way to account for the side effects of language - for example, using this approach, one can model the effect that successive sentences within a discourse have on the context that is set within that discourse (this is especially important for fictional discourse). The possibilities are not limited to modelling intralinguistic phenomena, however - one can also model the performative functions of language in this way. For example, imagine a negotiation session. For the purpose of modelling this session, the truth-conditional semantics of the sentences uttered might as well be irrelevant - what matters is how the steps modify the state of the negotiation. We will now present a token "negotiation semantics" to show how this type of framework functions.

First, let us define negotiation states (I will use the standard computer science notation for enumerating the possible members of an algebraic domain, with the various constructor types separated by a vertical bar, constructor functions or constants in lowercase and variables in uppercase):

State ::= fail | s_1(Num) + s_2(Num) | succ(Num)

where Num is a variable from the domain of numbers in which the offer is presented (representing, for example, dollars). In this notation, the negotiation is either in a failed state, in a state where the two sides stand on their respective offers, or in a success state where the two sides have negotiated a common amount.

Now, for the negotiation steps. A single step is defined as follows:

Step ::= Side [ Proposal Response ]
Side ::= s_1 | s_2
Proposal ::= ...
Response ::= ...

Since this is just a mock theory, we will not be providing a full list of proposals and responses; we'll focus on a few examples instead.

Let us assume for simplicity that during the negotiation process, the sides take turns taking the initiative: first, side A makes their proposal and side B responds, then side B makes a proposal and side A responds, and so on. So, a valid negotiating process is a sequence of negotiation steps in which, for any two successive steps s and t, side(s) ≠ side(t) (we assume that side(x) is a function that returns the side fragment from a negotiation step structure) and that ends in a step f = ? [ finish ? ] (we use ? to denote that the given part of a structure is irrelevant; this is just syntactic sugar for (∃s ∈ Side)(∃r ∈ Response) f = s [ finish r ]; also note that in this fragment, we will sometimes use bold font to separate variables from other syntactic objects - this is only a stylistic measure, employed due to potential clarity issues with the syntax).

Now, let us start with a simple negotiation. Since, as we remember, the whole negotiation is a process (a transformation of states), we need an initial state. Let's say the initial state is s_1(100) + s_2(200). Now, imagine the following negotiating process:

s_1 [ (yield 20) (concede 10) ] ; s_2 [ (yield 40) (accept) ] ; s_1 [ finish accept ]

How do we interpret this? When we start, side A wants to settle for 100 dollars, while side B wants 200. As a first step, side A yields by 20, that is, moves their offer by 20 towards the opponent's (to make it 120). The other side responds by conceding 10 from their offer (making it 190). Now it's the second side's turn to make a proposal: side B yields by 40 (making their offer 150), which side A accepts as a compromise. Then, side A ends the negotiation.

This description seems a plausible interpretation of the formal notation presented above, but no semantics has so far been proposed. To provide one, we have to describe how the given steps contribute to the final result. For simplicity of notation, we adopt a convention where s_2(y) + s_1(x) is a state description equivalent to s_1(x) + s_2(y), and o(x) is a function that returns 1 for x = 2 and 2 for x = 1. Now, for example, the semantics of yield + concede is the following:

⟦s_i [ (yield x) (concede y) ]⟧(s_i(n) + s_{o(i)}(m)) =
  s_i(n + x) + s_{o(i)}(m − y)   if n < m
  s_i(n − x) + s_{o(i)}(m + y)   otherwise

First, let us explain the notation, as it might be a bit confusing.

The main functor is the ⟦·⟧ functor, which takes as an argument the description of a negotiation step (the subsequent actions of the two sides, enclosed in single square brackets) together with a certain negotiation state, and returns a result state. Therefore, the (curried) type of this functor can be recognized as follows:

⟦·⟧ : (Side × (Step × Step)) → (State → State)

Note that the semantic meaning function might be a partial function - for example, the semantics of a concede operation in a fail or success state might not make sense at all.
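To make the rule concrete, here is a minimal runnable sketch of the yield/concede semantics in Haskell (the encoding - a State datatype and one function per operation - is an assumption of this illustration; the text's formalism does not prescribe it):

    -- States mirror the algebraic domain: fail | s_1(n) + s_2(m) | succ(n).
    data State = Fail | Offers Int Int | Succ Int deriving Show
    data Side  = S1 | S2 deriving (Eq, Show)

    -- ⟦s_i [ (yield x) (concede y) ]⟧: side i moves its offer by x towards
    -- the opponent's offer, and the opponent concedes by y.
    yieldConcede :: Side -> Int -> Int -> State -> State
    yieldConcede i x y (Offers n m) =
      let (mine, theirs)   = if i == S1 then (n, m) else (m, n)
          (mine', theirs') = if mine < theirs
                               then (mine + x, theirs - y)
                               else (mine - x, theirs + y)
      in if i == S1 then Offers mine' theirs' else Offers theirs' mine'
    yieldConcede _ _ _ s = s  -- partiality: treated here as a no-op

    main :: IO ()
    main = print (yieldConcede S1 20 10 (Offers 100 200))  -- Offers 120 190

Running the first step of the sample negotiation takes s_1(100) + s_2(200) to s_1(120) + s_2(190), exactly as in the informal walkthrough above.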

Here, the sides negotiate gradually. However, some negotiations contain "take-it-or-leave-it" offers. For example, a modification of the previous negotiation might look like this:

s_1 [ (yield 20) (concede 10) ] ; s_2 [ (threaten 0) (refuse) ] ; s_1 [ finish accept ]

What do the semantics of threaten look like?

⟦s_i [ (threaten x) r ]⟧(s_i(n) + s_{o(i)}(m)) =
  succ(n + x)   if n < m ∧ r = accept
  succ(n − x)   if n ≥ m ∧ r = accept
  fail          otherwise

(here, r is a variable containing the reaction of the other side: either accept or refuse)

Note that negotiations end with a finish statement and not whenever the result is success or failure. This is because, as anyone who has seen the progress of real negotiations knows, failure within a negotiating process need not be final - there might be reset statements:


⟦s_i [ (reset x) (reset y) ]⟧(fail) = s_i(x) + s_{o(i)}(y)

How do we add side effects to this story? For example, suppose that during the negotiations, the sides issue press releases (the 2011 NBA collective bargaining agreement negotiation process was a good example of such a negotiating process [Ash11]). How do we factor this into our semantics? The answer is: we need a broader state domain plus selective state transitions - operations that modify only parts of the state. An extreme case of this is that every process semantics can be considered a semantics for "world transformation", where various operations transform fragments of the world in some way. In fact, while this case seems extreme, it might be the only viable semantics for natural language with performative effects added (how would we delimit the states of a semantics that could analyze a sentence like "I declare war on Russia" as uttered by the rightful President of the United States?).

While this semantics seems to handle everything well, one thing that it struggles with is nonlinearity. Suppose that, during the negotiation process, one of the sides (say, side A) secretly obtains a legal document that entitles it, no matter what, to force an agreement on certain terms (say, the amount X). Now, side A might want to introduce a step as follows: if, during further negotiations, the end result is either a failure or a concession of more than 20 from our current proposal, use the legal leverage to enforce a deal of succ(X). How do we factor that into our semantics?

The problem is, there is no way we can know how the future negotiations will progress. What we could do is pass a marker in the state saying that such legal leverage is possessed by side A and force each instruction to respect it (in this case, it would probably be sufficient to amend the semantics of finish). However, this way the semantics for all instructions that might appear must be aware of this special marker and, in fact, of all such special markers that might occur. This would be a very cumbersome way to construct such semantics.

Fortunately, there is a solution, again taken from programming language semantics and adapted to natural language. That solution is continuation semantics ([Bar02]). Continuation semantics adopts the dialectic strategy of transforming the abovementioned problem into the solution: under this approach, instructions are no longer simple transformations; they are functions that take as arguments transformation continuations (which are themselves functions from transformations to transformations) and yield a transformation as a result. Why the name "continuation"? If we imagine a process that has multiple steps, one can divide it into the current step and the remainder. Now, this remainder can be considered a "transformation of transformations" - a method of getting from the initial transformation to the final transformation. In other words, this remainder describes how the process continues. The idea of continuation semantics is to reify this very continuation and pass it as an argument. The only caveat is that now we have to wait until the process ends to compute the semantics for the entire process - but remember that continuation semantics is supposed to remedy nonlinearity situations where we have to wait for the process to end anyway.

So, how would our semantics for this leverage operation look?

⟦s_i [ (leverage x) (pass) ]⟧(κ)(s) =
  succ(x)   if bad_i(κ(s), s)
  κ(s)      otherwise

where s = s_i(n) + s_{o(i)}(m), bad_i(s, t) is a predicate saying that result t is bad compared to result s for side i, and κ is the continuation variable.

Hence, the type of the ⟦·⟧ functor changes - it is now ⟦·⟧ : (Side × (Step × Step)) → (Cont → (State → State)), where Cont is the state continuation type State → State.

This semantics does exactly what we needed - if the further negotiations (as read from κ) do not yield an expected result, the end result of the negotiations is the leveraged result; otherwise, the end result is the continuation applied to s (in other words, the uninterrupted result of the further negotiation process, starting from state s).

How do we modify other statements to support continuations? The second branch of the semantics for leverage showcases the process: we apply the continuation to the non-continuized semantics. So, for example, one of our previous rules would be rewritten as follows:

⟦s_i [ (reset x) (reset y) ]⟧(κ)(fail) = κ(s_i(x) + s_{o(i)}(y))

How do we compute the final semantics? We apply the identity continuation to the end result. This way, we get the same semantics as for the non-continuized version, apart from the non-linear side effects that we wanted. Imagine we have a negotiating process with only one step and failure as the initial state. What we would get is the following:

⟦s_1 [ (reset 100) (reset 200) ]⟧(λx.x)(fail) = (λx.x)(s_1(100) + s_{o(1)}(200)) = s_1(100) + s_2(200)
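A minimal runnable Haskell sketch of the continuized rules may again help (the State encoding and the idea of passing the bad predicate as an explicit argument are assumptions of this illustration):

    type Cont = State -> State  -- the text's state continuation type

    data State = Fail | Offers Int Int | Succ Int deriving Show

    -- Continuized reset: resume the rest of the process (k) from the
    -- state obtained by resetting both offers.
    resetC :: Int -> Int -> Cont -> State -> State
    resetC x y k Fail = k (Offers x y)
    resetC _ _ k s    = k s  -- assumption: a no-op outside the fail state

    -- Continuized leverage: if letting the negotiation run (k s) yields
    -- a bad result, force a deal at amount x instead.
    leverageC :: Int -> (State -> State -> Bool) -> Cont -> State -> State
    leverageC x bad k s
      | bad (k s) s = Succ x   -- override the future outcome
      | otherwise   = k s      -- the uninterrupted result

    -- A toy bad predicate: any non-success counts as a bad outcome.
    badOutcome :: State -> State -> Bool
    badOutcome (Succ _) _ = False
    badOutcome _        _ = True

    -- The final semantics applies the identity continuation, as above:
    main :: IO ()
    main = do
      print (resetC 100 200 id Fail)                        -- Offers 100 200
      print (leverageC 150 badOutcome id (Offers 100 200))  -- Succ 150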

Now that we have roughly sketched the outlines of denotational and continuation semantics, we still have to answer the question: how does this apply to our analysis of mental representations? Does it, in fact, apply at all?

To answer that question, two key observations are needed:

• cognitive processing has side effects - besides recognizing some fragments of the world, it can alter the contents of our memory, change our emotional states or focus our attention on certain aspects of the world;

• there are cases where cognitive processing is non-linear - for example, we might want to perform some simulations internally; also, emotional input and/or attention shifts might influence further cognitive processing.

Those observations, together with our previously established focus on the functional, rather than structural, aspects of representations, tend to support a view of the semantics that (a) deals with state transformations and (b) is continuation-based.

However, now we have to deal with a categorial problem. With our "negotiation semantics", semantics was provided for a negotiating step. Under a typical ontological categorization, a step is an event. Meanwhile, a representation is an object, not an event. How do we provide semantics for representations if we just established that we want to construct a dynamic, event-based semantics?

The answer might seem surprising at first - we don't. At least, not directly. Recall that we defined a representation as "a structure in our mind which is responsible for consistently recognizing a pattern in our internal or external input". What this definition leaves open is what actions are performed once the recognition is completed. For example, recognizing objects can be part of a process of imagination, where we simulate the perceptual data normally associated with the given representation. Certainly, if we imagine a mountain, our representation of a mountain is used in the process, but it's a different use than the one where, upon seeing Mount Everest, we utter the sentence "This is a huge mountain!". We use the representation in yet another way when we try to reason about the possibility of mountains being made of glass.

Still, there is a common element present in my account of all the processes above - the representation being used. However, one might argue that I am now operating within a vicious circle here - I postulated the formalization of representations to help make the notion precise, but now I am claiming that I will instead provide semantics for cognitive processing and that representations are invariant components of some cognitive processes. It is hard to refute that claim without providing the semantics itself, which I will attempt in a moment, but before that can be started, one more background assumption has to be added here: the assumption of the unity of cognitive processes.

An explanation of the phrase is in order. In the account of the various cognitive processes involving mountains two paragraphs above, I claimed that there is a common element within our mind that is involved in those processes. It isn't a priori certain that this is indeed the case. After all, one part of the brain might be involved in touching parts of a mountain, a completely different one might be used in imagining mountains, and yet a third part might be involved in reasoning about mountains. One could argue that the fact that those are different brain fragments doesn't entail that they are different mind fragments, but to decide the other way would be question-begging. The only thing that we have to connect those three is the linguistic label "mountain" - but that only works when we have linguistic representations, not with cognitive representations in general. In fact, the claim that language is required to form coherent representations is not that preposterous at all.

However, empirical evidence suggests that we form coherent representations long before we attach any linguistic labels to them (in fact, our pre-linguistic understanding of semantics might be the key to explaining our acquisition of syntax, see [Blo99]), thus supporting the unity assumption.

Therefore, we will in fact assume that there is a single object in our mind responsible for all the processes involving mountains. To add a philosophical rationale to the empirical one - even if there were no single part of the brain responsible for the processing (and in fact, it does seem that cognitive processing is scattered among multiple parts of the brain for even the simplest cognitive processes), one can still postulate that, when considering the mind, which is a functional abstraction of the brain, we can unify those parts under a single entity (we would only have to tackle the claim that there are no cognitive representations without language, as without the unifying label, if there is no functional connection between the different processes involving mountains, there is no way to tie them within a single representation).


1.3.1 The effect of representing and the paradox of the elusive representation

In order to formalize representations in the manner described above, we must first answer an important question: what is the effect of the process of representing?

In the token semantics for negotiations above, the effect of a negotiating process was simple to describe: either a failed negotiation, a successful negotiation, or both sides keeping their respective positions. On the other hand, if, instead of representations by themselves, we consider the process of representing, what is the effect of such a process?

Note that the question we are asking is the theoretical one ("how do we delineate a representation process?"), not an empirical psychological one ("what mental actions are associated with representing?"). Since the problem might seem very abstract, here's a more down-to-earth example.

Suppose that John sees a car (imagine that John is standing in a parking lot in front of a red Cadillac). We want to hypothesize that a representation of a car was used somewhere along the way, or, to put it in other terms, that a car was represented at some point during John's stay in the parking lot.

However, what do we actually mean by the car being represented?

John supposedly has many things on his mind during his stay in the parking lot. He might be thinking about what plans he has for the evening, or how much fuel he needs for his own car; he might be thinking about what to write for his PhD thesis; he might be emotionally distraught by a bad day at work - all of those contribute to his mental processes for the given period. Moreover, his seeing the Cadillac might be accompanied by different actions: he might think how he himself would like to own a Cadillac, he might say "What a classy car!", he might contemplate the global warming problems caused by gas-guzzlers such as said Cadillac, or he might ignore the Cadillac altogether, passing it on the way to his own car. The question is: can we single out any pattern within the different cases outlined above that can be considered representing the Cadillac?

While we might be tempted to answer the question in a positive manner, we quickly run into the problem of being able to say what that pattern is. After all, even in an introspective account we can't really single out an action of representing an object. The closest we might come to such a description is when we say we're thinking about an object, but clearly there are instances where we aren't thinking about an object and are still representing it (mostly because "thinking about an object" is a partially exclusive action: when we're thinking about John loving to climb mountains, we're certainly thinking about John, but probably not thinking about a mountain - yet we're most likely using a representation of a mountain).

Here we face one of the key paradoxes of representations - there is no distinct action of representing. It is therefore no surprise that some accounts of human cognition, especially those that focus on distributed cognition, want to do away with representations altogether. However, we want to keep representations as a useful abstraction - not because of the action of representing, but because of the existence of objects in the world and our ability to distinguish those objects reliably.

Let me once more recapitulate the methodological paradox (which we will call the paradox of the elusive representation): the most useful account of representations, as with most human cognitive activities, is a transformational, state-based one, but the objects themselves (the representations) are best singled out due to objective, static criteria. Therefore, it is hard to both argue for representations and provide an account that makes their usefulness clear in one clean sweep, which might be the reason why a good account of representations has been eluding philosophers and psychologists alike.

To summarize: there is no action of representing, but there are representations, and those representations are best described in an action-based semantics - but it's not a semantics of representing, it's a semantics of cognition in which representations take part. Later on, we will try to provide a definition of representations within that semantics, but one has to remember that, due to the reasons mentioned above, it will not be an essential definition - more a criterion of what it means to possess a given representation.

1.3.2 Representations formalized

Now that we’re finished with the preliminaries, the final task awaits - for- malizing representations. We will start with the domain of states for the semantics. What we will have here is cognitive states, which represent the state the mind is in currently with respect to cognition. Since cognition is a very complex process, we will not be able to provide a complete descrip- tion of those states. Also, since there are aspects of our mental states which are not normally viewed as cognitive, but can nevertheless influence cogni- tion (for example emotions, as was outlined in the philosophical literature as early as Spinoza [Spi77]), we won’t be even able to restrict those states to purely cognitive notion (it seems impossible to even give a clear account of what “purely cognitive notions” are). Therefore, we will settle for a sketch of the domain, filling it in as new data is provided. Our key restriction here is consistency - although the states are bound to be complex, we want our descriptions to be consistent, which by itself is no small task and providing a consistent description of mental or cognitive states seems to be a reasonable goal on the way to providing a complete description.

Since we’ve already established we’re not providing a complete account

of mental states, let us start with an inclusive approach - what do we want in

our states in the first place? Certainly one thing that is needed is memory -

after all, in traditional psychological approaches, representations are a part of

memory. Speaking more abstractly, memory is our personal storage for more

permanent data (psychologists usually distinguish long-term and short-term

memory and only the first one is actually responsible for permanent storage,

but for our description here, we will not worry about this distinction too

much). Another important thing is attention - whatever our mental state

(28)

28 CHAPTER 1. COGNITIVE REPRESENTATIONS is, we want it to be intentional, in the sense that, unless we are sleeping, we are conscious and focused on at least one specific topic. Next, we want plans or goals and attitudes, which are the drives for all our activity, including purely cognitive activity. Finally, we want sensory input, which is the data our cognition operates upon. In our description, the term “sensory input” will be used very broadly, as we will also include data that comes from introspection (considered “internal” sensory input) and simulation (all sorts of internal processes that we evoke to provide us with first-order sensory data, for example such processes as imagination or empathy).

The other big question is how we describe the respective elements of the state. For memory, the answer seems relatively simple - arbitrarily typed objects hidden behind labels. Note that we do not postulate here that the actual mechanism of memory recall works on labels of any sort. The labels used here are purely metalevel ones - the label itself is simply an abstraction for whatever memory recall mechanism is used to select the proper memory fragment; a memory address, if we may use such a computational analogy (one important distinction here is that the labels are certainly not linguistic, even if they look so). For attention, we basically want an object (either an external one such as a mountain or an internal one such as an emotional state). For plans or goals, we want a certain state-of-the-world (again, of the external world or an internal world, such as 50,000 USD in my possession or me being happy). Finally, for sensory input we want raw data (by raw data we mean something that is in a "binary", unstructured format, similar to how a computer processes raw chunks of bytes); however, due to the difficulty of describing raw data, we will sometimes describe it in a structured form (one has to remember, though, that, similar to the labels on memory, those structural descriptions are metalevel in nature and do not intend to describe any sort of structure on the object level). The objects mentioned here are all easily typable on the formal level, short of memory, which has to be represented either as an indexed product or a (partial) function from labels to variably typed objects (in other words, a dependently typed function). Also, our description of memory inside various states can, for practical reasons, never be full, so instead we will only provide relevant fragments of memory in the semantics, noting changes and restrictions on the full memory whenever applicable (for example, to describe how someone corrects an erroneous representation, one needs to assume that this person already has a certain representation in her memory and that it's different from the one intended).

Finally, and this is certainly a philosophical decision that goes beyond purely formal concerns, at least some representations have to be externalist in the sense that their existence depends on the existence of objects external to the representation itself (and sometimes on the existence of objects external to the mind itself, although that need not be the case for e.g. representations of emotions). Moreover, that externalism will be represented in the formal layer, as some parts of the notation will reference objects within an assumed real world (for the purposes of this text, we will use a rich "hunter-gatherer ontology", assuming objects exist when they are needed for theoretical reasons, although we will only apply this process to objects that are beyond doubt physical in nature - chairs are in, justice is out).

After establishing all the abovementioned conditions, it's time to move on to the formalization (note that State here is not the same State as the one used in our token semantics in the previous section; it is the domain of mental states). We will not treat State as an algebraic type; instead, we will describe its structure by the use of accessor functions.

First, a couple of symbol introductions for the accessors themselves:

• A : State → P(Object) is a functor that takes a state and returns its attention focus (Object is a domain that captures objects of all types, similar to how the Object class behaves in object-oriented programming languages). We will here assume that attention is described externalistically, i.e. that the objects returned are the actual objects being focused on rather than parts of the mental state. This might lead to certain problems (what is the attention focus of hallucinations?), but the assumption of an external (in the wide sense mentioned above, in which emotions are also "external") world being the focus of our attention is needed in order to meaningfully talk about representations without falling into a vicious circle.

• M : State → Memory is a functor that takes a state and returns its associated memory (where Memory = Label → Object).

• I : InputType → State → Input is a functor that takes an input type and a state and returns the state's sensory input of the given type (auditory, visual etc.).

• G : State → P(Goal) is a functor that takes a state and returns its set of goals and attitudes (P is the standard powerset functor).
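For concreteness, the four accessors can be pictured as fields of a record (a minimal Haskell sketch; every concrete type below - Object, Label, Goal, InputType - is an illustrative assumption, since the text deliberately leaves those domains open):

    import qualified Data.Map as Map
    import qualified Data.Set as Set

    type Label  = String
    data Object = ExternalThing String | InnerState String
      deriving (Eq, Ord, Show)
    type Memory = Map.Map Label Object    -- Memory = Label -> Object (partial)
    data InputType = Visual | Auditory deriving (Eq, Ord, Show)
    type Input  = String                  -- stands in for raw, unstructured data
    type Goal   = String                  -- e.g. a want(...) proposition

    data MindState = MindState
      { attention :: Set.Set Object       -- A : State -> P(Object)
      , memory    :: Memory               -- M : State -> Memory
      , input     :: InputType -> Input   -- I, with its arguments flipped
      , goals     :: Set.Set Goal         -- G : State -> P(Goal)
      }

    main :: IO ()
    main = print (Set.toList (attention s))
      where s = MindState
                  { attention = Set.fromList [ExternalThing "wooden chair"]
                  , memory    = Map.fromList [("chair", ExternalThing "chair")]
                  , input     = const "<a room with a wooden chair>"
                  , goals     = Set.fromList ["want(I have a red Ferrari)"]
                  }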

Next, we want to decide how to formalize the respective components of the state. The Object domain is heterogeneous, so we won't be providing a single template for it; the same applies to Memory (as a domain that consists of functions from Labels to Objects). The two domains that deserve more attention are Input and Goals.

As we stated before, we will consider Input to contain raw data (we are well aware that the very question whether our perception yields raw data is a matter of philosophical and psychological debate; for a summary, see e.g. [Sie11]; our account will not depend on any specific stance in this debate other than the very weak assumption that our perceptual data is not pre-structured linguistically - the adjective "raw" here has to be treated as simply meaning "not processed"). Thus, we will treat it as an atomic type, although we will describe inputs in a structured manner (the description here is, as in the label cases above, metalevel in nature). To mark the fact that inputs are considered raw, we will enclose the descriptions in angle brackets, so a sample might look as follows:

I_visual(s) = ⟨a room with a wooden chair standing in the middle⟩

Therefore, a single goal will be a proposition containing such operators, and the target of the G function will be a set of such propositions, for example, a set of materialist goals would be:

G(s) = {want(I have a red Ferrari), want(I have 200,000 USD in my bank account)}

Due to the sheer size of mental states, we will usually not describe them entirely; instead, we will use the abovementioned functors to set some necessary and sufficient conditions on those states.

Let us move to the topic of representations. What would be a sufficient condition to say that somebody has a representation of a chair?

First of all, somebody must have a memory fragment that corresponds to the representation (or, one could even say, is the representation). Second, that fragment must be empirically adequate - they must be able to recognize chairs. Recognizing chairs is obviously a process - however, possessing a representation is a quality of states, not of processes. Nevertheless, we describe the constraint within a process-oriented semantics, as justified above in section 1.2.

Now, we also need a predicate that allows us to say that a certain object of the mental realm (in this case, a memory fragment) was involved in a certain mental process. We will use inv(p, s, o) for this, where p is the process, s is an input state and o is the object in question. The main premises for the existence of this predicate are again non-trivial when we remember that mental processes are basically state-transforming functions. The process can be very long, but a function in the traditional set-theoretical sense is just a set of ordered argument-value pairs, where the intermediate steps are lost.

In order to make the inner workings of the predicate more explicit, we will not be treating it as a simple predicate. Instead, we will use a more primitive notion of “involve” for states (invS(s, o), where s is a state and o is the object in question) as well as the notion of a process trace.

The trace of a process is a function from states to finite sequences (tuples) of primitive processes - we assume that there is a subset of the domain of mental processes that consists of primitive process parts that cannot be decomposed into more fine-grained steps. The function Tr(p, s) outputs a sequence ⟨p₁, ..., pₙ⟩ such that p = p₁ ◦ p₂ ◦ ... ◦ pₙ, that is, p is a composition of the sequence of primitive process parts (for example, a cognitive process of imagining a mountain might involve process parts that recall the proper representations from memory, create a mental image of a mountain, add properties to the perceived mental image, form judgements about the mountain and so on).

Note that the trace is a function of the input state because we do not assume the homogeneity of processes irrespective of the input - different process compositions may be responsible for processing different input states.

A derivative helper notion that we will need is that of a state trace. The state trace is simply a "motion capture" of a process in progress at a certain stage. If, for an input state s and process p, Tr(p, s) has n elements, then for k ≤ n, StTr(p, s, k) is the result of applying the traced sequence up to k steps (in other words, it is (p₁ ◦ ... ◦ pₖ)(s)).
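In the sketch from before, traces can be rendered as follows; Tr is a primitive of the formalization, so we only postulate its type, and we follow the convention above on which p₁ is the first step applied:

    -- Processes are state transformers; a trace decomposes a process
    -- into primitive steps, possibly differently for each input state.
    type Process = State -> State

    -- Tr(p, s): postulated as a primitive, not computed.
    tr :: Process -> State -> [Process]
    tr = undefined

    -- StTr(p, s, k): apply the first k traced steps to s, in order.
    stTr :: Process -> State -> Int -> State
    stTr p s k = foldl (\st step -> step st) s (take k (tr p s))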

Now, we are ready to explicitly define the inv predicate:

inv(p, o) =def ∀s ∃k (invS(StTr(p, s, k), o))

The definition states explicitly the intended semantics for inv - for a process to involve a certain object, it has to be the case that for every input state the process has to handle, there is a decomposition (trace) of that process that, at some point, has a state that involves that object.

Note that we are keeping one thing implicit here - the quantification domain for s. Right now we tacitly assume that s ranges over all input states - we will revisit that in a moment. For now, let us also note that we might need a predicate inv_D, where D is the restriction of the quantification domain on states.
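Continuing the sketch, inv and its domain-restricted variant inv_D collapse into one function that takes the quantification domain explicitly; representing that domain as a finite list is an obvious idealization, and invS is again postulated rather than defined:

    -- invS(s, o): the state s involves the mental object o; primitive.
    invS :: State -> Object -> Bool
    invS = undefined

    -- inv_D(p, o): for every input state s in the domain, some stage k
    -- of the trace of p on s yields a state involving o. Plain inv is
    -- invD applied to the domain of all input states.
    invD :: [State] -> Process -> Object -> Bool
    invD dom p o =
      all (\s -> any (\k -> invS (stTr p s k) o) [0 .. length (tr p s)]) dom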

1.3.3 The definition of representations

Now, we are ready to state our definition of representations: a memory fragment m is a representation of x if and only if, for all processes p, for the domain D of input states s such that x ∈ A(s), it is the case that inv_D(p, m). In less formal terms: m is a representation of x if and only if m is involved in all processes where, in the input state s, attention is focused on x.
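In the running sketch, the definition amounts to a domain restriction followed by a universal check over processes; allProcesses, allInputStates and the attention test inAttentionOf are placeholders for the non-enumerable real domains:

    -- Placeholders for the (non-enumerable) quantification domains.
    allProcesses :: [Process]
    allProcesses = undefined

    allInputStates :: [State]
    allInputStates = undefined

    inAttentionOf :: WorldObject -> State -> Bool   -- x ∈ A(s)
    inAttentionOf = undefined

    -- m is a representation of x iff m is involved in every process,
    -- relative to the states whose attention focus contains x.
    isRepresentationOf :: Object -> WorldObject -> Bool
    isRepresentationOf m x = all (\p -> invD domD p m) allProcesses
      where domD = filter (x `inAttentionOf`) allInputStates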

Note that we might be tempted to provide an alternative definition of representations, one that is linguistically connected - m is a representation for x if n is a name for x and m will be recalled from memory whenever attention is focused on a (spoken, written) phrase n. However, this definition would not allow us to state how we obtain representations in the first place. Moreover, it would not be possible to state the criteria for adequate and inadequate (or correct and erroneous) representations without a vicious circle. As the late Wittgenstein suggested ([Wit53], [Kri82]), correctness is a group matter - there is no correctness outside a society that enforces convention. So, one can speak of a correct representation whenever it is a convention that n is a name for x and one's memory fragment evoked by the name n is the representation for x, but for such a criterion to be meaningful, one has to have a notion of representation that does not involve language.

1.4 Philosophical discussion

The definition provided in the section above might seem a bit arbitrary - after all, why select this and not some other definition? One reason for picking the definition above over the linguistically loaded one has already been provided when discussing our preliminary definition, and in this section I will focus on the reasoning that will show the definition to be adequate for our various theoretical purposes.

1.4.1 Adequacy

First of all, the definition has to be adequate - it has to actually describe what our common sense would label as representations. The problem with the definition, due to the universal quantifier at the start, is that it is an intersection-type construction. Intersection-type constructions are subject to emptiness issues - it might turn out there are simply no representations whatsoever of certain object types. Two things are done here to ensure this is not the case - one is a background assumption which we will now make clear, the other is a contextual relaxation of the definition that has already been hinted at in the previous section.

The background assumption is that of the regularity of mental processes. In order for us to postulate representations, we have to assume that it is not the case that a different portion of our mind is responsible for the same cognitive functions each time. If we assume said regularity (that is, that processes of a similar type are handled by the same modules), we can also safely assume that there are representations - it is almost a deductive consequence. I say almost because it is impossible to provide an adequate explication of what it means for processes to be "of a similar type" without creating a full-fledged typology of mental processes. For our needs, it is sufficient to state a weaker claim - that processes focusing attention on items of the same ontological kind are handled by the same parts of the mind each time.

The contextual relaxation involves the types of input states in which we are supposed to recognize objects. The definition of representations makes no claims about the difficulty of distinguishing an object within some sense data. In fact, we have skimmed over the topic of the relation between sensory input data (which is internalistic and not associated with any external content) and attention (which is externalistic and thus picks out objects in the world). In doing so, we have implicitly assumed that we are capable of perfect discrimination - whenever an object is present in the outside world and we are in its vicinity, we can select it from its surroundings. This is obviously not the case - if we are in a completely dark room with the proverbial black cat playing with a black ball of wool, we will not be able to tell the cat apart from the rest of the room (at least not until we step on the cat's tail).

We have two ways out of this - either rely on attention only being able to select objects we can reasonably discriminate (but this can lead to circularity problems again, as it might be up to our representations to say which objects we can discriminate, and if we restrict our attention a priori, we can never explain how new, more fine-grained representations can be forged), or we can relax the condition on the input states. Recall that the original definition restricted the domain of quantification to states in which the object represented is present in the attention set; we can restrict that domain further by requiring that the object be present in the attention set and that the sensory input in that state be sufficient to discriminate the object in question. Depending on this, we can have adequate representations of different quality, where the quality is denoted by how little we want to restrict the domain of states (perfect representations obviously involve no restrictions).
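In terms of the earlier sketch, this relaxation is simply a further filter on the quantification domain; discriminable is a postulated predicate saying that the sensory input of a state suffices to pick the object out:

    -- Postulated: the input in s suffices to discriminate x.
    discriminable :: WorldObject -> State -> Bool
    discriminable = undefined

    -- A quality-graded variant: shrink the domain to states where x is
    -- both attended to and discriminable.
    isAdequateRepOf :: Object -> WorldObject -> Bool
    isAdequateRepOf m x = all (\p -> invD domD p m) allProcesses
      where domD = filter (\s -> x `inAttentionOf` s && discriminable x s)
                          allInputStates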

One might also wonder why the definition constrains the object's presence in the attention focus only for the input state, instead of for all the states in the state trace of the process. The reason for this is the existence of parasitic mental processes and side effects. One example would be the process of someone imagining what it would be like to have cars being named chairs.

There are likely to be states during the execution of that process in which our attention is being focused on a specific car, but the representation evoked during that state is that of a chair (or vice versa). Various simulation states within certain thought experiments are also a likely candidate. That is why the definition constrains only the input state and leaves the intermediate states of the trace unconstrained.
