Word Sense Induction using the SnS method

Academic year: 2021


(1)

Word Sense Induction using the SnS method

PhD thesis

Marek Kozłowski

(2)

Contents

1) Ambiguity

2) WSD vs WSI

3) WSI Applications

4) WSI related work

5) The SnS description

6) Exploration of senses by example

7) Qualitative evaluation using SemEval-2013 Task 11 [1]

8) Conclusions

(3)

Preface

The times of exponential growth of data:

● Mostly unstructured data

● Too vast to be analyzed by humans

● Currently used approaches are based on lexico-syntactic analysis of text → word occurrences

Two main flaws of the currently used approaches are:

● Inability to identify documents that use different wordings

● Lack of context-awareness, which leads to retrieving documents that are not pertinent to the user's needs

Knowing the actual meaning of a polysemous word can significantly improve the retrieval of relevant documents and the extraction of relevant information from texts.

(4)

Ambiguity

Ambiguity around us:

● More than 73% of words in English are polysemous [2]

● The average number of meanings per word is approximately 6

● Apple can denote a software company, personal computers, a fruit, or a river

● Plant can mean a botanical life form or an industrial building

● Bass may refer to low-frequency tones or a type of fish

Sample:

Suppose Robin and Joe are talking, and Joe states, "The bank on the left is solid, but the one on the right is crumbling." What are Robin and Joe talking about? Are they on Wall Street looking at the offices of two financial institutions, or are they floating down the Mississippi River looking for a place to land their canoe?

(5)

Foundations

Word sense theories

● There are many approaches to defining sense (referential, mentalist, behaviourist, and use theories)

● "...the meaning of a word is its use in the language" (Wittgenstein)

● The distributional hypothesis, introduced by Zellig Harris, can be summarized in a few words: "a word is characterized by the company it keeps"

The main assumption used by context-based WSD methods:

Semantically similar words tend to occur in similar contexts.

Sample:

wine: beer, white wine, red wine, Chardonnay, champagne, fruit, food, coffee, juice, Cabernet, cognac, vinegar, Pinot noir,
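This assumption can be illustrated with a minimal sketch (the sentences below are invented toy data, not the thesis corpus): each word is represented by a vector of its co-occurring words, and distributional similarity is the cosine of those vectors.

```python
from collections import Counter
from math import sqrt

# Toy corpus; a word's context = the other words in its sentence.
sentences = [
    "wine pairs well with food and cheese",
    "beer pairs well with food and pretzels",
    "chardonnay is a white wine grape",
]

def context_vector(word, sents):
    """Count words co-occurring with `word` across sentences."""
    vec = Counter()
    for s in sents:
        toks = s.split()
        if word in toks:
            vec.update(t for t in toks if t != word)
    return vec

def cosine(a, b):
    """Cosine similarity of two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

sim = cosine(context_vector("wine", sentences), context_vector("beer", sentences))
print(round(sim, 2))
```

On this toy data "wine" and "beer" come out similar because they share the context words "pairs", "well", "with", "food", "and".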

(6)

Basic concepts

Word Sense Discovery – an ambiguous concept

● Word sense disambiguation (WSD)

● the process of identifying the meaning of words in context

● the ability to identify the meaning of words in context in a computational manner

● an AI-complete problem, i.e. a problem whose difficulty is equivalent to solving the central problems of artificial intelligence

● Word sense induction (WSI)

● a subtask of unsupervised WSD

● the task of automatically identifying the senses of words in texts, without the need for handcrafted resources or manually annotated data

(7)

Applications

Possible applications of WSI

● Information retrieval, e.g. new types of Web search engines or local domain-oriented search engines

● Information extraction, e.g. acronym expansion, people disambiguation

● Question answering – the main strategy for question answering is to find documents that have the right content even if the same words are not used

● Machine translation – context is necessary to decide on the sense of a translated word

● Lexicographer-supporting tools

● Support tools for building ontologies – creating relations

(8)

Related work

The WSI approaches can be divided into the following groups:

● Clustering context vectors ([5], [6]) consists in grouping the contexts in which a given target word occurs

● Extended clustering techniques, such as LSA [7] and CBC [8]

● Bayesian methods [9] model the contexts of the ambiguous word as samples from a multinomial distribution over senses (characterized as distributions over words)

● Graph-based WSI approaches ([10], [11], [12]) represent each word co-occurring with the target word (within a context) as a vertex. Two vertices are connected by an edge if they co-occur in one or more contexts of the target word

● Frequent-termset-based algorithms exploit classical data mining methods (such as association rule mining) to induce senses ([14], [15])
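The graph-based family can be sketched minimally as follows; the contexts below are invented, and real systems such as HyperLex [10] use hub detection or more refined clustering rather than plain connected components.

```python
from itertools import combinations

# Invented contexts of the ambiguous target "apple" (target removed).
contexts = [
    {"iphone", "software", "company"},
    {"software", "company", "mac"},
    {"fruit", "tree", "juice"},
    {"tree", "juice", "orchard"},
]

# Build the co-occurrence graph: an edge joins two words that
# share at least one context of the target word.
edges = {}
for ctx in contexts:
    for u, v in combinations(sorted(ctx), 2):
        edges.setdefault(u, set()).add(v)
        edges.setdefault(v, set()).add(u)

def components(graph):
    """Connected components of the graph ~ candidate senses."""
    seen, comps = set(), []
    for start in graph:
        if start in seen:
            continue
        comp, stack = set(), [start]
        while stack:
            node = stack.pop()
            if node not in comp:
                comp.add(node)
                stack.extend(graph[node] - comp)
        seen |= comp
        comps.append(comp)
    return comps

senses = components(edges)
print(sorted(sorted(c) for c in senses))
```

On this toy graph the "company" words and the "fruit" words never co-occur, so they fall into two components, i.e. two induced senses.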

(9)

Methodology

The main assumptions:

● the granularity problem

● the contextual patterns representation

Concepts

(10)

Methodology

2) Contextual patterns (e.g. CP1)

3) Sense frames

(11)

Algorithm

Main Algorithm – steps performed to find senses for a query

Input: a set of documents (Wikipedia or another corpus), indexed by Lucene

● Using full-text search, all paragraphs of documents containing the given query are found

● Simple-context building stage → only alphanumeric noun phrases and proper names surrounding the given term are persisted as contexts

● Relevant-pattern building stage → closed frequent set mining on the contexts is performed to build the set of contextual patterns

● Sense frames are constructed from the contextual patterns (subset-superset relations)

● Senses are generated from clustered sense frames

The output for the end-user is a set of senses, with multi-level hierarchies and the documents matching them.
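The pattern-building step rests on closed frequent termset mining. A naive sketch on invented contexts (the actual implementation works over Lucene-indexed paragraphs and would use an efficient miner, not brute-force enumeration):

```python
from itertools import combinations

# Invented contexts for the query "java" (sets of surrounding noun phrases).
contexts = [
    {"programming", "language", "virtual machine"},
    {"programming", "language", "compiler"},
    {"island", "indonesia", "coffee"},
    {"island", "indonesia", "volcano"},
]
MIN_SUP = 2  # minimum number of contexts a pattern must appear in

def support(itemset, ctxs):
    """Number of contexts containing the itemset."""
    return sum(1 for c in ctxs if itemset <= c)

# Naive enumeration of frequent itemsets (fine for toy data only).
items = sorted(set().union(*contexts))
frequent = {}
for k in range(1, len(items) + 1):
    for combo in combinations(items, k):
        s = support(frozenset(combo), contexts)
        if s >= MIN_SUP:
            frequent[frozenset(combo)] = s

# Closed frequent itemsets: no proper superset has the same support.
closed = [
    (sorted(f), s) for f, s in frequent.items()
    if not any(f < g and s == frequent[g] for g in frequent)
]
print(sorted(closed))
```

The surviving closed sets, {"programming", "language"} and {"island", "indonesia"}, play the role of contextual patterns: compact termsets that each characterize one sense of the query.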

(12)

The SnS method

The SenseSearcher (SnS) is a word sense induction algorithm based on closed frequent sets and a multi-level sense representation. SnS performed better than methods using vector space modelling. Senses induced by SnS are more readable (more intuitive); they are also hierarchical, which gives them flexible granularity.

Key features of SnS are:

● the ability to find infrequent, dominated senses;

● the number of likely senses is determined by the content of the corpus; there is no fixed threshold forcing a constant number of retrieved senses;

(13)

Evaluation

First, we evaluate sense exploration in a large textual corpus. We will show in the experiments that SnS is able to build hierarchical sense representations and identify dominated, infrequent senses. SnS was evaluated using the Wikipedia corpus.

Second, we test SnS as a WSI-based web search result clustering algorithm. These experiments aimed at comparing SnS with other WSI algorithms within SemEval-2013 Task 11, "Word Sense Induction within an End-User Application". Following the SemEval standards, we evaluate SnS against the other systems by means of a broad range of quality/diversification measures.

(14)
(15)
(16)
(17)

Comparison with others

Some of the algorithms that were experimentally used as alternatives:

● LSA – Latent Semantic Analysis

● Co-occurring terms are mapped to the same dimensions; non-co-occurring terms are mapped to different dimensions

● A lower number of dimensions leads to generalizations over the simple frequency data

● LDA – Latent Dirichlet Allocation

● A probabilistic model of text generation

● Models each document using a mixture over K topics, which are in turn characterized as distributions over words

● SenseClusters

● Perl programs that allow a user to cluster similar contexts together using unsupervised, knowledge-lean methods

(18)

Comparison with LSI

● LSI models the meaning of words and documents by projecting them into a vector space of reduced dimensionality, obtained by applying singular value decomposition
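A minimal NumPy sketch of this projection, on an invented term-document matrix with two topical blocks (wine-related vs. bank-related documents):

```python
import numpy as np

# Toy term-document matrix (rows = terms, columns = documents).
terms = ["wine", "beer", "grape", "bank", "river", "money"]
A = np.array([
    [2, 1, 0, 0],   # wine
    [1, 2, 0, 0],   # beer
    [1, 1, 0, 0],   # grape
    [0, 0, 3, 1],   # bank
    [0, 0, 1, 2],   # river
    [0, 0, 1, 1],   # money
], dtype=float)

# SVD, keeping only the k largest singular values.
U, S, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
A_k = U[:, :k] @ np.diag(S[:k]) @ Vt[:k, :]  # rank-k approximation of A

# Term similarity in the reduced space: cosine of scaled rows of U.
term_vecs = U[:, :k] * S[:k]
def cos(i, j):
    a, b = term_vecs[i], term_vecs[j]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cos(0, 1))  # wine vs beer: close in the latent space
print(cos(0, 3))  # wine vs bank: far apart
```

With k = 2 the two latent dimensions align with the two topical blocks, so "wine" and "beer" become nearly identical while "wine" and "bank" stay near-orthogonal.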

(19)

WSI within End-user Application

Evaluation of WSI methods is difficult because there is no easy way to compare and rank various representations of senses. Evaluating WSI methods is actually a special case of the more general and difficult problem of evaluating clustering algorithms. In order to find more rigorous ways to compare the results of sense induction systems, Navigli and Vannella [1] organized SemEval-2013 Task 11.

The task is stated as follows:

given a target query, induction and disambiguation systems are requested to cluster and diversify the search results returned by a search engine for that query.

(20)

The SnS customization

In order to perform comparisons with the SemEval systems, we customized SnS by adding a result clustering and diversification phase. The clustering is performed in two phases: (1) simultaneously during sense induction, and (2) after sense discovery, by clustering the results that remained ungrouped.

Each sense frame has a main contextual pattern, so the snippets containing the main pattern are grouped into the corresponding result cluster. Non-grouped snippets are tested iteratively against each of the induced senses. The similarity measure is defined as the intersection cardinality between the snippet's and the sense cluster's bags of words. Within each cluster, the snippets are sorted using this measure. Clusters are sorted by the support of their sense frames.
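The assignment of a non-grouped snippet can be sketched as follows; the sense bags and the snippet are hypothetical illustration data, not output of the actual system.

```python
# Hypothetical induced sense clusters represented as bags of words.
senses = {
    "jaguar/car": {"car", "engine", "speed", "luxury", "vehicle"},
    "jaguar/animal": {"cat", "wild", "forest", "predator", "animal"},
}

def assign(snippet, senses):
    """Assign a snippet to the sense with the largest word overlap
    (intersection cardinality of the bags of words)."""
    words = set(snippet.lower().split())
    scores = {name: len(words & bag) for name, bag in senses.items()}
    best = max(scores, key=scores.get)
    return (best, scores[best]) if scores[best] > 0 else (None, 0)

label, score = assign("a wild cat that is a fearsome predator", senses)
print(label, score)
```

The returned score is the same intersection cardinality that is later used to sort snippets within each cluster.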

(21)

Compared systems

SemEval-2013 Task 11 participating systems

● HDP systems adopt a WSI methodology based on a non-parametric model using the Hierarchical Dirichlet Process. The systems are trained on extracts from the full text of the English Wikipedia.

● The Satty-approach system implements the idea of monotone submodular function optimization, using a greedy algorithm.

● UKP systems exploit graph-based WSI methods and external resources (Wikipedia or ukWaC).

● Duluth systems are based on second-order context clustering as provided in SenseClusters, a freely available open-source software package.

● The Rakesh system exploits external sense inventories to perform the disambiguation task. It employs the YAGO hierarchy and DBpedia to assign senses to the search results.

(22)

Scoring

Clustering quality measures

● Rand Index (RI)

● Adjusted Rand Index (ARI)

● Jaccard Index (JI)

● Precision

● Recall

● F-measure

Diversification quality measures

● S-recall@K

● S-precision@r
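Most of the clustering quality measures above derive from pair counts over the items being clustered. A sketch on invented labelings (this is not the SemEval scoring script, just the textbook definitions):

```python
from itertools import combinations

def pair_counts(pred, gold):
    """Count item pairs by agreement between two clusterings:
    a = together in both, b = together only in pred,
    c = together only in gold, d = apart in both."""
    a = b = c = d = 0
    for i, j in combinations(range(len(pred)), 2):
        same_p, same_g = pred[i] == pred[j], gold[i] == gold[j]
        if same_p and same_g:
            a += 1
        elif same_p:
            b += 1
        elif same_g:
            c += 1
        else:
            d += 1
    return a, b, c, d

pred = [0, 0, 1, 1, 1]  # system clustering of 5 snippets
gold = [0, 0, 0, 1, 1]  # gold-standard clustering
a, b, c, d = pair_counts(pred, gold)
ri = (a + d) / (a + b + c + d)   # Rand Index
ji = a / (a + b + c)             # Jaccard Index
precision = a / (a + b)
recall = a / (a + c)
f1 = 2 * precision * recall / (precision + recall)
print(ri, ji, f1)
```

The Adjusted Rand Index additionally corrects RI for chance agreement, which is why it is reported separately.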

(23)
(24)
(25)

Conclusions

Summary: SnS is a novel knowledge-poor WSI algorithm based on text mining approaches, namely closed frequent termsets. It uses a small or medium-size text corpus to identify senses. It converts simple contexts into relevant contextual patterns. Using these patterns, SnS builds hierarchical structures called sense frames. Discovered sense frames are usually independent senses, but sometimes (e.g. because of a too small corpus) they can point to the same sense. Finally, using clustering methods, the sense frames are grouped in order to find similar ones referring to the same main sense.

Applications: semantic search engines, machine translation, tools for lexicographers

(26)

References

[1] Navigli, R., Vannella, D.: SemEval-2013 Task 11: Word Sense Induction and Disambiguation within an End-User Application. In: Proc. 7th Int'l SemEval Workshop, The 2nd Joint Conf. on Lexical and Comp. Semantics (2013)

[2] Miller, G., Chodorow, M., Landes, S., Leacock, C., Thomas, R.: Using a semantic concordance for sense identification. In: Proceedings of the ARPA Human Language Technology Workshop, 240-243 (1994)

[3] Mihalcea, R., Moldovan, D.: Automatic Generation of a Coarse Grained WordNet. In: Proc. of NAACL Workshop on WordNet and Other Lexical Resources (2001)

[4] Navigli, R.: Word sense disambiguation: A survey. ACM Computing Surveys 41(2) (2009)

[5] Schutze, H.: Automatic word sense discrimination. Computational Linguistics - Special issue on word sense disambiguation, 24(1) (1998)

[6] Pedersen, T., Bruce, R.: Knowledge lean word sense disambiguation. In: Proceedings of the 15th National Conference on Artificial Intelligence (1998)

[7] Landauer, T., Dumais, S.: A solution to Plato's problem: The Latent Semantic Analysis theory of the acquisition, induction, and representation of knowledge. Psychology Review (1997)

[8] Pantel, P., Lin, D.: Discovering word senses from text. In: Proceedings of the 8th International Conference on Knowledge Discovery and Data Mining (2002)

[9] Brody, S., Lapata, M.: Bayesian word sense induction. In: Proceedings of EACL 2009 (2009)

(27)

References

[10] Veronis, J.: Hyperlex: lexical cartography for information retrieval. Computer Speech and Language (2004)

[11] Agirre, E., Soroa, A.: Ubc-as: A graph based unsupervised system for induction and classification. In: Proc. 4th Int'l Workshop on Semantic Evaluations (2007)

[12] Dorow, B., Widdows, D.: Discovering corpus-specific word senses. In: Proceedings of the 10th conference of the European chapter of the ACL (2003)

[13] Maedche, A., Staab, S.: Discovering conceptual relations from text. In: Proceedings of the 14th European Conference on Artificial Intelligence (2000)

[14] Rybinski, H., et al.: Discovering word meanings based on frequent termsets. In: Proc. 3rd Int'l Workshop on Mining Complex Data (2007)

[15] Nykiel, T., Rybinski, H.: Word Sense Discovery for Web Information Retrieval. In: Proc. 4th Int'l Workshop on Mining Complex Data (2008)
