I. Names for Actions

By far the greatest amount of research on lexical semantics in command languages has been done with names for actions. It is easy to find instances of commands whose names are cryptic or dangerously misleading (such as Unix's cat for displaying a file, and Tenex's[r]
COMMONSENSE METAPHYSICS AND LEXICAL SEMANTICS

Jerry R. Hobbs, William Croft, Todd Davies, Douglas Edwards, and Kenneth Laws
Artificial Intelligence Center, SRI International

1 Introduction

In the TACITUS project for using commonsense knowledge in the understanding of texts about mecha[r]
generating a target-language surface form.

CROSS-LINGUISTIC APPLICABILITY: PARAMETERIZATION OF THE TWO-LEVEL MODEL

Although issues concerning lexical semantics and aspect have been studied extensively, they have not been examined sufficiently in the context of machine translation.[r]
Lexical Semantics to Disambiguate Polysemous Phenomena of Japanese Adnominal Constituents

Hitoshi Isahara and Kyoko Kanzaki
Communications Research Laboratory
588-2 Iwaoka, Iwaoka-cho, Nishi-ku, Kobe, Hyogo, 651-2401, Japan
{isahara, kanzaki}@crl.go.jp

Abstract

We exploit and extend the[r]
only in its initial stages, and currently only a few, mostly small, corpora are available. Semantic annotation has predominantly concentrated on word senses, e.g. in the SENSEVAL initiative (Kilgarriff, 2001), a notable exception being the Prague Treebank (Hajičová, 1998). As a consequence, most re[r]
line text, which can be automatically part-of-speech tagged, assigned shallow syntactic structure by robust partial parsing systems, and morphologically analyzed, all without any prior lexical semantics. A possible disadvantage of surface cueing is that surface cues for a particular[r]
AUTOMATIC ACQUISITION OF THE LEXICAL SEMANTICS OF VERBS FROM SENTENCE FRAMES*

Mort Webster and Mitch Marcus
Department of Computer and Information Science
University of Pennsylvania
200 S. 33rd Street, Philadelphia, PA 19104

ABSTRACT

This paper presents a computational model of verb acq[r]
individuals through their FORMAL role, and as collections of individuals through their CONST role, if the FORMAL individual does not meet the selectional restrictions imposed by the verb, or other semantic constraints. See Caudal (1998) for detailed evidence of this, and for a tentative solution wit[r]
ment of collocations will be detailed below. In recent years, there has been a resurgence of statistical approaches applied to the study of natural languages. Sinclair (1991) states that "a word which occurs in close proximity to a word under investigation is called a collocate of it". Colloc[r]
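Sinclair's definition above can be sketched as a small program: count every token that falls within a fixed window around each occurrence of a target word. This is an illustrative sketch only; the window size, the toy sentence, and the function name are assumptions, not details from the excerpt.

```python
from collections import Counter

def collocates(tokens, target, window=2):
    """Count tokens occurring within `window` positions of each
    occurrence of `target` (the target itself is excluded)."""
    counts = Counter()
    for i, tok in enumerate(tokens):
        if tok == target:
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            counts.update(t for j, t in enumerate(tokens[lo:hi], lo) if j != i)
    return counts

# Toy example (hypothetical data, not from the paper).
text = "strong tea and strong coffee but powerful tea is odd".split()
print(collocates(text, "strong"))
```

In practice, raw co-occurrence counts like these are usually weighted by an association measure (e.g. mutual information or t-score) before a pair is accepted as a collocation.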
developed for homonymy depend on large semantic differences between meanings and thus are not as useful for CEs. Although comparatives are frequently used as examples in the NLP literature (e.g. (Hendrix, Sacerdoti, Sagalowicz, and Slocum 1978), (Martin, Appelt, and Pereira 1983) and (Pereira 19[r]
3 We do not report Wd scores for the combined model (CM) on ASR output because this model predicted 0 segment boundaries when operating on ASR output. In our experience, CM routinely underpredicted the number of segment boundaries, and due to the nature of the Wd metric, it should not be used when there[r]
lexical only                   64.4
combination+lexical (T&C08)    65.2
lexical+parse                  68.1
all features (+Parse)          68.5

Table 2: Accuracy on preposition selection task for various feature combinations

The Preposition Head and Complement Mixed features are created by taking the first feature in the prev[r]
According to this statement, Haspelmath considers the terms ‘word class’, ‘part of speech’, ‘syntactic category’, and ‘lexical category’ to be equivalent or at least ‘roughly equivalent’, and thus to be synonyms or near-synonyms. As far as the terms ‘part of speech’ and ‘word class’ are concerned[r]
For any feature set, the mean +/- 2*SE = the 95% confidence interval. If the confidence intervals for two feature sets are non-overlapping, then their mean accuracies are significantly different with 95% confidence. With respect to the relative utility of lexical versus acoustic-prosodic feature[r]
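The interval rule described above (mean +/- 2*SE, with non-overlap implying a significant difference) can be sketched directly. The accuracy and SE values below are hypothetical placeholders, not numbers from the excerpt.

```python
def ci95(mean, se):
    """95% confidence interval via the mean +/- 2*SE rule used above."""
    return (mean - 2 * se, mean + 2 * se)

def overlaps(a, b):
    """True if two (lo, hi) intervals overlap."""
    return a[0] <= b[1] and b[0] <= a[1]

# Hypothetical accuracies (mean, SE) for two feature sets.
set_a = ci95(68.1, 0.4)   # (67.3, 68.9)
set_b = ci95(64.4, 0.5)   # (63.4, 65.4)

# Non-overlapping intervals => significantly different at ~95% confidence.
print(overlaps(set_a, set_b))  # False
```

Note that non-overlap of two 95% intervals is a conservative test: two means can differ significantly even when their intervals overlap slightly.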
Figure 2: Algorithm for training Bayesian Networks for inference of lexical semantic roles

After the training phase, a testing procedure using the Markov Chain Monte Carlo (MCMC) inference engine can be used to infer role labels. Since it is reasonable to think that in some cases the Ver[r]
6 Comparison with Previous Work

There are currently no probabilistic, treebank-trained parsers available for German (to our knowledge). A number of chunking models have been proposed, however. Skut and Brants (1998) used Negra to train a maximum entropy-based chunker, and report LR and LP of 84.4%[r]
is not able to handle the new verb usages, i.e., the small portion outside the dictionary coverage. However, a native speaker has an unrestricted number of verbs for lexical selection. By measuring the similarities among target verbs, the most similar one can be chosen for the new verb us[r]
Chapter 2. Lexical Structure

The lexical structure of a programming language is the set of elementary rules that specifies how you write programs in that language. It is the lowest-level syntax of a language; it specifies such things as what variable names look like, what characters are u[r]
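The idea of lexical structure as the lowest-level syntax can be made concrete by watching a tokenizer split raw source text into identifiers, operators, literals, and comments. The excerpt does not name a language for its examples; as an assumption, the sketch below uses Python's standard tokenize module on a one-line Python statement.

```python
import io
import tokenize

# One line of source text: an identifier, an operator, another identifier,
# a numeric literal, and a trailing comment.
source = "total_1 = price * 2  # a trailing comment\n"

# The lexical rules of the language decide where one token ends and the
# next begins, before any parsing happens.
tokens = [(tok.type, tok.string)
          for tok in tokenize.generate_tokens(io.StringIO(source).readline)]

for tok_type, text in tokens:
    print(tokenize.tok_name[tok_type], repr(text))
```

Running this shows NAME, OP, NUMBER, and COMMENT tokens in order, which is exactly the kind of classification (what counts as a variable name, which characters are operators) that a chapter on lexical structure defines.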
One may call a beautiful girl a “rose” or a “lily”.

What is “semantics”?

Conceptual meaning refers to the linguistic function of the word, that which provides its meaning. E.g. money: it’s an object (or series of them) that allows people to buy goods.

Associative meaning deals with[r]
underlying data format and no opportunity for automatic consistency checks. GernEdiT replaces the earlier development by a more user-friendly tool, which facilitates automatic checking of internal consistency and correctness of the linguistic resource. This paper presents all these core functional[r]