of the traditional 2-stage SRL model (i.e. boundary detection and argument classification) and on the complete SRL task. We use three different feature spaces: a set of standard attribute-value features and the AST and EAST structures defined in 3.4. Standard feature vectors can be combined with a p[r]
as <loves,john,(s\np)/np,1>, indicating the head of the functor, the head of the argument, the functor category, and the argument slot. The second argument (the direct object) fills slot 2. This can be encoded as <loves,mary,(s\np)/np,2>. One of the potential advantages to u[r]
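The tuple encoding above can be illustrated with a minimal sketch. The `CCGDep` type and its field names are our own naming for illustration; only the four-part tuple structure and the category notation come from the text.

```python
from typing import NamedTuple

class CCGDep(NamedTuple):
    """One predicate-argument dependency: functor head, argument head,
    functor category, and which argument slot the argument fills."""
    functor: str
    argument: str
    category: str
    slot: int

# "John loves Mary": the transitive-verb category (s\np)/np takes the
# subject in slot 1 and the direct object in slot 2.
deps = [
    CCGDep("loves", "john", r"(s\np)/np", 1),
    CCGDep("loves", "mary", r"(s\np)/np", 2),
]

for d in deps:
    print(f"<{d.functor},{d.argument},{d.category},{d.slot}>")
```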
crete structures. Technical Report UCSC-CRL-99-10, July.
Zheng Ping Jiang, Jia Li, and Hwee Tou Ng. 2005. Semantic argument classification exploiting argument interdependence. In Proceedings of IJCAI-2005.
Thorsten Joachims, Nello Cristianini, and John Shawe-Taylor. 2001. Composite kernels for h[r]
A natural extension of our work is to improve the performance of the entire semantic role labeling system using the grammar-driven tree kernel, covering all four stages: pruning, semantic role identification, classification, and post-inference. In addition, a<[r]
beling (SRL) mainly focused on how to implement SRL methods which are successful on English. As with English, parsing is a standard pre-processing step for Chinese SRL. Many features are extracted to represent constituents in the input parses (Sun and Jurafsky, 2004; Xue, 2008; Ding and Ch[r]
2005 systems show a significant performance drop when the tested corpus, i.e. Brown, differs from the training one (i.e. Wall Street Journal), e.g. (Toutanova et al., 2008). More recently, the state-of-the-art frame-based semantic role labeling system discussed in (Johansson an[r]
We obtained best results with a model using a context window of five words on either side of the target word, the cosine measure, and 2,000 vector dimensions. The latter were the most common context words (excluding a stop list of function words). Their values were set to the ratio of t[r]
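A minimal sketch of this kind of distributional model: count context words within a ±5-word window, keep a fixed list of frequent context words as dimensions, and compare targets by cosine. The toy corpus, stop list, and four-word dimension list are stand-ins for the real data and the 2,000 dimensions described above.

```python
from collections import Counter
import math

def context_counts(tokens, target, window=5, stoplist=frozenset()):
    """Count context words within `window` positions of each occurrence
    of `target`, skipping stop-listed function words."""
    counts = Counter()
    for i, tok in enumerate(tokens):
        if tok == target:
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            for j in range(lo, hi):
                if j != i and tokens[j] not in stoplist:
                    counts[tokens[j]] += 1
    return counts

def cosine(u, v, dims):
    """Cosine similarity over a shared, fixed list of dimension words."""
    a = [u.get(d, 0) for d in dims]
    b = [v.get(d, 0) for d in dims]
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

corpus = "the cat chased the mouse and the dog chased the cat".split()
stop = frozenset({"the", "and"})
dims = ["chased", "mouse", "dog", "cat"]  # toy stand-in for 2,000 dimensions
print(cosine(context_counts(corpus, "cat", stoplist=stop),
             context_counts(corpus, "dog", stoplist=stop), dims))
```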
(non-argument) had often dominated other labels in the examples added to the training set. Lee et al. (2007) attacked another SRL learning problem using self-training. Using PropBank instead of FrameNet, they aimed at increasing the performance of a supervised SRL system by exploiting a[r]
tem, CLP(name2structure), in more detail in the following section. In this introduction we described the particularities of biochemical terminology, reviewed related work on processing these terms, and gave the motivation for our own approach. After presenting our system
ists a parallel corpus where one of the languages is English. However, the relatively close relationship between English and Swedish probably made the task comparatively easy in our case. As we can see, the figures (especially the FE bracketing recall) leave room for improvement for the [r]
and Gilbert, 1991) to harvest the Tainan-city tour-guiding dialogue corpus in a lab environment and experiment with simulated noisy ASR results. The details are given in this section. Two types of data from different sources are collected for this work. The first type of data, called A[r]
probability for assigning the node t to ARG. Propagating this procedure from the leaves to the root of t, we obtain the most likely non-overlapping assignment. By slightly modifying this procedure, we obtain the most likely assignment according to a product of local identification and classificatio[r]
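The bottom-up propagation described here can be sketched as a simple dynamic program: at each node, either label the node as an argument (which, for non-overlap, excludes all its descendants) or skip it and keep the product of the children's best scores. The tree representation and the probabilities are illustrative, not the authors' implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    p_arg: float                      # local probability this node is an argument
    children: list = field(default_factory=list)

def best_assignment(node):
    """Return (score, picked): the most likely non-overlapping argument
    assignment in the subtree rooted at `node`.  Labeling a node rules
    out its descendants, so we compare labeling the node itself against
    skipping it and multiplying the children's best scores."""
    label_score = node.p_arg
    skip_score = 1.0 - node.p_arg
    picked = []
    for c in node.children:
        s, sub = best_assignment(c)
        skip_score *= s
        picked += sub
    if label_score >= skip_score:
        return label_score, [node]
    return skip_score, picked

# A root unlikely to be an argument, with two likely-argument children:
root = Node(0.2, [Node(0.9), Node(0.8)])
score, picked = best_assignment(root)
print(score, len(picked))  # the two children are selected, not the root
```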
(cf. [Hayes, 1984]). For example, the fact "If x produces y, then x causes y to exist" is a fact about causality. The fact "The replication of a virus requires components of a cell of an organism" is a fact about viruses. The fact "A household is an environm[r]
exterior windows, but have problems getting calls farther inside the building. And even if the cellular phone can still transmit from inside a building, it must boost its transmission signal to do so, which reduces its battery life. The solution is an in-building wireless[r]
Building Sentences

Instructions for use:
Print the worksheets onto card. Cut out and laminate the cards. Put each set of cards in an envelope or box, with the relevant label attached. These are some ideas for how to use them:
a) Practise sentence building - ask students[r]
scribed in this paper, a natural language interface to mobile robots. Compared to more typical text-processing tasks on newspapers, for which we attempt shallow understanding and broad coverage, in these domains the vocabulary is limited and very strong domain knowledge is available.[r]
Acquisition of a Lexicon from Semantic Representations of Sentences*
Cynthia A. Thompson
Department of Computer Sciences, University of Texas
2.124 Taylor Hall, Austin, TX 78712
cthomp@cs.utexas.edu

Abstract

A system, WOLFIE, that acquires a mapping o[r]
the semantics of topics, and it performs well on the topic spotting task. It is well known that human experts, whose most prominent characteristic is the ability to understand text documents, have a strong natural ability to spot topics in documents. We are, however, unclear about the nature of hum[r]
considered a 23-class problem of NULL (no label), the core arguments ARG0-5, REL, ARGA, and ARGM, along with the 13 secondary modifier labels such as ARGM-LOC and ARGM-TMP. We simplified R-ARGn and C-ARGn to be written as ARGn, and post-processed ASSERT to do this as well. We compared our syst[r]
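The R-ARGn/C-ARGn simplification amounts to stripping the reference/continuation prefix; a one-line normalization suffices. The regex is our own illustration of the mapping described, not the authors' post-processing code.

```python
import re

def simplify(label):
    """Map reference (R-) and continuation (C-) variants onto the base
    label, e.g. R-ARG0 -> ARG0, C-ARGM-TMP -> ARGM-TMP."""
    return re.sub(r"^[RC]-", "", label)

print([simplify(l) for l in ["R-ARG0", "C-ARG1", "ARGM-LOC", "C-ARGM-TMP"]])
```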
RESEARCH METHODS The study describes and compares the syntactic and semantic features of idioms expressing anger in English versus Vietnamese, and then draws some implications for the teac[r]