acl acl2011 acl2011-207 knowledge-graph by maker-knowledge-mining

207 acl-2011-Learning to Win by Reading Manuals in a Monte-Carlo Framework


Source: pdf

Author: S.R.K Branavan ; David Silver ; Regina Barzilay

Abstract: This paper presents a novel approach for leveraging automatically extracted textual knowledge to improve the performance of control applications such as games. Our ultimate goal is to enrich a stochastic player with high-level guidance expressed in text. Our model jointly learns to identify text that is relevant to a given game state in addition to learning game strategies guided by the selected text. Our method operates in the Monte-Carlo search framework, and learns both text analysis and game strategies based only on environment feedback. We apply our approach to the complex strategy game Civilization II using the official game manual as the text guide. Our results show that a linguistically-informed game-playing agent significantly outperforms its language-unaware counterpart, yielding a 27% absolute improvement and winning over 78% of games when playing against the built-in AI of Civilization II.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Our model jointly learns to identify text that is relevant to a given game state in addition to learning game strategies guided by the selected text. [sent-7, score-1.84]

2 Our method operates in the Monte-Carlo search framework, and learns both text analysis and game strategies based only on environment feedback. [sent-8, score-0.907]

3 We apply our approach to the complex strategy game Civilization II using the official game manual as the text guide. [sent-9, score-1.739]

4 This is an excerpt from the user manual of the game Civilization II. [sent-29, score-0.85]

5 This text describes game locations where the action “build-city” can be effectively applied. [sent-30, score-1.015]

6 A stochastic player that does not have access to this text would have to gain this knowledge the hard way: it would repeatedly attempt this action in a myriad of states, thereby learning the characterization of promising state-action pairs based on the observed game outcomes. [sent-31, score-1.14]

7 An algorithm with access to the text, however, could learn correlations between words in the text and game attributes e. [sent-33, score-0.878]

8 , the word “river” and places with rivers in the game thus leveraging strategies described in text to better select actions. [sent-35, score-0.861]

9 Since the game’s state space is extremely large, and the states that will be encountered during game play cannot be known a priori, it is impractical to manually annotate the information that would be relevant to those states. [sent-38, score-1.009]

10 Instead, we propose to learn text analysis based on a feedback signal inherent to the control application, such as game score. [sent-39, score-0.857]

11 Our general setup consists of a game in a stochastic environment, where the goal of the player is to maximize a given utility function R(s) at state s. [sent-44, score-1.182]

12 An obvious way to enrich the model with textual information is to augment the action-value function with word features in addition to state and action features. [sent-48, score-0.401]

13 We test our method on the strategy game Civilization II, a notoriously challenging game with an immense action space. [sent-57, score-1.859]

14 As a source of knowledge for guiding our model, we use the official game manual. [sent-58, score-0.844]

15 In contrast, game manuals provide high-level advice but do not directly describe the correct actions for every potential game state. [sent-74, score-1.83]

16 The area of language analysis situated in a game domain has been studied in the past (Eisenstein et al. [sent-78, score-0.844]

17 Our goal is more open-ended, in that we aim to learn winning game strategies. [sent-82, score-0.865]

18 (2009) rely on a different source of supervision: game traces collected a priori. [sent-84, score-0.838]

19 For complex games, like the one considered in this paper, collecting such game traces is prohibitively expensive. [sent-85, score-0.838]

20 Game Representation: The game is defined by a large Markov Decision Process ⟨S, A, T, R⟩. [sent-94, score-0.818]

21 Specifically, a state s ∈ S encodes attributes of the game world, such as available resources and city locations. [sent-97, score-0.925]

22 At each step of the game, a player executes an action a which causes the current state s to change to a new state s′ according to the transition function T(s′|s, a). [sent-98, score-0.594]

23 While this function is not known a priori, the program encoding the game can be viewed as a black box from which transitions can be sampled. [sent-99, score-0.894]

24 Finally, a given utility function R(s) ∈ R captures the likelihood of winning the game from state s (e. [sent-100, score-0.978]

25 This selection is based on the results of multiple roll-outs which measure the outcome of a sequence of actions in a simulated game e. [sent-104, score-0.993]

26 On game completion at time τ, we measure the final utility R(sτ). [sent-108, score-0.878]

27 The actual game action is then selected as the one corresponding to the roll-out with the best final utility. [sent-109, score-1.016]

28 The success of Monte-Carlo search is based on its ability to make a fast, local estimate of the action value. (In general, roll-outs are run till game completion.) [sent-111, score-0.818]

29 procedure PlayGame(): initialize game state to fixed starting state s1 ← s0; for t = 1. [sent-113, score-1.052]
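The Monte-Carlo play loop sketched in the PlayGame procedure above can be illustrated as follows. This is a toy sketch only: the integer-state environment, utility function, and two-action set are stand-ins for the paper's Civilization II interface, but the shape of the algorithm matches the description, roll-outs are sampled from a black-box transition function and the action of the best-scoring roll-out is chosen.

```python
import random

def play_game(n_rollouts=20, horizon=5, actions=("a", "b")):
    # Toy black-box environment standing in for the game program:
    # states are integers, transitions are noisy.
    def sample_transition(s, a):
        return s + (1 if a == "a" else -1) + random.choice((0, 1))

    def utility(s):          # stands in for the utility function R(s)
        return s

    random.seed(0)
    s0 = 0
    best_action, best_utility = None, float("-inf")
    for a in actions:
        for _ in range(n_rollouts):
            s = sample_transition(s0, a)      # first step commits to action a
            for _ in range(horizon - 1):      # remainder of the simulated roll-out
                s = sample_transition(s, random.choice(actions))
            if utility(s) > best_utility:     # keep the best-scoring roll-out
                best_action, best_utility = a, utility(s)
    return best_action
```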

30 States and actions are evaluated by an action-value function Q(s, a), which is an estimate of the expected outcome of action a in state s. [sent-121, score-0.473]

31 One successful approach is to model the action-value function as a linear combination of state and action attributes (Silver et al. [sent-127, score-0.398]

32 an action is selected uniformly at random; otherwise the action is selected greedily to maximize the current action-value function, arg max_a Q(s, a). [sent-138, score-0.415]
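The ε-greedy rule above can be sketched with a linear Q; the feature names and weights here are invented for illustration, not taken from the paper.

```python
import random

def epsilon_greedy(state_action_features, weights, epsilon=0.1, rng=None):
    """With probability epsilon pick a uniformly random action; otherwise
    pick arg max_a Q(s, a) for a linear Q(s, a) = w . x(s, a)."""
    rng = rng or random.Random(0)

    def q(features):  # dot product of weights with sparse feature dict
        return sum(weights.get(f, 0.0) * v for f, v in features.items())

    actions = list(state_action_features)
    if rng.random() < epsilon:                 # explore: uniform random action
        return rng.choice(actions)
    return max(actions, key=lambda a: q(state_action_features[a]))  # exploit
```

With `epsilon=0.0` this reduces to pure greedy selection over the current action-value estimates.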

33 Model Structure: To inform action selection with the advice provided in game manuals, we modify the action-value function Q(s, a) to take into account words of the document in addition to state and action information. [sent-141, score-1.377]

34 A fixed, real-valued feature function x(s, a, d) transforms the game state s, action a, and strategy document d into the input vector. The first hidden layer contains two disjoint sets of units, ~y and ~z, corresponding to linguistic analyses of the strategy document. [sent-144, score-1.474]

35 The units of the second hidden layer f~(s, a, d, yi, zi) are a set of fixed, real-valued feature functions on s, a, d and the active units yi and zi of ~y and ~z respectively. [sent-147, score-0.38]

36 The input layer represents the current state s, candidate action a, and document d. [sent-153, score-0.467]

37 Given this activation function, the second layer effectively models sentence relevance and predicate labeling decisions via log-linear distributions, the details of which are described below. [sent-158, score-0.482]

38 The third (feature) layer of the neural network is deterministically computed given the active units yi and zj of the softmax layers, and the values of the input layer. [sent-159, score-0.402]

39 Modeling Sentence Relevance: Given a strategy document d, we wish to identify a sentence yi that is most relevant to the current game state st and action at. [sent-163, score-1.348]
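A minimal sketch of how a log-linear (softmax) layer can score each sentence of the document and pick the most relevant one. The feature names are invented, and the paper's actual relevance layer also conditions on features of the game state and candidate action.

```python
import math

def select_relevant_sentence(sentence_features, u):
    """Softmax over per-sentence log-linear scores; returns the index of
    the most probable sentence and the full distribution."""
    scores = [sum(u.get(f, 0.0) * v for f, v in feats.items())
              for feats in sentence_features]
    z = sum(math.exp(s) for s in scores)          # softmax normalizer
    probs = [math.exp(s) / z for s in scores]
    best = max(range(len(probs)), key=probs.__getitem__)
    return best, probs
```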

40 Here ej is the predicate label of the jth word being labeled, and e1:j−1 is the partial predicate labeling constructed so far for sentence yi. [sent-175, score-0.356]

41 In the second layer of the neural network, the units z represent a predicate labeling ei of every sentence yi ∈ d. [sent-176, score-0.512]
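The predicate labeling described above can be sketched as a sequential log-linear labeler: each word receives one of the labels S (state), A (action) or B (background), conditioned on the word and the previous label. The features and the greedy decoding are simplifications for illustration; the paper's model also conditions on the game state and the sentence's dependency parse.

```python
import math

def label_words(words, v):
    """Greedy sequential labeling with a per-word log-linear distribution.
    v maps (word, label) and (prev_label, label) feature pairs to weights."""
    labels = ("B", "S", "A")   # background, state-description, action-description
    out, prev = [], "B"
    for w in words:
        # log-linear score for each label: word-label and transition features
        scores = {l: v.get((w, l), 0.0) + v.get((prev, l), 0.0) for l in labels}
        z = sum(math.exp(s) for s in scores.values())
        prev = max(labels, key=lambda l: math.exp(scores[l]) / z)  # greedy choice
        out.append(prev)
    return out
```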

42 Given the sentence selected as relevant and its predicate labeling, the output layer of the network can now explicitly learn the correlations between textual information and game states and actions – for example, between the word “grassland” in Figure 1 and the action of building a city. [sent-180, score-1.57]

43 This allows our method to leverage the automatically extracted textual information to improve game play. [sent-181, score-0.881]

44 Parameter Estimation: Learning in our method is performed in an online fashion: at each game state st, the algorithm performs a simulated game roll-out, observes the outcome of the game, and updates the parameters of the action-value function Q(st, at, d). [sent-183, score-1.91]

45 These three steps are repeated a fixed number of times at each actual game state. [sent-184, score-0.869]

46 The information from these roll-outs is used to select the actual game action. [sent-185, score-0.818]

47 The algorithm re-learns Q(st, at, d) for every new game state st. [sent-186, score-0.925]

48 Specifically, we adjust the parameters by stochastic gradient descent, to minimize the mean-squared error between the action-value Q(s, a) and the final utility R(sτ) for each observed game state s and action a. [sent-189, score-1.249]
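The stochastic-gradient step can be illustrated on a linear action-value function. The paper's full model backpropagates through a multi-layer network; this sketch shows only the squared-error update rule on the linear case.

```python
def q_update(x, w, reward, alpha=0.01):
    """One stochastic-gradient step reducing (Q(s, a) - R(s_tau))^2
    for a linear Q(s, a) = w . x(s, a)."""
    q = sum(wi * xi for wi, xi in zip(w, x))    # current action-value estimate
    error = q - reward                          # signed error vs. final utility
    return [wi - alpha * error * xi for wi, xi in zip(w, x)]
```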

49 We use the official manual of the game as the source of textual strategy advice for the language-aware algorithms. [sent-195, score-1.027]

50 Civilization II is a multi-player game set on a grid-based map of the world. [sent-196, score-0.818]

51 In our experiments, we consider a two-player game of Civilization II on a grid of 1000 squares, where we play against the built-in AI player. [sent-203, score-0.837]

52 Game States and Actions: We define the game state of Civilization II to be the map of the world, the attributes of each map tile, and the attributes of each player’s cities and units. [sent-204, score-1.069]

53 The space of possible actions for a given city or unit is known given the current game state. [sent-206, score-0.975]

54 The actions of a player’s cities and units combine to form the action space of that player. [sent-207, score-0.373]

55 This results in a very large action space for the game, i. [sent-209, score-0.996]

56 To effectively deal with this large action space, we assume that given the state, the actions of a single unit are independent of the actions of all other units of the same player. [sent-212, score-0.506]

57 … and features computed using the game manual and these attributes (box below). [sent-220, score-0.91]

58 In the typical application of the algorithm, the final game outcome is used as the utility function (Tesauro and Galperin, 1996). [sent-222, score-0.96]

59 Given the complexity of Civilization II, running simulation roll-outs until game completion is impractical. [sent-223, score-0.853]

60 The game, however, provides each player with a game score, which is a noisy indication of how well they are currently playing. [sent-224, score-0.927]

61 Since we are playing a two-player game, we use the ratio of the game score of the two players as our utility function. [sent-225, score-0.926]

62 Features: The sentence relevance features φ~ and the action-value function features f~ consider the attributes of the game state and action, and the words of the sentence. [sent-226, score-1.155]

63 Experimental Setup (Datasets): We use the official game manual for Civilization II as our strategy guide. [sent-232, score-0.921]

64 We instrument the game to allow our method to programmatically measure the current state of the game and to execute game actions. [sent-236, score-2.58]

65 , 2006) was used to generate the dependency parse information for sentences in the game manual. [sent-238, score-0.818]

66 Across all experiments, we start the game at the same initial state and run it for 100 steps. [sent-239, score-0.925]

67 Each roll-out is run for 20 simulated game steps before halting the simulation and evaluating the outcome. [sent-241, score-0.924]

68 In this setup, a single game of 100 steps runs in approximately 1. [sent-245, score-0.849]

69 Evaluation Metrics: We wish to evaluate two aspects of our method: how well it leverages textual information to improve game play, and the accuracy of the linguistic analysis it produces. [sent-247, score-0.881]

70 Since full games can last for multiple days, we compute the percentage of games won within the first 100 game steps as our primary evaluation. [sent-251, score-1.259]

71 To confirm that performance under this evaluation is meaningful, we also compute the percentage of full games won over 50 independent runs, where each game is run to completion. [sent-252, score-1.046]

72 Table 1: Win rate of our method and several baselines within the first 100 game steps, while playing against the built-in game AI. [sent-262, score-1.706]

73 All results are averaged across 200 independent game runs. [sent-265, score-0.818]

74 This evaluation is an underestimate since it assumes that any game not won within the first 100 steps is a loss. [sent-273, score-0.895]

75 It attempts to model the action value function Q(s, a) only in terms of the attributes of the game state and action. [sent-277, score-1.216]

76 The box below shows the predicted predicate structure of three sentences, with “S” indicating state description, “A” action description, and background words unmarked. [sent-285, score-0.435]

77 7% of games, showing that while identifying the text relevant to the current game state is essential, a deeper structural analysis of the extracted text provides substantial benefits. [sent-289, score-0.98]

78 One possible explanation for the improved performance of our method is that the non-linear approximation simply models game characteristics better, rather than modeling textual information. [sent-290, score-0.881]

79 We generate this text by randomly permuting the word locations of the actual game manual, thereby maintaining the document’s overall statistical properties. [sent-293, score-0.818]

80 The second baseline, latent variable, extends the linear action-value function Q(s, a) of the game-only baseline with a set of latent variables, i. [sent-294, score-0.871]

81 , it is a four-layer neural network, where the second layer’s units are activated only based on game information. [sent-296, score-1.059]

82 impractical since the relevance decision is dependent on the game context, and is hence specific to each time step of each game instance. [sent-301, score-1.73]

83 Therefore, for the purposes of this evaluation, we modify the game manual by adding to it sentences randomly selected from the Wall Street Journal corpus (Marcus et al. [sent-302, score-0.87]

84 , 1993) sentences that are highly unlikely to be relevant to game play. [sent-303, score-0.854]

85 Given that our model only has to differentiate between the game manual text and the Wall Street Journal, this number may seem disappointing. [sent-307, score-0.85]

86 Furthermore, as can be seen from Figure 5, the sentence relevance accuracy varies widely as the game progresses, with a high average of 94. [sent-308, score-0.935]

87 In reality, this pattern of high initial accuracy followed by a lower average is not entirely surprising: the official game manual for Civilization II is written for first-time players. [sent-310, score-0.876]

88 As such, it focuses on the initial portion of the game, providing little strategy advice relevant to subsequent game play. [sent-311, score-0.942]

89 8 If this is the reason for the observed sentence relevance trend, we would also expect the final layer of the neural network to emphasize game features over text features after the first 25 steps of the game. [sent-312, score-1.17]

90 Figure 6: Difference between the norms of the text features and game features of the output layer of the neural network. [sent-315, score-0.994]

91 Beyond the initial 25 steps of the game, our method relies increasingly on game features. [sent-316, score-0.849]

92 This shows that our method is able to accurately identify relevant sentences when the information they contain is most pertinent to game play. [sent-320, score-0.854]

93 We evaluate the accuracy of this labeling by comparing it against a gold-standard annotation of the game manual. [sent-322, score-0.874]

94 Table 3 shows the performance of our method in terms of how accurately it labels words as state, action or background, and also how accurately it differentiates between state and action words. [sent-323, score-0.463]

95 This is to be expected since the model relies heavily on textual features only during the beginning of the game (see Figure 6). [sent-325, score-0.881]

96 This random labeling results in a win rate of 44%, a performance similar to the sentence relevance model which uses no predicate information. [sent-328, score-0.36]

97 This confirms that our method is able to identify a predicate structure which, while noisy, provides information relevant to game play. [sent-329, score-0.981]

98 Column “S/A/B” shows performance on the three-way labeling of words as state, action or background, while column “S/A” shows accuracy on the task of differentiating between state and action words. [sent-333, score-0.538]

99 Figure 7 shows examples of how this textual information is grounded in the game, by way of the associations learned between words and game attributes in the final layer of the full model. [sent-335, score-1.083]

100 Our model, which operates in the Monte-Carlo framework, jointly learns to identify text relevant to a given game state in addition to learning game strategies guided by the selected text. [sent-337, score-1.871]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('game', 0.818), ('games', 0.182), ('civilization', 0.18), ('action', 0.178), ('layer', 0.142), ('predicate', 0.127), ('player', 0.109), ('state', 0.107), ('actions', 0.106), ('relevance', 0.094), ('branavan', 0.084), ('units', 0.065), ('yi', 0.065), ('silver', 0.064), ('textual', 0.063), ('utility', 0.06), ('win', 0.06), ('attributes', 0.06), ('labeling', 0.056), ('function', 0.053), ('tesauro', 0.052), ('playing', 0.048), ('winning', 0.047), ('won', 0.046), ('manuals', 0.045), ('uct', 0.045), ('strategy', 0.045), ('advice', 0.043), ('simulated', 0.04), ('softmax', 0.039), ('control', 0.039), ('galperin', 0.039), ('st', 0.036), ('relevant', 0.036), ('grounding', 0.036), ('reinforcement', 0.036), ('simulation', 0.035), ('stochastic', 0.035), ('environment', 0.035), ('simulations', 0.034), ('neural', 0.034), ('manual', 0.032), ('unit', 0.032), ('operates', 0.031), ('steps', 0.031), ('ai', 0.031), ('zj', 0.029), ('wins', 0.029), ('outcome', 0.029), ('states', 0.029), ('network', 0.028), ('ii', 0.028), ('regina', 0.027), ('played', 0.027), ('sutton', 0.027), ('official', 0.026), ('ablative', 0.026), ('actiondescription', 0.026), ('actionvalue', 0.026), ('balla', 0.026), ('billings', 0.026), ('bryson', 0.026), ('gelly', 0.026), ('rumelhart', 0.026), ('luke', 0.026), ('zettlemoyer', 0.026), ('situated', 0.026), ('guidance', 0.026), ('parameters', 0.025), ('cities', 0.024), ('layers', 0.024), ('box', 0.023), ('zi', 0.023), ('ej', 0.023), ('learns', 0.023), ('eisenstein', 0.023), ('rivers', 0.023), ('tile', 0.023), ('sentence', 0.023), ('baselines', 0.022), ('executes', 0.021), ('activation', 0.021), ('fleischman', 0.021), ('priori', 0.021), ('document', 0.021), ('selected', 0.02), ('updates', 0.02), ('fixed', 0.02), ('interpretation', 0.02), ('land', 0.02), ('traces', 0.02), ('leveraging', 0.02), ('ui', 0.019), ('current', 0.019), ('effectively', 0.019), ('play', 0.019), ('differentiating', 0.019), ('rn', 0.018), ('guided', 0.018), ('vogel', 0.018)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000001 207 acl-2011-Learning to Win by Reading Manuals in a Monte-Carlo Framework

Author: S.R.K Branavan ; David Silver ; Regina Barzilay

Abstract: This paper presents a novel approach for leveraging automatically extracted textual knowledge to improve the performance of control applications such as games. Our ultimate goal is to enrich a stochastic player with high-level guidance expressed in text. Our model jointly learns to identify text that is relevant to a given game state in addition to learning game strategies guided by the selected text. Our method operates in the Monte-Carlo search framework, and learns both text analysis and game strategies based only on environment feedback. We apply our approach to the complex strategy game Civilization II using the official game manual as the text guide. Our results show that a linguistically-informed game-playing agent significantly outperforms its language-unaware counterpart, yielding a 27% absolute improvement and winning over 78% of games when playing against the built-in AI of Civilization II.

2 0.18953609 253 acl-2011-PsychoSentiWordNet

Author: Amitava Das

Abstract: Sentiment analysis is one of the hot demanding research areas since last few decades. Although a formidable amount of research has been done but still the existing reported solutions or available systems are far from perfect or to meet the satisfaction level of end user's. The main issue may be there are many conceptual rules that govern sentiment, and there are even more clues (possibly unlimited) that can convey these concepts from realization to verbalization of a human being. Human psychology directly relates to the unrevealed clues; govern the sentiment realization of us. Human psychology relates many things like social psychology, culture, pragmatics and many more endless intelligent aspects of civilization. Proper incorporation of human psychology into computational sentiment knowledge representation may solve the problem. PsychoSentiWordNet is an extension over SentiWordNet that holds human psychological knowledge and sentiment knowledge simultaneously. 1

3 0.18427984 105 acl-2011-Dr Sentiment Knows Everything!

Author: Amitava Das ; Sivaji Bandyopadhyay

Abstract: Sentiment analysis is one of the hot demanding research areas since last few decades. Although a formidable amount of research have been done, the existing reported solutions or available systems are still far from perfect or do not meet the satisfaction level of end users’ . The main issue is the various conceptual rules that govern sentiment and there are even more clues (possibly unlimited) that can convey these concepts from realization to verbalization of a human being. Human psychology directly relates to the unrevealed clues and governs the sentiment realization of us. Human psychology relates many things like social psychology, culture, pragmatics and many more endless intelligent aspects of civilization. Proper incorporation of human psychology into computational sentiment knowledge representation may solve the problem. In the present paper we propose a template based online interactive gaming technology, called Dr Sentiment to automatically create the PsychoSentiWordNet involving internet population. The PsychoSentiWordNet is an extension of SentiWordNet that presently holds human psychological knowledge on a few aspects along with sentiment knowledge.

4 0.11758123 226 acl-2011-Multi-Modal Annotation of Quest Games in Second Life

Author: Sharon Gower Small ; Jennifer Strommer-Galley ; Tomek Strzalkowski

Abstract: We describe an annotation tool developed to assist in the creation of multimodal action-communication corpora from on-line massively multi-player games, or MMGs. MMGs typically involve groups of players (5-30) who control their avatars, perform various activities (questing, competing, fighting, etc.) and communicate via chat or speech using assumed screen names. We collected a corpus of 48 group quests in Second Life that jointly involved 206 players who generated over 30,000 messages in quasisynchronous chat during approximately 140 hours of recorded action. Multiple levels of coordinated annotation of this corpus (dialogue, movements, touch, gaze, wear, etc.) are required in order to support development of automated predictors of selected real-life social and demographic characteristics of the players. The annotation tool presented in this paper was developed to enable efficient and accurate annotation of all dimensions simultaneously.

5 0.070007242 252 acl-2011-Prototyping virtual instructors from human-human corpora

Author: Luciana Benotti ; Alexandre Denis

Abstract: Virtual instructors can be used in several applications, ranging from trainers in simulated worlds to non player characters for virtual games. In this paper we present a novel algorithm for rapidly prototyping virtual instructors from human-human corpora without manual annotation. Automatically prototyping full-fledged dialogue systems from corpora is far from being a reality nowadays. Our algorithm is restricted in that only the virtual instructor can perform speech acts while the user responses are limited to physical actions in the virtual world. We evaluate a virtual instructor, generated using this algorithm, with human users. We compare our results both with human instructors and rule-based virtual instructors hand-coded for the same task.

6 0.069181532 149 acl-2011-Hierarchical Reinforcement Learning and Hidden Markov Models for Task-Oriented Natural Language Generation

7 0.055731557 79 acl-2011-Confidence Driven Unsupervised Semantic Parsing

8 0.053928275 320 acl-2011-Unsupervised Discovery of Domain-Specific Knowledge from Text

9 0.05196384 33 acl-2011-An Affect-Enriched Dialogue Act Classification Model for Task-Oriented Dialogue

10 0.051306136 282 acl-2011-Shift-Reduce CCG Parsing

11 0.046783727 271 acl-2011-Search in the Lost Sense of "Query": Question Formulation in Web Search Queries and its Temporal Changes

12 0.042124201 103 acl-2011-Domain Adaptation by Constraining Inter-Domain Variability of Latent Feature Representation

13 0.04206755 3 acl-2011-A Bayesian Model for Unsupervised Semantic Parsing

14 0.039585829 295 acl-2011-Temporal Restricted Boltzmann Machines for Dependency Parsing

15 0.038272541 309 acl-2011-Transition-based Dependency Parsing with Rich Non-local Features

16 0.038095545 190 acl-2011-Knowledge-Based Weak Supervision for Information Extraction of Overlapping Relations

17 0.037372857 315 acl-2011-Types of Common-Sense Knowledge Needed for Recognizing Textual Entailment

18 0.036713455 260 acl-2011-Recognizing Authority in Dialogue with an Integer Linear Programming Constrained Model

19 0.034853961 141 acl-2011-Gappy Phrasal Alignment By Agreement

20 0.034819305 170 acl-2011-In-domain Relation Discovery with Meta-constraints via Posterior Regularization


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.118), (1, 0.058), (2, 0.007), (3, -0.03), (4, -0.024), (5, 0.042), (6, -0.007), (7, -0.015), (8, 0.006), (9, -0.026), (10, 0.053), (11, 0.008), (12, 0.012), (13, 0.047), (14, -0.03), (15, 0.014), (16, 0.049), (17, 0.004), (18, -0.035), (19, 0.025), (20, 0.098), (21, 0.097), (22, -0.004), (23, -0.04), (24, -0.017), (25, -0.138), (26, -0.012), (27, 0.013), (28, -0.055), (29, -0.089), (30, -0.058), (31, -0.121), (32, 0.193), (33, -0.009), (34, 0.085), (35, 0.061), (36, 0.085), (37, 0.061), (38, -0.0), (39, -0.056), (40, 0.055), (41, 0.057), (42, -0.013), (43, -0.12), (44, 0.004), (45, -0.125), (46, -0.025), (47, 0.108), (48, 0.031), (49, -0.051)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.9241665 207 acl-2011-Learning to Win by Reading Manuals in a Monte-Carlo Framework

Author: S.R.K Branavan ; David Silver ; Regina Barzilay

Abstract: This paper presents a novel approach for leveraging automatically extracted textual knowledge to improve the performance of control applications such as games. Our ultimate goal is to enrich a stochastic player with high-level guidance expressed in text. Our model jointly learns to identify text that is relevant to a given game state in addition to learning game strategies guided by the selected text. Our method operates in the Monte-Carlo search framework, and learns both text analysis and game strategies based only on environment feedback. We apply our approach to the complex strategy game Civilization II using the official game manual as the text guide. Our results show that a linguistically-informed game-playing agent significantly outperforms its language-unaware counterpart, yielding a 27% absolute improvement and winning over 78% of games when playing against the built-in AI of Civilization II.

2 0.64146078 105 acl-2011-Dr Sentiment Knows Everything!

Author: Amitava Das ; Sivaji Bandyopadhyay

Abstract: Sentiment analysis is one of the hot demanding research areas since last few decades. Although a formidable amount of research have been done, the existing reported solutions or available systems are still far from perfect or do not meet the satisfaction level of end users’ . The main issue is the various conceptual rules that govern sentiment and there are even more clues (possibly unlimited) that can convey these concepts from realization to verbalization of a human being. Human psychology directly relates to the unrevealed clues and governs the sentiment realization of us. Human psychology relates many things like social psychology, culture, pragmatics and many more endless intelligent aspects of civilization. Proper incorporation of human psychology into computational sentiment knowledge representation may solve the problem. In the present paper we propose a template based online interactive gaming technology, called Dr Sentiment to automatically create the PsychoSentiWordNet involving internet population. The PsychoSentiWordNet is an extension of SentiWordNet that presently holds human psychological knowledge on a few aspects along with sentiment knowledge.

3 0.63414532 253 acl-2011-PsychoSentiWordNet

Author: Amitava Das

Abstract: Sentiment analysis is one of the hot demanding research areas since last few decades. Although a formidable amount of research has been done but still the existing reported solutions or available systems are far from perfect or to meet the satisfaction level of end user's. The main issue may be there are many conceptual rules that govern sentiment, and there are even more clues (possibly unlimited) that can convey these concepts from realization to verbalization of a human being. Human psychology directly relates to the unrevealed clues; govern the sentiment realization of us. Human psychology relates many things like social psychology, culture, pragmatics and many more endless intelligent aspects of civilization. Proper incorporation of human psychology into computational sentiment knowledge representation may solve the problem. PsychoSentiWordNet is an extension over SentiWordNet that holds human psychological knowledge and sentiment knowledge simultaneously. 1

4 0.57831913 226 acl-2011-Multi-Modal Annotation of Quest Games in Second Life

Author: Sharon Gower Small ; Jennifer Strommer-Galley ; Tomek Strzalkowski

Abstract: We describe an annotation tool developed to assist in the creation of multimodal action-communication corpora from on-line massively multi-player games, or MMGs. MMGs typically involve groups of players (5-30) who control their avatars, perform various activities (questing, competing, fighting, etc.) and communicate via chat or speech using assumed screen names. We collected a corpus of 48 group quests in Second Life that jointly involved 206 players who generated over 30,000 messages in quasisynchronous chat during approximately 140 hours of recorded action. Multiple levels of coordinated annotation of this corpus (dialogue, movements, touch, gaze, wear, etc.) are required in order to support development of automated predictors of selected real-life social and demographic characteristics of the players. The annotation tool presented in this paper was developed to enable efficient and accurate annotation of all dimensions simultaneously.

5 0.44481903 120 acl-2011-Even the Abstract have Color: Consensus in Word-Colour Associations

Author: Saif Mohammad

Abstract: Colour is a key component in the successful dissemination of information. Since many real-world concepts are associated with colour, for example danger with red, linguistic information is often complemented with the use of appropriate colours in information visualization and product marketing. Yet, there is no comprehensive resource that captures concept–colour associations. We present a method to create a large word–colour association lexicon by crowdsourcing. A wordchoice question was used to obtain sense-level annotations and to ensure data quality. We focus especially on abstract concepts and emotions to show that even they tend to have strong colour associations. Thus, using the right colours can not only improve semantic coherence, but also inspire the desired emotional response.
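The aggregation step of such a crowdsourced lexicon can be sketched roughly as follows; the majority-vote rule and the `min_votes` threshold here are illustrative assumptions, not the paper's exact procedure (which also uses a word-choice question to filter annotations by sense):

```python
from collections import Counter

def colour_association(annotations, min_votes=3):
    """Aggregate crowd judgments (word, colour) into a word->colour
    lexicon by majority vote, keeping only words with enough votes."""
    by_word = {}
    for word, colour in annotations:
        by_word.setdefault(word, []).append(colour)
    lexicon = {}
    for word, colours in by_word.items():
        if len(colours) >= min_votes:
            # most_common(1) returns the single top (colour, count) pair
            colour, _ = Counter(colours).most_common(1)[0]
            lexicon[word] = colour
    return lexicon
```

A real pipeline would additionally discard annotators who fail the sense-check question before voting.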

6 0.41094974 320 acl-2011-Unsupervised Discovery of Domain-Specific Knowledge from Text

7 0.40299514 215 acl-2011-MACAON An NLP Tool Suite for Processing Word Lattices

8 0.38497004 252 acl-2011-Prototyping virtual instructors from human-human corpora

9 0.37631863 138 acl-2011-French TimeBank: An ISO-TimeML Annotated Reference Corpus

10 0.36770862 99 acl-2011-Discrete vs. Continuous Rating Scales for Language Evaluation in NLP

11 0.35394508 42 acl-2011-An Interface for Rapid Natural Language Processing Development in UIMA

12 0.35251176 36 acl-2011-An Efficient Indexer for Large N-Gram Corpora

13 0.34197375 130 acl-2011-Extracting Comparative Entities and Predicates from Texts Using Comparative Type Classification

14 0.33055559 229 acl-2011-NULEX: An Open-License Broad Coverage Lexicon

15 0.32912555 121 acl-2011-Event Discovery in Social Media Feeds

16 0.30761892 113 acl-2011-Efficient Online Locality Sensitive Hashing via Reservoir Counting

17 0.30508316 248 acl-2011-Predicting Clicks in a Vocabulary Learning System

18 0.30332601 321 acl-2011-Unsupervised Discovery of Rhyme Schemes

19 0.30269817 74 acl-2011-Combining Indicators of Allophony

20 0.29994294 200 acl-2011-Learning Dependency-Based Compositional Semantics


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(5, 0.042), (17, 0.057), (26, 0.046), (37, 0.081), (39, 0.053), (41, 0.053), (55, 0.048), (59, 0.043), (72, 0.036), (91, 0.056), (96, 0.146), (98, 0.039), (99, 0.193)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.86685461 17 acl-2011-A Large Scale Distributed Syntactic, Semantic and Lexical Language Model for Machine Translation

Author: Ming Tan ; Wenli Zhou ; Lei Zheng ; Shaojun Wang

Abstract: This paper presents an attempt at building a large scale distributed composite language model that simultaneously accounts for local word lexical information, mid-range sentence syntactic structure, and long-span document semantic content under a directed Markov random field paradigm. The composite language model has been trained by performing a convergent N-best list approximate EM algorithm that has linear time complexity and a followup EM algorithm to improve word prediction power on corpora with up to a billion tokens and stored on a supercomputer. The large scale distributed composite language model gives drastic perplexity reduction over ngrams and achieves significantly better translation quality measured by the BLEU score and “readability” when applied to the task of re-ranking the N-best list from a state-of-the-art parsing-based machine translation system.

same-paper 2 0.81896311 207 acl-2011-Learning to Win by Reading Manuals in a Monte-Carlo Framework

Author: S.R.K Branavan ; David Silver ; Regina Barzilay

Abstract: This paper presents a novel approach for leveraging automatically extracted textual knowledge to improve the performance of control applications such as games. Our ultimate goal is to enrich a stochastic player with highlevel guidance expressed in text. Our model jointly learns to identify text that is relevant to a given game state in addition to learning game strategies guided by the selected text. Our method operates in the Monte-Carlo search framework, and learns both text analysis and game strategies based only on environment feedback. We apply our approach to the complex strategy game Civilization II using the official game manual as the text guide. Our results show that a linguistically-informed game-playing agent significantly outperforms its language-unaware counterpart, yielding a 27% absolute improvement and winning over 78% of games when playing against the built-in AI of Civilization II.
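As a rough illustration of the Monte-Carlo search idea this abstract builds on (not the paper's actual model, which additionally conditions rollouts on relevant manual text), a minimal rollout-based action selector might look like this; the `simulate` function and `legal_actions`/`reward` interface are assumed names:

```python
import random

def rollout_value(state, action, simulate, steps=20):
    """Estimate an action's value: apply it, then act randomly
    for a fixed horizon and return the final reward."""
    s = simulate(state, action)
    for _ in range(steps):
        s = simulate(s, random.choice(s.legal_actions()))
    return s.reward()

def monte_carlo_action(state, simulate, n_rollouts=50):
    """Pick the action whose random rollouts score best on average."""
    scores = {}
    for a in state.legal_actions():
        samples = [rollout_value(state, a, simulate) for _ in range(n_rollouts)]
        scores[a] = sum(samples) / len(samples)
    return max(scores, key=scores.get)
```

The paper's contribution is, in effect, to replace the uniform `random.choice` rollout policy with one whose parameters are learned jointly with a text-relevance model from game feedback.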

3 0.71790206 327 acl-2011-Using Bilingual Parallel Corpora for Cross-Lingual Textual Entailment

Author: Yashar Mehdad ; Matteo Negri ; Marcello Federico

Abstract: This paper explores the use of bilingual parallel corpora as a source of lexical knowledge for cross-lingual textual entailment. We claim that, in spite of the inherent difficulties of the task, phrase tables extracted from parallel data allow to capture both lexical relations between single words, and contextual information useful for inference. We experiment with a phrasal matching method in order to: i) build a system portable across languages, and ii) evaluate the contribution of lexical knowledge in isolation, without interaction with other inference mechanisms. Results achieved on an English-Spanish corpus obtained from the RTE3 dataset support our claim, with an overall accuracy above average scores reported by RTE participants on monolingual data. Finally, we show that using parallel corpora to extract paraphrase tables reveals their potential also in the monolingual setting, improving the results achieved with other sources of lexical knowledge.

4 0.71768284 137 acl-2011-Fine-Grained Class Label Markup of Search Queries

Author: Joseph Reisinger ; Marius Pasca

Abstract: We develop a novel approach to the semantic analysis of short text segments and demonstrate its utility on a large corpus of Web search queries. Extracting meaning from short text segments is difficult as there is little semantic redundancy between terms; hence methods based on shallow semantic analysis may fail to accurately estimate meaning. Furthermore search queries lack explicit syntax often used to determine intent in question answering. In this paper we propose a hybrid model of semantic analysis combining explicit class-label extraction with a latent class PCFG. This class-label correlation (CLC) model admits a robust parallel approximation, allowing it to scale to large amounts of query data. We demonstrate its performance in terms of (1) its predicted label accuracy on polysemous queries and (2) its ability to accurately chunk queries into base constituents.

5 0.71622616 145 acl-2011-Good Seed Makes a Good Crop: Accelerating Active Learning Using Language Modeling

Author: Dmitriy Dligach ; Martha Palmer

Abstract: Active Learning (AL) is typically initialized with a small seed of examples selected randomly. However, when the distribution of classes in the data is skewed, some classes may be missed, resulting in a slow learning progress. Our contribution is twofold: (1) we show that an unsupervised language modeling based technique is effective in selecting rare class examples, and (2) we use this technique for seeding AL and demonstrate that it leads to a higher learning rate. The evaluation is conducted in the context of word sense disambiguation.
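The seeding trick this abstract describes can be sketched roughly as below: score each unlabeled example under a simple language model of the pool and seed AL with the least probable ones, on the assumption that unusual examples are more likely to belong to rare classes. The add-one smoothed unigram model is an illustrative simplification of the authors' setup:

```python
import math
from collections import Counter

def unigram_logprob(tokens, counts, total, vocab):
    """Average per-token log-probability under an add-one
    smoothed unigram model of the unlabeled pool."""
    lp = sum(math.log((counts[t] + 1) / (total + vocab)) for t in tokens)
    return lp / len(tokens)

def select_seed(pool, k):
    """Seed AL with the k token lists the LM finds least probable."""
    counts = Counter(t for ex in pool for t in ex)
    total, vocab = sum(counts.values()), len(counts)
    # ascending log-probability: rarest examples come first
    return sorted(pool, key=lambda ex: unigram_logprob(ex, counts, total, vocab))[:k]
```

With a skewed pool, the seed selected this way covers minority classes far more often than a uniformly random seed of the same size.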

6 0.71464539 161 acl-2011-Identifying Word Translations from Comparable Corpora Using Latent Topic Models

7 0.71364945 119 acl-2011-Evaluating the Impact of Coder Errors on Active Learning

8 0.71225262 282 acl-2011-Shift-Reduce CCG Parsing

9 0.71207881 5 acl-2011-A Comparison of Loopy Belief Propagation and Dual Decomposition for Integrated CCG Supertagging and Parsing

10 0.71065378 324 acl-2011-Unsupervised Semantic Role Induction via Split-Merge Clustering

11 0.71023792 133 acl-2011-Extracting Social Power Relationships from Natural Language

12 0.70798719 36 acl-2011-An Efficient Indexer for Large N-Gram Corpora

13 0.70790958 241 acl-2011-Parsing the Internal Structure of Words: A New Paradigm for Chinese Word Segmentation

14 0.70696199 190 acl-2011-Knowledge-Based Weak Supervision for Information Extraction of Overlapping Relations

15 0.7068969 170 acl-2011-In-domain Relation Discovery with Meta-constraints via Posterior Regularization

16 0.7068451 38 acl-2011-An Empirical Investigation of Discounting in Cross-Domain Language Models

17 0.706725 246 acl-2011-Piggyback: Using Search Engines for Robust Cross-Domain Named Entity Recognition

18 0.70586854 123 acl-2011-Exact Decoding of Syntactic Translation Models through Lagrangian Relaxation

19 0.70528305 300 acl-2011-The Surprising Variance in Shortest-Derivation Parsing

20 0.7049163 193 acl-2011-Language-independent compound splitting with morphological operations