My research centers on the study of meaning-in-language: what is meaning, how do languages encode meaning, and how can the grammars that encode meaning be learned from observed utterances? These are very broad questions that encompass both (i) language as the external signal that carries meaning and (ii) cognition as the ultimate producer and interpreter of language.

I am interested in Metaphor Theory and Construction Grammar because these are important intersections of language and cognition. I am interested in corpus-based computational modeling because it supports linguistic theories that are reproducible, falsifiable, and learnable. Such computational modeling involves both (i) the symbolic representation of metaphor and constructions and (ii) the statistical learning of such symbolic representations from corpora of written language.

I am also interested in using these learned symbolic representations to model social and regional dialects. Because language is characterized by externally conditioned variation, a further criterion for the success of Metaphor Theory and Construction Grammar is their ability to predict and model variations in usage.

My research centers on evidence produced by computational models that are learned from and evaluated on observed language. This paradigm has several epistemic advantages:
  •     The evidence gathered in this way is directly reproducible; most of the data and algorithms from my work are available at this site
  •     These models are directly falsifiable because they predict symbolic annotations that can be evaluated using existing introspective or psycholinguistic methods
  •     All use of introspection within these models is explicit because we can control what type and amount of knowledge each model starts with
  •     Such models can themselves be viewed as models of cognition, so that competing theories can be directly tested against one another on a single set of observed language
  •     Models can be tested on very large datasets, offering a further hurdle for hypotheses that appear valid on small datasets
Replicability, Falsifiability, and Learnability. I work on two phenomena where language and cognition are especially interdependent: Metaphor Theory and Construction Grammar. Linguistic theories need to be evaluated for replicability, falsifiability, and learnability; theories of metaphor and constructions have struggled with each of these standards, in large part because they depend on both language and cognition. For example, Conceptual Metaphor Theory (Lakoff & Johnson, 1999) depends on an analyst to determine which A IS B descriptions apply to a given linguistic expression; this mapping between observed language and conceptual representation is inherently not replicable. As another example, traditional approaches to Construction Grammar (i.e., CCxG; Goldberg, 2006) have no method for predicting the grammar of constructions present in a language or dataset; because these approaches do not make predictions about specific constructions, they are inherently not falsifiable. Further, because both conceptual metaphors and specific constructions depend on manual annotations, there is no metric for evaluating whether one representation is more learnable than another from observed language. This is what it means to say that current linguistic theories of metaphor and constructions do not meet standards for replicability, falsifiability, and learnability.

My work, on the other hand, has used corpus-based computational models to formulate theories of metaphor and constructions that are replicable, falsifiable, and learnable. For Metaphor Theory, replicability means that a theory should be able to produce the same set of representations or predictions regardless of the analyst involved. My work on metaphor identification (Dunn, 2013a, 2013b) does this by modeling which utterances are metaphoric and how metaphoric they are; these predicted metaphoricity values are directly replicable. A metaphor identification algorithm is also falsifiable so long as its predictions are based on generalizations. For example, if the algorithm simply finds paraphrases of a metaphor like “love is a journey”, it is not falsifiable because it is not making a hypothesis; it is just finding examples of specific metaphors. My work, in contrast, has provided hypotheses about the conceptual interactions that give rise to metaphoric language (Dunn, 2015a, 2015b), and these hypotheses, when implemented in a computational model, provide falsifiability. In short, falsifiability is not just a matter of identifying metaphors, but rather the identification of metaphors using generalizations that encode hypotheses about what metaphor is. Some of my work on metaphor has also been unsupervised, making predictions about levels of metaphoricity (Dunn, 2014a, 2014b) without reference to training data. This work provides learnability in the sense that human learners are also not provided with gold-standard examples of metaphor: rather than repeating input metaphor templates, this work asks what the essential properties of metaphor are.
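The idea of graded metaphoricity can be illustrated with a toy sketch. This is not the published algorithm: the abstractness lexicon, the values in it, and the gap-based scoring rule below are all hypothetical, invented only to show how an "X is Y" expression could receive a replicable, graded score rather than a binary label.

```python
# Toy sketch of graded metaphoricity scoring via abstractness mismatch.
# The lexicon values here are invented for illustration only.

# Hypothetical abstractness ratings in [0, 1]: higher = more abstract.
ABSTRACTNESS = {
    "love": 0.92, "journey": 0.35, "road": 0.15,
    "dog": 0.05, "animal": 0.40, "idea": 0.95,
}

def metaphoricity(topic: str, vehicle: str) -> float:
    """Score an 'X is Y' expression by the abstractness gap between
    the topic (X) and the vehicle (Y): a large gap suggests a
    cross-domain mapping rather than literal class inclusion."""
    a_topic = ABSTRACTNESS.get(topic, 0.5)      # 0.5 = unknown word
    a_vehicle = ABSTRACTNESS.get(vehicle, 0.5)
    return round(abs(a_topic - a_vehicle), 2)

# "love is a journey" receives a larger gap (more metaphoric)
# than "a dog is an animal" (more literal)
print(metaphoricity("love", "journey"))
print(metaphoricity("dog", "animal"))
```

Because the score is a deterministic function of the utterance and the lexicon, any analyst running the same model on the same data obtains the same metaphoricity values, which is the sense of replicability intended above.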

Construction Grammars (CxGs) are descriptive grammars containing sets of symbolic form-meaning mappings. The strength of CxG is its descriptive power: for example, it combines item-specific representations of idioms and usage-constraints with fully-schematic representations by incorporating multiple levels of abstraction. Its weakness, however, is that the learnability and falsifiability of these unconstrained representations are difficult to evaluate. Learnability here is the degree to which the optimum set of constructions can be consistently selected from the large set of possible constructions; falsifiability is the ability to make testable predictions about the constructions present in a dataset. Replicability is the degree to which the same grammar can be derived from the same data across many analysts and, in this context, is similar to learnability. Traditional introspective approaches to CxG have no procedure for (i) selecting one potential constructional representation over another or (ii) making testable predictions about the constructions present in the grammar of a particular language. My work uses an unsupervised construction grammar induction algorithm (Dunn, 2016) to formalize CxG sufficiently to allow learnability and falsifiability to be evaluated at the level of learning. Thus, given a discovery-device CxG and a set of observed utterances, its learnability is its stability over subsets of the data and its falsifiability is its ability to predict a grammar of constructions. Replicability follows from its computational nature: given the code and the data used, the same grammar will always be obtained.
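The stability-over-subsets evaluation can be sketched as follows. The "inducer" here is a deliberately crude stand-in (recurring bigrams), not the algorithm from Dunn (2016); the point is only the evaluation logic: induce a grammar from each subset of the data and measure the pairwise overlap between the resulting grammars.

```python
# Sketch: learnability as grammar stability over subsets of data.
from collections import Counter
from itertools import combinations

def induce_grammar(utterances, min_count=2):
    """Stand-in grammar inducer: the 'constructions' are simply
    word bigrams that recur at least min_count times."""
    counts = Counter()
    for utt in utterances:
        tokens = utt.split()
        counts.update(zip(tokens, tokens[1:]))
    return {bigram for bigram, c in counts.items() if c >= min_count}

def stability(subsets):
    """Mean pairwise Jaccard overlap between grammars induced from
    each subset: 1.0 = identical grammars, 0.0 = disjoint grammars."""
    grammars = [induce_grammar(s) for s in subsets]
    scores = [len(a & b) / len(a | b)
              for a, b in combinations(grammars, 2) if a | b]
    return sum(scores) / len(scores) if scores else 1.0

data = ["the dog barked", "the dog slept", "the cat slept",
        "the dog barked loudly", "the cat barked"]
overlapping_subsets = [data[:4], data[1:]]
print(round(stability(overlapping_subsets), 2))
```

A discovery-device grammar that changes drastically between overlapping samples of the same language scores low on this measure, which is exactly the sense in which stability operationalizes learnability.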

The point here is that both metaphor and constructions are linguistic manifestations of the intersection between language and cognition. I have been able to formulate theories (i.e., models) of these phenomena that are replicable, falsifiable, and learnable using corpus-based computational modeling. This is important for providing a rigorous evaluation of linguistic theories.

Symbolic Representations and Statistical Learning. My work combines symbolic representations with statistical learning. To see why this is important, we can contrast this with purely symbolic approaches that are falsifiable but not reproducible, on the one hand, and with purely statistical approaches that are reproducible but not fully falsifiable, on the other hand.

On the one hand, symbolic representations can be manually specified, as in Ontological Semantics (Nirenburg & Raskin, 2004) for meaning-in-language and Fluid Construction Grammar (Steels, 2012) for constructions. In both cases, the goal is to provide introspective analyses for all observed phenomena and then hand-code these introspective analyses into machine-tractable symbolic representations that provide the entry-point for computational modeling. There are two problems with this approach: First, it does not scale because it will never be possible to enumerate all relevant structures in all languages and dialects; even if it were feasible, these representations cannot be evaluated for learnability. Second, these approaches are inherently not reproducible because all output depends on a certain set of hand-coded inputs; different analysts will produce different representations. Thus, these sorts of models are not learnable and not reproducible, two of the main motivations for using computational models in the first place.

On the other hand, we can contrast this with purely statistical learning methods, for example combining distributional semantics with deep neural networks or bag-of-words features with non-linear SVMs. In this case we only have access to the predictions of the model; there is no way to interpret the model’s internal representations or to determine why the model made one prediction over another (i.e., which input features drive a given decision). The problem with these purely statistical approaches is that, although they provide learnability and replicability, they are not falsifiable. Making predictions is one part of falsifiability, but the second part is to connect those predictions with specific hypotheses. For language, hypotheses almost always involve some sort of inspectable symbolic representation. Thus, black-box models are not falsifiable; or, at least, we cannot learn anything meaningful in linguistic terms by trying to falsify them.

My work rejects this dichotomy by using statistical methods for learning symbolic representations (e.g., Dunn, 2016). This supports computational models that can be evaluated for learnability (i.e., whether they can be learned from observed language) as well as falsifiability (i.e., model-internal representations count as predicted analyses) and replicability (i.e., producing the same output given the same input). This is important because we need the explanatory and descriptive adequacy of symbolic representations (qualities which both Ontological Semantics and Fluid Construction Grammar have), but we need it alongside the learnability and replicability of statistical models. My work has shown that it is possible to satisfy both sets of requirements in a single model.

Modeling Linguistic Variation. Language is subject to externally conditioned variation; this means that a single generalized model of either metaphor or constructions will not be adequate to describe observed usage. My work has taken a text-classification approach to modeling both social dialects (Dunn et al., 2015) and regional dialects (Dunn, In Review). This approach takes meta-data about the source of observed language (e.g., its dialect region) and uses a supervised classifier to identify which learned symbolic representations are predictive of each region. Representations that are used similarly across social and regional dialects have no predictive value. On the other hand, representations whose usage varies across dialects do have predictive value, which indicates that they are externally conditioned. Thus, feature weights from learned models allow us to identify dialect-specific variants in the high-dimensional feature space of learned symbolic representations. This work is important because a truly adequate linguistic theory should also be able to describe variations in the usage of the phenomenon in question. My recent work, for example, has shown that learned construction grammars can be used to accurately predict usage across regional dialects.
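The logic of using feature weights to surface dialect-specific variants can be sketched with a minimal, dependency-free stand-in. The published work uses a supervised classifier's learned weights; here a simple relative-frequency contrast between two invented regional corpora plays that role, and the construction identifiers are hypothetical labels, not actual learned constructions.

```python
# Sketch: identifying dialect-predictive features by contrasting
# relative frequencies of construction usage across two regions.
# (A stand-in for classifier feature weights; data is invented.)
from collections import Counter

def feature_contrast(region_a, region_b):
    """Return {feature: contrast}, the difference in relative
    frequency between the two regions. Near 0 = used similarly
    (no predictive value); large |value| = dialect-specific."""
    ca, cb = Counter(region_a), Counter(region_b)
    na, nb = sum(ca.values()), sum(cb.values())
    features = set(ca) | set(cb)
    return {f: round(ca[f] / na - cb[f] / nb, 3) for f in features}

# Invented construction IDs, one occurrence list per region
region_a = ["cxn_ditransitive"] * 5 + ["cxn_way"] * 1 + ["cxn_resultative"] * 4
region_b = ["cxn_ditransitive"] * 2 + ["cxn_way"] * 4 + ["cxn_resultative"] * 4

weights = feature_contrast(region_a, region_b)
# Sort by magnitude: the most dialect-specific variants come first
for feat, w in sorted(weights.items(), key=lambda kv: -abs(kv[1])):
    print(feat, w)
```

In the full high-dimensional setting the same interpretation applies: the classifier's zero-weighted features are shared across dialects, while heavily weighted features mark externally conditioned variants.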

Works Cited

Dunn, Jonathan. (2013a). "Evaluating the Premises and Results of Four Metaphor Identification Systems." In Proceedings of the Conference on Intelligent Text Processing and Computational Linguistics, Vol. 1 [CICLING 2013]. Heidelberg: Springer. 471-486.

Dunn, Jonathan. (2013b). "What Metaphor Identification Systems Can Tell Us About Metaphor-in-Language." In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: First Workshop on Metaphor in NLP [NAACL 2013]. Stroudsburg, PA: Association for Computational Linguistics. 1-10.

Dunn, Jonathan. (2014a). "Measuring Metaphoricity." In Proceedings of the Annual Meeting of the Association for Computational Linguistics, Vol. 2 [ACL 2014]. Stroudsburg, PA: Association for Computational Linguistics. 745-751.

Dunn, Jonathan. (2014b). "Multi-Dimensional Abstractness in Cross-Domain Mappings." In Proceedings of the Annual Meeting of the Association for Computational Linguistics: Second Workshop on Metaphor in NLP [ACL 2014]. Stroudsburg, PA: Association for Computational Linguistics. 27-32.

Dunn, Jonathan. (2015a). "Three Types of Metaphoric Utterances That Can Synthesize Theories of Metaphor." Metaphor & Symbol, 30(1): 1-23.

Dunn, Jonathan. (2015b). "Modeling Abstractness and Metaphoricity." Metaphor & Symbol, 30(4): 259-289.

Dunn, Jonathan; Argamon, Shlomo; Rasooli, Amin; & Kumar, Geet. (2015). "Profile-Based Authorship Analysis." Literary and Linguistic Computing, 31. 22 pages.

Dunn, Jonathan. (2016). "Computational Learning of Construction Grammars." Language and Cognition.

Goldberg, A. (2006). Constructions at work: The nature of generalization in language. Oxford: Oxford University Press.

Lakoff, G. & Johnson, M. (1999). Philosophy in the Flesh. NY: Basic Books.

Nirenburg, S. & Raskin, V. (2004). Ontological Semantics. Cambridge, MA: MIT Press.

Steels, L. (2012). "Design methods for fluid construction grammar." In Steels, L. (ed), Computational Issues in Fluid Construction Grammar. Berlin: Springer. 3-36.