Who thinks that they can subdue Leviathan? Strength resides in its neck; dismay goes before it. It is without fear. It looks down on all who are haughty; it is king over all who are proud. These words inspired PJ Wehry to create Chasing Leviathan. Chasing Leviathan was born out of two ideals: that truth is worth pursuing but will never be subjugated, and that the discipline of listening is one of the most important habits anyone can develop. Every episode is a dialogue, a journey into the depths of a meaningful question explored through the lens of personal experience or professional expertise.
PJ Wehry:
Hello and welcome to Chasing Leviathan. I'm your host, PJ Wehry, and I'm here today with Dr. Matt Traxler, professor of psychology at UC Davis. We're talking about his book, Introduction to Psycholinguistics: Understanding Language Science. Matt, wonderful to have you on today. And to orient our audience, tell us a little: what is psycholinguistics?
Matt:
Great opening question. So, psycholinguistics is a multidisciplinary, interdisciplinary effort. At its core, we're really trying to understand how humans produce language, how humans understand language, and how, starting from zero, children are able to acquire languages without any kind of special training or intervention. And there are different perspectives on those core questions. I'm a psychologist, trained at the University of Minnesota, got my PhD at the University of Oregon, so I'm coming at it from a cognitive psychology, cognitive science background. We're interested in information processing and representations of information and how the mind manipulates those things in pursuit of some task. I spend a lot of time talking to linguists. They have a different orientation. They're also interested in cognition and psychology, but they focus, this could be an overgeneralization, but they focus more on formal descriptions of languages. What are the components of languages? How do those things fit together? How do different kinds of languages express meaning? We talk a lot to philosophers. We have an interdisciplinary cognitive science program here where philosophers actually spearheaded that effort, so we're interested in what philosophers have to say about language. Computer scientists are involved, developmental specialists. There's a variety of disciplines involved, and computer science, can't forget computer science. But at the core of it is just how we have this unique ability as humans to communicate using language. Why is that? What are the cognitive structures and processes that are involved? How can we use computer science methods to answer interesting questions about human language? So that's it at its core. And I think of the overall effort this way: psycholinguistics is a piece of it, using psychological techniques and perspectives to understand language. But language science, I think, is a term that I prefer, even though my publisher doesn't, because I think language science helps bring together people from different perspectives without saying you have to be interested in the psych part of it. You can just be interested in language. So if I understand the question right, it's sort of: how do we tackle these questions about language, and what is the connection between the data collection side and the theory-building side? And that's a long conversation in and of itself. But one of the things that I've observed in my 30 or so years doing the work is that the methods have shifted quite a bit. When I started out, there was very little neuroscience in the psycholinguistics field. We had ERPs, that's brainwave measurements, electrophysiology. They were pretty well developed, but not as developed as they are currently. But neuroimaging really was in its bare infancy. It was very difficult to get access to equipment; there weren't very many sites. Back then we were using positron emission tomography, which uses radioactive tracers, and again, the availability of that research method was very, very limited. And then we started to get the magnetic resonance imaging revolution underway. So now, in the last 10, 15 years or so, there's been an acceleration of using neuroimaging methods to study language. One of my very good friends, Evelina Fedorenko from the Massachusetts Institute of Technology, has done some great work mapping out what she calls the language network.
Not everyone agrees with her about exactly which parts of the brain participate in language, but she's using very sophisticated neuroimaging methods that I could not have used in the 1990s when I was in graduate school. So in terms of methodology, there have been lots of exciting developments in the techniques we can use. We can image the living brain, and we can assess the electrical activity of living brains much better than we could before. So that's a very, very active area of research. In terms of how that ties to theory, again, it's a very diverse field, and there are people who focus on large questions and people who focus more narrowly on specific aspects of language function. But I've always kind of been a follower of Karl Popper. In cognitive psychology and psycholinguistics, we can't directly observe many of the mental events that we think are interesting or important in how we produce language and how we understand language. So we're effectively trying to look at stuff that's invisible, and we have to be very careful in our theorizing to do Popperian science, right? Karl Popper said, among other things, that your theories have to be able to make specific predictions about future observations, and that, I think, is the secret sauce that unlocks a lot of cognition. Because if all of the important stuff is invisible, you can sometimes persuade yourself that things are happening even though they're not, especially if your theorizing is loose and you're not making real specific predictions, and if you're not rigorous in your statistics; the way we analyze our data statistically has huge implications for the conclusions we draw. Anyway, long story short, the connections between the theory and the observations are super, super important, and there's a real push in the field to be more rigorous in how we draw those connections. That has impacts on how we construct our samples and how we do our statistical analyses, and I think it helps improve the quality of our conclusions. And that shows up on different dimensions.
PJ Wehry:
And I might be speaking out of turn here, so feel free to correct me. One, thank you for taking my question and making it better. I appreciate that, Matt. But from the little reading that I've done in psychology, one of the reasons that Freud has become unpopular is because his theories are not falsifiable, right?
Matt:
Correct.
PJ Wehry:
It's like
Matt:
Yep.
PJ Wehry:
when he predicts stuff, what happens is he just reinterprets the event one way or the other, and no matter what, his interpretations cover everything.
Matt:
Right.
PJ Wehry:
But it's no use as a predictive model, right? And so that'd be a good example of what
Matt:
Absolutely,
PJ Wehry:
you're not looking for.
Matt:
exactly. So the classic example, or one of the classic examples, is the concept of denial. There's a very famous case of Freud's called the Rat Man. The Rat Man had terrible anxiety, verging on panic attacks. He had this repeated nightmare, a night terror; he would wake up in a cold sweat because he thought rats were tunneling into his body, kind of like that scene at the end of 1984 by George Orwell. So they did Freudian psychodynamic analysis on this fellow for an extended period of time. They discovered to their satisfaction what his hidden conflicts were, and then they did the big psychodynamic reveal: hey, buddy, here's the source, the ultimate source; the hidden conflict in your unconscious is whatever it was. Hey, you should feel much better now. The guy continued to have all these nightmares. His problems were in no way solved by the psychodynamic efforts. A real rigorous scientist would then say, well, we did the recipe, we followed what we were supposed to do according to the theory, and it didn't work, so what's wrong with my theory? And go back and revise. Freud doesn't make that move. Freud says it's the patient's fault. The patient is in denial. So if the patient gets better, that's good for the theory. If the patient doesn't get better, that's also good for the theory, because he's in denial. It's a game of heads I win, tails you lose. And that's what we're trying to avoid. We face similar challenges as cognitive psychologists, because, again, not dissimilar to Freud, we are giving explanatory power to things we can't see. So we have to be very careful in what Mike Tanenhaus used to call our linking hypotheses. Something invisible is happening; what are the concrete, predictable, objective events you're going to observe as evidence of that hidden activity? And if your observations don't turn out as predicted, you shouldn't be coming up with excuses. You have to go back to the theory and say, well, I guess I was wrong about those hidden, invisible events. Let me come up with a better framework, a better theory, generate some new hypotheses from that theory that are specific and testable, and then go make the observations. That's what we're trying to do as cognitive scientists and as psycholinguists. Because again, we're talking about memory systems, attention systems, information-processing flow, representations of language components that we can't see. So we've got to be careful. And sometimes we have to be humble. Like, okay, I have this theory of how sentences are processed and understood; I did some experiments; they didn't work out the way I planned. I have to be humble and go back to the theory and say, oops, let me see if I can come up with a better concept. We have some advantages that are not available in some aspects of social science. So what you're talking about is what we call demand characteristics. If somebody knows they're being observed, they may very well change their behavior. There are very famous examples of this from industrial-organizational psychology, like the Hawthorne effect. This dude back in the 1920s went into factories, and he was interested in productivity and the factors that would make workers produce more with given inputs. He announced this study ahead of time, and then he started playing with things like lighting levels and sound. He did all kinds of manipulations to the factory. No matter what he did, productivity went up.
There was one version of the experiment where he turned the lights down to almost zero. Like you could barely see what was in front of your face. And it didn't matter. People in the factory produced more stuff anyway because they knew they were being watched. So that's called the Hawthorne effect, and it's something we worry about as cognitive psychologists and social science in general. We have some advantages in the language science field because, and again, it's not that we have to pay zero attention to demand characteristics, we still worry about it, but the events, the mental events that we're looking at happen over really short time scales. You know, 100. 150 milliseconds, one-tenth of one second. So we don't have to worry as much about whether people are consciously trying to change their response. There's brainwave patterns that are going to happen, whether you want them to happen or not. So those kinds of mental events, we really don't have to worry about the demand characteristics too much. We do in many. Psycholinguistic experiments have to worry about special strategies. For example, in reading research, there are different modes of reading that we can decide to engage in. We can decide to really, really pay close attention and really read very, very carefully, and that will produce a given pattern of eye movements during reading. It will produce certain comprehension outcomes. Or we can decide to just skim through things. and just kind of pick up a few words here and there and get the gist. Those two modes of reading have very different consequences for what we observe when we ask people to read stuff. And a lot of the work in my lab and other people's labs really involves people reading things. We measure their eye movements and we look at patterns of eye movements to try to figure out what's happening in terms of cognitive processes. And yes, if we instruct our subjects in certain ways, or our participants in certain ways, that could impact their response. And that may influence the conclusions we can draw, or the inferences we draw from what we observe. So yeah, we're lucky in a sense that things happen so fast and so automatically in a lot of language processes that we don't have to worry as much about participant strategies or demand characteristics. But sometimes we do. And sometimes we're not aware of what strategic characteristics or strategic processing considerations might, how those might influence people's responses. Ooh.
PJ Wehry:
No pressure.
Matt:
You know, it's funny. We were having a meeting the other day, just a couple months ago. We have a language group that meets here, and we review studies that people have run, and we talk about different aspects of language. And one of our faculty members was talking in detail about different subcomponents of language and how they might influence one another in a given processing environment, and I was sitting there thinking: we don't really know what language is. We have formal descriptions of components of language. There are phonetic features in speech, and those go into making phonemes, and phonemes go into making syllables and morphemes and words, and we can combine those things into bigger units like phrases and clauses and sentences, and then people can have dialogues and discourse. So at one level, there's general consensus about what the components of language are in terms of the linguistic pieces. And there's a hierarchy. There are more fundamental, basic pieces, and then you put those little language atoms together and you get language molecules, and you put the molecules together and you get compounds, and you build your way up the complexity tree. But at a deep, deep philosophical level, I'm not 100% sure that we're describing the right components, or whether the way we view those components is kind of a byproduct of our specific way of observing and thinking about things. That's a very deep philosophical question. There's another really interesting area of research that's more or less comparative communication research. And one of the fundamental questions those folks are asking is: what is language, and how do the human language system and its abilities differ from other animal communication systems? There are various ideas about this. So Steve Pinker, beautiful writer, very prominent theoretician in language: Pinker and folks like Ray Jackendoff from Tufts University will argue that there are a few core components of language, and they include things like grammars. And there's a special property of human language grammars that differs from other communication systems, and that's this notion of recursion. So the argument is that being able to place a representation inside another representation of the same type, the mental operation of recursion, is the core component of language that differentiates it from the communication systems we share with our near relatives, other primates. And then there are other folks who say, well, no, it's more than just recursion, and it's definitely not grammar per se. Because if you go to animal communication systems, like Diana monkeys or rhesus macaques, or animals that have been taught to use gesture to communicate, there's a grammar that those animals follow. It's not as systematic or uniform as human language grammars are, but it's a grammar. There are certain components that appear in certain orders according to grammatical rules. And again, it's not 100% systematic. If you go back to Nim Chimpsky, who was one of the first chimps, primates, that was taught how to use gesture to communicate, Nim would produce gestures in some sequences more often than he would produce gestures in a different sequence. That suggests there's an underlying rule governing those behaviors; that's why some sequences are more frequent than others. But that is nothing like the systematicity that you would see even in a three-year-old child.
So there are grammars, we think, that govern primate gestural communication. And I am reasonably, well, I'm not even reasonably confident, I'm 100% certain, and I'm not even going to be diplomatic to my friends who do primate communication research: it's not the same thing. The human language system is more complex, it's far more systematic, and it develops without concrete rewards. There's a little bit of research from Sue Savage-Rumbaugh's lab on bonobos showing that bonobos can pick up gestural communication systems by observation, similar to the way human children pick up human languages by observation, without explicit instruction, without reward or punishment. But the degree of systematicity that those very wonderful and complex primates exhibit is just not on the same order as what humans do. So I'm convinced, as a cognitive scientist: they're both communication systems, but one of them is a true language and one is not. And the precise details of what makes a human language different from an animal grammar-driven communication system, that is a topic of open philosophical discussion. The discussion focuses on what the grammars look like. One of the ideas is that animal communication system grammars don't have recursion. They have sequencing, but they don't have the ability to put units of a given type within other units of the same type. So, tail recursion is really easy. If I say, "Tom likes beans," that's a complete sentence in and of itself. I can crack that thing open and drop it inside another sentence-sized idea. So "Tom likes beans" is a sentence; I can say, "PJ thinks Tom likes beans," and now "Tom likes beans" is tail-recursed into the bigger unit. And I can do this to infinite depth. I can say, "Matt knows that PJ thinks that Dave believes that Sue remembered that Tom reviewed..." blah, blah, blah, to infinite depth, so infinite recursion. Then there are center-embedded structures, and in humans, we think because of working memory limitations, you can only really embed about two levels deep before things get really hairy. My good friend Ted Gibson at MIT has studied this for years and years. A center-embedded structure could be something like "the rat that the cat chased hid under the porch." That's a double embedding: you have a center-embedded structure within the bigger structure. When you try to go to three levels, it gets really difficult to process and understand really quickly. So "the rat the cat the horse bit chased died" is a perfectly grammatical sentence, but it's almost unparsable for real human beings, even if you're super, super fluent in English. So recursion is a thing we have, and in linguistic theory we can embed to infinite depth; there's no upper limit to the number of embeddings, or recursions, we can have. In practical terms, center embeddings get really, really difficult after you get beyond about two layers. Tail recursion is a little bit easier, but why would you ever need tail recursion with 20 embeddings?
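To make the tail-versus-center contrast concrete, here is a minimal sketch in Python (an illustrative toy; the sentence fragments are invented examples, not anything from Traxler's research):

```python
# Toy illustration of the two embedding patterns discussed above.

def tail_embed(core, frames):
    """Tail recursion: each new clause wraps the previous sentence at its
    left edge, so the embedded material always sits at the end."""
    sentence = core
    for frame in frames:
        sentence = f"{frame} that {sentence}"
    return sentence

def center_embed(nouns, verbs):
    """Center embedding: each relative clause interrupts the clause above it.
    nouns[i] is the subject of verbs[i]; the verbs all pile up at the end,
    which is what taxes working memory past about two levels."""
    return " that ".join(nouns) + " " + " ".join(reversed(verbs))

print(tail_embed("Tom likes beans",
                 ["PJ thinks", "Matt knows", "Sue remembered"]))
# Sue remembered that Matt knows that PJ thinks that Tom likes beans
# (three levels deep, still easy to read)

print(center_embed(["the rat", "the cat", "the horse"],
                   ["died", "chased", "bit"]))
# the rat that the cat that the horse bit chased died
# (grammatical, but nearly unparsable for human readers)
```

Both functions can embed to arbitrary depth, mirroring the claim that the grammar itself imposes no upper limit; the practical difference is that only the center-embedded output becomes unreadable past about two levels.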
PJ Wehry:
five to eight of those before your brain is just like, that's enough for you, time to go to bed, you know?
Matt:
100%.
PJ Wehry:
And.
Matt:
So we have, hypothetically, this infinite recursion ability, which we don't exercise. And one of the live cognitive science questions, or psycholinguistic questions, is: why is that? So a lot of people are interested in working memory limitations. There are people who study working memory who aren't really terribly interested in language, there are people who study language who aren't terribly interested in working memory, and then there are people who like to bring those two aspects of cognition together. And it's useful in explaining things like, okay, you have these triple embeddings. They do occur naturally; it's not an extinct species. But they're rare, and when we run into them, they tend to be difficult to deal with. Why is that? Well, there's a whole line of working memory research that specifies: here's the system in which language is processed. That's a whole set of cognitive theory, and we can test it in various ways. There are lots of people who've done working memory studies, where different people perform differently on working memory tests. So you test their working memory, you put them in a language-processing environment, and then sometimes we observe differences in language processing and comprehension outcomes based on people's working memory capacities.
PJ Wehry:
Yeah, I mean, and I think we know that at a fundamental level too. It's obvious that some people have better memories than others and are a little quicker to catch on to things, right? That's like
Matt:
Yep.
PJ Wehry:
a basic test of intelligence. One, and I don't know if it's a good or bad thing now that I'm thinking about it, I have been working for a while now through The Language Animal by Charles Taylor. So when you talk about that philosophical problem: are we even looking at the problem of language through the correct lens?
Matt:
Yep.
PJ Wehry:
That comes to mind, but it's probably good
Matt:
Yep.
PJ Wehry:
that I haven't finished it because that would probably hijack the interview
Matt:
Okay. I haven't read it, but I better go get a copy.
PJ Wehry:
Well, it's basically this idea of: do we build from the ground up, or do we think about it as a whole? Is there a qualitative leap, right? And I think even as you're talking there, that's kind of the core of the question. And I don't know the answer, because I haven't gotten to the end of the book, of course. I'm sure I'll...

Matt:
Well, PJ, nobody knows the answer, and if somebody tells you they do, they're not giving you the straight story.

PJ Wehry:
Right, right.

Matt:
There's, again, there's an evolutionary psychology branch, and there's a piece of that that's in language science, Derek Bickerton, folks like that. And we have some living language models that may help illustrate what happened over evolutionary time as we went from pre-human communication systems to Homo sapiens. The available evidence suggests that Homo sapiens emerged before language did, by maybe 100,000, 150,000 years. There was some weird event about 75,000 years ago, this cultural revolution, and there are lots of serious philosophers of language and evolutionary psychologists who think that those two things somehow went together. Who knows: was it the ability to control fire? Was that the key ingredient? Was there some random mutation in some population, probably somewhere in Africa, that created this quantum leap in computing power, so that all of a sudden we go from rudimentary, pidgin-like communication systems to full-blown, fully modern human language ability? Nobody knows for sure how that rolled out. There are some very interesting clues in the record of bones suggesting that there were some changes in Homo sapiens versus pre-runner species in terms of the nerve tracts that... Anyway, it's a fascinating area of study. I review it a little bit because I talk about it from time to time; it's certainly not my core area of expertise, but there are some super, super interesting hypotheses. And it is very interesting to think about whether a modern language could emerge just by sort of baby steps from some previous communication system, or was there this quantum leap? And for the folks who kind of correlate the language change with the cultural change, based on the digs, the archaeological record, there's a case you can make for a quantum leap, right? Was it just coincidental, or did something really important cognitively happen about 50 to 75,000 years ago? Yes. Developmental language disorder.
PJ Wehry:
this idea of like children not catching on to the grammar.
Matt:
Yes.
PJ Wehry:
Can you talk a little bit about that? Because it seems like there'd be a link there. I have five kids, so
Matt:
Uh... Heheheheh...
PJ Wehry:
maybe I shouldn't call them monkeys, but sometimes it feels that way.
Matt:
Sure.
PJ Wehry:
But can you speak to that a little bit?
Matt:
So developmental language disorder is an area of inquiry that, back in the day when I was an undergraduate, would have been called specific language impairment, or SLI. And the idea was we have individuals whose cognitive systems are perfectly intact. Their general intellectual functioning, their social function, everything is okay except for this one area of function: language. Hence, specific language impairment. And there are years of research that went into trying to figure out why some children do not master the adult grammar the way 90, 95 percent of children do, without any special intervention of any kind. There's been a change in the way the condition is described and in the cognitive theories that go into explaining how and why it happens. So, if you've read about language, you've probably read a little bit about universal grammar and the notion that...

PJ Wehry:
Chomsky was maybe in my notes, yeah.
Matt:
Yeah, so, Chomsky's that dude. He's dialed in on a lot of things, a huge figure in the cognitive revolution, all respect to Professor Chomsky. Anyway, there's been a lot of theory that's come out of the Chomskyan tradition that's based on universal grammar. And if we have a universal grammar that's based on a genetically inherited language learning device, why is it, in that framework, that some children are not acquiring language in the same way that other children are, given the same kind of exposure? One of the obvious potential suspects is, well, maybe there's a genetic difference. So there are children out there who make systematic errors in the way they produce certain kinds of sentences involving certain grammatical components. Past tense forms of verbs are a suspect; possessive -s versus plural -s is a suspect. There are certain little aspects of grammar that these children don't appear to master. And following from the Chomskyan tradition, there's this notion of an extended optional infinitive period. Infinitives are a component of a grammar; infinitive forms of verbs are a specific flavor of verb. And it's those infinitive components that children with developmental language disorder seem to struggle with. So why is that? Well, if you are a believer in universal grammar and a language learning device, you might think that, for the 7% or so of children who are showing that pattern, maybe there's some genetic component that differs between those children and children who are typically developing. And then about, I don't know, 20, 25 years ago or so, this researcher Gopnik published papers about the K.E. family over in the UK. And the K.E. family had a pretty clear pattern of transmission. There were grandparents, parents, and children. Some members of that family had the characteristic signs of specific language impairment, or DLD, developmental language disorder. Other members of that family did not have that pattern; they were typically developing with regard to language. Then somebody went in, did some DNA testing, and found that, hey, there's this FOXP2 gene, and there's this allele that is different in the affected members of that KE family compared to the typically developing members. Everybody lost their minds.

PJ Wehry:
We found it!

Matt:
You've got papers in the New York Times. We got the language gene. Here it is. Boom, we're done. And then, and this is the beauty of science, something splashy happens, and people go, well, is that really how things work? Let's think up some alternatives and maybe test those out. So people came along following up on the study, and now, as I understand it, the consensus view is: there are families that have similar patterns of transmission of DLD to the original KE family, but the way the FOXP2 alleles play out in those families is different. So the KE family might actually be an outlier with regard to the relationship between the presence versus absence of different FOXP2 alleles and the presence versus absence of DLD. And also, guess what, folks? There is no language gene. We've got zillions of these things. The brain is a very complex instrument. The function and the integrity and the quality of the brain, especially with regard to language, does not boil down to a specific allele on one specific gene. Life is not that simple, even though sometimes we wish it was. So there have been further studies of FOXP2, and we don't think it's a language gene. Again, I'm not a geneticist; I'm not an expert on this. I read a little bit about the work because I find it interesting. And my understanding of the genetics work is that FOXP2, if it does have some specific effects, is probably having specific effects on motor control and motor sequencing, and obviously motor control and motor sequencing are a super important set of functions relating to speech production. So if FOXP2 really is specific, it might be specific in that area of function, not specific to whether you can learn the past tense forms of particular verbs, or whether you're stuck using infinitives the whole time. And, right, we also apparently have to worry about epigenetics, which is how our experience and our own activity can flip different genes on versus off. When I used to work out more, yeah, there were genes that were getting activated by that activity. I didn't get a new set of genes; I just epigenetically flipped some things on that hadn't previously been flipped on. So again, there are people who are very interested in how genetics contributes to neural systems, both the morphology, the physical structure, and the way those systems are connected, the connectivity. And I think the ultimate answer to the ultimate question of language, how and why we have the language abilities that we have, will absolutely have to reference the fact that human brains are different from other kinds of animal brains. That's a function of genetics, so that's a piece of the answer. We have brains that do language in ways that other animal brains don't; okay, what are the genetic contributions to those differences? But even if you believe that language is innate, and a lot of people do, or that language ability is innate, even if you believe there's a genetically programmed language acquisition device, you still don't have to believe in a language gene. You can believe in complex interactions between different genes, and you can believe in interactions between experience and the activation versus deactivation of genes. And the current state of play is, I don't think we have a very good grasp of that as a field. That's an area where maybe some smart young people will come along and figure things out better than we've been able to to this point.
Yeah, again, one of my favorite philosophers of science and psychology is this fellow called Keith Stanovich, who was a reading researcher down at the University of Texas back in the day, and he wrote a book that's been updated many times since I read it, called How to Think Straight About Psychology. He brings in aspects of Popperian philosophy of science, and he also spends a lot of time talking about how science progresses. We have this idea, as kind of casual observers, that science proceeds because of these huge revolutions, that there's this one sole operator in a lab somewhere, and the light bulb comes on, and that person does the crucial experiment and goes, eureka, I have found it. Science doesn't work like that. Science, 99.9% of the time, proceeds by incremental change, and it doesn't come out of a vacuum. Occasionally, and Popper talks about this, Stanovich talks about this, you do get paradigm shifts as new developments take place, oftentimes in technology, but sometimes just in terms of theoretical perspective. There's been a series of those in psychology. We started with philosophy; then Wundt established his lab and said, let's not just be armchair people, let's go look at stuff. That's the conversion of the discipline as a whole from a philosophy to something that's more like a science. And then we did armchair introspection as our mode of observation for decades in psychology, and people finally said: just because some old dude sitting in his armchair in Leipzig has this thought about his own mental experience, that's not that interesting. That's not decisive. So the behaviorists came along and said, this needs to stop. We can't have these invisible ghosts in our heads and think that those are going to explain stuff. So instead, let's just focus on the concrete. Let's just look at contexts, responses, and consequences. And again, for about 30, 40 years, that's all anybody did, especially in America. Rat labs got built all over the place; pigeon labs got built everywhere. And then people like our good friend Noam Chomsky came along, along with guys like Coslin in the 1950s, and Al Bandura in the 60s; various people looked at what the behaviorists were saying in detail. And they said: this works for explaining why an animal learns to do a new trick, or how we can create environments that will make children act better in schools, but it doesn't explain how language works. That was Chomsky's contribution. It doesn't explain how children learn from just observing models. That was Al Bandura's contribution. So again, about 1956 or so, there was a paradigm shift. We had the cognitive revolution, and off we go. Now we've had a neuroscience revolution, where we can look in the brain and assess our cognitive processing theories by looking at brain activity. That's been a paradigm shift. But typically it's incremental. We do one little experiment at a time, we answer pretty narrowly crafted questions, and when we add all that stuff up over time, yeah, we might know some things better now than we did 20 years ago.
PJ Wehry:
Human form of language, right? Things like metaphor, things like dialogue.

Matt:
Yep.

PJ Wehry:
Can you talk a little bit about that, especially dialogue? And I've already learned so much, so, one, thank you. But I'm gonna walk down the stairs here and my wife's gonna ask me what I learned, and I don't think she wants to hear me recap this for 20 minutes.
Matt:
Okay.
PJ Wehry:
So how do you say it? Is it Grecian? How do you say the maxims?
Matt:
H. Paul Grice. So
PJ Wehry:
Grice
Matt:
I pronounce it Gricean: the Gricean maxims, from the Berkeley philosopher of language H. Paul Grice.
PJ Wehry:
See, that's perfect, because then my wife's like, what'd you learn? And I'm like, I learned how to say Gricean. That's very nice and concise. But what are the Gricean maxims, and how do they help us understand dialogue?
Matt:
So, Grice was this philosopher of language, and he was interested in explaining why people behave the way they behave when they're interacting with one another in conversation. I think about Grice's concepts as sort of an idealized version of a dialogue. He starts out with the notion that we cooperate with one another to exchange information. So why do you talk to anybody? Grice says it's because you have some information that you want to share with me, and so we're going to engage in a cooperative activity such that the contents of your mind can be transferred, and now I have new contents in my mind. So the cooperative principle is the bedrock where Grice starts. We're not just randomly interacting; both of us have the same objective. We want to share information, and we are going to cooperate to make that happen. That's the cooperative principle. So information exchange is the objective, and cooperation is the high-level means of that exchange. And then, in detail, how do we cooperate? What are the ingredients that go into that cooperation recipe? He has a whole list of them; there's a bunch of Gricean maxims. There's the maxim of quality: when we are cooperating in an information exchange, I should be telling you the truth, not lying to you about stuff, not making things up. That's the maxim of quality. There's the maxim of quantity, which, as professors, we violate all the time: I should be saying just as much as I need to so you can understand what I'm trying to communicate, and then I should stop. I shouldn't just keep going and repeating the same thing, or, well, you get the idea. There's the maxim of relation: if we are talking about a given subject, like language, I should stay on topic. I should continue to make my contributions relevant to that topic, or I should say, you know, let's change gears and maybe talk about the drought, right? There's the maxim of manner, and really there's a bunch of other Gricean maxims. To a first approximation, they do a decent job of explaining why we do some things in dialogue and not other things. But what folks like Herb Clark, a Stanford University faculty member, and his students found is that when we look at how people actually operate in dialogue, we are frequent violators of Gricean maxims, and there are oftentimes good reasons for that. So Grice, again, was more in kind of the philosophy mode: how should this stuff work? What are the necessary components of a conversational exchange of information? And it's accurate in that sense. We do cooperate when we communicate, we absolutely do. To a greater or lesser extent, we keep our expressions true; to a greater or lesser extent, we provide just enough signal so that the intended message can be received. But we do a lot more than that in dialogue. Herb Clark did some wonderful studies on back-channel communication, which Grice just doesn't worry about. When we're face to face, or when we're across the Zoom box, as I'm talking I'm simultaneously monitoring you for signals that you've understood. As a listener, you are doing the metacognition, monitoring how well you think you're following along, and you will be providing me with feedback that either says, hey, stop, I got it, or, wait a minute, that wasn't clear, can you please elaborate, or, okay, I got it, now let me make my contribution. So Grice was an interesting cat. We still teach him. He's still relevant, but he needs to be understood in the greater context.
And the contribution is: yeah, this is a great description of the formal properties of dialogue. But when we actually go and look at how people interact in conversational tasks, we violate Gricean maxims all the time, and somehow the system as a whole still does what it needs to do.
PJ Wehry:
I will say, as you were talking, the limits on recursion that we talked about earlier were starting to kick in. I was sitting here, and my brain started working overdrive, because we're talking about me giving you signals that I'm understanding what you're talking about, about what we're talking about.
Matt:
Uh-huh. Yep. And those signals really do have an impact. So, Grice was really only concerned about the verbal channel. And the verbal channel obviously is super, super critical to dialogue and conversation. But we're social creatures. We pick up on all kinds of signals that are not verbal: nonverbal communication, facial expression, gestures, mm-hmm noises that are not classically described as a component of language, unless you're interested in back-channel communication or things like disfluencies. So yeah, there's lots going on. Grice is a cool dude, but he didn't have the whole picture. That's okay.
PJ Wehry:
One of the concepts you talked about earlier, when you were distinguishing the two: you mentioned something about bonobos doing the same thing, but that human language exceeds mere rewards and punishments. And one of the things, as you're talking about the Gricean maxims, is that they seem to be focused on those rewards and punishments. And as you're talking about, you know, the maxim of quantity, it's like: why do we keep talking when we're not accomplishing anything? There's no reward or punishment here. There's something else going on.
Matt:
I just so much want to get my point across, and
PJ Wehry:
Ha ha ha!
Matt:
I'm so insensitive to the fact that you're bored or you want to talk about something else. I just want to talk about what I want to talk about. Egocentrism is a factor that people study in language production, in how we talk to one another, and it also matters in terms of how we understand language, in detailed ways that are very interesting. If you like that stuff, you've got to read work by Mike Tanenhaus and his students and other folks. It's this beautiful and complex system. If you talk to computational people, if you talk to people who do the neurobiology, we look at the brain, we look at the computations that have to take place, and it's super, super complex. It's a massive, difficult information-processing problem, both in terms of producing speech and language and in terms of understanding speech and language. Nonetheless, subjectively, it is super, super easy. So if your audience is going to walk away with one idea, it might be this one: you are really, really skilled at language. And you are super, super skilled at language even though you didn't have to have formal instruction on how to do it. And that's a testament to the power and complexity and beauty of the cognitive systems that underlie language. So again, it's not like we have a real knock-down, drag-out, perfect working description of that system, but we absolutely do understand that it does this amazing, complex task for us, and it feels almost effortless most of the time. It's a pleasure, PJ. It's good to meet you, and thank you for inviting me on the podcast. This has been a lot of fun.