It was weird to see Pinker agreeing with something else we read ;-)
The Nowak article was dense and hard to decipher at times. I got the basic gist, but I was lost on the technical details. I guess it was game theory stuff, mostly. I'm not sure just how important the mathematical finer points are. I confess I'm not sure I entirely see the significance of the results Nowak et al. got. Perhaps I was giving evolution and/or syntax too much credit to begin with, but is it really such a breakthrough to say "as the world got more complicated, we needed a combinatorial method of communication"? Or is the game theory stuff a rigorous demonstration of common sense which had previously been called into question?
I also questioned, initially, some of the assumptions Nowak et al. made, but on reflection I found them to be a reasonable simplification of the problem. That said, it would be interesting to see what happens to this game theory model when its simplifications are relaxed to be more true to real life -- though I would believe the first person who told me that was impossibly complex from a mathematical perspective.
What I found most interesting in this week's reading was not so much how Nowak, Plotkin, Jansen, and Pinker talked about how language evolved, but the premise behind it all -- that initially, we had (as animals still do) a non-syntactic means of communication, which later evolved into the combinatorial, syntactic system which is the elephant's trunk among communicative systems. What fascinates me about this is the difference Nowak et al. drew between human language and music, two symbolic systems which have been equated in many ways, attempts even having been made to fuse the two:
Animal communication is based on three basic designs: a finite repertoire of calls (territorial calls or warning of predators); a continuous analogue signal (for example, the dance of bees); and a series of random variations on a theme (such as the song of birds).
- Nowak et al., p. 497
(While this is entirely irrelevant to the class, please indulge my little revelation:) a "series of random variations on a theme" is exactly what a lot of music is. Certainly, while it has structure and rules, it is non-combinatorial -- there is no semantic content in chords, so how can a chord progression mean any more than a single chord (for example)?
I just think this is cool. Pardon the complete digression.
Other than this, the articles were interesting, but I don't have much to say about them. Nowak et al.'s description makes sense, especially the explanation that we had to switch from a non-syntactic to a syntactic (combinatorial) system in order to encompass and describe "the increase in the number of relevant events that could be referred to" (Nowak et al., p. 497). This distinction (and Pinker's subsequent illustration using vowel gradations) made a lot of sense to me, and spawned that whole mini-revelation above.
Math! Ordinarily, this would be much fun to play with, but at five in the morning, I'm just assuming that Nowak knows what he's talking about with regard to the numbers.... In the end, he seems to suggest that, if we can give, e.g., dolphins enough new things to talk about, they will eventually evolve syntactic language. I wonder if they now have commonly-known sounds or words for "fishgiver", "hoop", "glass", and "tourist" that they didn't have a century ago, now that we've put such concepts in the common scope of reference for many dolphins. And while other concepts -- jumping, flipping, tail walking -- have certainly been known to dolphins long before SeaWorld, did they have words for them before? (Do they now?) Or did they only have words for more necessary things relating to food, danger, direction, and so on? Are we already unwittingly leading them on the path to Uplift, a la David Brin? (Brin writes far-future novels in which humans have genetically improved, or super-evolved, dolphins such that they are sentient and intelligent, Startide Rising being the best of them.)
Pinker, sadly, seemed to say very little in his pages; the summary seems to be "this is neat stuff, and you should read the Nowak article fifty pages later". At least he included a Far Side cartoon and helped make the Nowak article a bit easier to take in without spending time on the math.
I think I will always remember the day when I learned about the birds and the bees. Birds, bees, and predators. It made me realize how different we are from animals. Of course, the question arises as to how we became so much more advanced than animals. How did we develop the complex syntactic communication so different from the three basic designs of non-syntactic animal communication exemplified by bird songs, bee dances, and predator warnings (Nowak 496)? Wait, what did you think I meant?
I'm confused about a lot of the math and assumptions in the Nowak et al. article. Let's start with equation (1). They assume a rate constant ... but how can it be constant? Parameter q assumes that all words are equally difficult to learn, but aren't some words harder? Also, the assumption that "words are memorized independently of each other" (496) confuses me. What about words that are variations on other words? Often, recognition of a word's root or certain morphemes might help one remember its meaning. I'm confused as to how they can say that context doesn't factor into memorization. These are confusing issues, but not really that important, I guess. Even if q is word-specific or related to the size of the lexicon, the worst that could happen is that the size of the lexicon of distinct words is exaggerated.
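To convince myself that "exaggerated" really is the worst case, here's a toy calculation of my own (not an equation from the article): compare a constant memorization rate q against word-specific rates with the same average. Since 1 - (1 - q)^k is concave in q, the constant-rate assumption always overstates the expected number of distinct words learned. The numbers (100 words, 10 exposures each) are made up for illustration.

```python
def expected_lexicon(qs, exposures):
    """Expected number of distinct words memorized when word i sticks
    with per-exposure probability qs[i], and words are memorized
    independently of each other (the article's assumption)."""
    return sum(1 - (1 - q) ** exposures for q in qs)

# Constant rate vs. word-specific rates with the same mean (0.1):
uniform = expected_lexicon([0.1] * 100, exposures=10)
varied = expected_lexicon([0.02] * 50 + [0.18] * 50, exposures=10)
print(round(uniform, 1), round(varied, 1))  # → 65.1 52.3
```

So making q word-specific shrinks, rather than grows, the predicted lexicon -- which is consistent with the worry above that the uniform-q model merely overstates lexicon size.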
My lack of understanding of the fitness measurement is much more detrimental to my understanding of the results. After a little algebra, the fitness of non-syntactic communication is pretty straightforward, but I get confused when it comes to the fitness of syntactic communication. The syntactic fitness is based on the equilibrium frequency of individuals who know a particular N and V. The problem is that equation (6) follows from equation (2) if and only if we assume equation (1) to be true for any given single word. I'm confused as to how they could assert a word categorization (N and V) and a relationship between the two categories (exactly one N and one V per event), yet still assume that (1) holds when one of (1)'s primary assumptions is that words are memorized independently of each other. I'm kind of lost.
Also, their threshold confuses me. They "assume that the combinations of objects and actions are arranged in a way that all nouns and all verbs, respectively, have about the same frequency" (498). Similarly, they "[s]uppose a fraction p of these mn events occur (all at the same frequency)" (497) and that there are "pnm meaningful events that all occur at the same frequency" (498). Are they saying that every action for which we have a word happens the same amount? Every object shows up the same amount in our conversations? Do I talk and listen about egrets as much as I talk about connectionist models? Maybe I'm just misunderstanding the article. Without these assumptions, they aren't able to describe their condition in such a simple form, and I don't understand how they can justify them.
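Setting the equal-frequency question aside, the counting behind the threshold can at least be sketched. This is my own back-of-the-envelope version, not the paper's fitness condition: with n objects, m actions, and a fraction p of the n*m combinations being meaningful events, a holistic system needs one word per event while a syntactic one needs only n + m words. The specific numbers below are arbitrary.

```python
def lexicon_sizes(n, m, p):
    """n objects, m actions; a fraction p of the n*m combinations are
    meaningful events, all equally frequent (per the article)."""
    events = int(p * n * m)
    non_syntactic = events   # one holistic word per event
    syntactic = n + m        # one noun per object, one verb per action
    return events, non_syntactic, syntactic

print(lexicon_sizes(n=50, m=50, p=0.4))  # → (1000, 1000, 100)
```

The crossover where n + m equals p*n*m is the kind of threshold the equations formalize: below it, memorizing holistic signals is cheaper, and above it the combinatorial system wins.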
In general, the argument is pretty convincing for hard-wired syntax being of some evolutionary advantage. I'm curious as to how one would go about using this premise to model language acquisition in humans. Would one use evolutionary algorithms on a connectionist framework, allowing the network to clamp an arbitrary number of nodes and connections? Would you feed the network instances of sentences in various languages, or try to have two networks talk to each other to achieve complex tasks involving many different actions and objects? I'm kind of curious what kind of approach could take advantage of this idea ... It seems like connectionists would be inherently opposed to any idea of hard-wired processing structure, but I can't think of any other paradigms flexible enough to be manipulated easily by evolutionary computation.
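For what it's worth, the outer loop of such an experiment might look like the sketch below. This is a deliberately naive guess at a setup -- nothing here comes from either article, and every number is invented: a single gene toggles syntactic learning, fitness counts expressible events under a fixed memorization capacity, and fitness-proportional selection plus mutation does the rest.

```python
import random

random.seed(0)

N_EVENTS = 1000   # hypothetical number of relevant events
CAPACITY = 120    # hypothetical number of signals an agent can memorize

def fitness(syntactic):
    """Events an agent can express given its memorization capacity."""
    if syntactic:
        # splitting capacity into nouns and verbs covers ~(k/2)^2 events
        return min(N_EVENTS, (CAPACITY // 2) ** 2)
    return min(N_EVENTS, CAPACITY)  # one holistic signal per event

def evolve(generations=50, pop_size=100, mu=0.01):
    pop = [False] * pop_size  # start fully non-syntactic
    for _ in range(generations):
        weights = [fitness(g) for g in pop]
        children = random.choices(pop, weights=weights, k=pop_size)
        pop = [(not g) if random.random() < mu else g for g in children]
    return sum(pop) / pop_size  # fraction of syntactic agents

final = evolve()
print(final)  # syntax should sweep to near-fixation
```

Swapping the boolean gene for connectionist network weights (and the toy fitness for performance on a communication task) would give something closer to the experiment imagined above, though obviously far more expensive to run.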
The Nowak reading for this week, though short, was dense as hell. I was lost from the very first equation presented. I'll just say I believe him. Pinker on the other hand, was witty as ever, and always a pleasure to read. I have nothing more to say on the subject, except that I find evolutionary game theory very interesting and mystifying to say the least.
The other thing is that I'm not completely comfortable with the assumption of separate word learning in a non-syntactic language. Do non-syntactic systems -- I'm thinking that means animal languages -- really have single concrete units to describe an event? They could still have repetitious sequences that are not necessarily syntactic but also not necessarily a word unit, right? I like the concept of learning to express the event before the object and action, and I think there could be support for that from early child language, but it seems like the event is communicated with more ado (sound, gesture, whatever) than a single concrete utterance.
I love that Nature published Nowak et al.'s near-inscrutable analysis, but then apparently solicited Pinker to elucidate their work and put it into context for the non-mathematician. What all the equations seem to boil down to, however, is simply that there is a threshold in the number of events worth communicating that must be crossed in order for syntactic language to be more useful than non-syntactic language. Presumably, then, no other species has much worth talking about; likewise, our ancient ancestors got along just fine grunting and grooming until their environment became too complex. Nowak et al. leave the nature of this environmental complexification to speculation.
So, we have that human language is a useful tool. But we already knew that. The central question of evolutionary psycholinguistics remains: Did syntactic communication confer a selective advantage on those who possessed it, such that they were able to produce more offspring than those who did not? This question makes the additional assumption that there was something genetically different in the brains of those possessing syntactic communication that allowed them to communicate in such a way. Perhaps the central question is unanswerable, but the assumption could be supported in part by finding genetically transmitted deficits in syntax, with other linguistic faculties intact.
Nowak et al.: while all this equationizing may be valid, the number of relevant communication topics actually discussed in this article might not reach the strict mathematical threshold required to make syntactic communication about it advantageous. ... So we only need human-type language when we conceptually distinguish a certain number of different kinds of events: BUT nobody knows anything about the evolution of the ability or necessity of distinguishing different events, nor (probably) are any of the other parameters they use measurable or stable. I doubt any of these equations will ever be useful.
On the other hand, there's an unstated, underlying assumption that is pretty interesting: that human language as we know it, with all its intricate 'universals', must have replaced another system, also extremely complex, but fundamentally different in its operation. This is similar to Hurford's model mentioned by Pinker, which works on the assumption that the arbitrary sign would have competed evolutionarily with a system of not-so-arbitrary signs. All these guys, then, are operating on the assumption that, insofar as we have innate linguistic abilities, our ancestors had all kinds of variants of innate linguistic abilities and limitations. (I don't know what I'm saying about this; maybe it's not that new or interesting, but it kind of is to me.)
Seems like there should be more to say, but I'm regressing to the "bow-wow" stage.