Week07 Reactions

Reading:

Elaine's reactions!

Okay. At first I had a lot of problems with this reading, because I felt like Hebb was using words I didn't understand. However, as I read on, he answered most of my questions. I really do wish he hadn't discussed set and attention and cell-assembly without defining them first. I was rather dismayed to find that he doesn't actually tell us what a 'cell assembly' is until the third-to-last page of the second chapter. Anyway.

Not knowing much about the organization of the brain or the mechanisms of perception, I find it hard to agree or disagree with much in this article. I'm not entirely clear on how cell-assemblages get around the fact that moving something around in the visual field causes different receptors to be stimulated. I understand how recognition can still occur if an object is translated, as a result of the assemblages, but I don't know how the earlier part of the perception allows this.

As I was reading the section about how animals or people raised in an environment in which they are not exposed to any visual stimuli have a hard time perceiving objects once they can see them, I thought about the reverse of this. In our society, we rarely use our sense of touch as a means of recognizing objects. If a blind person can use the sense of touch to recognize people's faces, it is a skill that they have learned that a seeing person has not. If seeing people were removed from a visual environment, and asked to recognize familiar faces using their hands, I would think it would be difficult, perhaps only slightly easier because seeing people have been exposed to touch, even if not for the purposes of recognition. A seeing person would be able to feel, but would have a hard time perceiving objects and distinguishing. This parallels the fact that the blind people whose vision was restored had no perception of shape or pattern.

I'm also wondering if any further research was done about the rats whose movement was restricted but whose vision was not. The two experiments cited have conflicting results, one saying that performance in a maze was not harmed, and one saying that it was. Something sounds fudged somewhere.

I was highly amused by the author's attempts to convince the reader that all the experiments were completely ethical. The dogs whose movements were restricted were simultaneously "happy as larks" and "abnormal in personality and social behavior." I'd like to take this moment to tell you all that if you're reading this reaction, and you find this sentence, please tell me in class that you've found it, and I'll give you a present. I was also amused by the idea that the dogs weren't unhappy because they didn't know any better, having been raised in that environment since birth. It sort of takes that whole veal issue and throws it out the window. Anyway, this is all beside the point, I guess.

The point about a neuron's firing making it more likely for it to fire again, and the effect on short-term memory, was interesting. I'm interested to know what exactly is going on when we forget something, when we try to recall it but can't. Hebb gives the example of seeing a pencil on a table and associating the two. What is going on when we try to remember where the pencil is and can't? Have we lost the connection there? What about the times when we try to remember and rack our brains and finally remember? What was going on during the time in which we couldn't remember? At the time when we finally remember?
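The mechanism behind this, co-active units strengthening their connection, can be sketched as a simple outer-product weight update. A toy Python illustration, with made-up feature vectors standing in for 'pencil' and 'table':

```python
import numpy as np

def hebbian_update(W, pre, post, lr=0.1):
    """One Hebbian step: strengthen w_ij whenever pre_j and post_i are co-active."""
    return W + lr * np.outer(post, pre)

# hypothetical feature vectors for the two co-perceived objects
pencil = np.array([1.0, 0.0, 1.0])
table  = np.array([0.0, 1.0, 1.0])

W = np.zeros((3, 3))
for _ in range(5):                       # repeated co-activation builds the link
    W = hebbian_update(W, pencil, table)

# after learning, presenting 'pencil' partially recalls 'table'
recall = W @ pencil
```

On this picture, forgetting would correspond to the association weights decaying or being overwritten, though the reading leaves that part open.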

And, finally, when the subjects of the visual stability experiment had to wear those contact lenses with the stalks attached to them, how did they keep from blinking?

SCHYNS

Hmmm. I'm intrigued by the label-concept independence thing. It's interesting to think that we label things according to categories which we have learned, as opposed to categorizing according to labels. However, it also seems that a lot of real-life categories are somewhat arbitrary. I think supervised learning may have more to do with categorization than Schyns proposes. We may be able to categorize according to features, but I still think that there are some things that we simply learn as labels from having been taught. When he states that the model which separates naming from concept formation is "more powerful and psychologically realistic" (p 467), I think this is not quite accurate. Watching a parent with a young child, it seems that learning labels in a supervised fashion has a great effect on learning categories. It is not unusual for the child to point to a cat and say 'dog.' The child has learned the label, but learns the category through correction. Much of learning at a young age involves a parent saying to a child, "What's that called?" and the child being told.

With the experiments involving the patterns called cat, dog, wolf and bird, I'm not sure if Schyns was trying to model how these particular categories are learned. If he was, I don't entirely approve, because we can't tell if he is accurately representing similarities and differences between the animals. Although he has made the dog and wolf patterns more similar to each other than the dog and bird ones, his representations are still essentially arbitrary, and I don't think they can be depended on to model how we actually learn to distinguish a dog from a wolf. If he did not intend to show that, and he simply chose the names as symbols, I think that was rather misleading.


Tom Kornack
Reactions to Reading, Week 7

In concept, ART networks hold tremendous promise. The implementation of a resonance pattern seems to better model how our own brains work. It's too bad, though, that these networks are fickle and unstable. Indeed, this is one of the most complex designs that we have ever seen; it is no wonder that it is intolerant of slight changes in parameters. But we must be missing something. Biological brains are pretty tolerant, aren't they? What is it, then, that makes this system unstable? I imagine that this instability is the result of trying to model biological phenomena without sufficient numbers of neurons. Indeed, elements such as the reset branch and the 2/3 rule seem terribly patchy and are probably the source of many problems, including the instability. However, I cannot imagine a superior method in place of what has already been designed. It is also unclear to me how exactly this might be implemented in a connectionist scheme. Exactly how is each of the storage nodes "connected into a competitive internal architecture"? Furthermore, I am still confused about these toggles and how to understand Figure 7.2.
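The resonance/reset cycle is easier to see in the simpler binary ART1 than in the full architecture discussed here. A rough Python sketch of that cycle (my own simplification; the choice function and parameters are generic textbook choices, not the reading's):

```python
import numpy as np

def art1_train(patterns, vigilance=0.7):
    """Minimal ART1-style fast learning on binary inputs: pick the best-matching
    stored category, apply the vigilance test, reset and try the next on failure,
    and recruit a fresh category if every candidate is reset."""
    categories = []                       # each category is a binary template
    labels = []
    for I in patterns:
        I = np.asarray(I, dtype=bool)
        # rank stored categories by a choice function |I & w| / (0.5 + |w|)
        order = sorted(range(len(categories)),
                       key=lambda j: -(I & categories[j]).sum()
                                     / (0.5 + categories[j].sum()))
        for j in order:
            match = (I & categories[j]).sum() / I.sum()
            if match >= vigilance:        # resonance: intersect the template
                categories[j] = I & categories[j]
                labels.append(j)
                break
        else:                             # every candidate was reset: new category
            categories.append(I)
            labels.append(len(categories) - 1)
    return categories, labels

pats = [[1, 1, 0, 0], [1, 1, 1, 0], [0, 0, 1, 1]]
cats, labels = art1_train(pats, vigilance=0.7)
```

Even this stripped-down version shows the fickleness: nudge the vigilance parameter and the number of categories the network recruits changes.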

I found Hebb's retrospective article pretty amazing; a story of his haphazard investigation into the complete unknown of neurobiology with a theory that matched poor, sometimes incorrect and incomplete data. His series of bold conclusions from wimpy, qualitative and empirical data made me a little delirious at times. It amazes me how these cognitive scientists can make decisions based on so little evidence about perception. However Hebb thought it up, the idea of cell assemblies being the basic building blocks of neural activity is essential and fundamental and has shaped the thinking of investigators ever since. I should not be too shocked by such a leap in understanding, as it has been this way for most of the greatest ideas in science and philosophy: Bohr came up with a model of the hydrogen atom that explained some features of the hydrogen spectrum but was patently wrong. It nonetheless made way for the discovery of quantum mechanics. In the same way, Hebb is making this leap in understanding. Later the neurobiologists will find the truth and explain it to us all; it nonetheless took someone like Hebb to start the process.

Some things were a little more fishy, and he admits to those things freely. The whole idea that "emotion is the disruption of cell assemblies" is a bit far out, and I'm glad he takes this one down two paragraphs after he suggests it. The evidence that whole lines appear and disappear at once might have something to do with jiggling, and cell assembly stimulation may be a good explanation, but I find it hard to justify that it is the best explanation. The phenomenon might also have something to do with optics. If you do the work and figure out the resolution of the retina and the precision of the optics, you will find that it can't possibly be as good as we seem to be able to see. The increase in resolution and field of view comes directly from the fact that our eyes move a lot and fill in the gaps with what we know. Our retina is not a uniform detector at all. It is easy to make things fall into blind spots of the retina and disappear, even with all this jiggling. The disappearance of lines certainly occurs when the line is stationary with respect to the jiggles, but this may be because the perceptive structure requires things to move; if they don't, the line may register as a new retinal defect that needs to be masked out of our perception. This, at least, is the story I heard before reading this paper. This explanation neither supports nor rejects the idea of cell assemblies. It is more compatible with the other perception problems: the block disappearing and the triangle lines being completed (an opposite of defect compensation).


Jon Shlens

A MODULAR NEURAL NETWORK
MODEL OF CONCEPT ACQUISITION

- the network which they are using is a Kohonen attached to another network: is this like an "outstar" network (i.e. are we dealing with a bidirectional counter-propagation network?)
- this article seems to really promote the ability to self-organize. I think that this trait is indeed a very powerful trait in a network, but I am wondering if there is any other type of self-organizing network than a Kohonen - possibly one more powerful?
- they talk about a self-supervised back-propagation network (SSB) - what is this, and how is it implemented? I am curious whether it still retains its generalizing power and distributed representations even though it is self-supervising.
- as they point out, the network weight vectors are "locally organized" (476); because Kohonens are locally organized, they lose a lot of power because they do not have a distributed representation. It would be nice to see a network which has the ability to self-organize but also has distributed representations - possibly a SSB?
- the network is used as an associator; however, because it is dealing with hierarchical representations, the mapping is NOT one-to-one, so it is not possible to invert the function. In other words, a "german shepherd" maps to "dog," but "dog" maps back to both a "german shepherd" and a "poodle," etc. Therefore, how can the network act as a bidirectional auto-associator - which is what they seem to want their network to do?
- their network also implements a "top-down" ability - how can one implement this on a Kohonen network?
- "basic level categories had to be represented before low-level concepts could be acquired" (502) - their comment seems to agree with the findings of the Elman and Meeden papers! Indeed, the need to start small does seem to be universal in neural networks, whether back-propagation, recurrent or Kohonen.

ESSAY ON MIND
- I thought the study on sensory deprivation was quite fascinating as it seemed to imply intelligence is directly related to how much sensory input one is receiving. I wonder how or if one could test this on a neural network?
- "the newborn baby is selectively responsive to the rhythms of human speech, innately prepared for and impelled toward a form of learning that, we know, is inevitable in normal conditions, that is, learning to talk." - how can this be tested?
It was interesting how they tried to explain lots of mental phenomena with cell structure and physiology in this article. However, at the end of the article, I felt that they had only scratched the surface, and I was left looking for some better explanation of perception and other psychological traits. Furthermore, in the areas the author did try to explain, he seemed to suggest a lot of possibilities and some quasi-proofs, but I never felt he gave a true justification for his ideas or expanded on them to their fullest.


Alex Robinson

In both the readings I found it interesting how strongly the authors tried to dispel the idea of a cognitive conceptual structure heavily reliant on symbols such as words said, seen, or heard. Hebb seemed to have learned from Chomsky, and I enjoyed how he broke down all the basic developments that need to occur in a child before a word-object pairing becomes an issue. I still have not resolved the question of whether as adults we think in words. It certainly seems as if I do, but I know I can't always say what I'm thinking. Perhaps a greater question is whether we can think about complex abstract thoughts without the aid of some sort of language flowing in our mind. It may be that because of our culture we have learned to think in words and it is hard to think of doing it any other way.

The data-driven development that Hebb suggests makes words and language somewhat arbitrary; what really makes the difference is the sensory world a creature is reared in. Instead of a highly structured nervous system, he suggests a much vaguer layout that requires a data-intensive world to develop. Our brain is not as integrated as it may seem for our current environment, and is only as whole as its environment. The success and efficiency of our brain is primarily due to practice and inculcation, not so much talent. We have an innate talent for learning, but it takes us years of practice and listening before we can talk. Even so, I can imagine it would be hard for Hebb to believe that something as simple as recognizing an object in different fields of the eye required our brain to train itself. It seems as if such a structure would be built in, but given the choice, what would you make the eye automatically recognize in all fields of vision? By allowing the adaptation to occur outside the womb, the child can adapt to recognizing its own mother and dangers that might be specific to its environment. The theory of cell assemblies certainly makes the long period of human infancy seem much more necessary.

I have a question: what exactly are the differences between nativism and empiricism?

I found the Schyns article interesting, but the results seemed predictable. The data was significant for his theory, but it seems only natural that a Kohonen network would categorize the data as it did. I may be biased by my experience with Kohonen networks, but I can't imagine what other results he would have expected after training a network on two different objects. I also thought that labeling the categories cat, dog, and bird was misleading.


Carl Wellington - Reaction

Every article that I read that attempts to explain how the brain works leaves me even more in awe of its capabilities. Perception, being very important and relatively easy to study (as brain things go), is used very often to try to understand how our brain works in a very basic sense (How do we know that a square is square? instead of How can we solve abstract problems?).

Hebb's article raised some striking differences between the performance of the brain and the performance of the neural nets that we've been studying. First, the brain is so good at recognizing objects as what they truly are despite massive interference given by the environment and very different views of the object in question. Obviously our little backprop network isn't going to be able to recognize someone at 100 yards simply by the way they walk, but I've been very frustrated with the rigidity that the networks which we've studied have exhibited when presented with very similar inputs. I find it very interesting to speculate about the way the brain represents things. Hebb described evidence that our brain uses feature identifiers that look for lines, etc. and then build an image from those features. The experiment using the contact lens which jiggled with the eye and therefore made parts of patterns disappear in separate chunks was fascinating. I would love to try one of those contacts on and see for myself. Since sounds, tastes, and especially smells fade away so quickly when they're constant, it makes sense that our vision exhibits the same behavior. Still, humans can instantly recognize objects even when the object is in a completely unfamiliar orientation. Since we don't store an infinite number of pictures for every object, we must have some higher level understanding of what the crucial aspects of that object are. Second, our brains can reconstruct an amazing amount of information even when it wasn't trained extensively. We can see something in passing and not even notice it, but completely reconstruct it if it becomes important later. We didn't train at all with this event, and yet we successfully learned it. Similarly, even if we're not listening to a conversation at all, if we hear our name or something else important, we'll also know what words led up to our name even though we weren't listening yet.

I don't expect the nets that we use to be able to do even simple things that our brain can do with ease, but I find it frustrating that they have such trouble with things like translation and rotation or size change. It seems that we need some fundamental change in the way we construct our neural nets before anything really exciting will happen.

Aaron Hoffman

Schyns

The strength of the dense local excitatory connections is a Gaussian function of the distance from the input node. At first this confused me, because I didn't realize that he meant a half-Gaussian. This confusion was also good because it made me think about what would happen if the strength of the connections were not monotonic with Euclidean distance. It would generate a very interesting network topology which I lack the skills to analyze.

The learning constant is inversely correlated with activation. We learn quicker when we have more to learn, and make slower, finer adjustments as we reach our goal. This is just a continuous version of what we did discretely - lowering the neighborhood size as training progresses. Schyns argues that his way is better because the rate of learning doesn't decrease with time, but with success, so his network can increase learning in the presence of novel stimuli... OK.
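The idea that learning slows with success rather than with time can be sketched with a hypothetical inverse activation/learning-rate coupling (not Schyns's exact equation, just the shape of it):

```python
import numpy as np

def update(W, x, base_lr=0.5):
    """Hypothetical activation-coupled update: nodes already close to the input
    (highly activated) take small steps; poorly matched nodes take large ones."""
    act = 1.0 / (1.0 + np.linalg.norm(W - x, axis=1))   # activation in (0, 1]
    lr = base_lr * (1.0 - act)                          # learn less as we succeed
    return W + lr[:, None] * (x - W)

W = np.array([[0.0, 0.0],    # a badly matched node: learns fast
              [0.9, 0.9]])   # an almost-correct node: barely moves
x = np.array([1.0, 1.0])
W_new = update(W, x)
```

A novel stimulus activates no node strongly, so learning speeds back up on its own, which is the advantage over a schedule that only decays with time.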

I'm not sure that I see how learning is a rotation. The transformation on the weight space does not preserve distance, because a neighbor which is far away from the input vector will be rotated more than a winner which is close to the input vector. I guess that this is a continuous function of the weight space that assigns similar weight vectors to similar rotations, not unlike the way a derivative assigns nearby points to nearby linear transformations.

"The network preserves the topology of the input space" too bad we don't have biological cybernetics from 1982. His book is at Haverford; if anyone wants to help me tackle the math I'll ILL it.

Typo at the bottom of page 480; it should read: in an autoassociator, f and t are equivalent and the equations become g = Af, e = f - g, ΔA = ηef^T. I believe that I am correct; if the equations are correct as written, then I am very confused.
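Assuming the garbled symbol was meant to be ΔA = ηef^T (the standard error-correction outer-product form), the corrected equations iterate to a fixed point where Af reproduces f. A quick Python check with a made-up pattern and learning rate:

```python
import numpy as np

# Error-correcting autoassociator sketch: the input f is its own teacher,
# and A is nudged until the recall g = Af matches f.
f = np.array([0.5, 0.2, 0.8, 0.1])   # arbitrary stored pattern
A = np.zeros((4, 4))
eta = 0.1                            # hypothetical learning rate

for _ in range(200):
    g = A @ f                        # g = Af      (recall)
    e = f - g                        # e = f - g   (error)
    A += eta * np.outer(e, f)        # dA = eta * e f^T

g = A @ f                            # recall now reproduces f
```

The error shrinks by a constant factor each step (for small enough η), so the rule converges rather than blowing up, which is some evidence the reconstructed form is the intended one.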

I'm a little confused about energy. It seems that (potential) energy should correspond to error. Schyns writes energy as the quadratic form that assigns the scalar 0.5·x^T·A·x to each point x. If energy does correspond to error, then where is the teacher vector without which we cannot measure error? I understand that x is some state vector which changes dynamically such that x(0) = f, and x(t+1) = stepfn(ax(t) + bAx(t)). Since the rule for updating x does not explicitly include E, E must be the function which we are implicitly minimizing. This makes the updating rule the gradient of E, which seems plausible. However, I still don't see how these rules direct x to the proper location in phase space.
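One way to see the updating rule as a descent on E is to run it. With a symmetric A and the sign convention E(x) = -0.5·x^T A x (a common choice for this kind of network; an assumption here), the clipped update is roughly a step along -∇E, so E plays the role of error without any teacher vector. A sketch with made-up values:

```python
import numpy as np

# State dynamics x(t+1) = stepfn(a*x + b*A*x), with "stepfn" taken to be
# clipping to the box [-1, 1]^n. All parameter values are hypothetical.
A = np.array([[ 0.0,  0.5, -0.3],
              [ 0.5,  0.0,  0.2],
              [-0.3,  0.2,  0.0]])       # symmetric, so a gradient reading makes sense
a, b = 1.0, 0.2

def energy(x):
    return -0.5 * x @ A @ x              # assumed sign convention for E

x = np.array([0.1, -0.2, 0.05])          # x(0) = f
energies = [energy(x)]
for _ in range(50):
    x = np.clip(a * x + b * (A @ x), -1.0, 1.0)
    energies.append(energy(x))
# the state runs downhill in E until it sticks near a corner of the box
```

The "proper location in phase space" is then just a local minimum of E: the corners of the box that the stored patterns have carved out as attractors.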

In the first three experiments, all he has shown is that preprocessing helps and can be achieved by a self-organizing network.

The fourth and fifth experiments are very nice. I'd like to look more at exactly what happens when we allow top-down influences. Although the expert stuff was cool because it was grounded in psychology, I want clarification on one point: were the exemplars used as a category centroid, just for measurement and quantization purposes, or does Schyns subscribe to the notion of exemplars in category formation?


Marco


* I was looking at the figure on page 9, and the discussion of it, and the following occurred to me: although different perceptions result from the same sensory input, the key to that problem may lie in the emphasis on different elements of the figure. Negative emphasis on the top lines stresses the faces; negative emphasis on the "lips and chin" elements stresses the vase.
* P.83: "The fact that the other end of the room does not look different as your eyes move, or when you move from one point to another" FALSE.
* P.91: The discussion of how the two young chimpanzees and other experimental subjects faced the tests with defective vision equipment reminded me of a talk given recently at Swarthmore. The talk concerned the regeneration of cellular perceptive elements (rods and cones) by cells on the retinae, and how it helped organisms to perceive environments (that's what they think) with different amounts of light. The cells were mostly regenerated at night, for the morning. Sometimes it happens that I open my eyes to "red" surroundings or "blue" surroundings (inside the former, outside the latter).


Roger Bock

Essay on Mind
pg 79 - What does a lobotomy do? How can a person lose such a significant amount of his brain and show no loss of intelligence? Admittedly, knowledge is stored in a distributed fashion, but even so, losing that many connections should have an effect.
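The distributed-storage intuition can be made concrete: store a pattern in a Hebbian outer-product matrix, randomly delete half the connections, and see how much recall survives. A toy sketch (the sizes and the 50% lesion fraction are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

f = np.sign(rng.standard_normal(100))     # a random +/-1 pattern to store
A = np.outer(f, f) / 100                  # Hebbian outer-product storage

recall_full = np.sign(A @ f)              # intact recall

mask = rng.random(A.shape) > 0.5          # knock out half the connections
recall_lesioned = np.sign((A * mask) @ f) # recall through the damaged matrix

overlap = (recall_lesioned == f).mean()   # fraction of the pattern recovered
```

Because every connection carries only a tiny share of the pattern, each surviving unit still gets a correctly signed majority vote, so recall degrades gracefully instead of failing outright; this is at least one way a lobotomy could spare measured intelligence.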

pg 85 - How do you show that a connection doesn't exist between the motor area and the visual cortex? I thought our understanding of the brain wasn't that advanced.

pg 87 - The author says set and attention are a problem. However, it seems that the problem is much more complicated than that. His examples for set involved a person who is told what to do with two numbers. But how do cell assemblies account for memory which is stored for about a week? Every waking moment we are absorbing visual details around us without specifically focusing on them. I might be able to recall the clothing of someone I talked with a week ago, despite the fact that at the time I made no effort to memorize it. It seems that there would have to be a ridiculously large number of cell-assemblies to retain all these little pieces of information, most of which never get acted upon. Are there that many cell-assemblies in the brain?

pg 88 - How would brain circuits fatigue? Is this a physical fact, or supposition?

pg 96 - I find it very interesting that with a lobotomy someone can feel pain but not have it affect him. If most of pain is emotion, could one then learn to ignore it? Or are emotions physiological things we can't override?

pg 97 - What are somesthetic hallucinations?

pg 99 - What is Gestalt psychology?

pg 100 - I think the results of the experiment with the triangle and the square are amazing! This is the first real evidence I've seen that supports the brain having feature-detectors. Has this experiment been done with more complex shapes? We talked about having a general concept for grandmother; I wonder if a picture of a grandmother could be made to disappear as a unit, although I'm guessing that the feature-detectors never detect such high-level things. Does the eye work by examining what is basically a bit-map representation of the world? Are the things in the back of our eye (by my terminology you can guess I'm not a bio major) capturing a bitmap image? Or do they detect things in the visual range which appear within the radius of a certain point, as was proposed in one of our readings? I'm still not sure how features are detected independent of position, since at some level you are still trying to recognize a feature which has been translated some amount.
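On the last question, position-independent feature detection: the standard connectionist answer is weight sharing, one detector swept across every location rather than one assembly per position. A toy 1-D sketch (nothing from the reading):

```python
import numpy as np

def detect(signal, feature):
    """Slide one shared feature detector across every position (weight sharing)
    and report where it responds most strongly."""
    n = len(signal) - len(feature) + 1
    responses = np.array([signal[i:i + len(feature)] @ feature for i in range(n)])
    return int(np.argmax(responses))

edge = np.array([-1.0, 1.0])                 # a tiny "rising edge" detector
scene1 = np.array([0, 0, 0, 1, 1, 0, 0, 0], dtype=float)
scene2 = np.array([0, 1, 1, 0, 0, 0, 0, 0], dtype=float)

# the same two weights find the edge wherever it appears
pos1 = detect(scene1, edge)
pos2 = detect(scene2, edge)
```

So only one feature detector needs to be learned, not one per retinal position; what still has to be copied (or swept) is the detector itself, which is where the "how" question in these notes remains open.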

pg 102 - What are peristriate circuits?

pg 106 - Since our eyes can sweep in any direction, it seems that not only horizontal and vertical lines would be generated. Wouldn't our brain be equipped to deal with any line? Or do we have a tendency to sweep our eyes vertically and horizontally only?

pg 107 - If there is an assembly for every different orientation, and there aren't a finite number of orientations/distances to see an object from, wouldn't we need nearly infinite numbers of assemblies to be able to recognize different objects from different angles and different distances? Even given one billion cell-assemblies, that doesn't seem to be enough.

pg 108 - What does it mean when they say "a complex cell is fired by any number of simple cells that vary somewhat in locus?" How can we separate the slope of a line from its location? Obviously we do, but they gloss over exactly how this would be done. Is there a way to do it without having one cell assembly for every possible line?

A Modular Neural Network Model of Concept Acquisition
pg 481 - I definitely do not understand the equation for the BSB on this page. Maybe we could examine this in class.

pg 483 - What does the asymptote energy value represent in this table?

pg 484 - What is ostensive learning?

pg 490 - What are the differences between bottom-up and top-down constraints? How are they implemented in this network?

pg 496 - Couldn't it affect the results to have the Pinscher and Pigeon similar to each other (ANDOPI and ANBIPI)? It seems this would teach the network that there is more of a similarity between these two labels than there actually is.