Creuset of Ideas
A collection of various ideas



Archives of 2007-03

Exodus

2007-03-29 @ 9:15

Via indexed

Interview meme

2007-03-19 @ 11:55

So, I’m getting interviewed by Thinking Girl. Here’s how it works: anyone wanting to be interviewed leaves me a comment saying, “Interview me.” I will respond by asking you five personalized questions, to be answered on your own blog. You have to include this explanation and an offer to interview other people. And so on. Depending on how much (or how little) I know you, the questions can be more or less intimate.

Continue reading »

Seeing words

2007-03-07 @ 14:42

Our culture has a special relationship to the written word; we tend to think of written language as pretty natural. But of the thousands of languages spoken around the world, many, if not most, are not written.

When we stop to think about it, the written word is not that natural. Consider what happens, in our brain, when we read:

  • Our eyes scan the lines, pausing for about 250 milliseconds on each group of letters before moving on to the next. During that time, we pick up the first and last letters and, in a jumble, the ones in between. This is fed as nerve impulses to our occipital lobe (at the back of the head).
  • The information gathered is transmitted from the occipital lobe to the left temporal lobe for processing: transforming the little images into words.

Consider then what happens when we listen: the sound strikes either ear and is translated into nerve impulses that stay in the same region for processing (or simply cross over to the other side of the brain, which is a natural path).

This may explain why silent reading is a fairly recent development (last millennium, if I’m not mistaken) and why children learn to read aloud. The lines are transformed into real sound, for easier processing. Seen from outside, this may seem more complicated, but for our brain, it’s easier.

The minimum

2007-03-01 @ 16:22

I was discussing machine translation (MT) with a friend of mine, a fellow linguist, the other day, trying to see how a computer could acquire enough information to be able to do a fairly accurate job. The big thing, of course, is meaning. But, from an MT point of view, this pretty much amounts to mapping one language onto another (I’m simplifying, of course). This brought about the big question: what is needed to learn a language?

What does a child need, a priori, to be able to start learning, to pick up its first language? In other words, what innate knowledge or ability is required, as a bare minimum (besides, of course, the ability to process inputs from our senses)?

First of all, I’d say we need the notion of communication, that the sounds mean something. Or does observation tell us that? Do we only need to see (hear) that specific sound patterns are related to interactions?

Pattern recognition is a pretty obvious choice for a bare requirement: the ability to extract patterns from the flow of sounds. Even before birth, the foetus can distinguish language from noise, and even recognize the sounds of its mother tongue. Studies have shown that the foetus is equipped to recognize patterns, be they (for hearing) in music or language. By three and a half months, babies can separate the words in a sentence; the phonological word comes before meaning. The brain sees the physical patterns (sound waves) before tackling conceptual ones (meanings).
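To give a feel for how far raw pattern extraction can go, here is a toy sketch (my own illustration, not something from the studies mentioned above): one classic idea is that listeners can track how often one syllable follows another, and posit word boundaries where that transitional probability dips. The syllables and the threshold below are made up for the demo.

```python
# Toy word segmentation by transitional probabilities between syllables:
# within a word, syllable B reliably follows syllable A (high probability);
# across word boundaries, what comes next varies (lower probability).
from collections import Counter

def transitional_probs(syllables):
    """P(b | a) for each adjacent syllable pair in the stream."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {(a, b): c / first_counts[a] for (a, b), c in pair_counts.items()}

def segment(syllables, threshold=0.9):
    """Insert a word boundary wherever the transitional probability dips."""
    probs = transitional_probs(syllables)
    words, current = [], [syllables[0]]
    for a, b in zip(syllables, syllables[1:]):
        if probs[(a, b)] < threshold:       # dip => likely word boundary
            words.append("".join(current))
            current = []
        current.append(b)
    words.append("".join(current))
    return words

# A stream built from two invented "words", ba-bu and go-la-tu:
stream = "ba bu go la tu ba bu ba bu go la tu go la tu ba bu".split()
print(segment(stream))
# → ['babu', 'golatu', 'babu', 'babu', 'golatu', 'golatu', 'babu']
```

The two invented words fall out of the statistics alone; no meaning is involved, which fits the observation that the phonological word comes before meaning.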

Generalization and particularization: the ability to arrive at a general concept from the observation of particular instances and, conversely, to extract an instance from a general idea. This is closely related to pattern recognition.

Am I missing something? Do we need anything else?