Naturalist Intelligence in the Age of the iPhone

In January 2018, Laura Jeliazkov published an article titled “An iPhone in Hand…Worth Two in the Bush?” in The Dartmouth newspaper. In the article, Jeliazkov considers our modern attachment, or perhaps addiction, to our mobile devices and the seemingly endless digital media they offer. As a way of understanding our cognitive relationship to our devices, Jeliazkov cites naturalist intelligence. Howard Gardner comments below:

—————————————————————————————————————————————————————————————–

Since the original publication of Frames of Mind, I have added only one intelligence to the original seven—that of the naturalist. (Though I have speculated about other possible intelligences.) As described in recent publications, the naturalist intelligence initially evolved so that we humans could make consequential distinctions in our natural environment—what to eat, what to spurn, what to hunt, what to avoid, how the weather might change in the next day or the next month.

In the modern era, however, most of us do not have to use the naturalist intelligence to survive in our built-up urban environments. But I’ve argued that the mind and brain capacities that evolved initially for life in the tundra are now readily called upon as we decide what to buy at the supermarket, what clothing to wear, how to decorate our homes, etc.

And now, there is a new, powerful force in the environment: our smart devices, with their numerous (soon, innumerable) apps. These devices not only make available virtually every item for which one could conceivably shop; they also, as Laura Jeliazkov points out in the accompanying article, give us the opportunity to present and re-present ourselves as often as we want, as well as the chance to get the reactions of others, in words, in still pictures, or in live video. That’s a lot for the naturalist intelligence to do; as Jeliazkov suggests, perhaps too much! While I don’t remember saying it, she quotes me as follows: “If we went back into the woods to be with our naturalist intelligences, people would have time to think again. But today this may not be possible.”

—————————————————————————————————————————————————————————————–

To read the full article, click here: http://www.thedartmouth.com/article/2018/01/an-iphone-in-hand-worth-two-in-the-bush

A New Possibility for Musical Intelligence

On January 16, 2018, the New York Times published an article by Maureen Towey titled “The Force of Sound, Captured in VR.” Dr. Howard Gardner comments below on this article and its implications for MI.

—————————————————————————————————————————————————————————————–

According to the criteria that I devised almost 40 years ago, an intelligence is not equivalent to a sensory system (there are no visual or gustatory or tactile intelligences).  Rather, intelligences are best construed as mental computers that operate on information, irrespective of the sensory system (or, technically, transducer), through which the information initially passes. Thus, for example, linguistic intelligence operates on language messages, whether they are heard, read, or (in the case of the blind) perceived through touch.

This criterion has always posed a difficulty for musical intelligence. While there are certainly aspects of music (rhythm, meter, perhaps timbre) that can be accessed through various sensory systems, the key elements (pitch, harmony) are best accessed through the ear. We can create visual or tactile versions of pitch or harmony, but these are, at best, metaphoric.

With respect to music, the situation is changing. Visual patterns that change over time (as in the screen patterns on your computer) capture some of the musical experience (the composer Alexander Scriabin believed that music was by its nature synesthetic, with the various sensory systems being linked to tones, chords, and harmonies). Cochlear implants improve hearing to the extent that musical signals become accessible to many who could not previously listen to or enjoy music. And now, as indicated in this essay from the New York Times, the technology of virtual reality makes yet more aspects of music available to someone who is deaf or hard of hearing. As the caption indicates, it’s now possible to explore “what music feels like to a deaf person.” I believe that in the coming years the dependence of music on the ability to hear will continue to diminish, even if it does not totally disappear, thereby helping many who were once considered disabled and, as a dividend, bolstering one of the tenets of “MI theory.”

—————————————————————————————————————————————————————————————–

Read the full article, originally published online on November 5, 2017, here: https://www.nytimes.com/2017/11/05/insider/how-we-used-vr-to-explore-what-music-feels-like-to-a-deaf-person.html