You’ve already forgotten almost everything that’s ever happened to you. Even immediately after you finish this, you won’t remember everything you read. A sentence might pop into your mind in a few days, remixed with something else as a new experience enters your stream of consciousness and disappears just as fast. That’s the nature of memory, which is far less reliable than you might think.
Experience is a blur punctuated by a few bright moments that stand out. But even those so-called “flashbulb memories,” salient as they seem at the time, are no more reliable than their more banal counterparts. Few people realize how wrong their recollections can be, even of pivotal or traumatic events like 9/11.
As humans become more entwined with technology, however, that’s starting to change. What does that mean for the future of memory?
If it’s true that memory is an artificial construct, what does that mean about our relationship with our own experiences? When I first met my colleague James Jorasch, inventor and founder of Science House, he taught me a few of the tricks, including the construction of a memory palace, that he uses to compete in the US Memory Championship.
The book Moonwalking with Einstein details the process memory competitors use. They don’t memorize an entire deck of cards by recalling the cards as they appear. Instead, the King of Clubs might become Tiger Woods, who then appears as a character in a larger story that unfolds in your memory palace, which could be your childhood home or any other familiar environment.
Memory is complicated, but to simplify: the small seahorse-shaped hippocampus in your brain acts like a bouncer at a club, letting in the bits that help the brain survive. The amygdala flags emotionally charged moments, and information deemed worthy of recall is gradually consolidated in the cortex, but nothing is stored as a perfect copy. The true point of memory, ultimately, is survival. The brain remembers how to find food or avoid the familiar face of an enemy; it doesn’t need the names, numbers or irrelevant details associated with an event.
“Machines and humans have two different languages of memory,” James says. “Machine memory is binary, while brain memory employs images. In between, we speak a third language: words. Right now, we speak to people as if they are machines and will remember perfectly. In the future, however, we will optimize a shared language for recall.”
This new language will be a synthesis of machine and human memory. It is nascent today in the art and science of visualization. Math and imagination form a collective super-memory that we can tap to make better shared decisions about survival.
As a child, Dr. David Eagleman fell off a roof. Though the drop took less than a second, it felt much longer, sparking his interest in memory and time. Now the director of the Laboratory for Perception and Action and the Initiative on Neuroscience and Law at Baylor College of Medicine, Eagleman has been known to drop people from a 150-foot tower into a safety net so he can examine their memory of the event.
Eagleman has interviewed me a couple of times about my own memory. I began keeping detailed diaries decades before either of my grandmothers showed signs of dementia, and I became obsessed in childhood with the pace of passing time. I wondered why adults rushed around as if they didn’t realize how fast life comes and goes. My desire to slow time down led me to observe and record as much as possible.
“Memory is a reconstruction,” Eagleman said, “colored by our own expectations about how the world works. Memory only exists for one reason from the brain’s perspective: prediction, to improve future behavior.”
People believe that memory is a video recording of reality, Eagleman said, but nothing could be further from the truth, as the notorious unreliability of eyewitness testimony and of our own recollections attests. When Facebook announced its purchase of Instagram and introduced video to the popular gallery of still images, the move was billed as the future of memory. But this is only true in a superficial way.
What we do is different from why we do it, a distinction that should not be overlooked. Social media shows a visible sliver of the identity you curate. The portrait of you sketched by surveillance, coupled with analytics, on the other hand, nails you to a particular place and time with little room for misremembering. The social self has always been a fragmented avatar. The invisible data each of us generates is a truer picture of reality, though with one fatal flaw: a nearly complete lack of context and reflection.
Wouldn’t it be fantastic if, instead of losing the contents of great minds, we could keep some kind of copy, for continued access or even solace after death? Eagleman misses his mentor, the Nobel Prize-winning Francis Crick, famous for co-discovering the structure of DNA with James Watson. He wishes that some kind of brain scan had been made, with Crick’s permission, before his death so the knowledge accumulated over a lifetime wouldn’t have been lost. Asking questions of an android Crick wouldn’t be the same as having the real Crick, but it would be better than nothing.
Now imagine mashing those contents up with the knowledge base of, say, IBM Watson. While our human brains learn from memory, for a machine like IBM Watson, “tracking feedback” is the mechanism by which it learns from past successes and failures.
IBM Fellow Grady Booch is a software architect who works on a number of cognitive computing projects at IBM. Booch and his wife, Jan, are co-developing a multimedia project called Computing: The Human Experience. Our minds, Booch said, are pattern makers, building a web of memories and context, including the time, place and environment in which our experiences occur.
“We often remember what we want to remember or what we remember from the stories told about us,” Booch said. “Furthermore, we are often poor judges of reality. We often get not just details wrong. We sometimes misplace entire memories.”
In the future, this may not be an option. We will be locked into the reality of our geolocated whereabouts. Our searches will be logged like a captain’s log each night at sea, for some stranger to read some other day. So much will remain invisible: the emotions conjured by the sound of a hungry gull’s cry, the pain of separation from a lover on a distant shore, the sight of pink twilight shimmering on the surface of the ocean. The last remaining mystery, as usual, will be our motives.