[Banner image: mother and sleeping child]

A hundred years ago, childbirth was risky and infant mortality rates were horrific.

How would you feel if 30% of infants died? Or if 900 expectant mothers out of every 100,000 died giving birth?

But thanks to technology, the reality today is far different.

Today’s Evidence of Abundance is perhaps THE most important topic I could offer.

***Evidence of Abundance: Maternal & Infant Mortality***

This week’s topic is very personal to all of us. It’s the life and health of your mother, your wife, your children.

Giving birth today is a happy occasion, but for centuries and millennia, it was a risky endeavor. Let’s look at the data.

[Chart: maternal deaths per 100,000 live births, 1900 to today]

If we look back over the last hundred years, a mother’s risk of dying in childbirth was as high as 900 deaths per 100,000 births.

As sanitation and modern medicine have improved, those mortality rates have plummeted. Today, a pregnant woman expects that she’ll give birth and live through it.

Let’s look at infant mortality next. I divide this into two parts: (i) infant mortality at birth, and (ii) a child’s survival past the age of 5.

This graph shows the death rate per 1,000 live births worldwide, from 1900 through today. In 1900, the average infant death rate was 18%… and in some parts of the world, it was as high as one-third.
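To keep the units straight, here is a quick back-of-the-envelope conversion of the figures above into plain percentages. The 180-per-1,000 and 333-per-1,000 inputs below are simply the 18% and one-in-three numbers quoted above restated; the little helper function is purely illustrative.

```python
# Convert the mortality figures quoted above into comparable percentages.

def rate_to_percent(deaths, per):
    """Convert 'deaths per N' into a percentage."""
    return 100.0 * deaths / per

# Maternal mortality a century ago: ~900 deaths per 100,000 births.
print(rate_to_percent(900, 100_000))   # 0.9  -> roughly 1 mother in 110

# Global infant mortality in 1900: ~180 deaths per 1,000 live births.
print(rate_to_percent(180, 1_000))     # 18.0 -> the 18% figure above

# Worst-hit regions: roughly one infant in three.
print(rate_to_percent(333, 1_000))     # 33.3 -> "as much as one-third"
```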

Imagine one out of every three infants dying.

[Chart: infant deaths per 1,000 live births, 1900 to today]

We see another precipitous drop when we look at children who die before the age of 5. Through improvements in technology and medicine, and even a reduction in local conflicts, we’re now able to keep children alive into adulthood — feed them, take care of them, protect them — and this is changing the world.

[Chart: deaths of children under age 5, 1900 to today]

As we create this world of abundance, we’re able to build tight family bonds as mothers and children both live longer, better lives.

Please send your friends and family to AbundanceHub.com to sign up for these blogs — this is all about surrounding yourself with abundance-minded thinkers. And if you want my personal coaching on these topics, consider joining my Abundance 360 membership program for entrepreneurs.

[Photo credit: mother and sleeping child courtesy of Shutterstock]

Howard the Duck Apparently Makes a Cameo in Guardians of the Galaxy

Remember Howard the Duck? Probably not in detail, but the unusual, somewhat existentialist Marvel character is about to make a comeback with an apparent cameo in the new Guardians of the Galaxy movie. And he actually looks pretty bad ass.

Read more…



888,246 Handmade Poppies Surround the Tower of London to Commemorate WWI

The moat that surrounds the Tower of London has long stood empty and dry. This summer, it’s getting filled with 888,246 red ceramic poppies, one for each British and Colonial soldier who perished during World War I.

Read more…



[Image: virtual reality body hack]

Virtual reality can put you in another world—but what about another body? Yifei Chai, a student at Imperial College London, is using the latest in virtual reality and 3D modeling hardware to virtually “possess” another person.

How does it work? One individual dons a head-mounted, twin-angle camera and attaches electrical stimulators to their body. Meanwhile, another person wears an Oculus Rift virtual reality headset streaming footage from their friend’s camera.

A Microsoft Kinect 3D sensor tracks the Rift wearer’s body. The system shocks the appropriate muscles to force the possessed person to lift or lower their arms. The result? The individual wearing the Rift looks down and sees another body, a body that moves when they move—giving the illusion of inhabiting another’s body.
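As a rough sketch of the control loop described above (this is not Chai’s actual code; the joint names, electrode channels, and the angle-to-intensity mapping are all illustrative assumptions), the logic runs roughly like this:

```python
# Illustrative sketch of the "possession" loop described above.
# Not Chai's implementation: joint names, channel IDs, and the
# angle-to-intensity mapping are assumptions made for the example.

import time

# Hypothetical mapping from tracked joints to electrode channels on the
# possessed person's arm and shoulder muscles.
JOINT_TO_CHANNELS = {
    "left_shoulder": [0, 1, 2],
    "left_elbow": [3, 4],
    "right_shoulder": [5, 6, 7],
    "right_elbow": [8, 9],
}

def angle_to_intensity(angle_deg, max_angle=150.0):
    """Map a joint angle to a stimulation level between 0 and 1."""
    return max(0.0, min(1.0, angle_deg / max_angle))

def possession_loop(kinect, stimulator, camera, headset, hz=30):
    """Stream the camera to the Rift and mirror the wearer's pose."""
    period = 1.0 / hz
    while True:
        # 1. The Rift wearer sees through the possessed person's eyes.
        headset.show(camera.latest_frame())
        # 2. The Kinect tracks the Rift wearer's skeleton.
        pose = kinect.read_joint_angles()
        # 3. Each tracked joint drives the matching muscle channels.
        for joint, channels in JOINT_TO_CHANNELS.items():
            level = angle_to_intensity(pose.get(joint, 0.0))
            for ch in channels:
                stimulator.set_level(ch, level)
        time.sleep(period)  # the loop period is one source of visible lag
```

Even in this toy form you can see where the delay comes from: every frame has to make the full camera-to-Kinect-to-stimulator round trip.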

The system is a rough prototype. There’s a noticeable delay between action and reaction, which lessens the illusion’s effectiveness (though it’s evidently still pretty spooky), and there’s a limit to how finely the possessor can control their friend.

Currently, Chai’s system stimulates 34 arm and shoulder muscles. He admits it’s gained a lot more attention than expected. Even so, he hopes to improve it with high-definition versions of the Oculus Rift and Kinect to detect subtler movements.

Beyond offering a fundamentally novel experience, Chai thinks virtual reality systems like his might be used to encourage empathy by literally putting us in someone else’s shoes. This is akin to donning an age simulation suit, which saddles youthful users with a range of age-related maladies from joint stiffness to impaired vision.

The idea is we’re more patient and understanding with people facing challenges we ourselves have experienced. A care worker, for example, might be less apt to become frustrated with a patient after experiencing their challenges firsthand.

Virtual reality might also prove a useful therapy—a way to safely experience uncomfortable situations to ease anxiety and build habits for real world interaction. Training away an extreme fear of public speaking, for example, might include a program of standing and addressing virtual audiences.

For all these applications, the more immersive and realistic, the better. However, not all of them necessarily require control of another person’s movements—and they might be just as effective (and simpler) using digital avatars instead of real people.

That said, I couldn’t watch the video without getting hypothetical.

Chai’s system only allows for the translation of coarse, delayed movement. But what if it could translate fine, detailed movement in real time? Such a futuristic system would be more than just a cool or therapeutic experience. It would be a way to transport skills anywhere, anytime at very nearly light speed.

Currently, a number of hospitals are using telepresence robots (basically a screen and camera on a robotic mount) to allow medical specialists to video chat live with patients, nurses, and doctors hundreds or thousands of miles away. This is a way to more efficiently spread expertise and talent through the system.

Now imagine having the ability to transport the hands of a top surgeon at the Mayo Clinic to a field hospital in Africa or a refugee camp in Lebanon. Geography would no longer limit those in need to the doctors nearby (often in short supply).

Virtual surgery could allow folks to volunteer their time without needing to travel to a war zone or move to a refugee camp full time.

But for such applications, it doesn’t make sense to use human surrogates. You’d need to embed stimulators body-wide to even approach decent control of a human. A robot, on the other hand, is designed from the ground up for external control.

And beyond medical applications, we could remotely control robotic surrogates in factories or on construction sites. Heck, in the event of alien invasion, maybe we’d even hook up to giant mechs to do battle on behalf of all humanity. But I digress.

Robots are still a long way from nimbly navigating the real world. And there are other difficult problems beyond mere movement and control. The da Vinci surgical robot, for example, allows surgeons to perform surgery from a console a short distance away, but it can’t yet translate fine touch sensations. Ideally, we’d translate movement, visuals, and sensation.

Will we control human or robot surrogates using virtual reality? Maybe not. The larger point, however, is the technology will likely find a broad range of applications beyond gaming and entertainment—many of which we’ve yet to fully imagine.

Image Credit: New Scientist/YouTube; BagoGames/Flickr

Surfing a never-ending wave is like being stuck in a blender of fun

The barrel never ends, or at least that’s what it feels like. When you think it’s finally going to come crashing down, surfer Benji Brand goes through another water tunnel and keeps going and going. It’s like a never-ending wave where you can see everything spin around you and you never have to stop.

Read more…



Comic-Con Wrap-Up: The Shiniest Things We Saw In San Diego!

Comic-Con 2014 was chock full of amazing costumes, cool art, mind-blowing TV pilots and fascinating encounters. Because even a weak Comic-Con is better than most other events. Here’s our complete roundup of all the most incredible things we saw last weekend.

Read more…

Why Songs Get Stuck in Your Head (and How to Stop It)

Whether yours is "Call Me Maybe," "Who Let the Dogs Out," "Mickey," or something equally infectious, at one time or another, you’ve probably had a fragment from a catchy (or obnoxious) tune stuck in your head.

Read more…



In high school, your physics teacher probably drummed it into you that mass and weight are completely different things—but actually, they were wrong all along.

Read more…



The “uncanny valley” is a term coined by Japanese roboticist Masahiro Mori in 1970 to describe the strange fact that, as robots become more human-like, we relate to them better—but only up to a point. The “uncanny valley” is what lies just past that point.

The issue is that, as robots start to approach true human mimicry, when they look and move almost, but not exactly, like a real human, real humans react with a deep and violent sense of revulsion.

This is evolution at work. Biologically, revulsion is a subset of disgust, one of our most fundamental emotions and the by-product of evolution’s early need to prevent an organism from eating foods that could harm that organism. Since survival is at stake, disgust functions less like a normal emotion and more like a phobia—a nearly unshakable hard-wired reaction.

Psychologist Paul Ekman discovered that disgust, alongside anger, surprise, fear, joy, and sadness, is one of the six universally recognized emotions. But the depth of this emotion (meaning its incredibly long and critically important evolutionary history) is why Ekman also discovered that in marriages, once one partner starts feeling disgust for the other, the result is almost always divorce.

Why? Because once disgust shows up, the brain of the disgust-feeler starts processing the other person (i.e., the disgust trigger) as a toxin. Not only does this bring on an unshakable sense of revulsion (a get-me-the-hell-away-from-this-toxic-thing response), it dehumanizes the other person, making it much harder for the disgust-feeler to feel empathy. Both spell doom for relationships.

Now, disgust comes in three flavors. Pathogenic disgust refers to what happens when we encounter infectious microorganisms; moral disgust pertains to social transgressions like lying, cheating, stealing, raping, and killing; and sexual disgust emerges from our desire to avoid procreating with “biologically costly mates.” And it is pathogenic and sexual disgust together that create the uncanny valley.

To protect us from biologically costly mates, the brain’s pattern-recognition system has a hair-trigger mechanism for recognizing signs of low fertility and ill health. Something that acts almost, but not quite, human reads—to that system—as illness.

And this is exactly what goes wrong with robots. When the brain detects human-like features—that is, when we recognize a member of our own species—we tend to pay more attention. But when those features don’t quite add up to human, we read the mismatch as a sign of disease—meaning the close-but-no-cigar robot registers as both a costly mate and a toxic substance, and our reaction is deep disgust.

[Image: the android Repliee Q2, photographed at Index Osaka. The model for Repliee Q2 is probably the same as for Repliee Q1expo: Ayako Fujii, an announcer for NHK.]

But the uncanny valley is only the first step in what will soon be a much more peculiar process, one that will fundamentally reshape our consciousness. To explore this process, I want to introduce a downstream extension of this principle—call it the uncanniest valley.

The idea here is complicated, but it starts with the very simple fact that every species knows (and I’m using this word to describe both cognitive awareness and genetic awareness) its own species the best. This knowledge base is what philosopher Thomas Nagel explored in his classic paper on consciousness, “What Is It Like to Be a Bat?” In this essay, Nagel argues that you can’t ever really understand the consciousness of another species (that is, what it’s like to be a bat) because each species’ perceptual systems are hyper-tuned and hyper-sensitive to its own sensory inputs and experiences. In other words, in the same way that “game recognizes game” (to borrow a phrase from LL Cool J), species recognize species.

And this brings us to Ellie, the world’s first robo-shrink. Funded by DARPA and developed by researchers at USC’s Institute for Creative Technologies, Ellie is an early-iteration, computer-simulated psychologist: a piece of complicated software designed to identify signals of depression and other mental health problems through an assortment of real-time sensors. (She was developed to help treat PTSD in soldiers and, hopefully, to decrease the incredibly high rate of military suicides.)

At a technological level, Ellie combines a video camera to track facial expressions, a Microsoft Kinect movement sensor to track gestures and jerks, and a microphone to capture inflection and tone. At a psychological level, Ellie evolved from the suspicion that our twitches and twerks and tones reveal much more about our inner state than our words (thus Ellie tracks 60 different “features”—everything from voice pitch to eye gaze to head tilt). As USC psychologist Albert Rizzo, one of the leads on the project, told NPR: “[P]eople are in a constant state of impression management. They’ve got their true self and the self that they want to project to the world. And we know that the body displays things that sometimes people try to keep contained.”
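For a sense of what fusing those three sensor streams might look like, here is a highly simplified sketch; the feature names, weights, and scoring rule are my own illustrative assumptions, not the actual Ellie software.

```python
# Illustrative multimodal "distress signal" sketch of the kind described
# above. Feature names, weights, and the scoring rule are assumptions for
# illustration; this is not USC's Ellie software.

from dataclasses import dataclass

@dataclass
class SessionFeatures:
    gaze_down_ratio: float   # fraction of time the eyes point downward (camera)
    head_tilt_deg: float     # average head tilt in degrees (camera)
    fidget_rate: float       # gesture jerks per minute (Kinect)
    pitch_variance: float    # vocal pitch variation; monotone voices score low

# Hypothetical weights: more downward gaze, more fidgeting, a drooping head,
# and a flatter voice all push the score up.
WEIGHTS = {
    "gaze_down_ratio": 0.4,
    "head_tilt_deg": 0.01,
    "fidget_rate": 0.02,
    "pitch_variance": -0.3,
}

def distress_score(f: SessionFeatures) -> float:
    """Combine the tracked features into a single rough indicator."""
    return (WEIGHTS["gaze_down_ratio"] * f.gaze_down_ratio
            + WEIGHTS["head_tilt_deg"] * f.head_tilt_deg
            + WEIGHTS["fidget_rate"] * f.fidget_rate
            + WEIGHTS["pitch_variance"] * f.pitch_variance)

# Example: lots of downward gaze, some fidgeting, and a flat voice.
print(distress_score(SessionFeatures(0.7, 15.0, 4.0, 0.1)))
```

The real system reportedly tracks 60 such features; the point of the toy version is only to show how separate video, depth, and audio signals can be reduced to one running indicator a virtual interviewer can react to.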

More recently, a new study found that patients are much more willing to open up to a robot shrink than a human shrink. Here’s how Neuroscience News explained it: “The mere belief that participants were interacting with only a computer made them more open and honest, researchers found, even when the virtual human asked personal questions such as, ‘What’s something you feel guilty about?’ or ‘Tell me about an event, or something that you wish you could erase from your memory.’ In addition, video analysis of the study subjects’ facial expressions showed that they were also more likely to show more intense signs of sadness — perhaps the most vulnerable of expressions — when they thought only pixels were present.”

The reason for this success is pretty straightforward. Robots don’t judge. Humans do.

But this development also tells us a few things about our near future. First, while most people are now aware that robots are going to steal a ton of jobs in the next 20 years, the jobs most people think are vulnerable are of the blue-collar variety. Ellie is one reason to disabuse yourself of this notion.

As a result of this coming replacement, two major issues are soon to arise. The first is economic. There are about 607,000 social workers in America, 93,000 practicing psychologists, and roughly 50,000 psychiatrists. But, well, with Ellie 2.0 in the pipeline, not for long. (It’s also worth noting that these professions generate about $3.5 billion in annual income, which—assuming robo-therapy is much, much cheaper than human therapy—will also vanish from the economy.)

But the second issue is philosophical, and this is where the uncanniest valley comes back into the picture. Now, for sure, this particular valley is still hypothetical, and thus based on a few assumptions. So let’s drill down a bit.

The first assumption is that social workers, psychologists, and psychiatrists are a deep knowledge base, arguably one of our greatest repositories of “about human” information.

Second, we can also assume that Ellie is going to get better and better and better over time—no great stretch since we know all the technologies that combine to make robo-psychologists possible are, as was well-documented in Abundance, accelerating on exponential growth curves. This means that sooner or later, in the psychological version of the Tricorder, we’re going to have an AI that knows us as well as we know ourselves.

Third—and also as a result of this technological acceleration—we can also assume there will soon come a time when an AI can train up a robo-therapist better than a human can—again, no great stretch because all we’re really talking about is access to a huge database of psychological data combined with ultra-accurate pattern recognition, two already possible developments.

But here’s the thing—when you add this up, what you start to realize is that sooner or later robots will know us better than we know ourselves. In Nagel’s terms, we will no longer be the species that understands our species the best. This is the Uncanniest Valley.

And just as the uncanny valley produces disgust, I’m betting that the uncanniest valley produces a nearly unstoppable fear reaction—a brand new kind of mortal terror, the downstream result of what happens when self loses its evolutionarily unparalleled understanding of self.

Perhaps this will be temporary. It’s not hard to imagine that our journey to this valley will be fortuitous. For certain, the better we know ourselves—and it doesn’t really matter where that knowledge comes from—the better we can care for and optimize ourselves.

Yet I think the fear-response produced by this uncanniest valley will have a similar effect to disgust in relationships—that is, this fear will be extremely hard to shake.

But even if I’m wrong, one thing is for certain: we’re heading toward an inflection point almost without equal—the point in time when we lose a lot more of ourselves, literally, to technology. It’s another reason that life in the 21st century is about to get a lot more Blade Runner.

More human than human? You betcha. Stay tuned.

[Photo credits: Robert Couse-Baker/Flickr, Wikipedia, Steve Jurvetson/Flickr]

All Blood Donation Centers Should Be This Shade of Red

Every once in a while form meets function in such a wonderful way that an architectural pun is born. It’s hard to find a better example than the Blood Center in Raciborz, Poland. Let’s just say they don’t have to worry about a spill staining the carpet.

Read more…


