Mar 15, 2016
The 8th Annual Global 'Zeitgeist Day' Symposium Promotes Sustainability, Global Unity, and a Post-Scarcity Society
Jan 31, 2015
Promotes Global Unity, Social Betterment and a More Humane Society
Sep 12, 2014
Features Live Music, Short Films, Comedy and Art, Promotes Social Consciousness Through the Power of Art
Mar 01, 2014
Toronto Main Event and Beyond
Feb 03, 2014
A New Book by The Zeitgeist Movement
Apr 01, 2016 Host: Casey Davidson
In this episode, Casey Davidson (Australian national coordinator for TZM) discusses whether the Zeitgeist Movement should interact with political parties and how to balance making ethical choices with reaching larger audiences, and introduces the Brisbane chapter's amusing 'Tinfoil hat scale'.
Mar 20, 2016 Host: Jasiek Luszczki
This episode of TZM Global is hosted by Jasiek Luszczki from the Polish chapter of TZM. Today's show features an interview with two activists from the Rotterdam TZM chapter (Holland): Anthony Jacobi and Robert Schram.
They talk about their way of applying an NLRBE-like philosophy and code of conduct within the confines of today's monetary system, and present some ideas on how to move from a "business as usual" (working for profit) mindset to an "awareness as usual" (generating social capital) mindset.
Feb 10, 2016 Host: James Phillips
This episode of TZM Global is hosted by UK chapter member and TZM Education coordinator James Phillips. It features an interview with fellow TZM members Jasiek Thejester and Stefan Kengen, from the Polish and Danish chapters respectively, about the recent European meeting held in Rotterdam.
Dec 10, 2015 Host: James Phillips
This episode of TZM Global is hosted by UK chapter member James Phillips, co-coordinator of the movement's global educational activism project, TZM Education.
Along with other movement-related news, this episode includes a conversation with fellow TZM Education member and Hungarian chapter coordinator Sztella Kantor about her experience of taking TZM Education materials into schools in Hungary.
If you are interested in taking part in this global initiative then please visit: www.tzmeducation.org
*At the time of publication there was an issue with our podcast provider, BlogTalkRadio, so the show could only be uploaded to YouTube in its edited form. The full version will be released as soon as the issue is resolved.
Nov 25, 2015 Host: James Phillips
Ep 178 European TZM meeting show - Rotterdam. This episode of TZM Global is hosted by UK chapter team member and co-coordinator of TZM Education (www.tzmeducation.org) James Phillips.
This episode includes an interview with the Global Chapters Administration Coordinator Gilbert Ismail regarding the upcoming European TZM Meetup in Rotterdam next month. For more information, please visit the following link: https://www.facebook.com/events/91743...
Also included in this show is a request for more content for TZM Global Radio. Please send pre-recorded submissions to: email@example.com.
Conventional wisdom would have you believe that most people enter adolescence with a head full of high-minded ideals and a willingness to shake up the system. As they get older, however, they gradually begin to accept the status quo. For me, that process is reversed.
The older I get, the more skeptical I become of our current social model. Why?
Let’s start with this:
It should be of increasing concern to all Americans that there is an extreme disconnect between what Americans believe about man-made climate change and what science tells us about it. Despite a clear scientific consensus, man-made climate change is more often than not framed as an ambiguous concept in the U.S. mainstream media. Consequently, climate change is generally thought to be far more uncertain than it actually is.
INTRODUCTION AND DISCLAIMER 
The purpose of this project is to enable supporters of a natural law resource based economic model (NLRBE) to understand and appreciate the need to approach the education system in an effort to initiate the value shift required for a more peaceful and sustainable future to emerge.
Today I was reading The Zeitgeist Movement Defined: Realizing a New Train of Thought again. I did so because I have felt the need to express a certain frustration with this movement of mine but haven't found the right words. I also didn't want to make any false assumptions about its architecture, so I went straight to the source with a pen in my hand.
I went through the nine pages that constitute the overview and extracted some notes I would like to post here:
We need more films about the social, ecological and economic change!
We want to make one and you could help us.
In our documentary "The Taste of Life" we want to show that there are people all over the world who are already practicing this change in a great way.
From social symptom to root causes came about as a by-product of ZDay 2013 in London, in which all but the introductory talk featured external organisations and speakers, each of whom seeks to address a particular social or environmental issue closely aligned with the movement's materials.
Transcript below; it can also be viewed as a PDF.
Welcome to: “3 Questions - What do you propose?” This thought exercise is intended both for the average person concerned about global problems and for those who are still confused about, or perhaps even opposed to, The Zeitgeist Movement.
Peter Joseph, ZDay 2016 "Where we go from here" March 26th, Athens Greece [ The Zeitgeist Movement ]
If you’ve ever played around with an old music amplifier, you probably know what a firing neuron sounds like (https://www.youtube.com/watch?v=8bxpz-YEuao).
A sudden burst of static? Check. A rapid string of pops, like hundreds of bursting balloons? Check. A rough, scratchy bzzzz that unexpectedly assaults your ears? Check again.
Neuroscientists have long used an impressive library of tools to eavesdrop on the electrical chattering of neurons in lab animals. Like linguists deciphering an alien communication, scientists carefully dissect the patterns of neural firing to try to distill the grammatical rules of the brain—the “neural code.”
By cracking the code, we may be able to emulate the way neurons communicate, potentially leading to powerful computers that work like the brain.
It’s been a solid strategy. But as it turns out, scientists may have been only brushing the surface—and missing out on a huge part of the neural conversation.
Recently, a team from UCLA (http://www.physics.ucla.edu/~mayank/) discovered a hidden layer of neural communication buried within the long, tortuous projections of neurons—the dendrites. Rather than acting as passive conductors of neuronal signals, as previously thought, the scientists found that dendrites actively generate their own spikes—five times larger and more frequent than the classic spikes stemming from neuronal bodies (dubbed “soma” in academic spheres).
"It’s like suddenly discovering that cables leading to your computer’s CPU can also process information—utterly bizarre, and somewhat controversial."
“Knowing [dendrites] are much more active than the soma fundamentally changes the nature of our understanding of how the brain computes information,” says Dr. Mayank Mehta (http://www.physics.ucla.edu/~mayank/people.html), who led the study (http://newsroom.ucla.edu/releases/ucla-research-upend-long-held-belief-about-how-neurons-communicate).
These findings suggest that learning may be happening at the level of dendrites rather than neurons, using fundamentally different rules than previously thought, Mehta explained to Singularity Hub.
How has such a wealth of computational power previously escaped scientists’ watchful eyes?
Part of it is mainstream neuroscience theory. According to standard teachings, dendrites are passive cables that shuttle electrical signals to the neuronal body, where all the computation occurs. If the integrated signals reach a certain threshold, the cell body generates a sharp electrical current—a spike—that can be measured by sophisticated electronics and amplifiers. These cell body spikes are believed to be the basis of our cognitive abilities, so of course, neuroscientists have turned their focus to deciphering their meanings.
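To make that textbook picture concrete, here is a minimal sketch of a leaky integrate-and-fire neuron in Python, in which dendrites are treated purely as passive wires whose inputs are summed at the soma until a threshold is crossed. This illustrates the standard model described above, not the new findings; all parameter values and names are illustrative assumptions, not taken from the UCLA study.

```python
import numpy as np

# Textbook view: dendrites act only as passive cables, so their inputs are
# simply summed into one current; the soma fires an all-or-none spike when
# a voltage threshold is crossed. Parameters are arbitrary illustrative values.

def simulate_soma(dendritic_inputs, dt=1e-3, tau=20e-3,
                  v_rest=-70e-3, v_threshold=-54e-3, v_reset=-70e-3, r_m=1e8):
    """dendritic_inputs: array (n_steps, n_dendrites) of input currents in amps."""
    v = v_rest
    spike_times = []
    for t, currents in enumerate(dendritic_inputs):
        i_total = currents.sum()                       # passive summation at the soma
        v += (-(v - v_rest) + r_m * i_total) / tau * dt
        if v >= v_threshold:                           # all-or-none somatic spike
            spike_times.append(t * dt)
            v = v_reset
    return spike_times

rng = np.random.default_rng(0)
inputs = rng.uniform(0, 2e-10, size=(1000, 5))         # 1 s of noisy input on 5 "dendrites"
print("first somatic spikes (s):", simulate_soma(inputs)[:5])
```

In this scheme the dendrites contribute nothing but current; everything the study describes as dendritic computation is absent by construction.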
But recent studies in brain slices suggest that the story’s more complicated. When recording from dendrites on neurons in a dish, scientists noticed telltale signs that they may also generate spikes, independent of the cell body. It’s like suddenly discovering that cables leading to your computer’s CPU can also process information—utterly bizarre, and somewhat controversial.
Although these dendritic spikes (or “dendritic action potentials”) have been observed in slices and anesthetized animals, whether they occur in awake animals and contribute to behavior is an open question, the team explains in their paper (http://science.sciencemag.org/content/early/2017/03/08/science.aaj1497).
To answer the question, the team decided to record from dendrites in animals going about their daily business. It’s a gigantic challenge: the average diameter of a dendrite is 100 times smaller than a single human hair—imagine trying to hit one with an electrode amongst a jungle of intertwined projections in the brain, without damaging anything else, while the animal is walking around!
Then there’s the actual recording aspect. Scientists usually carefully puncture the membrane with a sharp electrode to pick up signals from the cell body. Do the same to a delicate dendrite, and it shreds into tiny bits.
To get around all these issues, the UCLA team devised a method that allows them to place their electrode near, rather than inside, the dendrites of rats. After a slew of careful experiments to ensure that they were in fact picking up dendritic signals, the team finally had a tool to eavesdrop on their activity—and stream it live to computers—for the first time.
For four days straight, the team monitored their recordings while the rats ate, slept and navigated their way around a maze. The team implanted electrodes into a brain area that’s responsible for planning movements, called the posterior parietal cortex (https://en.wikipedia.org/wiki/Posterior_parietal_cortex), and patiently waited for signs of chitchatting dendrites.
Overnight, signals appeared on the team’s computer monitor that looked like jagged ocean waves, with each protrusion signaling a spike. Not only were the dendrites firing off action potentials, they were doing so in droves. As the rats slept, the dendrites were chatting away, spiking five times more than the cell bodies from which they originate. When the rats were awake and exploring the maze, the firing rate jumped to ten-fold.
What’s more, the dendrites were also “smart” in that they adapted their firing with time—a kind of plasticity that’s only been observed in neuronal bodies before. Since learning fundamentally relies on the ability to adapt and change, this suggests that the branches may potentially be able to “learn” on their own.
Because the dendrites are so much more active than the cell body, it suggests that a lot of activity and information processing in a neuron is happening in the dendrites without informing the cell body, says Mehta.
"Based purely on volume, because dendrites are 100 times larger than the cell body, it could mean that brains have 100 times more processing capacity than we previously thought."
This semi-independence raises a tantalizing idea: that each dendritic branch can act as a computational unit and process information, much like the US states having their own governance that works in parallel with federal oversight.
Neuroscientists have always thought that learning happens when the cell bodies of two neurons “fire together, wire together” (https://singularityhub.com/2017/03/15/new-artificial-synapse-bridges-the-gap-to-brain-like-computers/). But our results indicate that learning takes place when the input neuron and a dendritic spike—rather than a cell body spike—happen at the same time, says Mehta.
“This is a fundamentally different learning rule,” he adds.
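As a toy illustration of the contrast (my own sketch, not the paper's model), the snippet below applies the same Hebbian-style coincidence update twice: once pairing presynaptic input with somatic spikes, once pairing it with the more frequent dendritic spikes the study reports. The learning rate, spike probabilities, and variable names are hypothetical.

```python
import numpy as np

# Toy contrast between the two coincidence rules described above.
# Classic Hebbian learning strengthens a synapse when presynaptic input
# coincides with a *somatic* spike; the dendritic variant suggested by the
# study pairs input with a *dendritic* spike instead. Values are made up.

def update_weights(weights, presynaptic, postsynaptic_spikes, lr=0.01):
    """presynaptic: (n_steps, n_synapses) 0/1 activity.
    postsynaptic_spikes: (n_steps,) 0/1 spikes of either soma or dendrite."""
    for x, spike in zip(presynaptic, postsynaptic_spikes):
        weights += lr * spike * x          # strengthen synapses active at spike time
    return weights

rng = np.random.default_rng(1)
pre = rng.integers(0, 2, size=(1000, 4))     # random presynaptic activity on 4 synapses
soma_spikes = rng.random(1000) < 0.02        # sparse somatic spiking
dendrite_spikes = rng.random(1000) < 0.10    # roughly 5x more frequent dendritic spiking

print("soma-paired weights:    ", update_weights(np.zeros(4), pre, soma_spikes))
print("dendrite-paired weights:", update_weights(np.zeros(4), pre, dendrite_spikes))
```

Because dendritic spikes are so much more frequent, pairing against them produces far more weight updates, which is one intuition for why such a rule would behave fundamentally differently.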
Curiouser and curiouser
What’s even stranger is how the dendrites managed their own activity. Neuron spikes—the cell body type—are often considered “all or none,” in that you either have an action potential or not.
Zero or one; purely digital.
While dendrites can fire digital spikes, they also generate large, graded fluctuations roughly twice as large as the spikes themselves.
“This large range…shows analog computation in the dendrite. This has never been seen before in any neural activity patterns,” says Mehta.
So if dendrites can compute, what are they calculating?
The answer seems to be the here and now. The team looked at how both cell body and dendrites behaved while the rats explored the maze. While the cell body shot off spikes in anticipation of a behavior—turning a corner, stopping or suddenly rushing forward—the dendrites seemed to perform their computations right as the animal did something.
“Our findings suggest [that] individual cortical neurons take information about the current state of the world, present in the dendrites, and form an anticipatory, predictive response at the soma,” explain the authors (http://science.sciencemag.org/content/early/2017/03/08/science.aaj1497), adding that this type of computation is often seen in artificial neural network models.
The team plans to take their dendritic recordings to virtual reality in future studies, to understand how networks of neurons learn abstract concepts such as space and time.
The secret lives of neurons
What this study shows is that we’ve been underestimating the computational power of the brain, says Mehta (http://newsroom.ucla.edu/releases/ucla-research-upend-long-held-belief-about-how-neurons-communicate). Based purely on volume, because dendrites are 100 times larger than the cell body, it could mean that brains have 100 times more processing capacity than we previously thought, at least for rats.
But that’s just a rough estimate. And no doubt the number will change as scientists dig even deeper into the nuances of how neurons function.
This hybrid digital-analog, dendrite-soma, duo-processor parallel computing “is a major departure from what neuroscientists have believed for about 60 years,” says Mehta. It’s like uncovering a secret life of neurons, he adds.
These findings could galvanize other fields that aim to emulate the brain, like artificial intelligence or engineering new kinds of neuron-like computer chips to dramatically boost their computational prowess.
And if the results are replicated by other researchers in the field, our neuroscience textbooks are set for a massive overhaul.
Neurons will no longer be the basic computational unit of the brain—dendrites, with their strange analog-digital hybrid code, will take that throne.
Image Credit: NIH/NICHD via Flickr (https://www.flickr.com/photos/nichd/21086425615/in/photostream/), CC
Virtual reality has a long history in science fiction. From The Lawnmower Man (https://www.youtube.com/watch?v=3LNvXjb44-U) to The Matrix (https://www.youtube.com/watch?v=m8e-FF8MsqU), the idea of VR has inspired artists and gamers alike. But it’s only very recently that the technology has moved out of the lab and into people’s homes.
Since the 2012 Oculus Kickstarter (https://singularityhub.com/2013/05/31/oculus-rift-is-breathing-new-life-into-the-dream-of-virtual-reality/), VR has become a driving passion for technophiles and developers around the world. In 2016, the first consumer devices became mainstream, and now the only questions seem to be how quickly it will improve, who will adopt it, and which applications will prove the most revolutionary (https://singularityhub.com/2016/03/10/virtual-realitys-moment-of-truth-is-finally-here/).
Barry Pousman (https://medium.com/cinenation-show/vr-creator-series-barry-pousman-57febb93d191#.strs5hqcm) is one of the field’s leading innovators and a big believer in VR’s transformative potential. Pousman began working in the VR field as a founding member of the United Nations' VR initiative and has served as an advisor to some of the industry’s heavyweights, including Oculus (https://www.oculus.com/), HTC Vive (https://www.vive.com/ca/), and IDEO Labs (https://labs.ideo.com/2016/03/07/how-we-did-it-prototyping-in-virtual-reality/).
Pousman co-directed, co-produced, and shot the now-famous VR film Clouds Over Sidra (https://with.in/watch/clouds-over-sidra/), and his work has been screened at the World Economic Forum, the UN General Assembly, and the Sundance Film Festival. His company, Variable Labs (https://variablelabs.com/), is building an immersive VR learning platform for businesses with a special focus on corporate training.
I recently caught up with Pousman to get his take on VR’s recent past and its exciting future. In his corporate office in Oakland, California, we discussed the power of VR as an “empathy machine,” its dramatic impact on donations to aid Syrian refugees, and how his home office is already pretty close to Star Trek’s Holodeck.
I know that empathy is a big focus for Variable Labs. Could you say more about how you see immersive experiences helping people to become more empathic? What is the connection between VR and empathy?
What attracted me to the medium of VR in the first place is how incredible VR experiences can be and how much remains unknown within the field.
Although all artistic mediums can evoke empathy, VR is unlike traditional mediums (writing, theater, film). VR’s sheer form factor and the isolating experience it engenders inspire focus like no other medium before it. And when we marry that with the user experience of seeing and hearing the world from another human’s perspective, you get what Chris Milk calls “the empathy machine” (https://techcrunch.com/2015/02/01/what-it-feels-like/).
At Variable Labs, our end-goal is not to foster more empathy in the world, but instead to create measurable and positive behavior change for our audiences using commercial technology. We are engaging in efficacy research for our learning platform to see if and how users internalize and implement the lessons in their own lives.
You co-directed, co-produced and shot the United Nations VR documentary, Clouds Over Sidra. For those who are unfamiliar, could you say something about the film? What was it like making it? What was the advantage of using VR? And what was the overall impact for the UN?
The 360 film Clouds Over Sidra (http://with.in/watch/clouds-over-sidra/) allows audiences to spend a day in a refugee camp and is seen through the lens of a young Syrian girl. It was first filmed as an experiment with the United Nations (https://unitednationsvirtualreality.wordpress.com/) and the VR company Within (https://with.in/), but has since become a model for live-action 360 documentary and documentary journalism.
For me personally, the film was difficult to shoot because of the challenging environment at the camp. Not that it was particularly violent or unclean, but rather that the refugees there were so similar to my own friends and family at home. They were young professionals, doctors, and middle-class children, living as refugees with almost no opportunities to shape their own futures.
"Clouds Over Sidra is now being used by UNICEF street fundraisers and reporting a 100% increase in donations in cities across the world."
Throughout my career of making impact media, I’ve understood how important it is to get these types of stories out and into the hands of people that can really make a difference. And in measuring the actions taken by the audience of this film, it’s clear that it has had a dramatic effect on people.
When Clouds Over Sidra was screened at the last minute during a Syrian refugee donor conference, the conference raised $3.8 billion, far surpassing the expected $2.3 billion for the 24-hour event. In fact, the film is now being used by UNICEF street fundraisers, who are reporting a 100% increase in donations in cities across the world.
We’ve seen a kind of rise and fall of VR over the last forty years or so. In the 1980s and 1990s, there was a lot of excitement about VR linked to books like Neuromancer (1984) and movies like Brainstorm (1983), The Lawnmower Man (1992), and of course the Matrix trilogy (1999–2003). In your view, has VR now finally come of age?
Has VR come of age? Well-funded organizations such as NASA and the DoD have been using virtual reality for simulated learning since the late 70s. And similar to the computing industry—which began in the DoD and then moved into consumer and personal computing—VR hardware is now finally hitting the consumer market.
This means that instead of spending millions of dollars on VR hardware (https://www.vive.com/), anyone can purchase something very similar for only a few hundred dollars.
Steven Spielberg's upcoming film, Ready Player One (http://www.imdb.com/title/tt1677720/), will raise eyebrows and grow interest and appetite for personal immersive tech. And as these themes continue to grow in mainstream media, consumers and publishers will become increasingly inspired to explore new VR formats and entirely new use-cases.
Personally, I’m excited about further exploring the idea of convergent media, bridging the gap between linear storytelling and audience agency. For example, Pete Middleton’s Notes on Blindness (http://www.notesonblindness.co.uk/) pushes the envelope in this way by involving the audience in the action. And Gabo Arora's upcoming room-scale piece, The Last Goodbye (http://dragons.org/creators/gabo-arora/work/), is another example that uses "activity required" storytelling.
But in my view, VR won’t truly come of age until we can integrate artificial intelligence. Then the virtual worlds and characters will be able to respond dynamically to audience input and we can deliver more seamless human-computer interactions.
There are now a plethora of VR platforms for the mass market: Oculus, HTC Vive, Samsung Gear VR, Google Daydream and more. With the costs of the technology declining and computing capacity accelerating, where do you see VR having the most impact over the next 10-15 years?
When it comes to impact from VR, far and away the winner will be education.
The research from Stanford’s Virtual Human Interaction Lab, the MIT Media Lab, USC’s Institute for Creative Technologies, and many other top-tier institutions has shown the efficacy of VR for learning and development with excellent results. In fact, a new study from researchers in China (http://www.crossroadstoday.com/story/34006283/new-research-suggests-vr-offers-exciting-new-ways-to-unlock-student-potential) showed incredible improvement for students using VR when learning both theoretical concepts and practical skills at the high school level.
"Immersive education will permeate all sectors from medicine to transportation to agriculture."
And immersive education (VR, AR and MR) will permeate all sectors from medicine to transportation to agriculture. E-commerce is going to see a huge shift as well. Amazon and Google will no doubt be creating VR shopping experiences very soon if they haven’t started already. In addition to this, autonomous cars are a perfect fit for VR and AR. Self-driving cars will create an entirely new living room for families with both individual and group VR and AR experiences for learning and entertainment.
VR breaks the square frame of traditional narratives. What does VR mean to art and storytelling?
Seeing well-made and well thought out VR is one of the most satisfying experiences one can have.
I look at the incredible work of Oculus Story Studio (https://www.oculus.com/story-studio/), and it’s obvious they’ve tapped into a whole new way of looking at story development for VR. And Within (https://with.in/) continues to break new boundaries in art and storytelling by adding new technologies while maintaining nuanced storylines, most recently through voice input in their latest work, Life of Us.
One of the best places to discover this sort of content is through Kaleidoscope (http://kaleidovr.com/), a traveling VR festival and collective of VR and AR artists, animators, filmmakers, and engineers.
There looks to be a pretty wide array of applications for VR including military training, education, gaming, advertising, entertainment, etc. What kind of projects are you currently working on?
We are excited about the enterprise training space. Imagine on your first day of work you get handed a nice VR headset instead of a stack of books and papers.
We used to think of the platform we’re building as the “Netflix of Learning” but we’ve now started exploring a Virtual Campus model. So imagine on that first day of work, you can (virtually) sit down with your new CEO in their office, meet other employees, speak with your HR manager, and fill out your new-hire forms from inside the headset using the controller.
For now, VR is limited to headsets or head-mounted displays (HMD). What new interfacing systems could we see in the future? When will we get the Star Trek holodeck?
There will be two form factors of VR/AR as we move forward, glasses for mobile use and rooms for higher fidelity experiences. I just installed an HTC Vive in my home office, and it feels pretty close to the Holodeck already! The empty room turns into an art gallery, a paintball field, a deep-sea dive, and a public speaking simulator. And what we get to take out of it is an expanded viewpoint, a raised consciousness, memories and the occasional screenshot. This is just the beginning, and it’s going to change how we learn and play in profound ways.
Image Credit: Shutterstock (http://www.shutterstock.com)
It’s tempting to think of the mind as a layer that sits on top of more primitive cognitive structures. We experience ourselves as conscious beings, after all, in a way that feels different to the rhythm of our heartbeat or the rumblings of our stomach. If the operations of the brain can be separated out and stratified, then perhaps we can construct something akin to just the top layer, and achieve human-like artificial intelligence (AI) while bypassing the messy flesh that characterises organic life (https://aeon.co/essays/can-we-make-consciousness-into-an-engineering-problem).
I understand the appeal of this view, because I co-founded SwiftKey, a predictive-language software company that was bought by Microsoft. Our goal is to emulate the remarkable processes by which human beings can understand and manipulate language. We’ve made some decent progress: I was pretty proud of the elegant new communication system we built for the physicist Stephen Hawking between 2012 and 2014. But despite encouraging results, most of the time I’m reminded that we're nowhere near achieving human-like AI. Why? Because the layered model of cognition is wrong. Most AI researchers are currently missing a central piece of the puzzle: embodiment.
Things took a wrong turn at the beginning of modern AI, back in the 1950s. Computer scientists decided to try to imitate conscious reasoning by building logical systems based on symbols. The method involves associating real-world entities with digital codes to create virtual models of the environment, which could then be projected back onto the world itself. For instance, using symbolic logic, you could instruct a machine to ‘learn’ that a cat is an animal by encoding a specific piece of knowledge using a mathematical formula such as ‘cat > is > animal’. Such formulae can be rolled up into more complex statements that allow the system to manipulate and test propositions — such as whether your average cat is as big as a horse, or likely to chase a mouse.
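To make the 'cat > is > animal' example concrete, here is a minimal sketch of what such symbolic encoding looks like in practice: facts stored as triples and chained to test a proposition. This is my own illustration of the general approach, not code from any historical system, and all facts and names are invented for the example.

```python
# Minimal sketch of symbolic knowledge representation, in the spirit of the
# 'cat > is > animal' example above (illustrative only).

facts = {
    ("cat", "is_a"): "mammal",
    ("mammal", "is_a"): "animal",
    ("cat", "chases"): "mouse",
}

def is_a(entity, category):
    """Follow the chain of 'is_a' facts to test a proposition."""
    current = entity
    while (current, "is_a") in facts:
        current = facts[(current, "is_a")]
        if current == category:
            return True
    return False

print(is_a("cat", "animal"))   # True: cat -> mammal -> animal
print(is_a("cat", "horse"))    # False: no such chain exists
```

The brittleness described next follows directly from this design: every relationship must be hand-encoded, and anything ambiguous or unanticipated simply has no entry in the table.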
This method found some early success in simple contrived environments: in ‘SHRDLU’, a virtual world created by the computer scientist Terry Winograd at MIT between 1968 and 1970, users could talk to the computer in order to move around simple block shapes such as cones and balls. But symbolic logic proved hopelessly inadequate when faced with real-world problems, where fine-tuned symbols broke down in the face of ambiguous definitions and myriad shades of interpretation.
In later decades, as computing power grew, researchers switched to using statistics to extract patterns from massive quantities of data. These methods are often referred to as ‘machine learning’. Rather than trying to encode high-level knowledge and logical reasoning, machine learning employs a bottom-up approach in which algorithms discern relationships by repeating tasks, such as classifying the visual objects in images or transcribing recorded speech into text. Such a system might learn to identify images of cats, for example, by looking at millions of cat photos, or to make a connection between cats and mice based on the way they are referred to throughout large bodies of text.
Machine learning has produced many tremendous practical applications in recent years. We’ve built systems that surpass us at speech recognition, image processing and lip reading; that can beat us at chess, Jeopardy! and Go; and that are learning to create visual art, compose pop music and write their own software programs. To a degree, these self-teaching algorithms mimic what we know about the subconscious processes of organic brains. Machine-learning algorithms start with simple ‘features’ (individual letters or pixels, for instance) and combine them into more complex ‘categories’, taking into account the inherent uncertainty and ambiguity in real-world data. This is somewhat analogous to the visual cortex, which receives electrical signals from the eye and interprets them as identifiable patterns and objects.
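For contrast with the symbolic sketch above, here is a minimal example of the bottom-up approach: a classifier that is given only raw pixel 'features' and labeled examples, and learns the categories from repetition. It is an illustration of the general method under stated assumptions (scikit-learn's small built-in digits dataset, a plain logistic-regression model), not any specific system mentioned in the text.

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Bottom-up machine learning in miniature: no symbolic rules are encoded;
# the classifier infers the categories purely from labeled pixel data.
digits = load_digits()                              # 8x8 grayscale digit images
x_train, x_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=2000)           # simple linear classifier
model.fit(x_train, y_train)                         # "learning" = fitting to many examples
print(f"held-out accuracy: {model.score(x_test, y_test):.2f}")
```

Nothing in this pipeline knows what a digit, a cat, or an animal is; the relationships emerge statistically from the data, which is exactly the strength and the limitation the following paragraphs discuss.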
But algorithms are a long way from being able to think like us. The biggest distinction lies in our evolved biology, and how that biology processes information. Humans are made up of trillions of eukaryotic cells, which first appeared in the fossil record around 2.5 billion years ago. A human cell is a remarkable piece of networked machinery that has about the same number of components as a modern jumbo jet — all of which arose out of a longstanding, embedded encounter with the natural world. In Basin and Range (1981), the writer John McPhee observed that, if you stand with your arms outstretched to represent the whole history of the Earth, complex organisms began evolving only at the far wrist, while ‘in a single stroke with a medium-grained nail file you could eradicate human history’.
The traditional view of evolution suggests that our cellular complexity evolved from early eukaryotes via random genetic mutation and selection. But in 2005 the biologist James Shapiro at the University of Chicago outlined a radical new narrative (http://shapiro.bsd.uchicago.edu/Shapiro.2005.Gene.pdf). He argued that eukaryotic cells work ‘intelligently’ to adapt a host organism to its environment by manipulating their own DNA in response to environmental stimuli. Recent microbiological findings lend weight to this idea. For example, mammals’ immune systems have the tendency to duplicate sequences of DNA in order to generate effective antibodies to attack disease, and we now know that at least 43 per cent of the human genome is made up of DNA that can be moved from one location to another, through a process of natural ‘genetic engineering’.
Now, it’s a bit of a leap to go from smart, self-organising cells to the brainy sort of intelligence that concerns us here. But the point is that long before we were conscious, thinking beings, our cells were reading data from the environment and working together to mould us into robust, self-sustaining agents. What we take as intelligence, then, is not simply about using symbols to represent the world as it objectively is. Rather, we only have the world as it is revealed to us, which is rooted in our evolved, embodied needs as an organism. Nature ‘has built the apparatus of rationality not just on top of the apparatus of biological regulation, but also from it and with it’, wrote the neuroscientist Antonio Damasio in Descartes’ Error (1994), his seminal book on cognition. In other words, we think with our whole body, not just with the brain.
I suspect that this basic imperative of bodily survival in an uncertain world is the basis of the flexibility and power of human intelligence. But few AI researchers have really embraced the implications of these insights. The motivating drive of most AI algorithms is to infer patterns from vast sets of training data — so it might require millions or even billions of individual cat photos to gain a high degree of accuracy in recognising cats. By contrast, thanks to our needs as an organism, human beings carry with them extraordinarily rich models of the body in its broader environment. We draw on experiences and expectations to predict likely outcomes from a relatively small number of observed samples. So when a human thinks about a cat, she can probably picture the way it moves, hear the sound of purring, feel the impending scratch from an unsheathed claw. She has a rich store of sensory information at her disposal to understand the idea of a ‘cat’, and other related concepts that might help her interact with such a creature.
This means that when a human approaches a new problem, most of the hard work has already been done. In ways that we’re only just beginning to understand, our body and brain, from the cellular level upwards, have already built a model of the world that we can apply almost instantly to a wide array of challenges. But for an AI algorithm, the process begins from scratch each time. There is an active and important line of research, known as ‘inductive transfer’, focused on using prior machine-learned knowledge to inform new solutions. However, as things stand, it’s questionable whether this approach will be able to capture anything like the richness of our own bodily models.
On the same day that SwiftKey unveiled Hawking’s new communications system in 2014, he gave an interview to the BBC (http://www.bbc.co.uk/news/technology-30290540) in which he warned that intelligent machines could end mankind. You can imagine which story ended up dominating the headlines. I agree with Hawking that we should take the risks of rogue AI seriously. But I believe we’re still very far from needing to worry about anything approaching human intelligence — and we have little hope of achieving this goal unless we think carefully about how to give algorithms some kind of long-term, embodied relationship with their environment.
This article was originally published at Aeon (https://aeon.co) and has been republished under Creative Commons.
Image Credit: Patroclus by Jacques-Louis David (1780) via Wikipedia (https://en.wikipedia.org/wiki/Patroclus#/media/File:Jacques-Louis_David_-_Patroclus_-_WGA06044.jpg)
This article is part of a new series exploring the skills leaders must learn to make the most of rapid change in an increasingly disruptive world. The first article in the series, “How the Most Successful Leaders Will Thrive in an Exponential World” (https://singularityhub.com/2017/01/11/how-the-most-successful-leaders-will-thrive-in-an-exponential-world/), broadly outlines four critical leadership skills—futurist, technologist, innovator, and humanitarian—and how they work together.
Today's post, part three in the series, takes a more detailed look at leaders as humanitarians. Be sure to check out part two of the series, "How Leaders Dream Boldly to Bring New Futures to Life" (https://singularityhub.com/2017/02/23/how-leaders-dream-boldly-to-bring-new-futures-to-life/), and stay tuned for upcoming articles exploring leaders as technologists and innovators.
Recently, Mark Zuckerberg, Facebook’s founder and CEO, posted a public manifesto of nearly 6,000 words (https://www.facebook.com/notes/mark-zuckerberg/building-global-community/10154544292806634) to Facebook’s community of almost 1.9 billion people called “Building a Global Community.” In the opening lines, readers quickly see that this isn’t about a product update or policy change, but rather focuses on a larger philosophical question that Zuckerberg courageously poses: “Are we building the world we all want?”
The manifesto was not without controversy, raising very public concerns from traditional media companies and questions from Washington insiders who actively wonder about Zuckerberg’s longer-term political aspirations.
Regardless of your interpretation of the manifesto’s intent, what’s remarkable is that a private sector CEO—someone who is typically laser-focused on growth projections and shareholder return—has declared a very ambitious aspiration to use the technology platform to promote and strengthen a global community.
As we enter an era of increasing globalization and connectivity, what is the responsibility of leaders, not just ones elected to public office, to support the betterment of the lives they touch? How might leaders support the foundational needs of their employees, customers, investors and strategic partners—to lead like a humanitarian?
What It Means to Lead Like a Humanitarian
To lead like a humanitarian requires making choices to transform scarce resources into abundant opportunities to positively and responsibly impact communities far beyond our own.
This might mean making big investments in solving our world’s biggest challenges. Or it might mean adopting a business model that intentionally serves a specific population in need or promotes sustainability, community service and employee engagement outside the office.
At its foundation, leading like a humanitarian means taking responsibility for how we connect our work—regardless of the job—to a meaningful purpose beyond growth and profitability.
Unlocking Possibilities by Liberating Scarce Resources
Technology is at the core of some of today’s biggest businesses, and organizations can have more impact now than in the past. While tech can be used to produce great products, it can also be aimed at solving big problems in the world (https://singularityhub.com/2016/08/17/solutions-to-the-worlds-biggest-problems-are-within-our-reach/) by liberating resources that were once scarce and making them more abundant for more people.
What does this look like in practice? Apps abound that use the sensors and software on your phone for entertainment, everyday productivity, and socializing. But the same sensors, motivated by a different purpose, can be used to make your phone an intelligent aid for the blind, a diagnostic tool for doctors in remote areas, or an off-the-shelf radiation detector.
It’s not to say the first purpose is worthless—it’s great to relax with a quick game of Angry Birds every so often. But it isn’t the only goal worth pursuing, and with a dose of creativity and a different focus, the same skills used to produce games can make tools to help those in need.
This is an example using now-familiar mobile technology, but other technologies are coming with even greater potential for positive impact. These include breakthroughs in areas such as digital fabrication, biotechnology, and artificial intelligence and robotics. As these technologies arrive and become more accessible, we need to consider how they can be used for good too.
But technology isn’t the only resource to which leaders should pay heed.
Perhaps one of the most valuable resources technology can help liberate is human potential. No problem goes unsolved without someone taking up the challenge and aiming to find a solution. Leaders need to motivate and enable team members as much as possible.
And here, technology is proving a good tool too. A recent Deloitte report on global human capital trends (https://dupress.deloitte.com/dup-us-en/focus/human-capital-trends.html) found that the digitization of human capital processes is radically changing how employees engage with work, from the recruitment process through leadership development and career advancement.
Technology is enabling learning to move from episodic, generic training to continuous, blended social exchanges. Platforms such as Degreed, EdCast and Axonify move beyond bounded online classes by offering microlearning and on-demand learning opportunities.
Leaders need to assess if they are supporting a culture conducive to continuous learning and if they are empowering all employees to learn from and with each other.
As we widen our view of what’s possible, what actually happens in practice will change too. Together, the ability of people and technology to solve big problems has never been greater.
Developing New Business Models
As technology enables teams, big and small, to make an impact as never before, leaders and organizations need to reimagine who they are serving, what they are serving, and how they are serving them in viable, sustainable and profitable ways. Businesses no longer need to choose between maximizing profit and helping society. They can choose to do both.
Last year, Fortune Magazine’s "Change the World" cover story (http://beta.fortune.com/change-the-world/) featured 50 successful global companies that are doing well by doing good.
Its top profiled company, GlaxoSmithKline, is making choices to ensure growth and help people by reversing the traditional business model of maximizing revenue through protected drug patents. It is no longer filing patents in poor countries, enabling lower prices and improved access to medicine in those countries. It is also partnering with NGOs to retrain workers on the proper administration of drugs and collaborating with governments to make its drugs part of national treatment programs for HIV and other widespread diseases.
Nearly five billion new people are expected to come online through high-speed internet in the next ten years (https://singularityhub.com/2015/05/11/the-world-in-2025-8-predictions-for-the-next-10-years/). Now is the time to imagine what new opportunities are on the horizon—not just for tapping new markets and customers but for how you empower them too.
In an increasingly dynamic world, re-evaluating old business models is a key new strategy.
Leaders need to build proficiency in both critically examining current models and creatively exploring fundamentally new ways of thinking about value creation and capture.
Live a Higher Purpose Within Your Organization
One of the most powerful ways a leader can motivate and enable these changes is to actively and continuously clarify the organization’s higher purpose—the “why” that drives the work—and to make choices that are consistent with what the company stands for.
There has been a lot of social science suggesting all workers—especially those in “Generation Z”—are motivated by work that matters to them. In her book The Progress Principle, Harvard Business School professor Teresa Amabile argues that the most important motivator of great work is the feeling of meaning and progress—that your work matters (https://hbr.org/2010/01/the-hbr-list-breakthrough-ideas-for-2010).
Leading as a humanitarian requires modeling meaning throughout the organization and behaving in ways congruent with core values, internally and externally.
Last year, Marc Benioff, the CEO of Salesforce, pushed for LGBT rights in Indiana, North Carolina and Georgia (http://www.indystar.com/story/money/2016/05/06/salesforce-uses-expansion-push-lgbt-rights-indiana/84035760). In 2015, a company-wide survey revealed Salesforce had a gender discrepancy in pay, which Benioff remedied in what has been called the “$3 million raise.” In January, Salesforce said they would adjust pay again to level out the salaries of employees who joined through acquisitions from companies that didn’t share the same pay-equality policies. And they’ve said they will monitor the gap as an ongoing initiative and commitment to employees.
In a Time article last year (http://time.com/4276603/marc-benioff-salesforce-lgbt-rfra/), Benioff stated his rationale for taking an active stance: “If I were to write a book today, I would call it CEO 2.0: How the Next Generation CEO Has to Be an Advocate for Stakeholders, Not Just Shareholders. That is, today CEOs need to stand up not just for their shareholders, but their employees, their customers, their partners, the community, the environment, schools, everybody. Anything that’s a key part of their ecosystem.”
There’s More Than One Way to Lead Like a Humanitarian
Leading like a humanitarian is a mindset and set of practices, not a single, defined position. But try asking a simple question as you make decisions about the direction of your organization: How does our work positively impact the world around us, and can we do better?
This shift in view looks beyond only productivity and profit toward empowerment and shared possibility. Equipped with ever-more-powerful technologies, capable of both greater harm and good, leaders need to consider how their decisions will make the world a better place.
Banner Image Credit: Zoe Brinkley (http://www.brinkley-ink.com/)
Is there a uniform set of moral laws, and if so, can we teach artificial intelligence those laws to keep it from harming us? This is the question explored in an original short film recently released by The Guardian.
In the film, the creators of an AI with general intelligence call in a moral philosopher to help them establish a set of moral guidelines for the AI to learn and follow—which proves to be no easy task.
Complex moral dilemmas often don’t have a clear-cut answer, and humans haven’t yet been able to translate ethics into a set of unambiguous rules. It’s questionable whether such a set of rules can even exist, as ethical problems often involve weighing factors against one another and seeing the situation from different angles.
So how are we going to teach the rules of ethics to artificial intelligence, and by doing so, avoid having AI ultimately do us great harm or even destroy us? This may seem like a theme from science fiction, yet it’s become a matter of mainstream debate in recent years.
OpenAI (https://singularityhub.com/2015/12/20/inside-openai-will-transparency-protect-us-from-artificial-intelligence-run-amok/), for example, was funded with a billion dollars in late 2015 to learn how to build safe and beneficial AI. And earlier this year, AI experts convened in Asilomar, California to debate best practices for building beneficial AI (http://www.kurzweilai.net/beneficial-ai-conference-develops-asilomar-ai-principles-to-guide-future-ai-research).
Concerns have been voiced about AI being racist or sexist (https://singularityhub.com/2017/01/31/the-struggle-to-make-ai-less-biased-than-its-creators/), reflecting human bias in ways we didn’t intend—but AI can only learn from the data available to it, which in many cases is very human.
As much as the engineers in the film insist ethics can be “solved” and there must be a “definitive set of moral laws,” the philosopher argues that such a set of laws is impossible, because “ethics requires interpretation.”
There’s a sense of urgency to the conversation, and with good reason—all the while, the AI is listening and adjusting its algorithm. One of the most difficult to comprehend—yet most crucial—features of computing and AI is the speed at which it’s improving, and the sense that progress will continue to accelerate. As one of the engineers in the film puts it, “The intelligence explosion will be faster than we can imagine.”
Futurists like Ray Kurzweil predict this intelligence explosion will lead to the singularity—a moment when computers, advancing their own intelligence in an accelerating cycle of improvements, far surpass all human intelligence. The questions both in the film and among leading AI experts are what that moment will look like for humanity, and what we can do to ensure artificial superintelligence benefits rather than harms us.
The engineers and philosopher in the film are mortified when the AI offers to “act just like humans have always acted.” The AI’s idea to instead learn only from history’s religious leaders is met with even more anxiety. If artificial intelligence is going to become smarter than us, we also want it to be morally better than us. Or as the philosopher in the film so concisely puts it: "We can't rely on humanity to provide a model for humanity. That goes without saying."
If we’re unable to teach ethics to an AI, it will end up teaching itself, and what will happen then? It just may decide we humans can’t handle the awesome power we’ve bestowed on it, and it will take off—or take over.
Image Credit: The Guardian/YouTube (https://www.youtube.com/watch?v=-S8a70KXZlI)
Airbus Swears Its Pod/Car/Drone Is a Serious Idea, Definitely (https://www.wired.com/2017/03/airbus-swears-podcardrone-serious-idea-definitely/)
Jack Stewart | WIRED
"Airbus came up with a crazy idea to change all of that with Pop.Up, a conceptual two-passenger pod that clips to a set of wheels, hangs under a quadcopter, links with others to create a train, and even zips through a hyperloop tube...As humans pack into increasingly dense global mega-cities, they’ll need new ideas for transport to avoid gridlock."
How This Japanese Robotics Master Is Building Better, More Human Androids (https://www.fastcompany.com/3068963/how-this-japanese-robotics-master-is-building-better-more-human-androids)
Harry McCracken | Fast Company
"On the tech side, making a robot look and behave like a person involves everything from electronics to the silicone Ishiguro’s team uses to simulate skin. 'We have a technology to precisely control pneumatic actuators,' he says, noting, as an example of what they need to re-create, that 'the human shoulder has four degrees of freedom.'"
A Virtual Version of You That Can Visit Many VR Worlds (https://www.technologyreview.com/s/603847/a-virtual-version-of-you-that-can-visit-many-vr-worlds/)
Rachel Metz | MIT Technology Review
"The Ready Room demo lets you choose your avatar’s gender, pick from two different body types (both somewhat cartoony), adjust a range of body traits like skin hue, weight, and head shape, and dial in such specific things as the shapes and spacing of eyes, nose, and lips. You can choose clothes, hairstyles, and sneakers, and you can keep a portfolio of the same avatar in different outfits or make several different ones."
A New Bill Would Allow Employers to See Your Genetic Information—Unless You Pay a Fine (http://www.vox.com/policy-and-politics/2017/3/13/14907250/hr1313-bill-genetic-information)
Julia Belluz | VOX
"Now this new bill, https://www.congress.gov/115/bills/hr1313/BILLS-115hr1313ih.pdf#page=3 " target="_blank">HR 1313—or the Preserving Employee Wellness Programs Act—seeks to clarify exactly how much personal health data employers can ask their employees to disclose. And in doing so, the bill also opens the door to employers requesting information from personal genetics tests or family medical histories. Unsurprisingly, HR 1313 has captured the media’s imagination. Vanity Fair suggested the bill 'could make one sci-fi dystopia a reality.'"
Intel Buys Mobileye in $15.3 Billion Bid to Lead Self-Driving Car Market (https://www.nytimes.com/2017/03/13/business/dealbook/intel-mobileye-autonomous-cars-israel.html?_r=0)
Mark Scott | The New York Times
"Mobileye, founded in Jerusalem in 1999, has signed deals with several automakers, including Audi, for the use of its vision and camera technology, which uses machine learning and complex neuroscience to help drivers—and increasingly cars themselves—avoid obstacles on the road."
Image Credit: Italdesign (http://www.italdesign.it/press/)
If you think augmented reality is only fun and games, consider that we’ve already witnessed the first known police action taken against hologram technology. During the summer of 2015, a performance by controversial gangster-rapper, Keith Cozart, was shut down when local police discovered the musician was broadcast as a hologram into a benefit concert in Indiana—close to the border of his home state of Illinois.
Cozart, who goes by the stage name “Chief Keef,” is from a rough neighborhood in Chicago, and has ties to local gangs as well as a criminal record including felony gun charges (http://www.billboard.com/articles/news/6649099/chief-keef-chicago-concert-shut-down-rahm-emanuel-mayor-hologram). His music, which glamorizes a gang lifestyle and violence, has prompted public officials—including Chicago mayor Rahm Emanuel—to pressure music festivals to avoid inviting Cozart because they say it poses a “significant public safety risk.”
Due to outstanding warrants for his arrest, Cozart can’t even return to Chicago. Unable to perform in the area, he took the innovative approach of performing from California as a hologram beamed into the Indiana music festival. But even that was too much for police, and the performance was immediately stopped (http://www.rollingstone.com/music/news/chief-keef-hologram-concert-shut-down-by-police-20150726).
The Chief Keef incident signals the beginning of more issues to come. Regulating the free movement of augmented reality and hologram technology will be an increasingly painful headache for police forces and city officials going forward.
Insubordinate holograms have been used in a far more political fashion as well. Two years ago, the Spanish government passed what has become known as the “gag law”—rules aimed at restricting protesters from convening outside government buildings. Demonstrators hoping to voice their dissent against the legislation staged the world’s first hologram protest (http://www.independent.co.uk/news/world/europe/spains-hologram-protest-thousands-join-virtual-march-in-madrid-against-new-gag-law-10170650.html) as a way to circumvent the rules. To pull off the stunt, the protesters hired a production company to film marchers walking along a street at another location and then projected that footage as a hologram onto a translucent fabric screen erected outside the buildings.
The fact that holograms give the user a special kind of telepresence, one with a full 3D rendering in physical space, means they could be used in increasingly clever ways. Imagine an exiled and controversial figure able to give public speeches in front of large crowds. Edward Snowden, for example, could one day graduate from his robot body (https://www.theguardian.com/us-news/shortcuts/2016/jun/27/snowbot-edward-snowden-telepresence-robot) and appear on stage or in larger public venues—and I’m sure there are more than a few political and military personnel who might seek to legislate that possibility away.
It isn’t just holograms that are causing turmoil. Location-based augmented reality experiences create issues since they cause real people to move around real places in new and unpredictable ways. More than a few Pokemon Go flash mobs (https://www.youtube.com/watch?v=MLdWbwQJWI0) disrupted cities around the world during last summer’s frenzy.
Niantic, the company behind the game, had to remove in-game locations from the Holocaust Museum and Hiroshima memorial (http://www.mirror.co.uk/tech/pokmon-go-remove-pokstops-holocaust-8537988), underscoring how important it is to draw boundaries between sensitive real-world sites and these new digital worlds. In the Netherlands, the government is taking Niantic to court over the fact that thousands of players were drawn to protected areas, damaging some of the land (http://time.com/4513371/pokemon-go-netherlands-court-beaches/).
Public officials in the United States are now responding to these sorts of issues as well. Last year, a state representative in Illinois proposed a bill nicknamed “Pidgey’s Law” (http://www.ilga.gov/legislation/fulltext.asp?DocName=&SessionId=88&GA=99&DocTypeId=HB&DocNum=6601&GAID=13&LegID=98270&SpecSess=&Session=), which would require companies like Niantic to remove in-game locations at the request of property owners.
More recently, the city of Milwaukee passed an ordinance requiring games like Pokemon Go to obtain permits before using its parks as in-game locations, after players left piles of trash in a public park (https://www.engadget.com/2017/02/06/pokemon-go-milwaukee/). China has gone so far as to ban the game entirely, citing threats to consumer safety and questions about the use of players’ geographic data (http://venturebeat.com/2017/01/15/china-cites-national-security-as-it-bans-pokemon-go-and-other-ar-games/).
The United States Congress is already beginning to ask questions about how to protect consumers from the perils of augmented reality (http://www.commerce.senate.gov/public/index.cfm/hearings?ID=9C42F271-98FE-4146-ADD9-8909E5C2020D), including whether hackers might one day be able to edit our reality in weird and nefarious ways (http://www.theverge.com/2016/11/17/13666914/senate-augmented-reality-hearing-pokemon-go). Legislators will also need to explore how telepresence holograms and location-based experiences should be permitted to navigate the world.
The promise of augmented reality is that it will transform our relationship to the physical world—where we go, what we do there, and why we do it—without changing a single thing about our real-world infrastructure. That’s an incredibly disruptive prospect, but it will also bring some very real problems.
We can already see the challenges still to come, and going forward we’re going to need protocols and policies that regulate augmented reality in order to maintain our real-world peace of mind.
Image Credit: Ukraine Today/YouTube (https://www.youtube.com/watch?v=r6tVVcgX-iw)
Understanding the human brain is arguably the greatest challenge of modern science. The leading approach for most of the past 200 years (http://www.sciencemuseum.org.uk/broughttolife/people/paulbroca; https://books.google.co.uk/books?id=020xAQAAIAAJ&printsec=frontcover&dq=how+to+read+character:+a+new+illustrated+hand-book&hl=en&sa=X&redir_esc=y#v=onepage&q=how%20to%20read%20character%3A%20a%20new%20illustrated%20hand-book&f=false) has been to link its functions to different brain regions or even individual neurons (brain cells). But recent research increasingly suggests that we may be taking completely the wrong path if we are ever to understand the human mind (http://www.nature.com/nrn/journal/v10/n3/abs/nrn2575.html).
The idea that the brain is made up of numerous regions that perform specific tasks is known as “modularity” (https://theconversation.com/how-our-modular-brain-pieces-the-world-together-58990). And, at first glance, it has been successful. For example, it can provide an explanation for how we recognize faces by activating a chain of specific brain regions in the occipital (http://brainmadesimple.com/occipital-lobe.html) and temporal (http://brainmadesimple.com/temporal-lobe.html) lobes. Bodies, however, are processed by a different set of brain regions. And scientists believe that yet other areas—memory regions—help combine these perceptual stimuli to create holistic representations of people. The activity of certain brain areas has also been linked to specific conditions and diseases (https://theconversation.com/what-a-little-known-brain-region-can-tell-us-about-depression-60410).
The reason this approach has been so popular is partly due to technologies that are giving us unprecedented insight into the brain. Functional magnetic resonance imaging (fMRI), which tracks changes in blood flow in the brain, allows scientists to see brain areas light up in response to activities, helping them map functions (https://theconversation.com/brain-scanners-allow-scientists-to-read-minds-could-they-now-enable-a-big-brother-future-72435). Meanwhile, optogenetics, a technique that genetically modifies neurons so that their electrical activity can be controlled with light pulses, can help us explore their specific contribution to brain function (https://theconversation.com/exciting-cells-and-controlling-heartbeats-could-optogenetics-create-drug-free-treatments-56539).
While both approaches generate fascinating results (https://www.sciencedaily.com/releases/2014/01/140106103741.htm), it is not clear whether they will ever provide a meaningful understanding of the brain. A neuroscientist who finds a correlation between a neuron or brain region and a specific but, in principle, arbitrary physical parameter, such as pain, will be tempted to conclude that this neuron or this part of the brain controls pain. This is ironic, because the brain’s inherent function, in the neuroscientist as much as in anyone else, is to find correlations in whatever task it performs.
But what if we instead considered the possibility that all brain functions are distributed across the brain and that all parts of the brain contribute to all functions? If that is the case, the correlations found so far may be a perfect trap of the intellect. We would then have to solve the problem of how a region or neuron type with a supposedly specific function interacts with other parts of the brain to generate meaningful, integrated behavior. So far, there is no general solution to this problem, just hypotheses in specific cases, such as for recognizing people.
The problem can be illustrated by a recent study which found that the psychedelic drug LSD can disrupt the modular organization that can explain vision (https://theconversation.com/how-lsd-helped-us-probe-what-the-sense-of-self-looks-like-in-the-brain-57703). What’s more, the level of disorganization is linked with the severity of the “breakdown of the self” that people commonly experience when taking the drug. The study found that the drug affected the way several brain regions were communicating with the rest of the brain, increasing their level of connectivity. So if we ever want to understand what our sense of self really is, we need to understand the underlying connectivity between brain regions as part of a complex network.
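To make the network framing concrete, here is a minimal sketch (using made-up data, not anything from the LSD study) of how researchers commonly quantify connectivity between brain regions: correlate each pair of regional activity time series and treat the resulting matrix as a graph.

```python
import numpy as np

# Hypothetical example: activity time series for a handful of brain regions,
# of the kind that might be extracted from fMRI. Here we just generate noise
# with a shared component so the regions are partly correlated.
rng = np.random.default_rng(0)
n_regions, n_timepoints = 6, 200
shared_signal = rng.standard_normal(n_timepoints)
activity = 0.6 * shared_signal + rng.standard_normal((n_regions, n_timepoints))

# "Functional connectivity": pairwise correlation between regional time series.
connectivity = np.corrcoef(activity)

# Treat the matrix as a network by keeping only the strongest links.
adjacency = (np.abs(connectivity) > 0.5) & ~np.eye(n_regions, dtype=bool)
print(connectivity.round(2))
print("links per region:", adjacency.sum(axis=1))
```

In this picture, a “function” lives in the pattern of links rather than in any single node, which is exactly the shift in perspective the distributed view calls for.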
A way forward?
Some researchers now believe (https://www.scientificamerican.com/article/a-new-phrenology/) the brain and its diseases in general can only be understood as an interplay between tremendous numbers of neurons distributed across the central nervous system (http://www.nature.com/nrn/journal/v10/n3/abs/nrn2575.html). The function of any one neuron is dependent on the functions of all the thousands of neurons it is connected to. These, in turn, are dependent on those of others. The same region or the same neuron may be used across a huge number of contexts, but have different specific functions depending on the context.
It may indeed be a tiny perturbation of these interplays between neurons that, through avalanche effects in the networks, causes conditions like depression or Parkinson’s disease. Either way, we need to understand the mechanisms of the networks in order to understand the causes and symptoms of these diseases. Without the full picture, we are not likely to be able to successfully cure these and many other conditions.
In particular, neuroscience needs to start investigating how network configurations arise from the brain’s lifelong attempts to make sense of the world. We also need to get a clear picture of how the cortex, brainstem and cerebellum interact together with the muscles and the tens of thousands of optical and mechanical sensors of our bodies to create one, integrated picture.
Connecting back to physical reality is the only way to understand how information is represented in the brain. One of the reasons we have a nervous system in the first place is that the evolution of mobility required a controlling system. Cognitive, mental functions, and even thoughts, can be regarded as mechanisms that evolved to better plan for the consequences of movement and actions (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3619124/).
So the way forward for neuroscience may be to focus more on general neural recordings (with optogenetics or fMRI)—without aiming to hold each neuron or brain region responsible for any particular function. This could be fed into theoretical network research, which has the potential to account for a variety of observations and provide an integrated functional explanation. In fact, such a theory should help us design experiments, rather than only the other way around.
It won’t be easy though. Current technologies are expensive—there are major financial resources as well as national and international prestige invested in them. Another obstacle is that the human mind tends to prefer simpler solutions over complex explanations, even if the former can have limited power to explain findings.
The entire relationship between neuroscience and the pharmaceutical industry is also built on the modular model. The typical strategy for common neurological and psychiatric diseases is to identify one type of receptor in the brain that can be targeted with drugs to solve the whole problem.
For example, SSRIs—which block the reabsorption of serotonin in the brain so that more is freely available—are currently used to treat a number of different mental health problems, including depression. But they don’t work for many patients, and there may be a placebo effect involved when they do (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4172306/).
Similarly, epilepsy is today widely seen as a single disease and is treated with anticonvulsant drugs, which work by dampening the activity of all neurons (http://www.nhs.uk/conditions/Epilepsy/Pages/Introduction.aspx). Such drugs don’t work for everyone either. Indeed, it could be that any minute perturbation of the circuits in the brain—arising from one of thousands of different triggers unique to each patient—could push the brain into an epileptic state.
In this way, neuroscience is gradually losing its compass on the path towards understanding the brain. It’s absolutely crucial that we get it right. Not only could it be the key to understanding some of the biggest mysteries known to science—such as consciousness—it could also help treat a huge range of debilitating and costly health problems.
Henrik Jörntell, Senior Lecturer in Neuroscience, Lund University (https://theconversation.com/profiles/henrik-jorntell-345982)
This article was originally published on The Conversation (http://theconversation.com). Read the original article (https://theconversation.com/the-brain-a-radical-rethink-is-needed-to-understand-it-74460).
Image Credit: Shutterstock (http://www.shutterstock.com)
We live in a hyper-connected world where communication is almost effortless. And yet, despite abundant connection, we still lack interpersonal fulfillment. The next challenge, then, is not increasing the number of relationships possible, but developing the caliber and depth of those relationships.
In other words, it’s now a matter of quality over quantity.
Can we use technology to better understand and facilitate relationships? Might we even apply these tools to romantic relationships? Could we design an AI-based algorithm that connects us with exactly the kind of person we would fall into mutual love with and ignite a happy relationship?
Never have we had so much information about people and what they want. The secret to love may well be in the numbers, and a potent combo of AI and big data could be the matchmaker to end all matchmakers.
Machine matchmakers are already among us
In 2013, a study published by the US National Academy of Sciences reported that over a third of people who married in the US between 2005 and 2012 met online, half of them on dating sites. As the number of users grows, new tools are emerging to facilitate and automate this process and manage the data deluge.
When it comes to big data, AI is the perfect tool for the job. Machine learning can find predictive, causal or correlative patterns between variables beyond human limitations. Relationship scientists and dating sites are starting to see how it can be a powerful tool in connecting potential love birds.
For example, eHarmony and other dating sites are collaborating with big AI players like IBM Watson (http://www.ibmbigdatahub.com/video/ibm-big-data-analytics-helps-eharmony-identify-more-compatible-matches-real-time) to deliver 3.5 million personalized matches every day, in effect “optimizing for love.”
When a person signs up on eHarmony, they fill out a 150-question survey—personal preferences, physical traits and hobbies, among many other things. Online behavior, such as how active or inactive they are on the platform and how they communicate, is also gathered and added to the mix.
Matching algorithms honed by psychological and sociological research sift and compare the personal data of over 20 million users. According to Jason Chuck, a managing director at eHarmony, these algorithms are constantly being improved and tweaked across hundreds of variables (http://www.zdnet.com/article/eharmony-translates-big-data-into-love-and-cash/).
Clearly, we already live in a world where machines play a critical role in love.
And as the data pile grows and the algorithms get smarter—something happening at a quick pace—it’s not so far out to suggest machines won’t just make relationships happen—they’ll consistently make them successful too.
Missed connection: a general theory of love
But it might be a while before we toast our favorite, infallible matchmaking AI at the wedding.
One of the biggest challenges to utilizing big data algorithms in online dating naturally comes from the fact there isn’t a simple recipe for love. Why do we fall in love with some people and not others? Why do some romantic relationships make us happier than others? Why do some last and some don’t?
Anyone who has attempted to objectively study and analyze the science of romantic love can tell you the answers to these questions are not simple; they are often the result of hundreds of societal, personal, and potentially genetic factors. Above all, they are very difficult to generalize and extrapolate across different personalities, nationalities, and cultural backgrounds. What’s more, short-term attraction does not always lead to long-term compatibility. Determining whether two individuals will remain compatible and in love over an extended period of time is an enormous challenge.
But this is where big data analytics and the latest machine learning tools can be of some help, by allowing us to find novel predictive patterns about why some relationships succeed and others fail.
That is, the old way of writing matchmaking algorithms would require the software developer to sketch out every causal connection—and to do that, they’d need our nonexistent general theory of love. Emerging machine learning techniques, however, do this footwork themselves. Given a particular goal and a massive amount of data, they can learn to find connections without being explicitly programmed to do so.
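Here is a minimal sketch of what that looks like in practice, with entirely hypothetical feature names and synthetic data rather than any dating site's actual pipeline. The model is handed pairs of profiles plus a label for whether the match worked, and it learns which combinations of traits predict success without a single hand-written rule about love:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_pairs = 5000

# Hypothetical pairwise features: age gap, shared hobbies, distance apart,
# difference on a personality questionnaire, messages exchanged per week.
X = np.column_stack([
    rng.integers(0, 20, n_pairs),    # age_gap_years
    rng.integers(0, 10, n_pairs),    # shared_hobbies
    rng.uniform(0, 500, n_pairs),    # distance_km
    rng.uniform(0, 5, n_pairs),      # personality_gap
    rng.poisson(10, n_pairs),        # messages_per_week
])

# Synthetic "ground truth": success is more likely with small gaps and more contact.
logits = -0.1 * X[:, 0] + 0.3 * X[:, 1] - 0.005 * X[:, 2] - 0.5 * X[:, 3] + 0.1 * X[:, 4]
y = (logits + rng.normal(0, 1, n_pairs) > 0).astype(int)

# The classifier learns the predictive patterns from labeled examples alone.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
print("held-out accuracy:", round(model.score(X_test, y_test), 3))
```

Swap the synthetic arrays for real survey answers and behavioral logs and the structure is essentially what the article describes: the patterns come from the data, not from a theory.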
Of course, this still leans heavily on the data itself. And the truth is, dating data is sure to be messy.
Individuals will not be entirely truthful in how they represent themselves. Many users may not have the self-awareness to accurately answer questions about their personalities and the kind of potential partners they think they are attracted to. Objective data gathering tools can help resolve this issue by gathering clearer data about personalities and preferences, as opposed to them being self-reported.
But the perfect love algorithm is still out of reach. Even so, is it a worthy goal?
In “Data, A Love Story” (http://www.slate.com/articles/double_x/doublex/2013/01/amy_webb_s_data_a_love_story_using_algorithms_and_charts_to_game_online.html), futurist Amy Webb writes:
“Imagine how much heartache could be averted if you could look into a crystal ball after every first date. …Fortunately, advances in relationship science can make this wish for a crystal ball come true. Researchers are discovering what a relationship will be like years into the future by assessing the traits of the partners, such as personality, values and interests. Furthermore, these traits can be decoded in early stages of dating, which can permit singles to predict with more accuracy which relationships will end up happily ever after.”
If Webb is right, advances in relationship science, data-mining algorithms and machine learning may make finding love almost effortless.
Perhaps this will take away from the beauty of love for those seeking it. Unrequited love, heartbreak, and painful breakups have plagued humanity throughout history. Yet many will argue it is the pitfalls of romantic relationships that make the successful ones so special.
It is difficult to deny, however, that mutual love and happy relationships make us happier and more fulfilled as human beings. And if big data analytics can make true love more accessible, reduce the rising rates of failed relationships and maintain happy ones—then why not continue to improve and invest in those tools? For all we know, Cupid could simply be a match-making algorithm.
Image Credit: Shutterstock (http://www.shutterstock.com)
From AlphaGo’s historic victory against world champion Lee Sedol (https://singularityhub.com/2016/12/22/how-to-train-ai-to-do-everything-in-the-digital-universe/) to DeepStack’s sweeping win against professional poker players (http://www.sciencemag.org/news/2017/03/artificial-intelligence-goes-deep-beat-humans-poker), artificial intelligence is clearly on a roll.
Part of the momentum comes from breakthroughs in artificial neural networks, which loosely mimic the multi-layer structure of the human brain. But that’s where the similarity ends. While the brain can hum along on barely enough energy to power a light bulb (http://www.nature.com/nmat/journal/vaop/ncurrent/full/nmat4870.html), AlphaGo’s neural network runs on a whopping 1,920 CPUs and 280 GPUs (https://en.wikipedia.org/wiki/AlphaGo#Hardware), with a total power consumption of roughly one million watts—50,000 times more than its biological counterpart.
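The comparison is easy to sanity-check. Assuming the commonly cited figure of roughly 20 watts for the human brain (an estimate, not a number from the AlphaGo team), the arithmetic works out like this:

```python
alphago_watts = 1_000_000   # reported total power draw of the 2016 AlphaGo system
brain_watts = 20            # common estimate for the human brain (assumption)
print(alphago_watts / brain_watts)   # -> 50000.0, i.e. ~50,000x the brain
```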
Extrapolate those numbers, and it’s easy to see that artificial neural networks have a serious problem—even if scientists design powerfully intelligent machines, they may demand too much energy to be practical for everyday use.
Hardware structure is partly to blame. Our computers, with their separate processor and memory units, are simply not wired appropriately to support the type of massively parallel, energy-efficient computing that the brain elegantly performs.
Recently, a team from Stanford University and Sandia National Laboratories (http://www.sandia.gov/) took a different approach to brain-like computing systems.
Rather than simulating a neural network with software, they made a device that behaves like the brain’s synapses—the connections between neurons that process and store information—and completely overhauled our traditional idea of computing hardware.
The artificial synapse, dubbed the “electrochemical neuromorphic organic device (ENODe),” may one day be used to create chips that perform brain-like computations with minimal energy requirements.
Made of flexible, organic material compatible with the brain, it may even lead to better brain-computer interfaces, paving the way for a cyborg future. The team published their findings in Nature Materials (http://www.nature.com/nmat/journal/vaop/ncurrent/full/nmat4856.html).
"It's an entirely new family of devices because this type of architecture has not been shown before. For many key metrics, it also performs better than anything that's been done before with inorganics," https://www.sciencedaily.com/releases/2017/02/170221142046.htm ">says study lead author https://salleo.stanford.edu/ ">Dr. Alberto Salleo, a material engineer at Stanford.
The biological synapse
The brain’s computational architecture is fundamentally different from that of a classical computer. Rather than having separate processing and storage units, the brain uses synapses to perform both functions. Right off the bat, this arrangement is better: it saves the energy required to shuttle data back and forth between the processor and the memory module.
The synapse is a structure where the projections of two neurons meet. It looks a bit like a battery cell, with two membranes and a gap between. As the brain learns, electrical currents hop down one neuronal branch until they reach a synapse. There, they mix together with all the pulses coming from other branches and sum up into a single signal.
When sufficiently strong, the electricity triggers the neuron to release chemicals that drift towards a neighboring neuron’s synapse and, in turn, cause that neuron to fire.
Here’s the crucial bit: every time this happens, the synapse is modified slightly into a different state, in that it subsequently requires less (or more) energy to activate the downstream neuron. In fact, neuroscientists believe that different conductive states are how synapses store information.
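As a rough illustration of that idea (a toy model, not tied to any specific neuroscience result), a synapse can be sketched as a weight that sums incoming pulses and gets nudged whenever the downstream neuron fires, so its current state effectively remembers past activity:

```python
import numpy as np

rng = np.random.default_rng(1)
n_inputs = 5
weights = np.full(n_inputs, 0.2)   # synaptic strengths (the "conductive states")
threshold = 0.5
learning_rate = 0.05

for step in range(100):
    pulses = rng.integers(0, 2, n_inputs)    # which input branches fire this step
    summed = float(weights @ pulses)         # signals mix and sum at the synapse
    if summed > threshold:                   # downstream neuron fires...
        weights += learning_rate * pulses    # ...and the active synapses strengthen
        weights = np.clip(weights, 0.0, 1.0) # keep states in a bounded range

print(weights.round(2))   # the final weights encode the history of activity
```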
The artificial synapse
The new device, ENODe, heavily borrows from nature’s design.
Like a biological synapse, the ENODe consists of two thin films made of flexible organic materials, separated by a thin gap containing an electrolyte that allows protons to pass through. The entire device is controlled by a master switch: when open, the device is in “read-only” mode; when closed, the device is “writable” and ready to store information.
To input data, researchers zapped the top layer of film with a small voltage, causing it to release an electron. To neutralize its charge, the film then “steals” a hydrogen ion from its bottom neighboring film. This redox reaction changes the device’s oxidation level, which in turn alters its conductivity.
Just like in biological synapses, the stronger or longer the initial electrical pulse, the more hydrogen ions get shuffled around, which corresponds to larger conductivity. The scaling was reassuringly linear: with training, the researchers were able to predict, to within one percent uncertainty, the voltage needed to reach a particular state.
In all, the team programmed 500 distinct conductive states, every one of them available for computation—a cornucopia compared to the two states (0 and 1) of a conventional computer, and perfect for supporting neuron-based computational models like artificial neural networks.
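A back-of-the-envelope way to appreciate those 500 states (an illustrative framing, not a figure from the paper): if every state can be written and read reliably, one device holds about log2(500), roughly 9 bits, versus 1 bit for a binary memory cell.

```python
import math

states_enode = 500
states_binary = 2
print(math.log2(states_enode))   # ~8.97 bits per ENODe, if all states are usable
print(math.log2(states_binary))  # 1.0 bit per conventional binary cell
```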
The master switch design also helped solve a pesky problem that’s haunted previous generations of brain-like chips: the voltage-time dilemma, which states that you can’t simultaneously get both low-energy switching between states and long stability in a state.
This is because if ions need only a little voltage to move during switching (low energy), they can also easily diffuse away after the switch, which means the chips can change state randomly, explain Dr. J. Joshua Yang (https://ece.umass.edu/faculty/jianhua-joshua-yang) and Dr. Qiangfei Xia (https://ece.umass.edu/faculty/qiangfei-xia) of the University of Massachusetts, who wrote an opinion piece about the study (http://www.nature.com/nmat/journal/vaop/ncurrent/full/nmat4870.html) but were not directly involved in it.
The ENODe circumvents the problem with its “read-only” mode. Here, the master switch flips open, cutting off any external current to the device and preventing proton changes in the layers.
"A miniature version of the device could cut energy consumption by a factor of several http://www.nature.com/nmat/journal/vaop/ncurrent/full/nmat4856.html ">million—well under the energy consumption of a biological synapse."
By decoupling the mechanism that maintains the state of the device from the one that governs switching, the team was able to use a switching voltage of roughly 0.5 millivolts to get to an adjacent state. For comparison, this is about one-tenth the energy needed for a state-of-the-art computer to move data from the processor to the memory unit (https://www.sciencedaily.com/releases/2017/02/170221142046.htm).
Once locked into a state, the device could maintain it for 25 hours with only 0.04 percent variation (http://www.nature.com/nmat/journal/vaop/ncurrent/full/nmat4856.html)—a “striking feature” that puts the ENODe well above other similar technologies in terms of reliability (http://www.nature.com/nmat/journal/vaop/ncurrent/full/nmat4870.html).
“Just like a battery, once you charge it stays charged” without needing additional energy input, explains study author Dr. A. Alec Talin (http://spectrum.ieee.org/tech-talk/semiconductors/design/flexible-organic-artificial-synapse-could-one-day-interface-with-the-brain).
The ENODe’s energy requirement, though exceedingly low compared to current devices, is still thousands of times higher than the estimates for a single biological synapse. The team is working hard to miniaturize the device, which could cut energy consumption by a factor of several million—well under that of a biological synapse (http://www.nature.com/nmat/journal/vaop/ncurrent/full/nmat4856.html).
To show that the ENODe actually mimics a synapse, the team brought their design to life using biocompatible plastic and put it through a series of tests.
First, they integrated the ENODe into an electrical circuit and demonstrated its ability to learn a textbook experiment: Pavlovian conditioning, where one stimulus is gradually associated with another after repeated exposure—like linking the sound of a bell to an involuntary mouth-watering response.
Next, the team implemented a three-layer network and trained it to identify hand-written digits—a type of benchmarking task that researchers often run artificial neural networks through to test their performances.
Because building a physical neural network is technologically challenging, for this test the team used a model of their device to simulate one instead.
The ENODe-based neural network managed an accuracy of between 93 and 97 percent, far higher than that achieved by previous brain-like chips, the authors reported (http://www.nature.com/nmat/journal/vaop/ncurrent/full/nmat4856.html).
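For a sense of what such a benchmark involves (an illustrative sketch only, using scikit-learn's small digits dataset rather than the authors' device simulation), one can train an ordinary three-layer network and then snap its weights to a fixed number of levels, mimicking hardware that offers only a finite set of conductance states:

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X / 16.0, y, random_state=0)

# Three layers: input -> one hidden layer -> output.
net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
net.fit(X_train, y_train)
print("full-precision accuracy:", round(net.score(X_test, y_test), 3))

# Crudely emulate a device with only 500 programmable conductance states
# by rounding every weight to one of 500 evenly spaced levels.
def quantize(w, levels=500):
    lo, hi = w.min(), w.max()
    step = (hi - lo) / (levels - 1)
    return lo + np.round((w - lo) / step) * step

net.coefs_ = [quantize(w) for w in net.coefs_]
net.intercepts_ = [quantize(b) for b in net.intercepts_]
print("500-level accuracy:", round(net.score(X_test, y_test), 3))
```

In practice, quantizing weights to a few hundred levels typically costs very little accuracy, which is part of why a many-state analog device is attractive as a substitute for full-precision digital weights.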
Computational prowess aside, the ENODe is also particularly suited to synapse with the brain. The device is made of organic material that, while not present in brain tissue, is biocompatible and frequently used as a scaffold to grow cells on. The material is also flexible, bendy enough to hug irregular surfaces and may allow researchers to pack multiple ENODes into a tiny volume at high density.
Then there’s the device itself, with its 500 conductance states, which “naturally interfaces with the analog world, with no need for the traditional power-hungry and time consuming analog-to-digital converters,” remark Yang and Xia (http://www.nature.com/nmat/journal/vaop/ncurrent/full/nmat4870.html).
“[This] opens up a possibility of interfacing live biological cells [with circuits] that can do computing via artificial synapses,” says Talin (http://spectrum.ieee.org/tech-talk/semiconductors/design/flexible-organic-artificial-synapse-could-one-day-interface-with-the-brain). “We think that could have huge implications in the future for creating much better brain-machine interfaces.”
Image Credit: Shutterstock (http://www.shutterstock.com)