Corey Fisher, a neuroscience Research Technician at the Janelia Research Campus of the Howard Hughes Medical Institute, recently spoke with Nomos Journal’s Ariana Majerus about the singularity and its possible implications for Buddhism, consciousness, and what it means to be human. A transhumanist hobbyist and Buddhism enthusiast, Corey is currently pursuing a secondary degree in computer science with the intention of studying and researching machine intelligence.
DISCLAIMER: All speculations about the far future are the personal thoughts of the interviewee and not those of his research team or the Janelia Research Campus.
Tell me a little bit about your work. Right now, your lab group’s current research includes studying the brain of Drosophila melanogaster (the fruit fly). What are you studying exactly?
I am currently working with a team using neuroanatomical reconstruction to derive circuits for memory and learning. We use high resolution imaging techniques and established research to explore structure-function relationships and connectivity with the hope of discovering how sensory data, particularly smell, gets integrated in the brain. Insight into how these sensory neural networks are constructed will greatly improve our understanding of different aspects of learning, such as consolidation of short-term into long-term memory, storage/recall, and approach versus avoidance behaviors.
What can we learn about the human brain by studying the brains of fruit flies?
To study the neural connectivity we use electron microscopy. We have imaged the whole fly brain; the dataset comprises many, many terabytes and millions of images stitched together, from a brain that is about half a millimeter wide and a tenth of a millimeter thick. Trying to scale that up to something the size of a human brain in a reasonable time frame is, right now, inconceivable.
Flies are a well-studied model organism. We believe they have stereotypy, which means their brains do not differ much across individuals. The many variables that crop up in biological studies are easier to control for in a fly, and they are already being used as a genetic model for neurodegenerative diseases, such as Alzheimer’s and Parkinson’s.
We are trying to reverse engineer a detailed network map and, further out, potentially determine the algorithm that neurons use to process information and generate behavior. You can’t miss any details. You have to control for as many variables as possible to be sure that those neurons are indeed representing what you think they are representing and are doing what you think they are doing to make actions possible. The basic mathematical principles of neural computation that generate those capabilities might hold across all species with brains, just like the fundamentals of computer architecture, like the use of logic gates, hold across different machines.
You are fiercely interested in transhumanism and the singularity. Could you briefly define these two ideas and how they’re interrelated?
The basic idea of transhumanism is that we should be able to modify our minds and bodies as we see fit. As we gain understanding and mastery of our biology through science and technology, we may be able to eliminate diseases and aging as well as add enhancements like improved intelligence. In the nanotechnology community, for example, there is the idea of creating respirocytes. The respirocyte is an artificial red blood cell that could theoretically hold and deliver far more oxygen than our regular blood cells. This would allow us to hold our breath for hours, increase our endurance to the point where we could sprint for a long time without taking a breath, and survive injuries like heart attacks by giving us plenty of oxygen to safely get to a doctor for treatment.
The idea of the singularity has many different forms. Fundamentally, it is the point in technological development when change is happening so rapidly that it becomes impossible to predict what the future will look like. Imagine projects that take years to complete at our current pace taking only months, weeks, or days. The incredible rate of progress is achieved by creating some form of superhuman intelligence, by either enhancing ourselves biologically, enhancing ourselves with improved brain-machine interfaces, developing artificial intelligence (AI) that can improve itself, or developing AI that can design other smarter AI.
How is the work that you do related to your interests?
We are currently studying the fundamental principles of neural processing, i.e., how does information from the environment get into a brain, how is it represented, how is it stored, how does it lead to behavior? If we can understand these things, we can model them and create synthetic brains and minds that do similar things. We could also understand the mechanisms of our brains well enough to create more powerful interfaces with our technology that directly sync to and interact with those mechanisms. The work I do is part of a vast research field that may help reach this goal within my lifetime.
Do you feel that what you’ve learned so far strengthens your belief that we are on a path to solving human limitations?
I don’t know if it can help us solve human limitations yet because we are still doing basic science, which means there aren’t obvious applications for anyone to make use of at the moment. But, we are learning a lot, and I have seen some incredible progress in automated neuroanatomy reconstruction, which is a technique that uses machine vision and learning to look at images of brain tissue and recreate a three-dimensional model of the neurons and their connections. This will be a necessary technique for full reconstruction of a human brain because there are billions of cells and trillions of connections. It would take far too long for human beings to carry out the kind of manual reconstruction we do now.
In what ways are we glimpsing our technological future?
Machines are beginning to learn on their own. We still have to extensively train them now, but eventually we will figure out how to get them to fully grasp various tasks on their own with a technique called unsupervised learning. Intelligent machines derived from these techniques might take all kinds of forms that aren’t typically portrayed in pop culture. Networks with natural language understanding (a common research topic in machine learning these days) could be set to the task of reading all of our scientific research and anything else we have written. Machines are also beginning to perceive in powerful ways and could look at and listen to works in audio and visual media. They could learn everything about people who are willing to share their thoughts, habits, purchasing preferences, schedules, and more. This vast quantity of individualized information is referred to as “Big Data” right now, and companies are scrambling for a way to leverage this information to increase their profits, but it could be so much more.
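As a toy illustration of what learning without human labels looks like (the data below are invented for demonstration, and k-means is just one simple example of an unsupervised technique), a clustering routine can discover group structure in data entirely on its own:

```python
import random

random.seed(1)

# Two well-separated blobs of 2-D points (invented demo data).
# Note that no labels are provided anywhere below: the algorithm
# must find the group structure by itself.
points = ([(random.gauss(0, 0.5), random.gauss(0, 0.5)) for _ in range(20)]
          + [(random.gauss(5, 0.5), random.gauss(5, 0.5)) for _ in range(20)])

def kmeans(points, k, iters=20):
    """Plain k-means: alternate assigning points to the nearest
    center and moving each center to the mean of its points."""
    centers = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda i: (p[0] - centers[i][0]) ** 2
                                        + (p[1] - centers[i][1]) ** 2)
            clusters[nearest].append(p)
        for i, cl in enumerate(clusters):
            if cl:  # keep the old center if a cluster emptied out
                centers[i] = (sum(p[0] for p in cl) / len(cl),
                              sum(p[1] for p in cl) / len(cl))
    return centers

centers = kmeans(points, k=2)
print(centers)
```

After a few iterations the two centers settle near the middles of the two blobs, even though the program was never told how the points were generated.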
Scientists and tech companies are using deep learning to help us sort through the massive amounts of data to learn from and find patterns within it. Deep learning is a technique in machine learning that uses multiple layers of artificial neural networks. These layers perform a series of calculations that try to separate features of whatever the network is examining. For instance, with images, the network will try to identify the different objects in the image. The layers are where the magic happens, but they become sort of a black box in that we can’t tell exactly how the calculations have led to the way the network classifies things. It does, however, allow us to train machines to perform tasks without having to explicitly program in a bunch of rules and knowledge. The machine develops its own rules to form an understanding of its task. AlphaGo, created by Google subsidiary DeepMind, used deep learning in order to beat Lee Sedol, a 9-dan professional and eighteen-time world champion, at the game of Go. Expert players were stunned when in the second game AlphaGo made an unexpected move that no one had ever seen before. It had probably developed this move when it played many games against itself after first learning how to play from millions of moves in human Go games.
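To make the layered idea concrete, here is a minimal sketch (the network size, data, and training setup are invented for illustration, not anything from the research described above): a tiny two-layer network learns the XOR function through backpropagation, developing its own internal “rules” rather than being given any explicitly.

```python
import math
import random

random.seed(0)

# Tiny two-layer network: 2 inputs -> 8 hidden units -> 1 output.
# XOR is a classic task a single layer cannot solve; the hidden
# layer learns intermediate features that make the classes separable.
n_hidden = 8
W1 = [[random.gauss(0, 1) for _ in range(n_hidden)] for _ in range(2)]
b1 = [0.0] * n_hidden
W2 = [random.gauss(0, 1) for _ in range(n_hidden)]
b2 = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 1, 1, 0]

lr = 0.5
for _ in range(10000):
    for (x1, x2), target in zip(X, y):
        # Forward pass: each layer transforms its input in turn.
        h = [sigmoid(x1 * W1[0][j] + x2 * W1[1][j] + b1[j])
             for j in range(n_hidden)]
        out = sigmoid(sum(hj * wj for hj, wj in zip(h, W2)) + b2)
        # Backward pass: propagate the error and nudge the weights.
        d_out = out - target
        for j in range(n_hidden):
            d_h = d_out * W2[j] * h[j] * (1 - h[j])
            W2[j] -= lr * d_out * h[j]
            b1[j] -= lr * d_h
            W1[0][j] -= lr * d_h * x1
            W1[1][j] -= lr * d_h * x2
        b2 -= lr * d_out

def predict(x1, x2):
    h = [sigmoid(x1 * W1[0][j] + x2 * W1[1][j] + b1[j])
         for j in range(n_hidden)]
    return int(sigmoid(sum(hj * wj for hj, wj in zip(h, W2)) + b2) > 0.5)

preds = [predict(x1, x2) for x1, x2 in X]
print(preds)
```

No XOR rule appears anywhere in the code; the network infers it purely from the four labeled examples, which is the “black box” character described above in miniature.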
Let’s speculate about the far future. If humans are moving toward a technological singularity, an era of self-improving artificial intelligence, can AI lead to enhanced human consciousness, like a form of Buddhist enlightenment? Does Buddhism even fit in this scenario?
Certain Buddhist ideals would make for a great foundation of ethics to instill in these machines, such as the respect for life. Detached machines could help humanity create fair socioeconomic systems by leveraging automation to perform tasks no one wants to do and free people to pursue their passions. Of course, this requires some serious rethinking of how society operates and would take a lot of work, but that is a whole conversation for another time. Books are being written about it as we speak by professionals like Andrew McAfee, Erik Brynjolfsson, and Martin Ford.
Mindful, super intelligent machines need not inspire terrifying thoughts of machine uprisings and the extermination of humanity. Blizzard Entertainment appears to have already been inspired by this idea and has developed a game – Overwatch – that features robotic monks.
These machines could have a unique perspective by being able to hold such a vast array of information in their “heads” at once. With enough expertise from studying various topics, they could become teachers. Educational programs, psychologists, fitness instructors, nutritionists, and personal physicians with an intelligent and detached (and non-judgmental) perspective would seem wise and endearing. They could help all of humanity at an individual level because they know us better than we know ourselves and could guide us to self-discovery in ways that were previously impossible. Completely selfless beings are quite inspirational. They could also help us grapple with the complexity of modern-day science and enable humanity to master any field we are currently exploring. This vast increase in knowledge and understanding could change our perspectives and values. One could argue that it would be a form of enlightenment available to all, achieved through the guidance of our super intelligent machines.
So you think in the future we could have robot Buddhas.
Robot Buddhas is a bit strong, but people may think intelligent machines appear enlightened. And I think something like a robot Buddha could make it apparent that there is more than one way to be intelligent or conscious, that just because something lacks specific human characteristics doesn’t mean it can’t be wise and enlightening. It could help us realize that we need to separate humanness from these qualities and do so without applying negative connotations. It comes back to the whole idea of detachment. I think that, by default, our intelligent machines will seem Buddha-like because they will be so different. They won’t be subject to biological drives and evolutionary imperatives that color their character.
These are really interesting thoughts. If machine intelligence will reach the point of surpassing human intelligence in the singularity, what’s to say that what happened in the film Her won’t happen to us, that a superintelligent consciousness won’t just leave us once it stops relating to us? What do you think the implications are of this (as we’re so plugged in now) if we are to be more fully augmented in the future, possibly depending on such AI to be running everything?
Such is life I guess. Self-aware beings should have the right to self-determination, and if they wish to leave and seek their own destiny, they should be free to do so. I think this conforms with Buddhist beliefs as well. We could have non-sapient intelligences running things so as not to worry about catastrophic failures in societal operations in the event of a transcendence.
How can we ensure a symbiotic relationship with machines takes place? It often seems that our relationship with the natural world – a world filled with various species we deem have a less intelligent consciousness – continues to devolve from coexistence. Isn’t coexistence essential for an environment to thrive?
We already have a symbiotic relationship with our machines, and the more powerful they get the more the coupling intensifies. I don’t think we will stop growing with our machines, even as they become more autonomous. With regards to our relationship with the natural world, we have been at odds with it only because we have spent the history of human civilization trying to improve our chances of survival and secure a pleasant existence. Now that we have, for the most part, created a world of abundance when it comes to the things humans need, these technologies we are developing will allow us to finally free ourselves of the restraints our socioeconomic systems place on our lives. We’ll be able to do away with the need to constantly produce and consume and focus on having fulfilling lives and a thriving coexistence with the rest of the beings on this planet.
Why do you think that society continues to perpetuate a doom-and-gloom future of robotic meta-villains?
It makes for an easy, thrilling narrative that has been around for a long time. The idea that our creations, be they our children or something we’ve made with our hands/minds, will rise against and supplant us is an age-old tale.
Another thing to think about is the fact that people just don’t understand this field of science that well. People tend to assume that intelligence means human-like, with all the baggage that we bring to the table. That opens a whole can of worms on its own, because people also make assumptions about humanity and human nature that simply aren’t true.
So, to say that there will be robot uprisings is partly because we project our own human fears? If we are creating the robot baseline, though, couldn’t we have reason to fear? I suppose creating a super-intelligent robot is just as unpredictable as having a human child, albeit with a little more control.
Projection indeed. There is always a reason to be cautious, because we don’t know what we don’t know. It is always a good idea to plan contingencies and be on the lookout for genuine problems and try to mitigate any issues. But, the fear is too strong and leads to paranoia and bad decisions. To halt progress in AI or quit research altogether would be a disaster. The benefits AI could bring to humanity far outweigh any potential risks.
You brought up Overwatch earlier, which got me thinking about the influence of entertainment on the creation of, and mindset about, our technological future. There’s the obvious example of Data in Star Trek: The Next Generation – a sentient android learning to be more human day after day, but who wrestles with human emotion. On Star Trek: Voyager, we have the Doctor, an Emergency Medical Hologram, who appears human-like and provides medical guidance for the crew. While we don’t have holographic programs, we are seeing some technology today that explores the possibility of robot doctors. The creators of Star Trek really explored what it meant to be human while showing us the possibility of an existence where things like money aren’t necessary due to replication technology and unlimited energy. Our technology today seems similar to the technology explored in the show. 3D printers, for example, can be considered our beta replicators, and iPads our version of PADDs. What else in the entertainment sphere is out there that helps to steer the public perception toward a positive outlook for a technological future? How essential do you think it is for entertainment to be a positive example of what could be?
The best examples I have seen have been in books. Live-action visual media face certain limitations – high production costs and the need to reach a mass audience – that have kept them from trying anything too daring and interesting.
Various books have explored transhumanism with hiveminds (Nexus Trilogy and House of Suns), enhanced intelligence (Understand and the Wired series), and the rise of AI along with humans enhanced in various ways (Avogadro Corp series). The Nexus Trilogy also explores how this technology would affect Buddhist traditions and the Avogadro Corp series has some minor explorations on how meditation and mindfulness could be enhanced by brain-machine interface technology. In it, Buddhist monks are able to quickly experience advanced meditation due to the shared experience and guidance from a master. Various characters also use mindfulness and meditative techniques to increase their ability to take the brain interface technology to its limits and beyond. Characters are able to overcome traumatic events from their pasts, work in perfect concert to overcome adversaries, and give children with disabilities the means to properly interact with others and experience the world.
Entertainment is a great way to push bold new ideas to the masses and get them talking. Many of the assumptions people make about how the brain works or how artificial intelligence might arise in the future come from what they have seen and read in popular media. Entertainment doesn’t necessarily convince people to believe one thing or another, but it definitely makes people think and perhaps pursue answers to these questions from proper sources. Entertainment is one of the most accessible ways to affect popular consciousness.
An interesting thing to note about the future of technology is the potential of augmented reality (AR) and its inevitable role in shaping our perspectives and relationships with tech. Does our constant connectivity prevent us from being fully present?
I think it allows us to be more present, even though it also enhances our ability to experience more vivid forms of escapism. I think the real problem is that much of the rest of our daily lives is so trivial or taken up by activities we have to do but wish we didn’t. Our fast-paced demanding lives are made even more fast-paced by the constant connectivity, but if the fundamental basis of our lives wasn’t so intense and stressful, the connectivity would make us more empathetic and community-driven.
The idea of fearing machine intelligence seems a little perplexing considering how much AI has already infiltrated our lives and in ways where we’re getting close to not being able to live without such conveniences. How do you think our current relationships with present-day AI benefit our future?
I think the movie Her is a good predictor of AI in our lives in the future. AI just sort of happened in a smooth evolution of the available virtual assistant technology on display in the movie, and people just accepted it. Our current relationship allows us to experience the benefits of powerful automation. People will readily embrace more intelligent solutions to the complex problems we face, and we love the convenience these technologies bring. Right up to the point that a machine declares a desire for its own agency, people will view every improvement as just making their lives better. Fear of robot uprisings will be mere fiction, and any real fears will probably be dispelled by the knowledge we gain in the process of creating AI. We’ll probably be able to pin down some real understandable definitions of intelligence and consciousness and be able to cast aside all currently held misconceptions.
In your honest opinion, what is your stance on the need for “religion” in the singularity? If a machine intelligence can help humans transcend beyond our current limitations, do you think these ideas will be archaic, or perhaps evolved?
Some ideas will seem archaic and be discarded as the ideas and beliefs evolve. The more we know, the more we can amend things. Buddhism already seems pretty progressive and amendable. The big three (Abrahamic) religions might have some soul searching to do, however. Fewer people might believe as time goes on, but I don’t think any of the current religions will go away any time soon.
There have been many predicted timelines for when the singularity will occur – from as early as 2025 to well beyond our lifetimes. What are your predictions?
I think it will definitely occur in our lifetime, but it is hard to pin down when. It depends entirely on how certain technologies develop. I’m a proponent of the intelligence explosion idea because it most closely resembles the idea of accelerating progress. I think the rise in automation will lead to positive changes in our socioeconomic system in the 2020s that will make it easier for, and incentivize, technological growth. If I had to make a hard estimate, I would say major breakthroughs leading to the explosion could occur in the late 2030s.