It wasn’t long ago that talking about blockchain was like talking about the Internet in 1993, when almost nobody had e-mail; the idea of social networks hadn’t even passed through Mark Zuckerberg’s mind (he was nine years old); and instead of Netflix, your at-home entertainment options were limited to Blockbuster or pirate VHS tapes.
In fact, some of the technologies emerging now have the potential to be just as disruptive as the Internet was. Deep learning, bots, exoskeletons, the Internet of Things, and other exponential innovations might lead to the design and creation of more efficient robots in the relatively near future – robots that are better able to engage with humans in order to transport them, medically diagnose them, or support them with physically demanding or repetitive tasks. Our relationship with computers is entering a new phase based not on the keyboard but on the microphone. And instead of paying to own fixed products, we’ll shift towards paying to access goods and services, using platforms similar to Uber or Netflix for everything from furniture to art.
Without a doubt, these new technologies can generate fears and concerns, ranging from the displacement of human workers by machines to the danger of constant surveillance of our actions. Sometimes, thinking about the ways that these technologies might spin out of control can lead us to imagine scenarios such as those posited by the television miniseries Black Mirror, where episodes present worlds in which it is difficult to distinguish reality from virtual reality, human minds can be manipulated, and highly sophisticated robots and artificial intelligence exist that are able to simulate our deceased loved ones.
To understand the possibility and potential of these disruptive innovations, we’ve selected 10 that might play a key role in the future. These technologies range from some that are already a part of our daily lives, such as bots and cloud computing, to others that seem poised for exponential development in the near future, such as blockchain and genetic engineering.
Ten Innovations That Are Already Changing Our Lives
1. Blockchain

The cryptocurrency Bitcoin and its emerging competitor, Ethereum, are generally well known. The system they run on, however, is less understood. Blockchain is the system that allows these cryptocurrencies to operate as viable currencies without regulation. In other words, blockchain removes middlemen such as central banks, allowing cryptocurrencies to change hands while eliminating the risk of diversion or double spending.
Blockchain is a decentralized, online digital ledger that stores information in registers called blocks. These blocks form a permanent record of all the transactions executed by users: each block stores information on valid transactions and is linked to a single prior block and a single subsequent block. In practical terms, blockchain allows information to be permanently stored and removes middlemen from the process, facilitating secure and anonymous transactions between individuals who do not know each other and have not previously established a relationship of trust.
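The chaining of blocks described above can be sketched in a few lines of Python. This is a toy illustration under simplifying assumptions (no network, no consensus, no proof of work); it shows only how each block’s identity depends on the hash of the block before it:

```python
import hashlib
import json

def block_hash(transactions, prev_hash):
    """Hash a block's contents together with the previous block's hash."""
    payload = json.dumps({"tx": transactions, "prev": prev_hash}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

# A two-block chain: the second block stores the hash of the first.
genesis_tx = ["Alice pays Bob 5"]
genesis_hash = block_hash(genesis_tx, "0" * 64)  # first block has no predecessor
block2_prev = genesis_hash                       # the link that forms the chain

# Verifying the chain means recomputing each hash and comparing it to the link.
assert block_hash(genesis_tx, "0" * 64) == block2_prev

# Rewriting history changes the hash, so every later block exposes the tampering.
tampered_hash = block_hash(["Alice pays Bob 500"], "0" * 64)
print(tampered_hash == block2_prev)  # False
```

Because altering one recorded transaction would invalidate the links in every subsequent block, across every copy of the ledger, the record is effectively permanent.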
Blockchain has a range of diverse applications, from smart contracts and property registrations that can never be modified by error or through corruption, to more efficient, safe, and quick banking systems (goodbye bank transfers that take multiple days and incur high commission fees). Blockchain can also play a role in the music industry, offering a fairer commercialization strategy for both musicians and consumers in a post-Spotify era, and in the sharing economy, where blockchain technology can be used to increase levels of trust on platforms such as Airbnb.
Blockchain is already changing our financial systems. Many banks are exploring different ways to use blockchain technology to improve their payment processes and information systems, which have stagnated over the past few decades. The United States Federal Reserve is focusing heavily on understanding the potential of this new platform, which, according to Federal Reserve governor Lael Brainard, “…may represent the most significant development in many years in payments, clearing, and settlement.”
2. Bots

Maybe you haven’t heard the term bots, but you’ve definitely interacted with them. Bots are software programs designed to automate certain tasks. They attempt to simulate simple human interactions; the more data they have access to, the more they learn. They can make restaurant reservations, organize and schedule meetings, or look up information on flight routes and ticket prices. Computerized personal assistants such as Siri (iPhone), Alexa (Amazon), Google Assistant, and Cortana (Microsoft) use multiple bots to complete various tasks.
Currently, there are more than 30,000 active bots on Facebook Messenger alone, and many large companies have included them in their systems. Financial institutions use bots to notify their clients of account activity, e.g. American Express’s “Amex Bot”. Media outlets also use bots to push out breaking news that is aligned with their users’ interests.
Thanks to information gathered by companies like Google over the course of many years, as well as machine learning algorithms, bots have significantly improved in their ability to interact with humans and are even able to respond to questions before a user has finished asking them. Bots are more than just new apps. From a functional perspective, bots can respond to voice and text commands using simple interfaces. However, a lot of work still needs to be done to make bots more intelligent, automate more processes, and facilitate our daily decisions.
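The dispatch logic at the heart of a simple bot can be sketched very briefly. In this Python toy (the handler names and canned replies are invented for illustration), a keyword in the user’s message routes the request to the right function; real assistants replace the keyword matching with machine-learned language understanding, but the routing idea is similar:

```python
# Hypothetical handlers - a real bot would call booking or search APIs here.
def book_table(text):
    return "Booked a table for tonight."

def flight_info(text):
    return "Here are the cheapest fares I found."

HANDLERS = {"reservation": book_table, "flight": flight_info}

def bot_reply(text):
    """Route a text command to the first handler whose keyword it contains."""
    for keyword, handler in HANDLERS.items():
        if keyword in text.lower():
            return handler(text)
    return "Sorry, I didn't understand that."

print(bot_reply("Can you make a reservation for two?"))  # Booked a table for tonight.
```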
Bots also have some downsides; Ticketmaster is an easy target and a good example. For certain events, such as the Broadway hit Hamilton, close to 60 percent of the best tickets are bought up by bots to be resold at prices significantly above market value.
3. The Internet of Things
Refrigerators that let you know when you’re out of milk, pill bottles that remind you when it’s time to take your medicine, monitors that keep you updated on your child’s temperature and activity, sensor systems that automatically water your house plants depending on their growth and the weather: these are all examples of the Internet of Things (IoT), the inter-networking of physical devices, vehicles, buildings, and other items that use electronics, software, sensors, and network connectivity to collect and exchange data. Based on an analysis of this data, the IoT can facilitate people’s lives and even automate certain activities.
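The collect-data-then-act loop that the IoT automates can be illustrated with the plant-watering example. A minimal Python sketch, where the sensor readings and the moisture threshold are invented sample values:

```python
THRESHOLD = 30  # assumed soil-moisture percentage below which the plant needs water

def should_water(moisture_percent, rain_forecast):
    """Decide whether to water, combining a sensor reading with weather data."""
    return moisture_percent < THRESHOLD and not rain_forecast

# Simulated readings from a connected soil sensor plus a weather service.
readings = [(25, False), (45, False), (20, True)]
for moisture, rain in readings:
    action = "water" if should_water(moisture, rain) else "skip"
    print(f"moisture {moisture}% -> {action}")
```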
It is currently estimated that approximately 15 billion devices are connected to the Internet. This number will increase exponentially with the explosion of artificial intelligence, which uses the information that we provide through the billions of connected devices that are constantly gathering and transmitting data to adapt and learn. By 2020, it’s possible that between 26 and 100 billion devices will be connected to the Internet. This massive increase in connected devices will create a new industry of products and services.
In the future, the IoT will be omnipresent in our day-to-day lives, from coffee machines to systems used in operating rooms. In addition to the benefits that the IoT represents, there are also significant dangers. Perhaps the most significant is privacy and information security; similar to the dystopian future imagined by George Orwell in his seminal novel 1984, in which the omnipresent Big Brother constantly monitored every person, the IoT creates the potential for our every move to be constantly monitored.
4. Deep Learning
These days, a dystopian world where artificial intelligence dominates the human race seems like a bad joke from the 1970s; for decades now, the common narrative has been a future where humans and robots exist side by side. Until recently, this narrative had been limited to pure fantasy. Now, it’s becoming more and more feasible thanks to technological advances including deep learning – the process that allows machines to learn and is driving the current artificial intelligence boom.
Deep learning uses neural networks – discrete layers, connections, and directions of data propagation that simulate the human brain and allow a machine to learn and connect different types of learning. This focus on creating a machine that functions like a human brain has been around since the 1980s, when it was referred to as neuromorphic computing; however, deep learning was successfully implemented only recently. Thanks to advances in the algorithms used, increased processing power, and the massive amounts of information now available through the Internet, neural networks can now process and identify sounds, images, and other data without having to be specifically programmed to do so.
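The “learning without being specifically programmed” idea can be seen in miniature with a single artificial neuron trained by gradient descent – the same adjust-the-weights step that, repeated across millions of connections in many layers, constitutes deep learning. In this toy Python sketch the neuron learns the rule y = 2x from examples alone:

```python
import random

random.seed(0)
w = random.random()                       # one connection weight, randomly initialised
data = [(x, 2 * x) for x in range(1, 6)]  # examples of the target rule y = 2x

for _ in range(200):                      # repeat: predict, measure error, adjust
    for x, y in data:
        prediction = w * x
        gradient = 2 * (prediction - y) * x  # slope of squared error w.r.t. w
        w -= 0.01 * gradient                 # nudge the weight against the error

print(round(w, 3))  # 2.0: the rule was learned from data, not programmed in
```

A deep network works the same way, except with millions of weights and layers of neurons feeding into one another, which is what lets it learn patterns as complex as speech or images.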
What’s next? An immediate outcome of deep learning might be the increased use of voice recognition, automation, and use of groups of bots, as well as increased interaction with our devices. But the future has the potential to be radically different: virtual assistants that automatically learn and adapt to our behavior, exponentially increasing their efficiency and ability to predict our needs and questions; self-driving vehicles that will become more “intelligent” the more they are used; and medical advances, including automated programs that are able to read X-rays and CT scans to more precisely identify illnesses at an earlier stage.
5. Voice Recognition
Until recently, many of us only used computerized personal assistants so that we could laugh at their myriad mistakes. Yes, these errors were funny. But when you really needed help finding a telephone number, making a reservation, or conducting a simple Google search, these assistants were inefficient and frustrating.
But this is starting to change. The error rate of voice recognition programs has decreased drastically, reaching an approximately 5 percent word error rate. Twenty years ago, the word error rate was almost 100 percent, and machines hardly understood a word we said.
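The word error rate behind figures like that 5 percent is a standard metric: the word-level edit distance (substitutions, insertions, and deletions) between what was said and what the system heard, divided by the number of words spoken. A minimal Python implementation:

```python
def word_error_rate(reference, hypothesis):
    """Word-level edit distance divided by the number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return d[-1][-1] / len(ref)

print(word_error_rate("turn on the kitchen lights",
                      "turn on the chicken lights"))  # 0.2: 1 word wrong out of 5
```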
In the past few years, the evolution has been almost exponential; better microphones are available at a lower cost, the Internet and the cloud allow developers to collect conversations for analysis, the processing power of smart phones is constantly increasing, and perhaps most importantly, deep learning technology allows for improved voice processing.
This opens the door to a significantly different future. In this future, we will likely use our keyboards and screens much less, and most of the things we currently use our keyboard for will instead be done by devices that listen, process, talk (ask and respond) and execute. In another dystopian fantasy, these devices could be used to listen in and monitor our conversations at all times; Alexa and Siri could become Big Sister.
6. Genetic Engineering
Recent disruptive innovations don’t just make our lives easier; they might actually be able to save them. In the past few years, biomedical engineers, doctors, and biologists have focused their research on mapping and editing the human genome. By understanding an individual’s genome, scientists can optimize the medicine, food, exercise, etc. that will benefit them; doctors and researchers are also able to identify predispositions to certain illnesses and how to prevent them.
One of these revolutionary gene “editing” techniques, Clustered Regularly Interspaced Short Palindromic Repeats, known as CRISPR, is already being used in humans. Some of the successful applications of CRISPR include the case of a one-year-old girl who was given modified cells as a treatment for leukemia; the case of an adult who saw the size of a tumor in his lungs decrease by a third; and the possibility of treating infectious or autoimmune diseases with gene therapy.
One of the most promising aspects of this technology is immune engineering, or the intervention in and modification of the immune system. These processes allow doctors and researchers to “edit” the immune system, strengthening an individual’s natural ability to prevent or treat certain diseases. Genetic editing has successfully turned immune system cells into “super soldiers” that are able to distinguish between normal cells and harmful cells to eliminate the latter.
This is just the beginning of what this technology represents for the medical field. As the use of this technology increases, so too will the debate regarding its responsible use and ethical implications. The risks are high; genetic modifications could create mutations that are passed on to future generations, create an ecological imbalance, create social inequalities based on athletic or intellectual performance, or create a new form of eugenics where children are “designed” according to the preferences of their parents, increasing discrimination based on race, minority status, or disability.
7. LIDAR Sensors
Many of the technological advances that we experience are in fact the result of a combination of different, interconnected technologies. LIDAR sensors (Light Detection and Ranging or Laser Imaging Detection and Ranging) are one of the technological innovations that allow other, more complex innovations – such as robots, drones, and self-driving vehicles – to function better.
LIDAR sensors can measure the distance to surrounding objects in 360°, detect an object’s velocity (when applicable), and create a 3D image of their surroundings. Additionally, LIDAR sensors are not impacted by some of the limitations of cameras and work equally well in darkness or fog.
Unlike other systems currently used in robots, drones, and self-driving vehicles – such as cameras, ultrasonic sensors, and radar – LIDAR is neither cheap nor easy to develop. It is an extremely complicated system that works by firing rapid pulses of laser light at a surface and measuring the amount of time it takes for each pulse to bounce back.
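The time-of-flight calculation itself is simple – the engineering complexity lies in firing and timing enormous numbers of pulses per second and assembling the results into a 3D image. The distance formula, sketched in Python:

```python
SPEED_OF_LIGHT = 299_792_458  # metres per second

def lidar_distance(round_trip_seconds):
    """Time-of-flight ranging: the pulse travels out and back,
    so the target is half the round-trip distance away."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2

# A pulse that returns after 200 nanoseconds hit something about 30 m away.
print(round(lidar_distance(200e-9), 2))  # 29.98
```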
Despite the efficiency of LIDAR sensors, there are also some downsides. One downside is the size of the sensors, which are quite large and must be installed on top of the device, creating potential mechanical complications. Another downside is their price; a single sensor can cost almost the same as some new cars (between USD $10,000 and $50,000). Solutions are currently being developed to address the issue of size and price, but a new product is unlikely to be made available within the next three years.
8. Exoskeletons

In the movie Iron Man, the main character uses a type of robotic suit to give himself extra-human abilities and fight his enemies. This is an exoskeleton – a wearable mobile machine that runs on electricity. Exoskeletons give the wearer increased strength and resistance and facilitate the movement of the human body.
These machines help paralyzed patients walk; promote faster healing from injuries; and maximize human skills such as strength, speed, and reaction time. Some existing exoskeletons include motors that reduce the effort necessary to walk by a quarter. Others help people that have experienced brain injuries or strokes translate their brain activity and eye movement into the ability to open and close their hands.
Motorized exoskeletons are still in development, and are primarily available for hospitals and rehabilitation centers. They are very expensive (between USD $40,000 and $70,000), and some require a personal trainer in order to function. Increasing users’ independence and lowering costs are the two primary motivations behind their development.
In the future, exoskeletons might become significantly more present in our daily lives. Possible applications for this technology include better supporting patients in rehabilitation or providing physical support to workers, soldiers, or athletes by complementing the abilities and potential of the human body.
9. Virtual Reality
In the terrifying Black Mirror episode “Playtest,” the main character is unable to distinguish between what is real and what is part of a virtual reality video game that forces him to face his worst fears. Yet this is just one possible application of a technology that has been present in our lives for many years through flight simulators, interactive films, and video games.
Virtual reality is a 3D environment generated by software in which participants can manipulate objects, interact, and/or explore the “world” around them. Previously, efforts were focused almost exclusively on improving the auditory and visual aspects of the experience. These days, resources are also being invested to include tactile experiences and to diminish the latency, or the delay from action to reaction within the virtual environment.
Virtual reality allows users to participate in activities that are too expensive, impractical, or dangerous for real life. Recent advances in virtual reality have allowed the technology to expand beyond entertainment and into industries such as medicine, architecture, marketing, sports, and art, among others. These applications include allowing medical students to practice surgical procedures and showing potential homeowners what their future house or apartment will look like.
In the next few years, tools are expected to be developed that allow the complete immersion of the user in the digital environment, perhaps even to the point where virtual reality and actual reality are indistinguishable.
10. Cloud Computing
Cloud computing is another innovation that is already omnipresent in our lives. We use this technology every time we check our e-mail in the morning, listen to Spotify, check Facebook, or watch Netflix. All of these services use the “cloud,” hosting their content on a network of interconnected computers and servers located around the world instead of on a hard drive. The capacity of the cloud to host and process information is effectively unlimited, as both grow with the number of connected computers and servers.
The cloud also makes cloud-based supercomputers possible, including IBM’s Watson and Google’s DeepMind. Now, any user can pay a service fee to access these supercomputers, opening the door for millions of users to access this processing power for experiment design, app development, etc.
Cloud computing connects the dots between many of the innovations included on this list; bots, the Internet of Things, deep learning, and voice computing wouldn’t be possible or would be extremely limited without the data, information, and processing power provided by the cloud.
Despite its multiple benefits, cloud computing also has its downsides. Just a few months ago, a Distributed Denial of Service (DDoS) attack practically shut down the East Coast of the United States, leaving millions without Internet and taking out the servers of companies like Netflix. The attack took down the network of servers providing Domain Name System (DNS) services, impacting users around the world.
Another concern associated with cloud computing is the question of content ownership. There is an ongoing debate about who truly owns information that has been uploaded to the cloud. Is the content creator the owner? Or is it the service that stores the content? Without a clear resolution of this question, individuals or multinational companies can access and use our personal data as they see fit. In the future, the most valuable resource will be information.
We would like to thank Manuel Morato, Germán Muciño, Víctor Rico, and José Carlos Sierra for contributing their time, knowledge, and expertise for the revision of this article. The opinions expressed in this article are the authors’ own.
José Luis Chicoma (@joseluischicoma) is the Executive Director of Ethos Laboratorio de Políticas Públicas (@ethoslabmx). Eugenia Sepúlveda (@Eugeniasep) is Advisor to the Director of Ethos Laboratorio de Políticas Públicas.