Taking Machine Learning to the Next Level

Machine learning has already changed the way we work and process information in the modern business environment. It’s helped us become more efficient, make smarter decisions, and target customers better than ever before. But there’s another approach, reinforcement learning, that promises to do a lot more.

Unlike traditional supervised machine learning, which uses data analysis to let computers learn patterns without being explicitly programmed for them, reinforcement learning allows computers to learn from experience, much as humans do. Much like Thorndike’s well-known experiment, in which a cat was shut in a puzzle box until it learned to step on a lever to escape, artificial intelligence (AI) is now learning to solve problems faster and faster using this technique. It’s almost as if technology is crossing over into humanity as we study its behavior under various methods of conditioning.

Recently, tech giants in the AI space, including Alphabet, have made major advances in reinforcement learning. It has the potential to change everything from how we drive to how we interact with one another, and there are plenty of ethical questions still to be resolved. The following are a few things to know about reinforcement learning.

It’s Different
Machine learning as we’ve traditionally known it is mostly what you’d call “supervised learning.” In this type of learning, developers create a curated, labeled set of data, and computers learn to associate different shapes, sounds, or words with those curated examples. The process is incredibly labor- and time-intensive. In reinforcement learning, the learning happens very differently: the computer learns by interacting with the world around it. Through trial and error, it discovers what the goal is rather than being told. In essence, it learns how to solve problems, not to look for specific solutions.
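To make the contrast concrete, here is a minimal, illustrative Python sketch, not drawn from the article itself: the supervised learner is handed labeled answers up front, while the reinforcement learner sees only a reward signal and has to discover a good action by trial and error. The toy environment and its payout numbers are assumptions made purely for illustration.

```python
import random

# --- Supervised learning: learn from curated, labeled examples ---
# The "dataset" pairs inputs with answers a human has already provided.
labeled_data = [(0, "even"), (1, "odd"), (2, "even"), (3, "odd")]

def supervised_predict(x):
    # A deliberately trivial "model": memorize the labels we were given.
    lookup = dict(labeled_data)
    return lookup.get(x, "unknown")

# --- Reinforcement learning: learn from trial, error, and reward ---
# A toy three-armed bandit: action 2 secretly pays off most often.
def pull(action):
    payout_odds = {0: 0.1, 1: 0.4, 2: 0.8}  # hidden from the learner
    return 1.0 if random.random() < payout_odds[action] else 0.0

values = {0: 0.0, 1: 0.0, 2: 0.0}  # running estimate of each action's worth
counts = {0: 0, 1: 0, 2: 0}
epsilon = 0.1                       # how often to explore at random

for _ in range(5000):
    if random.random() < epsilon:
        action = random.choice([0, 1, 2])     # explore something new
    else:
        action = max(values, key=values.get)  # exploit the best guess so far
    reward = pull(action)
    counts[action] += 1
    # Incrementally average the rewards observed for this action.
    values[action] += (reward - values[action]) / counts[action]

print(supervised_predict(3))          # "odd": the answer came straight from labels
print(max(values, key=values.get))    # almost always 2: the answer came from experience
```

The supervised model never does more than recall the labels it was given; the bandit loop starts out knowing nothing and converges on the best action purely from feedback.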

Why It Matters
In essence, reinforcement learning is teaching computers to think, not just learn. Using their own judgment, they determine their behavior based on feedback from the world around them, just as we do when operating in our physical environments. That’s a potential game changer for almost every industry, from the military, where AI soldiers could become the norm, to self-driving cars, which could take over the highways if we can produce vehicles with good “judgment.”

It’s Gaining Traction
You’ve probably heard that Google’s AlphaGo AI beat the world champion at the game of Go after honing its skill by playing against itself. But Google’s DeepMind division is not the only one pushing the field forward. Today, reinforcement learning is being used to do things like identify cancer in MRI scans, and it could eventually be applied to everything from public safety and public transit to protecting our energy supply and other natural resources.

It’s Not Perfect—Yet
Though developers have made major breakthroughs in reinforcement learning, there is still much learning to do on the part of humans. So far, reinforcement learning has dealt mostly with narrow, specific tasks. It can be difficult for computers to know what they’re looking for unless the goal is immediately clear, as it is in a game like Go. Ultimately, to create things like personal assistant robots or medical assistants, we’ll need to develop AI that is flexible enough to learn “common sense” and manage a wide range of issues. Right now, we’re just not there. The issue of “delayed” rewards also makes reinforcement learning harder: when feedback arrives long after the actions that earned it, the system struggles to work out which decisions deserve the credit.
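Most reinforcement learning methods handle this by weighting future feedback with a discount factor, so rewards that arrive far in the future count for less. The short Python sketch below is purely illustrative and is not from the article; the reward sequence and the discount value are made-up numbers.

```python
def discounted_return(rewards, gamma=0.99):
    """Sum a sequence of rewards, shrinking each later step by a factor of gamma."""
    total = 0.0
    for step, reward in enumerate(rewards):
        total += (gamma ** step) * reward
    return total

# A long game where the only feedback is a win signal at the very end:
# 199 moves earn nothing, so credit must be traced back through all of them.
rewards = [0.0] * 199 + [1.0]
print(discounted_return(rewards))  # roughly 0.135: the distant win is heavily discounted
```

The further a reward sits from the actions that produced it, the weaker the learning signal, which is one reason open-ended, real-world goals remain hard for these systems.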

Ethics Are an Issue
Don’t kid yourself: introducing self-learning robots that can learn faster and better than humans will come with a huge range of issues. On our end, we can only program them to the extent of our human knowledge, which is always going to be limited. If we fail to build in system safeguards, we could have serious trouble on our hands in terms of public safety. On the other end, the question remains: do we really want to create a world of computers that think, and act, on their own free will, especially when they are smarter than humans? That’s definitely an issue we need to reflect on before jumping too far into the reinforcement learning landscape.

For now, the potential for reinforcement learning is so vast and promising that it would appear to outweigh the risk. There is simply no telling what we—and they—can do.
