Artificial intelligence is at the cutting edge of new technology developments. What is happening in the lab? Which discoveries by academic and corporate researchers will set the course for artificial intelligence in the coming year and beyond? Here is what researchers are working on and why each topic matters for the future.
Deep Reinforcement Learning
This type of neural network learns by interacting with its environment through observations, actions, and rewards. Deep reinforcement learning (DRL) has been used to learn strategies for a range of games, from Atari titles to Go, including the famous AlphaGo program that beat a human champion. Of all learning techniques, DRL is the most general purpose, and it can be used in most business applications. One benefit of this technique is that it requires less data than others to train its models. Another important characteristic is that it can be trained through simulation rather than on collected data. We expect to see a trend of more business applications that combine DRL with agent-based simulation.
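The observe-act-reward loop described above can be sketched with tabular Q-learning, the simplest form of reinforcement learning. The environment below is a hypothetical five-state corridor invented for illustration; the state count, learning rate, and reward values are all assumptions, not taken from any real system. The agent learns, purely from rewards, that moving right is the best policy:

```python
import random

# Minimal sketch of reinforcement learning: tabular Q-learning on a
# hypothetical 5-state corridor. The agent observes its state, picks an
# action (0 = left, 1 = right), and receives a reward of 1.0 only on
# reaching the rightmost state. All parameters are illustrative.

N_STATES = 5                      # states 0..4; state 4 is the goal
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1 # learning rate, discount, exploration

def step(state, action):
    """Deterministic transition: move one cell left or right."""
    nxt = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

def train(episodes=500, seed=0):
    random.seed(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q[state][action]
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # epsilon-greedy: mostly exploit the best-known action,
            # occasionally explore a random one
            if random.random() < EPS:
                action = random.randrange(2)
            else:
                action = 0 if q[state][0] > q[state][1] else 1
            nxt, reward, done = step(state, action)
            # Q-learning update: nudge the estimate toward the observed
            # reward plus the discounted best value of the next state
            q[state][action] += ALPHA * (
                reward + GAMMA * max(q[nxt]) - q[state][action]
            )
            state = nxt
    return q

q = train()
```

After training, the greedy policy derived from `q` prefers "right" in every non-goal state, even though the agent was never told the goal's location; the reward signal alone shaped the behavior. This is also why DRL pairs naturally with simulation: the `step` function here stands in for a simulator that can generate unlimited experience.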
Deep Learning Theory
Deep neural networks are loosely modeled on the human brain. So far, they have demonstrated the ability to learn from text, images, data, and audio inputs. Deep neural networks have been used and studied for more than a decade, but there is still a lot we don't understand about how they learn or why they perform so well. A new theory applies the principle of an information bottleneck to deep learning. This theory hypothesizes that after an initial fitting phase, a deep neural network compresses its internal representation, discarding the noisy, task-irrelevant details of the input while preserving the information about what the data represents. This is important because understanding precisely how deep learning works allows for greater development and use of AI. Understanding deep neural networks can yield new insights into designing optimal networks and architectures while providing increased transparency for safety-critical or regulatory applications.
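The trade-off the theory describes can be written compactly. In the information-bottleneck formulation (due to Tishby and colleagues), a layer's internal representation $T$ should compress the input $X$ while staying informative about the label $Y$; one common objective is to minimize

```latex
\mathcal{L}_{\mathrm{IB}} \;=\; I(X;T) \;-\; \beta \, I(T;Y)
```

where $I(\cdot;\cdot)$ is mutual information and $\beta$ controls how much predictive information is kept relative to how aggressively the input is compressed. The hypothesized "forgetting" phase corresponds to $I(X;T)$ shrinking during training while $I(T;Y)$ is preserved.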
Capsule Networks
Capsule networks are a new type of deep neural network that processes visual information in a way more similar to the brain. In particular, capsule networks can maintain hierarchical relationships between parts and wholes. This represents a huge difference from convolutional neural networks, which fail to take into account important spatial hierarchies between simple and complex objects. Because of this limitation, convolutional neural networks often misclassify objects and have a higher error rate. For typical identification tasks, capsule networks hold the promise of better accuracy, reducing error rates by as much as 50 percent, and they don't require as much data for training models. We can expect to see the widespread use of capsule networks across many problem domains and deep neural network architectures.
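A concrete way to see how a capsule differs from an ordinary neuron: a capsule outputs a vector rather than a single number. In the dynamic-routing formulation by Sabour, Frosst, and Hinton, the vector's length (squashed into the range [0, 1)) represents the probability that the entity the capsule detects is present, and the vector's orientation encodes the entity's pose. A minimal sketch of that "squash" nonlinearity, with an invented example vector:

```python
import numpy as np

def squash(v, eps=1e-9):
    """Capsule-network squash nonlinearity.

    Scales vector v so its length lies in [0, 1) while preserving its
    direction: short vectors shrink toward zero, long vectors approach
    unit length. The length then acts like a presence probability and
    the direction carries the pose information.
    """
    sq_norm = np.sum(v ** 2, axis=-1, keepdims=True)
    scale = sq_norm / (1.0 + sq_norm)          # length in [0, 1)
    return scale * v / np.sqrt(sq_norm + eps)  # direction preserved

# Illustrative raw capsule output with length 5 (values are made up)
capsule_out = np.array([3.0, 4.0])
squashed = squash(capsule_out)
length = float(np.linalg.norm(squashed))  # close to 1: entity likely present
```

It is this vector-valued output, combined with routing between capsule layers, that lets the network represent part-whole spatial relationships that plain convolutional layers discard through pooling.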