Infinite state space

This discussion of Q functions brings us to an important limitation of traditional reinforcement learning: it assumes a finite, discrete state space. Unfortunately, that isn't the world we live in, nor is it the environment our agents will find themselves in much of the time. Consider an agent that plays ping pong. One important part of its state space would be the velocity of the ping pong ball, which is certainly not discrete. An agent that can see, like one we will cover shortly, would be presented with an image, which is an enormous, effectively continuous state space.

The Bellman equation we discussed would require us to keep a big table of Q values, one cell for every state-action pair, updated as we moved from state to state. But when faced with a continuous state space, this isn't possible: the possible states are essentially infinite, and we can't create a table of infinite size.
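To make the limitation concrete, here's a quick sketch of what the tabular approach looks like while the state space is still small and discrete. This is a minimal illustration in Python with NumPy; the environment size, learning rate, and discount factor are hypothetical, not taken from the text:

```python
import numpy as np

# Hypothetical toy environment: 10 discrete states, 4 actions.
n_states, n_actions = 10, 4

# The Q table: one cell for every (state, action) pair.
Q = np.zeros((n_states, n_actions))

alpha = 0.1   # learning rate
gamma = 0.99  # discount factor

def q_update(state, action, reward, next_state):
    # One Bellman backup:
    # Q(s, a) <- Q(s, a) + alpha * (r + gamma * max_a' Q(s', a') - Q(s, a))
    td_target = reward + gamma * Q[next_state].max()
    Q[state, action] += alpha * (td_target - Q[state, action])
```

The table has one row per state, so the scheme only works while the states can be counted. Replace the integer state with a real-valued ball velocity and there is no row to index into, let alone an array we could allocate.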

Luckily for us, we can use a deep neural network to approximate the Q function. This probably doesn't surprise you; you're reading a deep learning book, after all, so you likely guessed deep learning had to come into the picture someplace. This is that place.
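As a sketch of the idea, here is one way a small network can stand in for the Q table, mapping a continuous state vector directly to one Q value per action. The architecture, layer sizes, and four-feature state below are illustrative assumptions (written with Keras), not the specific model this chapter builds:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Hypothetical continuous state: 4 features
# (say, ball x/y position and x/y velocity).
state_dim, n_actions = 4, 2

# Instead of one row per state, the network generalizes across
# states it has never seen, emitting a Q estimate per action.
q_network = keras.Sequential([
    layers.Input(shape=(state_dim,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(64, activation="relu"),
    layers.Dense(n_actions),  # linear outputs: Q(s, a) for each action
])
q_network.compile(optimizer="adam", loss="mse")

# Greedy action selection for a single continuous state:
state = np.array([[0.1, -0.3, 1.2, 0.05]])
action = int(np.argmax(q_network.predict(state, verbose=0)))
```

Training such a network against Bellman-style targets is exactly the move that deep Q-learning makes, and it's where we're headed next.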
