DDPG is an enhanced version of actor-critic combined with the DQN algorithm; it blends value-based and policy-based methods. When selecting an action, the deterministic policy changes the original policy-gradient selection process: instead of sampling from a probability distribution over actions, it works with continuous actions and uses the network's output directly as the action.
The policy network is the actor, which outputs an action (action selection). The value network is the critic, which evaluates the action selected by the actor (action-value estimation).
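As a concrete illustration, below is a minimal PyTorch sketch of these two networks, assuming simple fully connected architectures; the layer widths, the tanh bound on the action, and all names are illustrative choices rather than details from the original description:

```python
import torch
import torch.nn as nn

class Actor(nn.Module):
    """Policy network: maps a state to one deterministic continuous action."""
    def __init__(self, state_dim: int, action_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 256), nn.ReLU(),
            nn.Linear(256, action_dim), nn.Tanh(),  # squash into [-1, 1]
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        # The output itself is the action: no distribution, no sampling step.
        return self.net(state)

class Critic(nn.Module):
    """Value network: scores a (state, action) pair with a single Q-value."""
    def __init__(self, state_dim: int, action_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, state: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        # The Q-value depends on both the state and the chosen action.
        return self.net(torch.cat([state, action], dim=-1))
```

The critic's single Q-value output is what judges the action the actor produced, matching the division of labor described above.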
DDPG is an off-policy algorithm: it can update the current Q-value using samples generated by other policies, drawing on a historical record of experience, so it is not necessary to use only the samples produced by the current policy. Previously stored experiences can be selected at random for learning; this random sampling breaks the correlation between consecutive experiences and makes the neural-network updates more efficient.
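A minimal sketch of such an experience replay buffer follows; the tuple layout and the capacity value are assumptions made for the example:

```python
import random
from collections import deque

class ReplayBuffer:
    """Stores past transitions and returns uniformly random minibatches."""
    def __init__(self, capacity: int = 100_000):
        self.buffer = deque(maxlen=capacity)  # oldest entries are dropped first

    def push(self, state, action, reward, next_state):
        # Each stored experience is one transition (s_t, a_t, r_{t+1}, s_{t+1}).
        self.buffer.append((state, action, reward, next_state))

    def sample(self, batch_size: int):
        # Uniform random sampling breaks the temporal correlation
        # between consecutive experiences.
        return random.sample(self.buffer, batch_size)
```

Because sampled transitions may have been generated by older versions of the policy, learning from them is exactly what makes the algorithm off-policy.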
After taking action a_t in the current state s_t, the next state s_{t+1} and reward r_{t+1} depend only on the current state and action, not on earlier states (the Markov property). Even so, DDPG draws on past history: every transition is recorded, and earlier transitions are replayed whenever the networks are updated.
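To show how replayed experience can feed the Q-value update, here is a hedged sketch of one critic update step, reusing the Actor, Critic, and ReplayBuffer sketched above; the discount factor, optimizer, and batch size are assumptions, and the target networks that the full DDPG algorithm also maintains are omitted here for brevity:

```python
import numpy as np
import torch
import torch.nn.functional as F

GAMMA = 0.99  # assumed discount factor

def critic_update(actor, critic, critic_optimizer, buffer, batch_size=64):
    # The minibatch comes from the replay buffer, i.e. from historical
    # transitions, not necessarily from the current policy.
    batch = buffer.sample(batch_size)
    states, actions, rewards, next_states = (
        torch.as_tensor(np.asarray(x), dtype=torch.float32) for x in zip(*batch)
    )
    with torch.no_grad():
        # Bootstrapped target: r_{t+1} + gamma * Q(s_{t+1}, actor(s_{t+1})).
        target = rewards.unsqueeze(-1) + GAMMA * critic(next_states, actor(next_states))
    loss = F.mse_loss(critic(states, actions), target)
    critic_optimizer.zero_grad()
    loss.backward()
    critic_optimizer.step()
```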