Reinforcement learning algorithms have achieved impressive results over the years in a variety of fields. However, these algorithms suffer from a weakness, highlighted by Refael Vivanti et al., that may explain the regression of even well-trained agents in certain environments: a difference in reward variance between areas of the environment. This variance difference leads to two problems: the Boring Area Trap and the Manipulative Consultant. We note that the Adaptive Symmetric Reward Noising (ASRN) algorithm proposed by Refael Vivanti et al. has limitations in environments with the following characteristics: long game durations and multiple boring areas. To overcome these problems, we propose three algorithms derived from the ASRN algorithm, collectively called Rebooted Adaptive Symmetric Reward Noising (RASRN): Continuous ε decay RASRN, Full RASRN, and Stepwise α decay RASRN. Through two series of experiments on the k-armed bandit problem, we show that our algorithms better correct the Boring Area Trap problem.
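
To make the failure mode concrete, the following minimal sketch (not the authors' implementation) reproduces the variance imbalance on a two-armed Gaussian bandit and applies an ASRN-style symmetric noising step that pads the low-variance ("boring") arm's rewards with zero-mean noise, as we read the mechanism from the abstract. The bandit parameters, the moving-average variance tracker, and the `run` helper are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-armed Gaussian bandit: arm 0 is the "boring" arm (slightly lower
# mean, near-zero reward variance); arm 1 is better on average but noisy.
MEANS = np.array([0.0, 0.5])
STDS = np.array([0.01, 2.0])

def run(steps=50_000, epsilon=0.05, alpha=0.05, noising=False):
    q = np.zeros(2)       # incremental action-value estimates
    r_mean = np.zeros(2)  # per-arm running mean of observed rewards
    r_var = np.zeros(2)   # per-arm running variance of observed rewards
    for _ in range(steps):
        arm = rng.integers(2) if rng.random() < epsilon else int(q.argmax())
        r = rng.normal(MEANS[arm], STDS[arm])

        # Track each arm's empirical reward variance with moving averages.
        d = r - r_mean[arm]
        r_mean[arm] += 0.01 * d
        r_var[arm] += 0.01 * (d * d - r_var[arm])

        if noising:
            # ASRN-style step: pad the quieter arm with zero-mean Gaussian
            # noise so both arms are learned under comparable reward
            # variance; symmetric noise leaves the expected reward unchanged.
            pad = max(r_var.max() - r_var[arm], 0.0)
            r += rng.normal(0.0, np.sqrt(pad))

        # Constant-step-size update: without noising, the noisy arm's
        # estimate fluctuates and can dip below the boring arm's,
        # trapping a greedy agent on the boring arm.
        q[arm] += alpha * (r - q[arm])
    return q

print("plain      :", run(noising=False))
print("ASRN-style :", run(noising=True))
```

With noising enabled, the boring arm's estimate fluctuates as much as the better arm's, so a near-greedy agent keeps re-sampling both arms instead of locking onto the low-variance one; this equalization is the escape mechanism that ASRN, and by extension the RASRN variants, rely on.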