The traditional view attributes the difficulty of RNN optimization to exploding and vanishing gradients. However, recent studies reveal that the gradients of neural networks are surprisingly stable. These analyses focus on simple feedforward networks, and few works investigate more complicated architectures. The main challenge in analyzing RNNs is the statistical dependence between layers: the same random weights are reused at every time step of the unrolled network, so the layers cannot be treated as independent. We address this problem by identifying a converging point that is independent of this shared randomness, and we demonstrate the stability of RNN gradients with both analytical and empirical results.
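As a rough illustration of the kind of empirical gradient-stability check referred to above (a minimal sketch, not the paper's actual experimental setup; the model sizes, loss, and random inputs are placeholder assumptions), the snippet below records the gradient norm of a vanilla RNN's recurrent weights as the sequence length grows.

```python
# Sketch: measure the gradient norm w.r.t. the recurrent (hidden-to-hidden)
# weights of a vanilla RNN for increasing sequence lengths, using random
# Gaussian inputs and a simple loss on the final hidden state.
# All hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)
input_size, hidden_size, batch = 32, 64, 16

for seq_len in (10, 50, 100, 200):
    rnn = nn.RNN(input_size, hidden_size, batch_first=True)
    x = torch.randn(batch, seq_len, input_size)
    out, _ = rnn(x)                      # out: (batch, seq_len, hidden_size)
    loss = out[:, -1, :].pow(2).mean()   # scalar loss on the last hidden state
    loss.backward()
    grad_norm = rnn.weight_hh_l0.grad.norm().item()
    print(f"seq_len={seq_len:4d}  ||dL/dW_hh|| = {grad_norm:.4f}")
```

If the gradients neither explode nor vanish, the printed norms stay within a moderate range as the sequence length increases rather than growing or shrinking exponentially.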