PCA-based pretraining of neural networks

Training a neural network on a large dataset can be a long and challenging process. There are many approaches to reducing training time: parallelization, early stopping, momentum, dimensionality reduction, and so on. They speed up convergence, avoid unnecessary iterations, and make more efficient use of hardware resources. In this post we'll see how a good initialization can affect training.
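
To give a rough idea of what PCA-based initialization looks like (this is a minimal sketch under my own assumptions, not the post's actual code), the weights of the first layer can be set to the leading principal components of the training data, so the layer starts out as a projection onto the directions of highest variance. The example below assumes NumPy and scikit-learn and uses a toy random dataset.

```python
# Minimal sketch: PCA-based initialization of the first layer of a network.
# Assumes NumPy and scikit-learn; the data and layer width are illustrative.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 64))   # toy training inputs: 1000 samples, 64 features
n_hidden = 16                     # width of the first hidden layer

# Fit PCA and take the leading components as the initial weight matrix,
# so the first layer begins as a projection onto the top principal directions.
pca = PCA(n_components=n_hidden).fit(X)
W1 = pca.components_.T            # shape (64, 16): one column per component
b1 = -X.mean(axis=0) @ W1         # bias that centers the projected activations

# Forward pass through the PCA-initialized layer (tanh as an example nonlinearity).
hidden = np.tanh(X @ W1 + b1)
print(hidden.shape)               # (1000, 16)
```

From this starting point the weights would be trained as usual (e.g. by backpropagation); the PCA step only replaces the random initialization of the first layer.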