- How do MLPs compare with RBFs?
Multilayer perceptrons (MLPs) and radial basis function (RBF) networks are the two most commonly used types of feedforward network. They have much more in common than most of the NN literature would suggest. The only fundamental difference is the way in which hidden units combine values coming from preceding layers in the network: MLPs use inner products, while RBFs use Euclidean distance.
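That one difference can be sketched in a few lines. The function names and toy weights below are my own illustration, not from any particular library:

```python
import numpy as np

def mlp_hidden_unit(x, w, b):
    """MLP hidden unit: inner product of inputs and weights,
    then a squashing nonlinearity (tanh here)."""
    return np.tanh(np.dot(w, x) + b)

def rbf_hidden_unit(x, c, width):
    """RBF hidden unit: Euclidean distance from the input to a
    centre c, passed through a Gaussian."""
    return np.exp(-np.sum((x - c) ** 2) / (2 * width ** 2))

x = np.array([1.0, 2.0])
print(mlp_hidden_unit(x, w=np.array([0.5, -0.3]), b=0.1))     # a value in (-1, 1)
print(rbf_hidden_unit(x, c=np.array([1.0, 2.0]), width=1.0))  # 1.0 when x is at the centre
```

Everything downstream of the hidden layer (output weights, training by error minimization) works the same way in both architectures; only this combination rule differs.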
- How to set the learning rate in backpropagation?
In standard backprop, too low a learning rate makes the network learn very slowly. Too high a learning rate makes the weights and objective function diverge, so there is no learning at all. If the objective function is quadratic, as in linear models, good learning rates can be computed from the Hessian matrix. If the objective function has many local and global optima, as in typical feedforward NNs with hidden units, the optimal learning rate often changes dramatically during the training process, since the Hessian also changes dramatically. Trying to train a NN using a constant learning rate is usually a tedious process requiring much trial and error. With batch training, there is no need to use a constant learning rate. In fact, there is no reason to use standard backprop at all, since vastly more efficient, reliable, and convenient batch training algorithms exist.
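The quadratic case above can be made concrete with a minimal sketch, assuming a one-dimensional objective f(w) = 0.5·h·w², so the Hessian is just the curvature h. Gradient descent then converges iff the learning rate is below 2/h, and lr = 1/h reaches the minimum in one step, which is the sense in which a good learning rate can be read off the Hessian:

```python
def descend(lr, h=4.0, w0=1.0, steps=20):
    """Gradient descent on f(w) = 0.5 * h * w**2 from w0."""
    w = w0
    for _ in range(steps):
        w -= lr * h * w  # gradient of 0.5*h*w**2 is h*w
    return w

print(descend(lr=0.25))  # lr = 1/h: lands exactly on the minimum
print(descend(lr=0.45))  # lr < 2/h: converges, just more slowly
print(descend(lr=0.6))   # lr > 2/h: the iterates diverge
```

With hidden units the objective is no longer quadratic, so h (and hence the safe range of learning rates) keeps changing as training proceeds, which is exactly why a constant rate works so poorly.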
- How to select training algorithms?
I generally use Garamond as my default font in LaTeX writing, partly because it is an old-style typeface that is still used in many French books.
If you don’t have Garamond fonts installed with your TeX distribution, the following might be helpful.
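One way to get a Garamond is a sketch like this, assuming the free `ebgaramond` package from CTAN (an EB Garamond revival) rather than a commercial font:

```latex
% Minimal example using the free EB Garamond clone from CTAN
% (an assumption on my part; other Garamond packages exist too).
\documentclass{article}
\usepackage[T1]{fontenc}
\usepackage{ebgaramond}   % sets EB Garamond as the body font
\begin{document}
A paragraph set in EB Garamond.
\end{document}
\end{document}
```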
Now let’s play a game. I have a fair coin, so heads and tails each come up with probability 50%. The rule is very simple: if the number of heads in 100 tosses is more than 60, you win \$40; otherwise you win nothing. The price of the game is \$1. Would you like to play?
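Before answering, it helps to run the arithmetic. This sketch just evaluates the binomial tail and the expected prize:

```python
from math import comb

# Fair coin, 100 tosses; win $40 if heads > 60, ticket costs $1.
# P(more than 60 heads) = sum_{k=61}^{100} C(100, k) / 2^100
p_win = sum(comb(100, k) for k in range(61, 101)) / 2 ** 100
expected_prize = 40 * p_win

print(round(p_win, 4))           # about 0.018
print(round(expected_prize, 2))  # about 0.70 -- below the $1 ticket
```

The expected prize comes out below the \$1 price, so on expectation the game favours the house.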
When we want to fit our data with some parametric models, there are three categories that we usually consider first:
Many people encounter p-values in everyday life, yet surprisingly many of them confuse the p-value with the alpha level, the two basic concepts in hypothesis testing. Today we try to give a very easy-to-understand explanation of these two terms.
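As a preview, here is a small sketch of how the two quantities interact. The scenario (testing whether a coin is fair after seeing 60 heads in 100 tosses) is my own illustration: the p-value is the probability, under the null hypothesis, of data at least as extreme as what we observed, while alpha is the cutoff we fix before looking at the data:

```python
from math import comb

alpha = 0.05   # significance level, chosen in advance
heads, n = 60, 100

# Two-sided p-value under H0: coin is fair, X ~ Binomial(100, 0.5).
# By symmetry, double the upper-tail probability P(X >= 60).
p_upper = sum(comb(n, k) for k in range(heads, n + 1)) / 2 ** n
p_value = 2 * p_upper

print(round(p_value, 3))
print("reject H0" if p_value < alpha else "fail to reject H0")
```

In this particular example the p-value lands just above alpha, so the fair-coin hypothesis survives at the 5% level; the numbers are only an illustration of the comparison, not a recommendation for any real test.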