Vish Blog
Loss functions in machine learning
Posted by Vish Sangale on January 27, 2020 · 1 min read
Mean Squared Error Loss (MSE)
Also known as Quadratic loss or L2 loss
Regression problem
Typical activation function: Linear
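As a minimal sketch (pure Python; the function and variable names are illustrative), MSE averages the squared residuals:

```python
def mse(y_true, y_pred):
    """Mean squared error: average of squared residuals."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

# Large errors dominate because each residual is squared.
print(mse([1.0, 2.0, 3.0], [1.0, 2.0, 4.0]))  # 1/3
```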
Mean Squared Logarithmic Error Loss (MSLE)
Mean Absolute Error Loss (MAE)
L1 loss
Regression problem
Typical activation function: Linear
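A sketch along the same lines (illustrative names) — MAE averages the magnitudes of the residuals:

```python
def mae(y_true, y_pred):
    """Mean absolute error: average magnitude of residuals."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

# Unlike MSE, a single large residual grows the loss only linearly.
print(mae([3.0, -0.5, 2.0], [2.5, 0.0, 2.0]))  # 1/3
```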
Mean Bias Error Loss (MBE)
Regression problem
Typical activation function: Linear
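MBE keeps the sign of each residual, so errors in opposite directions cancel — it measures bias rather than magnitude. A toy sketch (illustrative names):

```python
def mbe(y_true, y_pred):
    """Mean bias error: signed residuals, so over- and under-predictions cancel."""
    return sum(t - p for t, p in zip(y_true, y_pred)) / len(y_true)

print(mbe([1.0, 2.0], [0.5, 3.0]))  # -0.25: net over-prediction
```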
SVM loss (Hinge Loss)
Binary classification
Used for max margin classifiers
Typical activation function: Linear (hinge loss operates on raw margin scores, not sigmoid outputs)
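A sketch of binary hinge loss, assuming labels in {-1, +1} and raw linear scores (names are illustrative):

```python
def hinge(y_true, scores):
    """Average hinge loss; labels in {-1, +1}, scores are raw model outputs."""
    return sum(max(0.0, 1.0 - t * s) for t, s in zip(y_true, scores)) / len(y_true)

# A correct prediction with margin >= 1 contributes zero loss,
# which is what makes this a max-margin objective.
print(hinge([1, -1], [0.8, -2.0]))  # ≈ 0.1: only the small-margin example contributes
```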
Multiclass SVM Loss
Multiclass classification
Squared Hinge Loss
Softmax Classifier (Multinomial Logistic Regression)
Interprets the raw scores as un-normalized log probabilities of the classes
Maximizes the log likelihood, i.e. minimizes the negative log likelihood of the correct class
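A pure-Python sketch of this (illustrative names), including the standard max-subtraction trick for numerical stability:

```python
import math

def softmax(logits):
    """Convert un-normalized log probabilities into a probability distribution."""
    m = max(logits)                      # subtract max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def nll(logits, correct_class):
    """Negative log likelihood of the correct class under the softmax."""
    return -math.log(softmax(logits)[correct_class])

# Equal logits -> uniform probabilities -> loss of log(num_classes).
print(nll([1.0, 1.0], 0))  # log 2 ≈ 0.693
```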
KL Loss (Kullback–Leibler Divergence Loss)
Multi-Class classification
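For discrete distributions p and q over the same support, KL divergence can be sketched directly from its definition (illustrative names; terms with zero mass in p contribute nothing):

```python
import math

def kl_divergence(p, q):
    """KL divergence between two discrete probability distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

print(kl_divergence([0.5, 0.5], [0.5, 0.5]))  # 0.0: identical distributions
print(kl_divergence([1.0, 0.0], [0.5, 0.5]))  # log 2: p concentrates where q spreads out
```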
Cross-Entropy Loss
Negative log-likelihood
Weighted Cross-Entropy
Balanced Cross-Entropy
Binary Cross-Entropy Loss
Binary classification
Typical activation function: Sigmoid
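A sketch of binary cross-entropy on sigmoid outputs (illustrative names; the clipping constant is a common guard against log(0)):

```python
import math

def bce(y_true, y_prob, eps=1e-12):
    """Binary cross-entropy; y_true in {0, 1}, y_prob is a sigmoid output."""
    total = 0.0
    for t, p in zip(y_true, y_prob):
        p = min(max(p, eps), 1.0 - eps)   # clip to avoid log(0)
        total -= t * math.log(p) + (1 - t) * math.log(1 - p)
    return total / len(y_true)

# A maximally uncertain classifier (p = 0.5) scores log 2 per example.
print(bce([1, 0], [0.5, 0.5]))  # ≈ 0.693
```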
Multi-Class Cross-Entropy Loss
Multi-Class classification
Typical activation function: Softmax
Sparse Multi-Class Cross-Entropy Loss (targets given as integer class indices rather than one-hot vectors)
Huber Loss
Smooth Mean Absolute Error
Less sensitive to outliers than squared error loss
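The transition between the quadratic and linear regimes is set by a threshold delta; a sketch (illustrative names, delta=1.0 as a common default):

```python
def huber(y_true, y_pred, delta=1.0):
    """Huber loss: quadratic for small residuals, linear beyond delta."""
    total = 0.0
    for t, p in zip(y_true, y_pred):
        r = abs(t - p)
        if r <= delta:
            total += 0.5 * r * r                  # MSE-like near zero
        else:
            total += delta * (r - 0.5 * delta)    # MAE-like in the tails
    return total / len(y_true)

print(huber([0.0], [0.5]))  # 0.125 (quadratic branch)
print(huber([0.0], [2.0]))  # 1.5 (linear branch caps the outlier's influence)
```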
Dice Loss
Focal Loss
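Focal loss down-weights easy, well-classified examples so training focuses on hard ones. A sketch of its binary form (illustrative names; the gamma and alpha defaults are the values commonly cited from the RetinaNet paper):

```python
import math

def focal_loss(y_true, y_prob, gamma=2.0, alpha=0.25):
    """Binary focal loss: (1 - pt)^gamma down-weights confident predictions."""
    total = 0.0
    for t, p in zip(y_true, y_prob):
        pt = p if t == 1 else 1 - p               # probability of the true class
        a = alpha if t == 1 else 1 - alpha        # class-balancing weight
        total -= a * (1 - pt) ** gamma * math.log(max(pt, 1e-12))
    return total / len(y_true)

# An easy example (p = 0.9) is weighted down far more than a hard one (p = 0.6).
print(focal_loss([1], [0.9]), focal_loss([1], [0.6]))
```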
Tversky loss
Focal Tversky loss
Log Cosh Loss
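Log-cosh behaves like x²/2 for small residuals and like |x| − log 2 for large ones, so it is smooth everywhere yet Huber-like in the tails. A sketch (illustrative names; note that math.cosh overflows for residuals beyond roughly 710):

```python
import math

def log_cosh(y_true, y_pred):
    """Log-cosh loss: ~x^2/2 for small residuals, ~|x| - log 2 for large ones."""
    return sum(math.log(math.cosh(p - t)) for t, p in zip(y_true, y_pred)) / len(y_true)

print(log_cosh([0.0], [0.0]))  # 0.0
print(log_cosh([0.0], [1.0]))  # ≈ 0.434
```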
Quantile Loss
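Quantile (pinball) loss penalizes under- and over-prediction asymmetrically, which lets a regressor target a chosen quantile instead of the mean. A sketch (illustrative names, q = 0.9 as an example):

```python
def quantile_loss(y_true, y_pred, q=0.9):
    """Pinball loss for quantile q: asymmetric penalty on signed residuals."""
    total = 0.0
    for t, p in zip(y_true, y_pred):
        e = t - p
        total += max(q * e, (q - 1) * e)
    return total / len(y_true)

# With q = 0.9, under-predicting costs 9x more than over-predicting,
# pushing the fitted value toward the 90th percentile.
print(quantile_loss([1.0], [0.0]))  # 0.9 (prediction too low)
print(quantile_loss([0.0], [1.0]))  # ≈ 0.1 (prediction too high)
```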