Loss Functions
Review: Loss functions evaluate how well the model performs (how much the predicted result deviates from the actual result)
Regression losses: predict a continuous value (e.g. floor size, # of rooms)
MSE (L2) and MAE (L1) both measure error magnitude without considering direction
MSE: squaring penalizes large deviations much more heavily than small ones
MAE: more robust to outliers (no squaring)
Needs tools like linear programming to calculate gradients (the gradient is discontinuous at zero)
MBE (Mean Bias Error): same as MAE but without the absolute value
Weaker as a training loss since positive and negative errors cancel each other out
Used for seeing whether the model has a positive or negative bias
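A minimal NumPy sketch of the three regression losses above; the prediction/target values are made up purely for illustration:

```python
import numpy as np

# Hypothetical targets and predictions (illustrative values, not from the notes)
y_true = np.array([3.0, -0.5, 2.0, 7.0])
y_pred = np.array([2.5,  0.0, 2.0, 8.0])

errors = y_pred - y_true

mse = np.mean(errors ** 2)     # L2: squaring makes large deviations dominate
mae = np.mean(np.abs(errors))  # L1: magnitude only, more robust to outliers
mbe = np.mean(errors)          # no absolute value: opposite-sign errors cancel,
                               # so it mainly reveals positive/negative bias

print(f"MSE={mse:.3f}  MAE={mae:.3f}  MBE={mbe:.3f}")
```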
Classification losses: predict an output from a finite set of categorical values (e.g. digits 0-9)
Hinge loss (multi-class SVM loss): the score of the correct category should be greater than the score of each incorrect category by some safety margin (usually 1)
Usually used for SVMs
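A quick sketch of the multi-class hinge loss described above, assuming raw (unnormalized) class scores; the score values are made up for illustration:

```python
import numpy as np

def multiclass_hinge_loss(scores, correct_class, margin=1.0):
    """Multi-class SVM (hinge) loss for a single example.

    Loss is zero only when the correct class's score beats every other
    score by at least `margin` (the safety margin, usually 1).
    """
    margins = np.maximum(0.0, scores - scores[correct_class] + margin)
    margins[correct_class] = 0.0  # don't count the correct class against itself
    return margins.sum()

# Hypothetical scores for a 3-class problem, correct class = 0
scores = np.array([3.2, 5.1, -1.7])
print(multiclass_hinge_loss(scores, correct_class=0))  # 2.9 = max(0, 5.1-3.2+1) + max(0, -1.7-3.2+1)
```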
Cross-entropy loss (log loss): the most common classification loss
Loss increases as the predicted probability diverges from the actual label
Penalizes heavily predictions that are confident but WRONG
https://www.youtube.com/watch?v=pH9xkCK4ATc (great video!)
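A small sketch of cross-entropy for a single example, assuming the model outputs class probabilities (e.g. from a softmax); the probability values are made up for illustration:

```python
import numpy as np

def cross_entropy(p_pred, true_class, eps=1e-12):
    """Cross-entropy loss for one example.

    p_pred: predicted class probabilities (e.g. softmax output)
    true_class: index of the actual label
    """
    return -np.log(p_pred[true_class] + eps)  # eps guards against log(0)

# Mildly wrong vs. confidently wrong (3-class example, true class = 0)
print(cross_entropy(np.array([0.70, 0.20, 0.10]), true_class=0))  # ~0.36
print(cross_entropy(np.array([0.01, 0.98, 0.01]), true_class=0))  # ~4.61: confident but wrong is punished hard
```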