Log loss is convex in the model weights, so gradient descent is guaranteed to converge to a global minimum. The 0/1 loss is not convex: it jumps abruptly at the decision boundary z = 0 and is flat everywhere else, so its gradient carries no useful signal and it is difficult to optimize directly.
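To make the contrast concrete, here is a minimal sketch in plain Python: the log loss for a single example, the 0/1 loss, and gradient descent on the mean log loss for a one-dimensional logistic model. The function names and the 1-D model are illustrative assumptions, not from the original text.

```python
import math

def log_loss(y, p):
    # Binary cross-entropy for one example; convex in the model weights.
    eps = 1e-12  # guard against log(0)
    p = min(max(p, eps), 1 - eps)
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

def zero_one_loss(y, z):
    # 1 if sign(z) misclassifies y; flat away from z = 0, so no gradient signal.
    return int((z >= 0) != (y == 1))

def fit_logistic(xs, ys, lr=0.1, steps=1000):
    # Gradient descent on mean log loss for p = sigmoid(w*x + b).
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            p = 1 / (1 + math.exp(-(w * x + b)))
            gw += (p - y) * x / n  # dL/dw for log loss
            gb += (p - y) / n      # dL/db for log loss
        w -= lr * gw
        b -= lr * gb
    return w, b
```

Because the objective is convex, the fitted weights do not depend on clever initialization; starting from zero suffices for any learning rate small enough to be stable.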
