"The no-free-lunch theorem of optimization is an impossibility theorem telling us that a general-purpose, universal optimization strategy is impossible. The only way one strategy can outperform another is if it is specialized to the structure of the specific problem under consideration."
Error = Bias² + Variance (+ Noise)
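The decomposition above can be checked by simulation: refit a model on many independent training sets and compare its average squared error at a test point against bias² + variance + noise. A minimal sketch in plain Python (the constant model, test point, and noise level are illustrative choices, not from the slides):

```python
import random
import statistics

random.seed(0)

SIGMA = 0.5      # irreducible noise (std dev), an illustrative choice
X0 = 1.5         # test point

def f(x):
    """True regression function."""
    return 2 * x

def train_and_predict(n=30):
    """Deliberately too-simple model: predict the mean of the training ys."""
    ys = [f(random.uniform(0, 2)) + random.gauss(0, SIGMA) for _ in range(n)]
    return statistics.mean(ys)

# Refit on many independent training sets; the spread of the predictions
# is the variance, their average offset from f(X0) is the bias.
preds = [train_and_predict() for _ in range(20000)]
bias = statistics.mean(preds) - f(X0)       # underfitting -> nonzero bias
variance = statistics.pvariance(preds)

# Expected squared error against a fresh noisy observation at X0.
mse = statistics.mean((p - (f(X0) + random.gauss(0, SIGMA))) ** 2 for p in preds)

print(f"bias^2 + variance + noise = {bias ** 2 + variance + SIGMA ** 2:.3f}")
print(f"empirical error           = {mse:.3f}")
```

The two printed numbers agree up to Monte Carlo error, which is the content of the decomposition.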
from scikit-learn.org
from scikit-learn.org
Approach | Implementation | Examples |
---|---|---|
Tolerance | Increase error tolerance | |
Regularization | Penalize complexity | |
Ensemble | Bagging | |
Ensemble | Boosting | |
Feature selection | Regularization | |
Feature selection | Importance | |
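The bagging row above can be illustrated in a few lines: fit a high-variance base learner on bootstrap resamples of the training set and average their predictions. A minimal sketch in plain Python (the 1-nearest-neighbour base learner and the toy data are illustrative assumptions, not from the slides):

```python
import random

random.seed(1)

def f(x):
    """True regression function."""
    return x * x

def make_data(n=60):
    xs = [random.uniform(-2, 2) for _ in range(n)]
    return [(x, f(x) + random.gauss(0, 0.5)) for x in xs]

def one_nn(train):
    """High-variance base learner: 1-nearest-neighbour regression."""
    def predict(x):
        return min(train, key=lambda p: abs(p[0] - x))[1]
    return predict

def bagged(train, n_models=50):
    """Bagging: average the predictions of models fit on bootstrap resamples."""
    models = [one_nn([random.choice(train) for _ in train])
              for _ in range(n_models)]
    def predict(x):
        return sum(m(x) for m in models) / len(models)
    return predict

train = make_data()
test_xs = [i / 10 - 2 for i in range(41)]

def mse(model):
    return sum((model(x) - f(x)) ** 2 for x in test_xs) / len(test_xs)

single_mse = mse(one_nn(train))
bagged_mse = mse(bagged(train))
print("single 1-NN MSE:", round(single_mse, 3))
print("bagged  1-NN MSE:", round(bagged_mse, 3))
```

Averaging over resamples smooths out the training noise, so the bagged ensemble's test error drops below the single learner's, which is exactly the variance-reduction argument for bagging.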
Name | Penalty | caret |
---|---|---|
OLS | – | – |
Ridge | $\lambda \sum_j \beta_j^2$ | method=glmnet |
Lasso | $\lambda \sum_j \lvert\beta_j\rvert$ | method=glmnet |
Elastic net | $\lambda \sum_j \big( \tfrac{1-\alpha}{2}\beta_j^2 + \alpha \lvert\beta_j\rvert \big)$ | method=glmnet |
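The effect of the ridge penalty can be sketched for the one-predictor case, where minimising $\sum_i (y_i - \beta x_i)^2 + \lambda \beta^2$ has the closed form $\hat\beta = \sum x_i y_i / (\sum x_i^2 + \lambda)$. A minimal sketch in plain Python (caret would call glmnet in R; the toy data and $\lambda$ grid here are illustrative):

```python
import random

random.seed(2)

# Toy data: y = 3x + noise, centred so no intercept is needed.
xs = [random.gauss(0, 1) for _ in range(200)]
ys = [3 * x + random.gauss(0, 1) for x in xs]

def ridge_1d(xs, ys, lam):
    """Minimise sum((y - b*x)^2) + lam * b^2  ->  b = Sxy / (Sxx + lam)."""
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    return sxy / (sxx + lam)

for lam in (0, 10, 100, 1000):
    print(f"lambda={lam:5d}  beta={ridge_1d(xs, ys, lam):.3f}")
```

At $\lambda = 0$ the estimate is ordinary least squares; as $\lambda$ grows the coefficient is shrunk monotonically toward zero, which is how the penalty trades a little bias for less variance.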
from wikipedia.org
from scikit-learn.org
from medium.com
"…some machine learning projects succeed and some fail. What makes the difference?"
"The algorithms we used are very standard for Kagglers. […]"
from open.edu