References
Tags: concept
Sources:
Related notes:
Machine Learning
Updates:
April 20th, 2021: created note.
Notes {{word-count}}
Summary:
Key points:
Maximum Likelihood Estimation (MLE)
$$\theta^{\star} \leftarrow \arg \max_{\theta} \sum_{i} \log p_{\theta}\left(y_{i} \mid x_{i}\right)$$
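A minimal sketch of the arg max, assuming a Bernoulli model $p_{\theta}(y=1) = \theta$ with no inputs $x_i$ and hypothetical data `ys`; grid search stands in for a real optimizer:

```python
import numpy as np

# Hypothetical binary observations y_i (no x_i, to keep the sketch minimal)
ys = np.array([1, 0, 1, 1, 0, 1])

def log_likelihood(theta, ys):
    # sum_i log p_theta(y_i) for a Bernoulli model with p(y=1) = theta
    return np.sum(ys * np.log(theta) + (1 - ys) * np.log(1 - theta))

# arg max over a grid of candidate parameters
thetas = np.linspace(0.01, 0.99, 99)
theta_star = thetas[np.argmax([log_likelihood(t, ys) for t in thetas])]
print(theta_star)  # close to ys.mean(), the closed-form Bernoulli MLE
```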
Negative Log-Likelihood (NLL)
$$\theta^{\star} \leftarrow \arg \min_{\theta} -\sum_{i} \log p_{\theta}\left(y_{i} \mid x_{i}\right)$$
The $-\sum_{i} \log p_{\theta}\left(y_{i} \mid x_{i}\right)$ part is also a Loss Function.
This is also called Cross-Entropy.
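A minimal sketch of the NLL / cross-entropy loss, assuming hypothetical predicted class probabilities `probs` and integer labels `ys` (both made up for illustration):

```python
import numpy as np

# Hypothetical model outputs: each row is p_theta(y | x_i), summing to 1
probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1],
                  [0.3, 0.3, 0.4]])
ys = np.array([0, 1, 2])  # hypothetical true labels y_i

# NLL loss: -sum_i log p_theta(y_i | x_i)
nll = -np.sum(np.log(probs[np.arange(len(ys)), ys]))
print(nll)  # identical to the (unnormalized) cross-entropy loss
```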
Mean Squared Error is actually Negative Log-Likelihood (NLL) under a Gaussian likelihood with fixed variance.
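A short derivation of that claim, assuming the model predicts the mean of a Gaussian with fixed variance, $p_{\theta}(y \mid x) = \mathcal{N}\left(y ; f_{\theta}(x), \sigma^{2}\right)$ (the symbols $f_{\theta}$ and $\sigma$ are notation introduced here):
$$-\sum_{i} \log p_{\theta}\left(y_{i} \mid x_{i}\right) = \frac{1}{2 \sigma^{2}} \sum_{i}\left(y_{i}-f_{\theta}\left(x_{i}\right)\right)^{2} + \frac{n}{2} \log \left(2 \pi \sigma^{2}\right)$$
The second term is constant in $\theta$, so minimizing the NLL is the same as minimizing the Mean Squared Error.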
This is called Maximum Likelihood Estimation (MLE), and it can be reformulated as a Negative Log-Likelihood (NLL) minimization problem.
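Both views pick the same $\theta^{\star}$, since flipping the sign turns the arg max into an arg min:
$$\arg \max_{\theta} \sum_{i} \log p_{\theta}\left(y_{i} \mid x_{i}\right) = \arg \min_{\theta} -\sum_{i} \log p_{\theta}\left(y_{i} \mid x_{i}\right)$$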