Tag: Posteriori

  • Bayesian quantile regression with lasso and adaptive lasso penalties, implemented in R

    Time: 2021-8-25

    Original link: http://tecdat.cn/?p=22702 Abstract: Bayesian quantile regression has attracted extensive attention in the recent literature. This article implements Bayesian coefficient estimation and variable selection in quantile regression (RQ) with lasso and adaptive lasso penalties. It also includes further modeling functions for summarizing the results and plotting the path diagram, posterior histograms, autocorrelation plots, and […]
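
    A minimal sketch of how such a fit might look in R, assuming the CRAN package bayesQR (the package choice, data, and arguments are my illustration, not the article's code):

    ```r
    # Bayesian quantile regression with an adaptive-lasso prior (illustrative).
    # Assumes the CRAN package bayesQR; argument names follow its documentation.
    library(bayesQR)

    set.seed(1)
    n <- 100
    x <- rnorm(n)
    y <- 2 + 1.5 * x + rnorm(n)
    dat <- data.frame(y = y, x = x)

    # quantile = 0.5 fits the median; alasso = TRUE requests adaptive-lasso shrinkage
    fit <- bayesQR(y ~ x, data = dat, quantile = 0.5,
                   alasso = TRUE, ndraw = 5000)
    summary(fit)   # posterior summaries of the regression coefficients
    ```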

  • Block Gibbs sampling for Bayesian multiple linear regression in R

    Time: 2021-8-23

    Original link: http://tecdat.cn/?p=11617 Original source: Tuoduan Data Tribe official account. In this article I derive the conditional posterior distributions required for block Gibbs sampling in multiple linear regression, then code the sampler and test it on simulated data. Bayesian model: suppose we have a sample of size […]
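
    To make the mechanics concrete, here is a self-contained sketch of such a block Gibbs sampler in base R, with a normal prior on the full coefficient vector and an inverse-gamma prior on the error variance (the priors, data, and names are illustrative assumptions, not the article's code):

    ```r
    set.seed(42)
    # Simulated data: y = X %*% beta + Gaussian noise (illustrative values)
    n <- 200
    beta_true <- c(1, 2, -1.5)
    X <- cbind(1, matrix(rnorm(n * 2), n, 2))
    y <- X %*% beta_true + rnorm(n)

    # Priors: beta ~ N(0, tau2 * I), sigma2 ~ Inverse-Gamma(a0, b0)
    tau2 <- 100; a0 <- 0.01; b0 <- 0.01
    p <- ncol(X); iters <- 5000
    beta_draws <- matrix(NA, iters, p)
    sigma2_draws <- numeric(iters)
    sigma2 <- 1   # starting value

    XtX <- crossprod(X); Xty <- crossprod(X, y)
    for (it in 1:iters) {
      # Block step: draw the whole coefficient vector from its conditional posterior
      V <- solve(XtX / sigma2 + diag(p) / tau2)
      m <- V %*% (Xty / sigma2)
      beta <- m + t(chol(V)) %*% rnorm(p)
      # Draw the error variance from its conditional inverse-gamma posterior
      resid <- y - X %*% beta
      sigma2 <- 1 / rgamma(1, a0 + n / 2, b0 + 0.5 * sum(resid^2))
      beta_draws[it, ] <- beta; sigma2_draws[it] <- sigma2
    }
    colMeans(beta_draws[-(1:1000), ])   # posterior means after burn-in
    ```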

  • Maximum likelihood estimation and maximum a posteriori estimation

    Time: 2020-9-1

    Preface: This series of articles is my reading notes on "Deep Learning"; you can read them alongside the original book for better effect. MLE vs MAP: maximum likelihood estimation (MLE) and maximum a posteriori estimation (MAP) are two quite different estimation methods. Maximum likelihood estimation belongs to frequentist statistics […]
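
    As a toy illustration of the difference (my example, not from the notes): for k heads in n coin flips, the MLE of the head probability is k/n, while the MAP estimate under a Beta(a, b) prior is (k + a - 1)/(n + a + b - 2):

    ```r
    # MLE vs MAP for a Bernoulli parameter (illustrative numbers)
    k <- 7; n <- 10      # 7 heads in 10 flips
    a <- 2; b <- 2       # Beta(2, 2) prior, mildly favouring 0.5

    mle <- k / n                            # maximizes the likelihood alone
    map <- (k + a - 1) / (n + a + b - 2)    # maximizes likelihood * prior
    c(MLE = mle, MAP = map)                 # MAP is pulled toward the prior mean 0.5
    ```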

  • Maximum likelihood estimation and maximum a posteriori estimation

    Time: 2020-4-10

    Preface: This series of articles is my reading notes on "Deep Learning", which can be read together with the original book for better effect. MLE vs MAP: maximum likelihood estimation (MLE) and maximum a posteriori estimation (MAP) are two quite different estimation methods. Maximum likelihood estimation belongs to frequentist statistics (it is considered […]

  • Machine learning, carefully derived: the principles of the naive Bayes model

    Time: 2019-12-8

    1 – Basic theorems and definitions. Conditional probability formula: \[ P(A|B)=\dfrac{P(AB)}{P(B)} \] Total probability formula: \[ P(A)=\sum_{i=1}^N P(AB_i)=\sum_{i=1}^N P(B_i)P(A|B_i) \] Bayes formula: \[ P(B_i|A)=\dfrac{P(AB_i)}{P(A)}=\dfrac{P(B_i)P(A|B_i)}{\sum_{j=1}^N P(B_j)P(A|B_j)} \] Probability sum rule: \[ P\left(X=x_i\right)=\sum_{j=1}^N P\left(X=x_i,Y=y_j\right) \] \[ P\left(X\right)=\sum_Y P\left(X,Y\right) \] Probability product rule: \[ P\left(X=x_i,Y=y_j\right)=P\left(Y=y_j|X=x_i\right)P\left(X=x_i\right) \] \[ P\left(X,Y\right)=P\left(Y|X\right)P\left(X\right) \] Generative learning method: learn \(P(X|Y)\) and \(P(Y)\) from the training data, then the joint probability […]
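
    A minimal sketch of the generative recipe above (data and names are illustrative, not from the article): estimate \(P(Y)\) and \(P(X|Y)\) from counts, then classify by \(P(Y|X) \propto P(Y)\prod_j P(X_j|Y)\):

    ```r
    # Tiny hand-rolled naive Bayes with categorical features (illustrative data)
    train <- data.frame(
      outlook = c("sunny", "sunny", "rain", "rain", "overcast"),
      windy   = c("no",    "no",    "yes",  "no",   "yes"),
      play    = c("no",    "yes",   "no",   "yes",  "yes")
    )

    prior <- prop.table(table(train$play))                          # P(Y)
    lik_outlook <- prop.table(table(train$play, train$outlook), 1)  # P(outlook | Y)
    lik_windy   <- prop.table(table(train$play, train$windy), 1)    # P(windy | Y)

    # Classify a new day with outlook = "sunny", windy = "no"
    score <- prior * lik_outlook[, "sunny"] * lik_windy[, "no"]
    score / sum(score)   # normalized posterior P(Y | X); real code adds Laplace smoothing
    ```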