Tag:Exercises

  • Java daily exercises: a little progress every day (28)

    Time:2022-4-22

    Contents: 1. Write the header for a method of class AB that takes no formal parameters and returns no value, and can be called as ab.method(). The method header has the form (). 2. What errors are in the following class definition? () 3. What is the output of the […]

  • Python loop structure exercises

    Time:2022-4-18

    Contents: 1. Find the greatest common divisor of two numbers. 2. Integer reversal: for example, given 12345, output 54321. 3. Add the integers from 1 to 10 one by one and find the number at which the cumulative sum first exceeds 20. 4. Enter the daily study time (in hours) from Monday to Friday and calculate the average […]
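    A minimal sketch of the first two tasks (function names are my own, not from the post):

    ```python
    def gcd(a, b):
        # Euclid's algorithm: repeatedly replace (a, b) with (b, a % b)
        while b:
            a, b = b, a % b
        return a

    def reverse_int(n):
        # Reverse the decimal digits of a non-negative integer, e.g. 12345 -> 54321
        result = 0
        while n > 0:
            result = result * 10 + n % 10
            n //= 10
        return result

    print(gcd(12, 18))         # 6
    print(reverse_int(12345))  # 54321
    ```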

  • Data acquisition practice (IV) — download answers to linear algebra exercises

    Time:2022-1-28

    1. Overview. Some time ago I was reading the third edition of the linear algebra textbook “How to Learn Linear Algebra”, recommended by many people. Each chapter of this edition contains a large number of exercises. Although the official website provides the answers chapter by chapter, first, because the website is […]
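    The excerpt is cut off, but a chapter-by-chapter download along the post's lines could look like the sketch below; the URL pattern and file names are placeholders of mine, not the site's real layout:

    ```python
    import requests

    # Hypothetical URL pattern; the real chapter layout is not shown in the excerpt.
    BASE_URL = "https://example.com/answers/chapter{num}.pdf"

    for num in range(1, 11):
        resp = requests.get(BASE_URL.format(num=num), timeout=30)
        resp.raise_for_status()  # stop early on a missing chapter
        with open(f"chapter{num}.pdf", "wb") as f:
            f.write(resp.content)
    ```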

  • Exercises in Chapter 3 of Statistical Learning Methods

    Time:2021-12-15

    Exercise 3.1 Omitted. Exercise 3.2 According to the KD tree constructed in Example 3.2, the nearest neighbor is \((2,3)^T\). Exercise 3.3 The k-nearest-neighbor method mainly requires constructing the corresponding KD tree; here Python is used to implement KD tree construction and search: import heapq import numpy as np class KDNode: def __init__(self, […]
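    The code in the excerpt is cut off; a self-contained sketch of the same idea, simplified to the single nearest neighbor (the points are those of Example 3.2, and the query point \((3, 4.5)^T\) is the one commonly used for this exercise), might look like this:

    ```python
    import numpy as np

    class KDNode:
        def __init__(self, point, axis, left=None, right=None):
            self.point, self.axis = point, axis
            self.left, self.right = left, right

    def build(points, depth=0):
        # Split on coordinates cyclically; the median point becomes the node.
        if not points:
            return None
        axis = depth % len(points[0])
        points = sorted(points, key=lambda p: p[axis])
        mid = len(points) // 2
        return KDNode(points[mid], axis,
                      build(points[:mid], depth + 1),
                      build(points[mid + 1:], depth + 1))

    def nearest(node, target, best=None):
        # Depth-first search, pruning subtrees farther than the current best.
        if node is None:
            return best
        d = float(np.linalg.norm(np.array(node.point) - np.array(target)))
        if best is None or d < best[1]:
            best = (node.point, d)
        diff = target[node.axis] - node.point[node.axis]
        near, far = (node.left, node.right) if diff < 0 else (node.right, node.left)
        best = nearest(near, target, best)
        if abs(diff) < best[1]:  # the splitting plane may hide a closer point
            best = nearest(far, target, best)
        return best

    points = [(2, 3), (5, 4), (9, 6), (4, 7), (8, 1), (7, 2)]  # Example 3.2
    print(nearest(build(points), (3, 4.5)))  # ((2, 3), 1.803...)
    ```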

  • Exercises in Chapter 4 of Statistical Learning Methods

    Time:2021-12-14

    Exercise 4.1 Derive the prior and conditional probabilities of naive Bayes by maximum likelihood estimation. Assume the data set \(T = \{(x^{(1)}, y^{(1)}), (x^{(2)}, y^{(2)}), \dots, (x^{(M)}, y^{(M)})\}\), and suppose \(P(Y = c_k) = \theta_k\); then \(P(Y \ne c_k) = 1 - \theta_k\). Suppose the number of samples in the data set whose value is \(c_k\) […]
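    The derivation's conclusion is that the maximum likelihood estimates reduce to frequency counts; a quick numerical illustration with toy labels of my own:

    ```python
    import numpy as np

    # MLE of the prior P(Y = c_k) is simply the class frequency in the sample.
    y = np.array([1, 1, 0, 1, 0, 1])
    for c in np.unique(y):
        print(c, np.mean(y == c))  # P(Y=0) = 2/6, P(Y=1) = 4/6
    ```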

  • Exercises in Chapter 5 of Statistical Learning Methods

    Time:2021-12-13

    Exercise 5.1 Information gain ratio formula: \(g_R(Y, X_i) = \frac{g(Y, X_i)}{H_{X_i}(Y)} = \frac{H(Y) - H(Y|X_i)}{H_{X_i}(Y)}\), where \(H(Y) = -\frac{9}{15} \log_2 \frac{9}{15} - \frac{6}{15} \log_2 \frac{6}{15} = 0.971\). Let \(X_1, X_2, X_3, X_4\) denote age, job, owning a house, and credit. According to Example 5.2, \(g(Y, X_1) = 0.083, g(Y, X_2) = 0.324, g(Y, X_3) = 0.420, g(Y, X_4) = […]
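    A small check of the entropy value quoted above (9 examples of one class and 6 of the other, out of 15):

    ```python
    from math import log2

    def entropy(counts):
        # H = -sum p_i * log2(p_i) over the class proportions
        total = sum(counts)
        return -sum(c / total * log2(c / total) for c in counts if c)

    print(round(entropy([9, 6]), 3))  # 0.971, matching H(Y) above
    ```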

  • Exercises in Chapter 6 of Statistical Learning Methods

    Time:2021-12-10

    Exercise 6.1 First, explain what the exponential distribution family is. The exponential distribution family, also known as the exponential family (the latter term is used below), is the set of probability distributions of the form \(f(x|\theta) = h(x)\exp(\eta(\theta) T(x) - A(\eta))\) (where \(f(x|\theta)\) can be the probability density function of the distribution). Consider the logistic distribution, ignoring the bias term […]
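    As a concrete instance of that form (my own illustration; the post itself continues with the logistic distribution), the Bernoulli distribution fits with \(\eta = \log\frac{p}{1-p}\), \(T(x) = x\), \(A(\eta) = \log(1 + e^\eta)\), \(h(x) = 1\):

    ```python
    import numpy as np

    p = 0.3
    eta = np.log(p / (1 - p))    # natural parameter
    A = np.log(1 + np.exp(eta))  # log-partition function
    for x in (0, 1):
        direct = p**x * (1 - p)**(1 - x)  # Bernoulli pmf
        expfam = np.exp(eta * x - A)      # h(x) * exp(eta*T(x) - A(eta)), h(x) = 1
        print(x, direct, expfam)          # the two values agree
    ```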

  • Exercises in Chapter 7 of Statistical Learning Methods

    Time:2021-12-8

    Exercise 7.1 The difference between the dual form of the perceptron and that of the support vector machine: the perceptron's dual form is obtained by expressing the parameters as increments accumulated over the updates, while the support vector machine's is obtained by taking the constrained optimization problem and converting it into an unconstrained one through Lagrange duality. The original form of […]
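    A minimal sketch of the perceptron's dual form (the increments are the per-sample counts \(\alpha_i\); the points are those of the book's Example 2.2, and the Gram matrix of inner products is all the dual form needs):

    ```python
    import numpy as np

    X = np.array([[3., 3.], [4., 3.], [1., 1.]])  # Example 2.2 data
    y = np.array([1, 1, -1])

    alpha = np.zeros(len(X))  # accumulated update increments per sample
    b, lr = 0.0, 1.0
    gram = X @ X.T            # the dual form only touches inner products

    changed = True
    while changed:
        changed = False
        for i in range(len(X)):
            # Misclassified if y_i * (sum_j alpha_j y_j <x_j, x_i> + b) <= 0
            if y[i] * (np.sum(alpha * y * gram[:, i]) + b) <= 0:
                alpha[i] += lr
                b += lr * y[i]
                changed = True

    w = (alpha * y) @ X
    print(alpha, w, b)  # recovers a separating hyperplane
    ```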

  • Exercises in Chapter 8 of Statistical Learning Methods

    Time:2021-12-5

    Exercise 8.1 sklearn.ensemble.AdaBoostClassifier from the scikit-learn library can be used for model training. Omitted. Exercise 8.2 Tabulate for comparison (Model | Learning strategy | Loss function | Learning algorithm): support vector machine | soft-margin maximization, i.e. minimizing the regularized hinge loss | hinge loss | sequential minimal optimization (SMO); AdaBoost | minimizing the exponential loss of an additive model […]
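    For Exercise 8.1 the post defers to scikit-learn; a minimal usage sketch (the data set here is a stand-in, not the exercise's):

    ```python
    from sklearn.datasets import make_classification
    from sklearn.ensemble import AdaBoostClassifier

    X, y = make_classification(n_samples=200, random_state=0)
    clf = AdaBoostClassifier(n_estimators=50, random_state=0)
    clf.fit(X, y)
    print(clf.score(X, y))  # training accuracy of the boosted ensemble
    ```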

  • Exercises in Chapter 9 of Statistical Learning Methods

    Time:2021-12-2

    Exercise 9.1 The EM algorithm consists of an E-step and an M-step. E-step: compute the expectation \(\mu_j^{(i+1)} = \frac{\pi^{(i)}(p^{(i)})^{y_j}(1-p^{(i)})^{1-y_j}}{\pi^{(i)}(p^{(i)})^{y_j}(1-p^{(i)})^{1-y_j} + (1 - \pi^{(i)})(q^{(i)})^{y_j}(1-q^{(i)})^{1-y_j}}\). M-step: compute the maximum likelihood estimates \(\pi^{(i+1)} = \frac{1}{n} \sum \mu_j^{(i+1)}\), \(p^{(i+1)} = \frac{\sum \mu_j^{(i+1)}y_j}{\sum \mu_j^{(i+1)}}\), \(q^{(i+1)} = \frac{\sum (1-\mu_j^{(i+1)})y_j}{\sum (1 - \mu_j^{(i+1)})}\). First iteration, E-step: the observed values are […]
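    The excerpt breaks off at the observed data; a direct implementation of the two update formulas above is sketched below (the observation sequence and the initial values follow the book's Exercise 9.1 setup, so treat them as assumptions):

    ```python
    import numpy as np

    y = np.array([1, 1, 0, 1, 0, 0, 1, 0, 1, 1])  # assumed observations
    pi, p, q = 0.46, 0.55, 0.67                   # assumed initial values

    for _ in range(3):
        # E-step: posterior probability mu_j that observation j came from coin B
        num = pi * p**y * (1 - p)**(1 - y)
        mu = num / (num + (1 - pi) * q**y * (1 - q)**(1 - y))
        # M-step: maximum likelihood re-estimates of pi, p, q
        pi = mu.mean()
        p = (mu * y).sum() / mu.sum()
        q = ((1 - mu) * y).sum() / (1 - mu).sum()
        print(pi, p, q)
    ```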

  • Exercises in Chapter 10 of Statistical Learning Methods

    Time:2021-11-29

    Exercise 10.1 From the problem statement, \(T=4, N=3, M=2\). According to Algorithm 10.3, the first step is to initialize \(\beta\) at the final time step: \(\beta_4(1) = 1, \beta_4(2) = 1, \beta_4(3) = 1\). The second step is to compute \(\beta\) at each intermediate time step: \(\beta_3(1) = a_{11}b_1(o_4)\beta_4(1) + a_{12}b_2(o_4)\beta_4(2) + a_{13}b_3(o_4)\beta_4(3) = 0.46\), \(\beta_3(2) = a_{21}b_1(o_4)\beta_4(1) + a_{22}b_2(o_4)\beta_4(2) + a_{23}b_3(o_4)\beta_4(3) […]
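    The 0.46 above is consistent with the model of the book's Example 10.1 and the observation sequence (red, white, red, white); under that assumption, Algorithm 10.3 can be sketched as:

    ```python
    import numpy as np

    # Assumed HMM parameters (Example 10.1); they reproduce beta_3(1) = 0.46 above
    A = np.array([[0.5, 0.2, 0.3],
                  [0.3, 0.5, 0.2],
                  [0.2, 0.3, 0.5]])
    B = np.array([[0.5, 0.5],
                  [0.4, 0.6],
                  [0.7, 0.3]])
    pi = np.array([0.2, 0.4, 0.4])
    O = [0, 1, 0, 1]  # red, white, red, white

    T, N = len(O), A.shape[0]
    beta = np.ones((T, N))          # step 1: beta_T(i) = 1
    for t in range(T - 2, -1, -1):  # step 2: backward recursion
        beta[t] = A @ (B[:, O[t + 1]] * beta[t + 1])
    prob = pi @ (B[:, O[0]] * beta[0])  # step 3: P(O | lambda)
    print(beta[2], prob)  # beta at t=3 (0-indexed row 2): [0.46, 0.51, 0.43]
    ```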

  • Exercises in Chapter 11 of Statistical Learning Methods

    Time:2021-11-1

    Exercise 11.1 From the problem statement, according to the formula \(P(Y) = \frac{1}{\sum \limits_Y \prod \limits_C \Psi_C(Y_C)} \prod \limits_C \Psi_C(Y_C)\). The factorization of a probabilistic undirected graphical model is the operation of expressing its joint probability distribution as a product of functions of the random variables on its maximal cliques. The maximal […]