• ## Java daily exercises: a little progress every day (28)

Time: 2022-4-22

Contents: 1. Write the method header for a method of class AB that has no formal parameters and no return value, so that it can be called as ab.method(). The method header takes the form ( ). 2. What errors are in the following class definition? ( ) 3. What is the result of running the […]

• ## Python loop structure exercises

Time: 2022-4-18

Contents: 1. Find the greatest common divisor of two numbers. 2. Reverse an integer: for example, given 12345, output 54321. 3. Add the integers from 1 to 10 in turn and find the point at which the running total first exceeds 20. 4. Enter the daily study time (in hours) from Monday to Friday and compute the average […]
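The first two of these loop exercises can be sketched in a few lines each; a minimal sketch (the function names are my own, not from the post):

```python
def gcd(a, b):
    # Euclidean algorithm: repeatedly replace (a, b) with (b, a mod b).
    while b:
        a, b = b, a % b
    return a

def reverse_int(n):
    # Peel off the last digit each iteration and append it to the result.
    result = 0
    while n > 0:
        result = result * 10 + n % 10
        n //= 10
    return result

print(gcd(12, 18))         # 6
print(reverse_int(12345))  # 54321
```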

• ## Data acquisition practice (IV): downloading answers to linear algebra exercises

Time: 2022-1-28

1. Overview. Some time ago I was reading the third edition of the linear algebra textbook “How to Learn Linear Algebra,” which many people recommend. Each chapter of this edition contains a large number of exercises. Although the official website provides the answers to the exercises by chapter, first, because the website is […]

• ## Exercises in Chapter 3 of Statistical Learning Methods

Time: 2021-12-15

Exercise 3.1: omitted. Exercise 3.2: According to the kd-tree constructed in Example 3.2, the nearest neighbor is $$(2,3)^T$$. Exercise 3.3: The k-nearest-neighbor method mainly requires building the corresponding kd-tree; here we use Python to implement kd-tree construction and search: `import heapq`, `import numpy as np`, `class KDNode: def __init__(self,` […]
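The kd-tree in this excerpt can be sketched as follows. This is a minimal construction-and-search sketch in plain Python, not the post's own code: the function names besides `KDNode` are my own, and the post's heapq-based k-nearest search is reduced here to a single nearest neighbor.

```python
class KDNode:
    def __init__(self, point, axis, left=None, right=None):
        self.point = point  # splitting point stored at this node
        self.axis = axis    # coordinate axis this node splits on
        self.left = left
        self.right = right

def build_kdtree(points, depth=0):
    # Recursively split on axes in rotation, using the median point.
    if not points:
        return None
    axis = depth % len(points[0])
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return KDNode(points[mid], axis,
                  build_kdtree(points[:mid], depth + 1),
                  build_kdtree(points[mid + 1:], depth + 1))

def nearest(node, target, best=None):
    # Depth-first descent with backtracking across the splitting plane.
    if node is None:
        return best
    dist = sum((a - b) ** 2 for a, b in zip(node.point, target))
    if best is None or dist < best[0]:
        best = (dist, node.point)
    diff = target[node.axis] - node.point[node.axis]
    near, far = (node.left, node.right) if diff < 0 else (node.right, node.left)
    best = nearest(near, target, best)
    if diff ** 2 < best[0]:  # the far side may still hold a closer point
        best = nearest(far, target, best)
    return best

# The six training points of the book's Example 3.2:
tree = build_kdtree([(2, 3), (5, 4), (9, 6), (4, 7), (8, 1), (7, 2)])
print(nearest(tree, (3, 4.5))[1])  # (2, 3), matching Exercise 3.2
```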

• ## Exercises in Chapter 4 of Statistical Learning Methods

Time: 2021-12-14

Exercise 4.1: Derive the prior and conditional probabilities of naive Bayes by the maximum likelihood estimation method. Suppose the data set is $$T = \{(x^{(1)}, y^{(1)}), (x^{(2)}, y^{(2)}), \dots, (x^{(M)}, y^{(M)})\}$$, and assume $$P(Y = c_k) = \theta_k$$, so that $$P(Y \ne c_k) = 1 - \theta_k$$. Suppose the number of samples in the data set taking the value $$c_k$$ […]
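A sketch of where this derivation ends up, under the binomial likelihood the excerpt sets up (the count symbol $m_k$ for the number of samples with label $c_k$ is my own notation, standing in for the truncated definition):

```latex
L(\theta_k) = \theta_k^{m_k}\,(1-\theta_k)^{M-m_k}
\quad\Rightarrow\quad
\frac{\partial \log L}{\partial \theta_k}
  = \frac{m_k}{\theta_k} - \frac{M-m_k}{1-\theta_k} = 0
\quad\Rightarrow\quad
\hat{\theta}_k = \frac{m_k}{M}
```

That is, the maximum likelihood estimate of the prior $P(Y=c_k)$ is simply the empirical class frequency.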

• ## Exercises in Chapter 5 of Statistical Learning Methods

Time: 2021-12-13

• ## Exercises in Chapter 7 of Statistical Learning Methods

Time: 2021-12-8

Exercise 7.1: The difference between the dual form of the perceptron and that of the support vector machine: the perceptron's dual form is obtained by expressing the parameters as the accumulated increments of its updates, while the support vector machine's dual is obtained by converting the constrained optimization problem into an unconstrained one through Lagrange duality. The original form of […]

• ## Exercises in Chapter 8 of Statistical Learning Methods

Time: 2021-12-5

Exercise 8.1: scikit-learn's `sklearn.ensemble.AdaBoostClassifier` can be used for model training; details omitted. Exercise 8.2: compare the models in a table:

| Model | Learning strategy | Loss function | Learning algorithm |
| --- | --- | --- | --- |
| Support vector machine | Minimize the regularized hinge loss (soft-margin maximization) | Hinge loss | Sequential minimal optimization (SMO) |
| AdaBoost | Minimize the exponential loss of an additive model […] |
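The exponential-loss view of AdaBoost in the table can be sketched without scikit-learn; this is a minimal discrete-AdaBoost sketch with decision stumps on 1-D data, my own illustration rather than the post's code. The ten-point data set is the one used in the book's worked AdaBoost example:

```python
import math

def train_adaboost(X, y, rounds=10):
    """Discrete AdaBoost with decision stumps on 1-D data.
    X: list of floats, y: list of +1/-1 labels.
    Returns a list of (alpha, threshold, polarity) weak learners."""
    n = len(X)
    w = [1.0 / n] * n                 # uniform initial sample weights
    learners = []
    for _ in range(rounds):
        # Choose the stump (threshold, polarity) with lowest weighted error.
        best = None
        for thr in sorted(set(X)):
            for pol in (1, -1):
                err = sum(wi for wi, xi, yi in zip(w, X, y)
                          if pol * (1 if xi > thr else -1) != yi)
                if best is None or err < best[0]:
                    best = (err, thr, pol)
        err, thr, pol = best
        err = max(err, 1e-10)         # guard against division by zero
        alpha = 0.5 * math.log((1 - err) / err)
        learners.append((alpha, thr, pol))
        # Reweight: misclassified points gain weight (exponential loss).
        w = [wi * math.exp(-alpha * yi * pol * (1 if xi > thr else -1))
             for wi, xi, yi in zip(w, X, y)]
        z = sum(w)
        w = [wi / z for wi in w]
    return learners

def predict(learners, x):
    s = sum(a * p * (1 if x > t else -1) for a, t, p in learners)
    return 1 if s >= 0 else -1

X = list(range(10))
y = [1, 1, 1, -1, -1, -1, 1, 1, 1, -1]
model = train_adaboost(X, y, rounds=3)
print([predict(model, x) for x in X])  # recovers y exactly after 3 rounds
```

Three rounds suffice here because the three chosen stumps (splits near 2.5, 8.5, and 5.5) match the book's worked example.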

• ## Exercises in Chapter 9 of Statistical Learning Methods

Time: 2021-12-2

Exercise 9.1: The EM algorithm alternates between an E-step and an M-step. In the E-step, compute the expectation: $$\mu_j^{(i+1)} = \frac{\pi^{(i)}(p^{(i)})^{y_j}(1-p^{(i)})^{1-y_j}}{\pi^{(i)}(p^{(i)})^{y_j}(1-p^{(i)})^{1-y_j} + (1 - \pi^{(i)})(q^{(i)})^{y_j}(1-q^{(i)})^{1-y_j}}$$ In the M-step, update the parameters by maximum likelihood: $$\pi^{(i+1)} = \frac{1}{n} \sum_{j=1}^{n} \mu_j^{(i+1)}$$, $$p^{(i+1)} = \frac{\sum_{j=1}^{n} \mu_j^{(i+1)} y_j}{\sum_{j=1}^{n} \mu_j^{(i+1)}}$$, $$q^{(i+1)} = \frac{\sum_{j=1}^{n} (1-\mu_j^{(i+1)}) y_j}{\sum_{j=1}^{n} (1 - \mu_j^{(i+1)})}$$ First iteration, E-step: the observed value is […]
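The E-step and M-step formulas above translate directly into code; a minimal sketch for the three-coin model (the function name is my own, and the observations and initial values $\pi = p = q = 0.5$ are the standard setup of the book's Example 9.1):

```python
def em_three_coins(y, pi, p, q, iterations=10):
    """EM for the three-coin model: coin A (heads prob pi) selects
    coin B (heads prob p) or coin C (heads prob q); y are 0/1 outcomes."""
    n = len(y)
    for _ in range(iterations):
        # E-step: posterior probability that each toss came from coin B.
        mu = []
        for yj in y:
            b = pi * p**yj * (1 - p)**(1 - yj)
            c = (1 - pi) * q**yj * (1 - q)**(1 - yj)
            mu.append(b / (b + c))
        # M-step: maximum likelihood updates given the posteriors.
        pi = sum(mu) / n
        p = sum(m * yj for m, yj in zip(mu, y)) / sum(mu)
        q = sum((1 - m) * yj for m, yj in zip(mu, y)) / sum(1 - m for m in mu)
    return pi, p, q

# Ten observations from the book's Example 9.1:
y = [1, 1, 0, 1, 0, 0, 1, 0, 1, 1]
print(em_three_coins(y, 0.5, 0.5, 0.5))  # converges to pi=0.5, p=0.6, q=0.6
```

With these symmetric initial values EM converges in a single step, illustrating that the EM result depends on the starting point.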

• ## Exercises in Chapter 10 of Statistical Learning Methods

Time: 2021-11-29