Tag:Derivatives

  • Tensor analysis and differential geometry

    Time:2022-6-10

    The topic is vast, but this write-up is necessarily limited. Tensor analysis is actually very interesting: it looks advanced, impressive and highly technical, yet it is easier to understand if you simply call it the derivative of a vector field. Likewise, differential geometry can intimidate people by its name alone. In […]
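
    The excerpt does not show the definition; as a minimal sketch of the "derivative of a vector field" reading, here is the directional derivative of a vector field $F$ on $\mathbb{R}^n$ along a vector $v$ (my notation, not the article's):

    $$ \nabla_v F(p) = \lim_{t \to 0} \frac{F(p + t v) - F(p)}{t} = J_F(p)\, v, $$

    where $J_F(p)$ is the Jacobian of $F$ at $p$; the covariant derivative of differential geometry generalizes this idea to curved spaces.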

  • Machine learning algorithm series (0) – basic knowledge

    Time:2022-4-25

    Background knowledge required for reading this article: prerequisite mathematical knowledge. 1. Introduction: artificial intelligence (AI) plays an increasingly important role in modern life. Voice assistants, colour restoration of old movies, and the intelligent recommendations of e-commerce sites such as Taobao and JD.com are all inseparable from AI, […]

  • Python high-dimensional statistical modeling and variable selection: comparing the SCAD (smoothly clipped absolute deviation) penalty with the LASSO penalty function

    Time:2022-4-18

    Original link: http://tecdat.cn/?p=24940  Variable selection is an important part of high-dimensional statistical modeling. Many popular variable selection methods, such as the LASSO, are biased. Regression with a smoothly clipped absolute deviation (SCAD) regularization term attempts to alleviate this bias while retaining a continuous, sparsity-inducing penalty. Penalized least […]
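
    The excerpt cuts off before defining the penalty; for reference (not quoted from the article itself), the standard SCAD penalty of Fan & Li (2001) on a coefficient magnitude $\theta = |\beta_j|$ is

    $$
    p_\lambda(\theta) =
    \begin{cases}
    \lambda\theta, & \theta \le \lambda,\\[4pt]
    \dfrac{2a\lambda\theta - \theta^2 - \lambda^2}{2(a-1)}, & \lambda < \theta \le a\lambda,\\[4pt]
    \dfrac{(a+1)\lambda^2}{2}, & \theta > a\lambda,
    \end{cases}
    \qquad a > 2 \ (\text{commonly } a = 3.7).
    $$

    Unlike the LASSO penalty $\lambda|\beta_j|$, which keeps growing and shrinks large coefficients, the SCAD penalty levels off, so large coefficients are left nearly unbiased while small ones are still set to zero.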

  • Back propagation algorithm

    Time:2022-3-12

    1. Overview: the learning and fitting capacity of a multi-layer network is much stronger than that of a single-layer network. Therefore, to train multi-layer networks, the simple perceptron learning rule is clearly insufficient and a more powerful algorithm is needed. The back-propagation (BP) algorithm is one of the classical methods. It can […]
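
    The excerpt stops before the algorithm itself; the following is a minimal NumPy sketch of back propagation for a one-hidden-layer network with sigmoid activations and squared-error loss on the XOR problem. The network shape, activation, loss and learning rate are illustrative assumptions, not the article's.

    ```python
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Toy data: XOR, which a single-layer perceptron cannot fit.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # input -> hidden
    W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden -> output
    lr = 1.0

    for step in range(10000):
        # Forward pass
        h = sigmoid(X @ W1 + b1)            # hidden activations
        out = sigmoid(h @ W2 + b2)          # network output
        # Backward pass (chain rule through the sigmoids, squared-error loss)
        d_out = (out - y) * out * (1 - out)      # error at the output pre-activation
        d_h = (d_out @ W2.T) * h * (1 - h)       # error propagated back to the hidden layer
        # Gradient-descent parameter updates
        W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)

    print(out.round(3))  # outputs should approach [0, 1, 1, 0]
    ```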

  • [MindSpore: machine learning with Xiao Mi] Learning neural networks (Part 2)

    Time:2022-3-3

    In last week's installment, Xiao Mi explained how to use the back-propagation algorithm to compute the derivatives of the cost function. Today, Xiao Mi continues with the implementation of the neural network. Without further ado, let's learn it together~ 4. Implementation note: […]
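
    One implementation detail that usually accompanies this step is verifying the back-propagation derivatives numerically; the excerpt is cut off, so whether the article covers it is an assumption. A minimal sketch of such a gradient check:

    ```python
    import numpy as np

    def numerical_gradient(cost, theta, eps=1e-4):
        """Central-difference approximation of dJ/dtheta, one parameter at a time."""
        grad = np.zeros_like(theta)
        for i in range(theta.size):
            step = np.zeros_like(theta)
            step[i] = eps
            grad[i] = (cost(theta + step) - cost(theta - step)) / (2 * eps)
        return grad

    # Example: compare against an analytic gradient for J(theta) = sum(theta**2).
    theta = np.array([1.0, -2.0, 0.5])
    analytic = 2 * theta
    numeric = numerical_gradient(lambda t: np.sum(t ** 2), theta)
    print(np.max(np.abs(analytic - numeric)))  # should be tiny
    ```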

  • 0501-Variable

    Time:2022-2-9

    0501-Variable. Contents: 1. Variable; 1.1 the data structure of Variable; 1.2 back propagation; 1.3 autograd differentiation vs. manual differentiation. Complete PyTorch tutorial directory: https://www.cnblogs.com/nickchen121/p/14662511.html  1. Variable. 1.1 The data structure of Variable: the core data structure of the autograd module is Variable, which wraps a tensor and records the operations applied to it, and is used to […]
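
    In current PyTorch, Variable has been merged into Tensor: a tensor created with requires_grad=True plays the same role. As a minimal sketch of autograd differentiation versus a manual derivative (the example function is mine, not the tutorial's):

    ```python
    import torch

    # A tensor with requires_grad=True records the operations applied to it.
    x = torch.tensor(3.0, requires_grad=True)
    y = x ** 2 + 2 * x          # autograd builds the computation graph here

    y.backward()                # back propagation through the recorded graph
    print(x.grad)               # autograd derivative: dy/dx = 2x + 2 = 8

    manual = 2 * x.detach() + 2 # manual derivative for comparison
    print(manual)               # also 8
    ```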

  • 003.00 supervised learning

    Time:2022-1-15

    003.00 Supervised learning. Filing date: September 17, 2019. Update date: none. Relevant software: Win 10, Python 3.7.2. Note: all content may be quoted freely, just credit the source and author; if there are errors or inappropriate wording in this article, please point them out. Subject: 003.00 supervised learning. Preface: seeing artificial intelligence, machine learning, […]

  • Math — details

    Time:2022-1-3

    word: multipl   Details: think about what the derivative and integral formulas really are. For example, $(\sqrt{x})' = \frac{1}{2\sqrt{x}}$ and $(\frac{1}{x})' = -\frac{1}{x^2}$. You can rewrite such quotients without fractions, as with trigonometric functions, and when both denominators involve x (but be careful at […]
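
    As a quick check of the two derivatives quoted above, rewriting each function as a power of $x$ and applying the power rule $(x^n)' = n x^{n-1}$ gives:

    $$
    (\sqrt{x})' = \left(x^{1/2}\right)' = \tfrac{1}{2} x^{-1/2} = \frac{1}{2\sqrt{x}},
    \qquad
    \left(\frac{1}{x}\right)' = \left(x^{-1}\right)' = -x^{-2} = -\frac{1}{x^2}.
    $$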

  • Gradient descent method for machine learning

    Time:2021-10-25

    Suppose we had the time and computational resources to compute the loss for every possible value of W1. For the regression problems we have been studying, the plot of loss versus W1 is always convex. In other words, the plot is always bowl-shaped, as shown in the following figure: the loss versus weight […]
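
    The excerpt ends before the update rule; as a minimal sketch of gradient descent on a convex, bowl-shaped loss in a single weight W1 (the quadratic loss and learning rate here are illustrative assumptions, not the article's):

    ```python
    # Gradient descent on a 1-D convex (bowl-shaped) loss: L(w1) = (w1 - 3)^2.
    def loss(w1):
        return (w1 - 3.0) ** 2

    def grad(w1):
        return 2.0 * (w1 - 3.0)          # dL/dw1

    w1 = 0.0                             # arbitrary starting point
    learning_rate = 0.1
    for step in range(50):
        w1 -= learning_rate * grad(w1)   # step against the gradient

    print(w1, loss(w1))                  # w1 converges toward the minimum at 3.0
    ```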

  • Sparsity regularization in machine learning: L1 regularization

    Time:2021-10-8

    Sparse vectors usually have many dimensions, and creating feature combinations produces even more. With such high-dimensional feature vectors, the model can become very large and require a lot of RAM. For high-dimensional sparse vectors it is best to drive weights to exactly 0 wherever possible. A weight of […]
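
    The excerpt does not show the penalty itself; as a standard reference (not quoted from the article), L1 regularization adds the sum of absolute weights to the training loss,

    $$
    J(w) = \text{Loss}(w) + \lambda \sum_i |w_i|,
    $$

    and in the simplest one-dimensional quadratic case $\text{Loss}(w) = \tfrac{1}{2}(w - \hat{w})^2$ the minimizer is the soft-thresholding rule $w^\star = \operatorname{sign}(\hat{w})\,\max(|\hat{w}| - \lambda, 0)$, which sets small weights to exactly 0, unlike L2 regularization, which only shrinks them toward 0.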

  • Machine learning prerequisites (I): mathematical symbols and Greek letters

    Time:2021-10-7

    This article is part of the machine learning prerequisites tutorial series. It lists the mathematical notation commonly used in machine learning, covering algebra, calculus, linear algebra, probability theory, set theory, statistics and Greek letters. Algebra (symbol, name, description, example): (f∘g), composite function, nested function, (f∘g)(x)=f(g(x)); ∆, delta, change / difference, ∆x = x_1 − x_0; e, Euler's number, e = 2.718281828; $ […]

  • C++ assignment: computing derivatives without library functions

    Time:2021-6-29

    Differentiation of a continuous function of one variable. 1) Write a function to compute $f(x) = \sin x$. 2) Write a function to compute $f(x) = 2x + 1$. 3) Write a function to compute $f(x) = \frac{1}{1 + e^{-x}}$ (hint: use the exp function). 4) Define a function pointer type which can […]
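
    Since the assignment computes derivatives numerically rather than symbolically, the definition it most likely relies on (an assumption, as the excerpt is cut off) is a difference quotient, e.g. the central-difference approximation for a small step $h$:

    $$
    f'(x) \approx \frac{f(x + h) - f(x - h)}{2h},
    $$

    which approximates the true derivative with error of order $h^2$ for smooth $f$.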