Do neural networks have anything to do with Fourier transforms?

Time: 2022-09-23

Machine learning and deep learning models are built from mathematical functions. From data analysis to predictive modeling, mathematical principles generally underpin the work: Euclidean distance, for example, is used to detect clusters in clustering algorithms.

The Fourier transform is a mathematical method of transforming a function from one domain to another, and it can also be applied to deep learning.

This article will discuss the Fourier transform and how it can be used in the field of deep learning.

What is the Fourier Transform?

In mathematics, transformation techniques map a function from its original function space into a different one. The Fourier transform is one such technique: it transforms a function from the time domain to the frequency domain. Taking an audio wave as an example, the Fourier transform represents it in terms of the volume and frequency of its notes.

We can say that the Fourier transform of any function is itself a function of frequency, where the magnitude of the resulting function represents the frequencies contained in the original function.

Let’s take an example of a signal whose time domain function looks like this:

[Figure: signal A(n) in the time domain]

Now take part of another signal over the same time frame:

[Figure: signal B(n) over the same time frame]

Call these two signals A(n) and B(n), where n is the time domain. So if we add these signals, the structure of the signal will look like this:

C(n) = A(n) + B(n)

[Figure: the summed signal C(n) in the time domain]

As you can see, addition simply sums the two signals point by point. If we try to extract signal A or B from the combined signal C, we run into a problem: the amplitudes have been added at every instant in time, so the individual components are no longer visible in the time domain. In the frequency domain, however, they remain separate:

[Figure: the summed signal in the frequency domain, where each component appears as a distinct peak]

As you can see in the image above, the frequency domain can easily highlight the differences between the signals. If we wish to convert these signals back to the time domain, we can use the inverse Fourier transform.

Mathematical Principles of the Fourier Transform

The basis of the Fourier transform is that a signal in the time domain can be represented by a sum of sinusoids. If the function is a continuous periodic signal, the function f can be expressed as:

f(t) = a_0/2 + Σ_{k=1}^{∞} ( a_k cos(2πkt/T) + b_k sin(2πkt/T) )

You can see that the function is composed of an infinite sum of sinusoids, which we can think of as a representation of the signal, and that two sets of coefficients, a_k and b_k, define the structure of the output signal.

These coefficients are obtained by solving the Fourier integral, which is essentially a function of frequency; the result of the Fourier transform can be thought of as the set of coefficients. Mathematically it can be expressed as follows:

F(w) = ∫_{-∞}^{+∞} f(t) e^{-iwt} dt

And the inverse of this function converts a frequency-domain function back into a time-domain function; this is the inverse Fourier transform:

f(t) = (1/2π) ∫_{-∞}^{+∞} F(w) e^{iwt} dw

Solving the integrals above yields the values of a and b. So far we have been discussing continuous signals, but in practice most problems involve discretely sampled signals. To find the coefficients of such a signal, we need the Discrete Fourier Transform (DFT).

The DFT operates on a sequence of equally spaced samples and produces a frequency-domain sequence of the same length. The coefficients of the function f(t) given above can be obtained from the following transform:

F[k] = Σ_{n=0}^{N-1} f[n] e^{-i2πkn/N},  k = 0, 1, …, N-1

The values of a and b will be:

a_k = (2/N) Σ_{n=0}^{N-1} f[n] cos(2πkn/N)

b_k = (2/N) Σ_{n=0}^{N-1} f[n] sin(2πkn/N)

Substituting the terms a and b back into f(t) reconstructs the signal, while the coefficients themselves describe the signal in the frequency domain.
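As a quick sanity check of these formulas, here is a minimal sketch (the helper dft_coefficients and the test signal are our own, not from the original article) that evaluates the sums for a_k and b_k directly and compares them against NumPy's FFT:

import numpy as np

def dft_coefficients(f):
    # naive O(N^2) evaluation of the a_k and b_k sums given above
    N = len(f)
    n = np.arange(N)
    a = np.array([2.0/N * np.sum(f * np.cos(2*np.pi*k*n/N)) for k in range(N)])
    b = np.array([2.0/N * np.sum(f * np.sin(2*np.pi*k*n/N)) for k in range(N)])
    return a, b

# sample one period of a simple test signal
N = 64
n = np.arange(N)
f = np.sin(2*np.pi*5*n/N) + 0.5*np.cos(2*np.pi*9*n/N)

a, b = dft_coefficients(f)

# the same coefficients hide inside the complex-valued FFT:
# a_k = 2/N * Re(F[k]) and b_k = -2/N * Im(F[k])
F = np.fft.fft(f)
print(np.allclose(a, 2.0/N * F.real))   # True
print(np.allclose(b, -2.0/N * F.imag))  # True

The FFT computes exactly these sums, just in O(N log N) time instead of O(N^2), which is why it is the workhorse in the code below.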

Fourier Transform with Python

Python's scipy module provides the transform routines we need, so we can use it directly:

import numpy as np
import matplotlib.pyplot as plt
from scipy.fft import fft, fftfreq

Generate a signal that is the sum of two sine waves:

# number of sample points
N = 1200

# sample spacing
T = 1.0 / 1600.0

x = np.linspace(0.0, N*T, N, endpoint=False)
# a 50 Hz sine wave plus a weaker 80 Hz sine wave
# (named sig rather than sum to avoid shadowing the Python builtin)
sig = np.sin(50.0 * 2.0*np.pi*x) + 0.5*np.sin(80.0 * 2.0*np.pi*x)

plt.plot(x, sig)
plt.title('Sum of two sine waves')
plt.xlabel('Time [s]')
plt.ylabel('Amplitude')
plt.grid(True, which='both')
plt.show()

[Figure: the generated signal in the time domain]

In the output above you can see the signal generated using NumPy; it can now be transformed using the fft module of the scipy library.

sigf = fft(sig)
xf = fftfreq(N, T)[:N//2]
# plot the single-sided magnitude spectrum
plt.plot(xf, 2.0/N * np.abs(sigf[0:N//2]))
plt.title("FFT of sum of two sines")
plt.xlabel('Frequency [Hz]')
plt.ylabel('Amplitude')
plt.show()

[Figure: the FFT of the signal, with peaks at 50 Hz and 80 Hz]

The frequencies of the component waves, 50 Hz and 80 Hz, are now clearly visible. They are not obvious in the time-domain plot; the difference only becomes clear in the frequency-domain representation.
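Continuing the script above, we can also go back the other way. As a hedged illustration (the 60 Hz cutoff is an arbitrary choice for this example), zeroing out one peak in the spectrum and applying the inverse FFT recovers the other component from the summed signal:

from scipy.fft import ifft

freqs = fftfreq(N, T)            # full frequency axis, positive and negative
mask = np.abs(freqs) < 60.0      # keep only the 50 Hz component
recovered = ifft(fft(sig) * mask).real

plt.plot(x, recovered)
plt.title('50 Hz component recovered with the inverse FFT')
plt.xlabel('Time [s]')
plt.ylabel('Amplitude')
plt.show()

This is exactly the separation problem from the A(n) + B(n) example earlier: impossible to do by inspection in the time domain, trivial in the frequency domain.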

With the introduction above, the basics of the Fourier transform should be clear, but what does it have to do with neural networks? The Fourier transform approximates a function as a combination of sinusoids, and neural networks can likewise approximate arbitrary functions. We will cover the relationship between the two in the remainder of this article.

What is the relationship between neural networks and Fourier transforms?

The Fourier transform can be thought of as a function that helps approximate other functions, and we also know that neural networks can be thought of as universal function approximators.

[Figure: schematic of a neural network built around the Fourier transform]

The image above depicts a neural network that uses the Fourier transform. The goal of a basic neural network is to approximate an unknown function and its values at particular points. Most neural networks learn an entire function, or its values at specified points in the data, and a Fourier network does the same: it iteratively finds parameters, in this case the coefficients of sinusoids, that approximate the target function, as the sketch below shows.
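Here is a minimal sketch of that idea (the square-wave target, the number of harmonics K, the learning rate, and all names are our own assumptions, not from the original article): we treat the Fourier coefficients as trainable weights and fit them by gradient descent, just as a network learns its weights.

import numpy as np

# target function sampled over one period
t = np.linspace(0.0, 1.0, 256, endpoint=False)
target = np.sign(np.sin(2*np.pi*t))          # a square wave

# design matrix of sinusoidal basis functions (K harmonics)
K = 10
basis = np.concatenate(
    [np.cos(2*np.pi*k*t)[:, None] for k in range(K)] +
    [np.sin(2*np.pi*k*t)[:, None] for k in range(1, K)], axis=1)

# learn the coefficients by gradient descent on the squared error
coef = np.zeros(basis.shape[1])
lr = 0.01
for _ in range(2000):
    residual = basis @ coef - target
    coef -= lr * (basis.T @ residual) / len(t)

approx = basis @ coef
print(np.mean((approx - target)**2))  # error shrinks as K grows

The learned coef are just the Fourier coefficients a_k and b_k from earlier, recovered by iteration instead of by integration.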

Fourier Transform in Convolutional Neural Networks

The convolutional layer is the basic building block of a convolutional neural network. The job of any convolutional layer is to apply a filter (convolution kernel) to the input data or feature map, convolving the output of the previous layer; the task of the layer is to learn the weights of these filters. In a complex convolutional neural network there are many layers, each with many filters, which makes the computation very expensive.

Using the Fourier transform, this computation can be converted to an element-wise product in the frequency domain. The task of the network stays the same, but the computational work can be reduced.

To sum up, the process carried out by a convolutional layer is closely related to the Fourier transform: convolution in the time domain corresponds to multiplication in the frequency domain. We can understand convolution easily through polynomial multiplication.

Suppose we have two functions y and g defined for any value of x as follows:

y(x) = ax + b

g(x) = cx + d

The polynomial multiplication of these functions can be written as a function h:

h(x) = y(x).g(x)

= (ax + b)(cx + d)

= ac x² + (ad+bc) x + bd

Note that the coefficients of h, namely (ac, ad+bc, bd), are exactly the convolution of the coefficient vectors (a, b) and (c, d): polynomial multiplication is convolution of coefficients. To sum up, the convolutional layer's process can be defined as a product of functions like the ones above. In vector form the functions can be written as:

y[n] = ax[n] + b

g[n] = cx[n] + d

and their convolution, in vector form, is:

h[n] = y[n] X g[n]

H[w] = F(y[n]) ‧ F(g[n]) = Y[w] ‧ G[w]

h[n] = F^-1(H[w])

where:

  • The symbol “‧” denotes element-wise multiplication, and “X” denotes convolution.
  • F and F^-1 are the Fourier transform and the inverse Fourier transform, respectively.
  • “n” and “w” are the time domain and frequency domain, respectively.

To sum up: a convolution in the time domain ultimately amounts to a multiplication in the frequency domain, wrapped between a Fourier transform and its inverse.
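A minimal numerical check of this identity (the values chosen for a, b, c, d are arbitrary): multiplying the polynomials y(x) = ax + b and g(x) = cx + d is the same as convolving their coefficient vectors, and both agree with the FFT route F^-1(F(y) ‧ F(g)), provided we zero-pad to the full output length:

import numpy as np

a, b, c, d = 2.0, 3.0, 5.0, 7.0

y = np.array([a, b])          # coefficients of y(x) = ax + b
g = np.array([c, d])          # coefficients of g(x) = cx + d

# direct convolution: coefficients of ac x^2 + (ad + bc) x + bd
h_direct = np.convolve(y, g)

# convolution theorem: multiply in the frequency domain, then invert;
# zero-padding to the output length avoids circular wrap-around
L = len(y) + len(g) - 1
h_fft = np.fft.ifft(np.fft.fft(y, L) * np.fft.fft(g, L)).real

print(h_direct)                       # [10. 29. 21.] = ac, ad+bc, bd
print(np.allclose(h_direct, h_fft))   # True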

How to use Fourier Transform in Deep Learning?

In the previous section we saw that a convolution in the time domain can be treated as a multiplication in the frequency domain. This suggests that the Fourier transform can be used in a variety of deep learning algorithms, and in many static predictive modeling algorithms as well.

Let’s look at a similar convolutional neural network example so that we don’t stray from the subject of this article.

A convolution performs its computation in the time domain, while the Fourier transform lets us perform the equivalent computation as a multiplication in the frequency domain.

[Figure: convolution in the time domain vs. multiplication in the frequency domain]

In order to apply the Fourier transform in any convolutional neural network, we can make some changes to the inputs and filters.

If the input matrix and the filter matrix of a CNN are converted to the frequency domain, the convolution becomes an element-wise multiplication there, and converting the resulting matrix back to the time domain has no impact on the accuracy of the algorithm. The transform from the time domain to the frequency domain can be done with the Fourier transform or the fast Fourier transform (FFT), and the transform back with the inverse (fast) Fourier transform.

The figure below shows how we can use the Fast Fourier Transform instead of convolution.

[Figure: replacing convolution with an FFT, element-wise multiplication, and inverse FFT]

As discussed, the number of filters and layers in any complex network is very high, and as these numbers grow, direct convolution becomes very slow to compute. Using the Fourier transform can reduce this computational complexity and make the model run faster.
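As an illustration (a generic SciPy sketch, not the implementation of any particular framework), scipy.signal offers both a direct 2-D convolution and an FFT-based one, and the FFT route wins as the feature map and kernel grow:

import numpy as np
from scipy.signal import convolve2d, fftconvolve

rng = np.random.default_rng(0)
feature_map = rng.standard_normal((256, 256))   # stand-in for a layer input
kernel = rng.standard_normal((31, 31))          # stand-in for a learned filter

direct = convolve2d(feature_map, kernel, mode='same')
via_fft = fftconvolve(feature_map, kernel, mode='same')

# the two routes agree up to floating-point error
print(np.allclose(direct, via_fft))   # True

Timing the two calls (for instance with timeit) shows the gap: direct convolution scales with the product of image and kernel sizes, while the FFT version scales roughly as N log N.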

If you are interested in the ideas in this article, try them yourself, and feel free to leave a message for discussion.

https://avoid.overfit.cn/post/c7fa2a15c85d4192bbab1d98dcbdb882

Author: Lorenzo Castagno