# Five simple steps to master TensorFlow tensors

Date: 2021-07-28

Author | Orhan G. Yalçın
Compiled by VK
Source: Towards Data Science

If you are reading this article, I believe we share similar interests and are (or will be) working in similar industries.

In this article, we will delve into the details of TensorFlow tensors. We will cover everything related to TensorFlow's tensors in the following five simple steps:

• Step 1: Definition of tensors → What is a tensor?

• Step 2: Creating tensors → Functions for creating tensor objects

• Step 3: Characteristics of tensor objects

• Step 4: Tensor operations → Indexing, basic tensor operations, shape manipulation, broadcasting

• Step 5: Special tensors

### Definition of tensors: what is a tensor?

Tensors are TensorFlow's homogeneous multidimensional arrays. They are very similar to NumPy arrays, and they are immutable: once created, they cannot be changed, and the only way to "edit" one is to create a new copy.

Let's see how tensors work with a code example. But first, to work with TensorFlow objects, we need to import the TensorFlow library. We often use NumPy together with TensorFlow, so we can import it as well with the following lines:

```
import tensorflow as tf
import numpy as np
```
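With the imports in place, we can quickly verify the immutability mentioned above. This small sketch (not from the original article) shows that item assignment on a tensor fails, and that "editing" really means building a new tensor from the old values:

```python
import tensorflow as tf

x = tf.constant([1, 2, 3])

# EagerTensor objects do not support item assignment;
# trying to modify one in place raises a TypeError.
try:
    x[0] = 99
except TypeError as e:
    print("Tensors are immutable:", e)

# To "change" a tensor, build a new one from the old values.
y = tf.concat([tf.constant([99]), x[1:]], axis=0)
print(y.numpy())  # [99  2  3]
```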

### Tensor creation: creating tensor objects

There are several ways to create tf.Tensor objects. Let's start with a few examples. You can create tensor objects with multiple TensorFlow functions, as shown below:

```
# You can use the tf.constant function to create a tf.Tensor object:
x = tf.constant([[1, 2, 3, 4, 5]])
# You can use the tf.ones function to create a tf.Tensor object:
y = tf.ones((1, 5))
# You can use the tf.zeros function to create a tf.Tensor object:
z = tf.zeros((1, 5))
# You can use the tf.range function to create a tf.Tensor object:
q = tf.range(start=1, limit=6, delta=1)

print(x)
print(y)
print(z)
print(q)
```
```
Output:

tf.Tensor([[1 2 3 4 5]], shape=(1, 5), dtype=int32)
tf.Tensor([[1. 1. 1. 1. 1.]], shape=(1, 5), dtype=float32)
tf.Tensor([[0. 0. 0. 0. 0.]], shape=(1, 5), dtype=float32)
tf.Tensor([1 2 3 4 5], shape=(5,), dtype=int32)
```

As you can see, we used three different functions to create tensor objects of shape (1, 5), and the tf.range() function to create a fourth tensor object of shape (5,). Note that tf.ones and tf.zeros take the shape as a required argument, because their element values are predetermined.
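Since we imported NumPy alongside TensorFlow, it is worth adding (as a side note to the original list) that tensors can also be created from existing NumPy arrays with tf.convert_to_tensor, and converted back with .numpy():

```python
import numpy as np
import tensorflow as tf

# A NumPy array can be turned into a tensor...
arr = np.array([[1, 2, 3], [4, 5, 6]])
t = tf.convert_to_tensor(arr)
print(t.shape)  # (2, 3)

# ...and a tensor back into a NumPy array with .numpy()
back = t.numpy()
print(type(back).__name__)  # ndarray
```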

### Characteristics of tensor objects

tf.Tensor objects have several characteristics. First, they have a number of dimensions. Second, they have a shape: a list of the lengths of each dimension. All tensors also have a size, the total number of elements in the tensor. Finally, all of their elements share a single uniform data type. Let's take a closer look at each of these features.

#### dimension

Tensors are classified according to their dimensions:

• Rank-0 (scalar) tensor: a tensor holding a single value and no axes (0-dimensional);

• Rank-1 tensor: a tensor holding a list of values along a single axis (1-dimensional);

• Rank-2 tensor: a tensor with 2 axes (2-dimensional); and

• Rank-N tensor: a tensor with N axes (N-dimensional).

For example, we can create a rank-3 tensor by passing a three-level nested list object to tf.constant. For this example, we split the numbers into a nested list three levels deep, with three elements in each innermost list:

```
three_level_nested_list = [[[0, 1, 2],
                            [3, 4, 5]],
                           [[6, 7, 8],
                            [9, 10, 11]]]
rank_3_tensor = tf.constant(three_level_nested_list)
print(rank_3_tensor)
```

```
Output:
tf.Tensor(
[[[ 0  1  2]
  [ 3  4  5]]

 [[ 6  7  8]
  [ 9 10 11]]], shape=(2, 2, 3), dtype=int32)
```

We can check the number of dimensions our rank_3_tensor object currently has with the .ndim attribute:

```
tensor_ndim = rank_3_tensor.ndim
print("The number of dimensions in our Tensor object is", tensor_ndim)
```

```
Output:
The number of dimensions in our Tensor object is 3
```

#### shape

The shape is another property of every tensor. It shows the size of each dimension as a list. We can view the shape of the rank_3_tensor object we created, using the .shape attribute, as follows:

```
tensor_shape = rank_3_tensor.shape
print("The shape of our Tensor object is", tensor_shape)
```

```
Output:
The shape of our Tensor object is (2, 2, 3)
```

As you can see, our tensor has two elements in the first layer, two elements in the second layer and three elements in the third layer.

#### size

Size is another feature of tensors: it is the total number of elements a tensor holds. Unlike the previous features, size is not exposed as an attribute of the tensor object; instead, we need the tf.size function. Finally, we call the .numpy() instance method on the result to get a more readable output:

```
tensor_size = tf.size(rank_3_tensor).numpy()
print("The size of our Tensor object is", tensor_size)
```

```
Output:
The size of our Tensor object is 12
```

#### data type

Tensors usually contain numeric data types, such as floating-point and integer, but they may also contain many other data types, such as complex numbers and strings.

However, each tensor object must store all of its elements in a single uniform data type. Therefore, we can use the .dtype attribute to view the data type chosen for a particular tensor object, as follows:

```
tensor_dtype = rank_3_tensor.dtype
print("The data type selected for this Tensor object is", tensor_dtype)
```

```
Output:
The data type selected for this Tensor object is <dtype: 'int32'>
```

### Tensor operation

#### Indexing

An index is a numeric representation of the position of an item in a sequence. This sequence can refer to many things: a list, a string, or any sequence of values.

TensorFlow follows standard Python indexing rules, which are similar to list indexing or NumPy array indexing:

1. Indices start at zero (0).

2. A negative index ("-n") means counting backwards from the end.

3. Colons (":") are used for slicing: start:stop:step.

4. Commas (",") are used to reach deeper levels.

Let's create a rank_1_tensor with the following lines:

```
single_level_nested_list = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
rank_1_tensor = tf.constant(single_level_nested_list)
print(rank_1_tensor)
```

```
Output:
tf.Tensor([ 0  1  2  3  4  5  6  7  8  9 10 11], shape=(12,), dtype=int32)
```

Test our rules 1, 2, 3:

```
# Rule 1, indices start at 0
print("First element is:",
      rank_1_tensor[0].numpy())

# Rule 2, negative index
print("Last element is:",
      rank_1_tensor[-1].numpy())

# Rule 3, slicing
print("Elements in between the 1st and the last are:",
      rank_1_tensor[1:-1].numpy())
```

```
Output:
First element is: 0
Last element is: 11
Elements in between the 1st and the last are: [ 1  2  3  4  5  6  7  8  9 10]
```
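Rule 3 also allows a step value, which the examples above do not exercise. As a small supplement (not in the original article), here is a sketch of slicing with a step:

```python
import tensorflow as tf

rank_1_tensor = tf.constant([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11])

# start:stop:step — every second element
print(rank_1_tensor[::2].numpy())   # [ 0  2  4  6  8 10]

# a negative step walks the tensor backwards
print(rank_1_tensor[::-1].numpy())  # [11 10  9 ...  1  0]
```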

Now, let's create a rank_2_tensor with the following code:

```
two_level_nested_list = [[0, 1, 2, 3, 4, 5], [6, 7, 8, 9, 10, 11]]
rank_2_tensor = tf.constant(two_level_nested_list)
print(rank_2_tensor)
```

```
Output:
tf.Tensor(
[[ 0  1  2  3  4  5]
 [ 6  7  8  9 10 11]], shape=(2, 6), dtype=int32)
```

And test rule 4 with several examples:

```
print("The 1st element of the first level is:",
      rank_2_tensor[0].numpy())

print("The 2nd element of the first level is:",
      rank_2_tensor[1].numpy())

# Rule 4, commas represent going deeper
print("The 1st element of the second level is:",
      rank_2_tensor[0, 0].numpy())

print("The 3rd element of the second level is:",
      rank_2_tensor[0, 2].numpy())
```
```
Output:
The 1st element of the first level is: [0 1 2 3 4 5]
The 2nd element of the first level is: [ 6  7  8  9 10 11]
The 1st element of the second level is: 0
The 3rd element of the second level is: 2
```

Now that we’ve introduced the basics of indexing, let’s take a look at the basic operations we can do on tensors.

#### Basic tensor operations

You can easily perform basic mathematical operations on tensors, such as:

1. Addition

2. Element-wise multiplication

3. Matrix multiplication

4. Finding the maximum or minimum value

5. Finding the index of the max element

6. Computing softmax values

Let’s look at these operations. We will create two tensor objects and apply these operations.

```
a = tf.constant([[2, 4],
                 [6, 8]], dtype=tf.float32)
b = tf.constant([[1, 3],
                 [5, 7]], dtype=tf.float32)
```

Addition:

```
# We can use the tf.add() function and pass the tensors as arguments.
add_tensors = tf.add(a, b)
print(add_tensors)
```

```
Output:
tf.Tensor(
[[ 3.  7.]
 [11. 15.]], shape=(2, 2), dtype=float32)
```

Element-wise multiplication:

```
# We can use the tf.multiply() function and pass the tensors as arguments.
multiply_tensors = tf.multiply(a, b)
print(multiply_tensors)
```

```
Output:
tf.Tensor(
[[ 2. 12.]
 [30. 56.]], shape=(2, 2), dtype=float32)
```

Matrix multiplication:

```
# We can use the tf.matmul() function and pass the tensors as arguments.
matmul_tensors = tf.matmul(a, b)
print(matmul_tensors)
```
```
Output:
tf.Tensor(
[[22. 34.]
 [46. 74.]], shape=(2, 2), dtype=float32)
```

Note: the matmul operation is at the core of deep learning algorithms. Therefore, although you may never use matmul directly, it is important to understand this operation.
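As a small addition to the article, tf.matmul also has an operator form: Python's @ operator dispatches to matrix multiplication for tensors, so both spellings below compute the same product:

```python
import tensorflow as tf

a = tf.constant([[2, 4], [6, 8]], dtype=tf.float32)
b = tf.constant([[1, 3], [5, 7]], dtype=tf.float32)

# These two expressions are equivalent:
print(tf.matmul(a, b).numpy())
print((a @ b).numpy())
```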

Other examples of operations listed above:

```
# With the tf.reduce_max() and tf.reduce_min() functions, we can find
# the maximum or minimum value
print("The max value of the tensor object b is:",
      tf.reduce_max(b).numpy())

# With the tf.argmax() function, we can find the index of the max element
print("The index position of the max element of the tensor object b is:",
      tf.argmax(b).numpy())

# With the tf.nn.softmax function, we can compute softmax
print("The softmax computation result of the tensor object b is:",
      tf.nn.softmax(b).numpy())
```
```
Output:
The max value of the tensor object b is: 7.0
The index position of the max element of the tensor object b is: [1 1]
The softmax computation result of the tensor object b is: [[0.11920291 0.880797  ]
 [0.11920291 0.880797  ]]
```
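The comment above also mentions tf.reduce_min, which the example never calls. As a supplement, here is a sketch of reduce_min and of the axis argument that both reduction functions accept:

```python
import tensorflow as tf

b = tf.constant([[1, 3], [5, 7]], dtype=tf.float32)

# tf.reduce_min is the counterpart of tf.reduce_max
print(tf.reduce_min(b).numpy())          # 1.0

# Both accept an axis argument to reduce along one dimension only
print(tf.reduce_max(b, axis=0).numpy())  # [5. 7.] (column-wise max)
print(tf.reduce_max(b, axis=1).numpy())  # [3. 7.] (row-wise max)
```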

#### Manipulate shape

Just like NumPy arrays and pandas DataFrames, tensor objects can be reshaped as well.

Reshaping is very fast because the underlying data does not need to be copied. For reshaping operations, we can use the tf.reshape function:

```
# Our initial tensor
a = tf.constant([[1, 2, 3, 4, 5, 6]])
print('The shape of the initial Tensor object is:', a.shape)

b = tf.reshape(a, [6, 1])
print('The shape of the first reshaped Tensor object is:', b.shape)

c = tf.reshape(a, [3, 2])
print('The shape of the second reshaped Tensor object is:', c.shape)

# If we pass -1 as the shape argument, the tensor is flattened.
print('The shape of the flattened Tensor object is:', tf.reshape(a, [-1]))
```
```
Output:
The shape of the initial Tensor object is: (1, 6)
The shape of the first reshaped Tensor object is: (6, 1)
The shape of the second reshaped Tensor object is: (3, 2)
The shape of the flattened Tensor object is: tf.Tensor([1 2 3 4 5 6], shape=(6,), dtype=int32)
```

As you can see, we can easily reshape our tensor objects. Note, however, that developers must be sensible with reshape operations; otherwise, the elements of the tensor may end up in a confusing or even wrong order. So reshape with care.
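To make this caution concrete, here is a supplementary sketch (not in the original) showing what happens when the requested shape does not account for every element; in eager mode, TensorFlow raises tf.errors.InvalidArgumentError:

```python
import tensorflow as tf

a = tf.constant([[1, 2, 3, 4, 5, 6]])  # 6 elements

# The target shape must account for all 6 elements;
# 4 * 2 = 8 != 6, so this reshape is rejected.
try:
    tf.reshape(a, [4, 2])
except tf.errors.InvalidArgumentError as e:
    print("Reshape failed:", type(e).__name__)
```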

#### Broadcasting

When we try to combine multiple tensor objects, a smaller tensor can be stretched automatically to fit a larger one, just like NumPy arrays. For example, when you try to multiply a scalar tensor by a rank-2 tensor, the scalar is stretched so that it multiplies every element of the rank-2 tensor. See the following example:

```
m = tf.constant([5])

n = tf.constant([[1, 2], [3, 4]])

print(tf.multiply(m, n))
```

```
Output:
tf.Tensor(
[[ 5 10]
 [15 20]], shape=(2, 2), dtype=int32)
```

Thanks to broadcasting, you don't have to worry about matching sizes when doing mathematical operations on tensors.
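Broadcasting is not limited to scalars. As an extra illustration (not in the original article), two tensors of shapes (3, 1) and (1, 3) broadcast together to shape (3, 3):

```python
import tensorflow as tf

col = tf.constant([[1], [2], [3]])  # shape (3, 1)
row = tf.constant([[10, 20, 30]])   # shape (1, 3)

# The (3, 1) and (1, 3) shapes broadcast to (3, 3)
result = tf.multiply(col, row)
print(result.numpy())
print(result.shape)  # (3, 3)
```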

### Special types of tensors

We tend to generate rectangular tensors that store numeric values as elements. However, TensorFlow also supports irregular or special tensor types, including:

1. Ragged tensor

2. String tensor

3. Sparse tensor

Let’s take a closer look at what each is.

#### Ragged tensor

Ragged tensors are tensors with different numbers of elements along one of their axes.

A ragged tensor can be constructed as follows:

```
ragged_list = [[1, 2, 3], [4, 5], [6]]

ragged_tensor = tf.ragged.constant(ragged_list)

print(ragged_tensor)
```

```
Output:
<tf.RaggedTensor [[1, 2, 3], [4, 5], [6]]>
```
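As a supplementary note, a ragged tensor can be converted into a regular, rectangular tensor with its .to_tensor() method, which pads the shorter rows (with 0 by default):

```python
import tensorflow as tf

ragged_tensor = tf.ragged.constant([[1, 2, 3], [4, 5], [6]])

# .to_tensor() pads shorter rows to a rectangular shape
print(ragged_tensor.to_tensor().numpy())
```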

#### String tensor

String tensors are tensors that store string objects. We can create a string tensor just like an ordinary tensor object, except that we pass string objects as elements instead of numeric objects, as follows:

```
string_tensor = tf.constant(["With this",
                             "code, I am",
                             "creating a String Tensor"])

print(string_tensor)
```

```
Output:
tf.Tensor([b'With this' b'code, I am' b'creating a String Tensor'], shape=(3,), dtype=string)
```
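As an aside not covered by the article, the tf.strings module provides element-wise operations on string tensors, for example:

```python
import tensorflow as tf

string_tensor = tf.constant(["With this", "code, I am",
                             "creating a String Tensor"])

# Element-wise string operations from the tf.strings module
print(tf.strings.length(string_tensor).numpy())  # length of each string
print(tf.strings.upper(string_tensor).numpy())   # uppercased strings
```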

#### Sparse tensor

Finally, sparse tensors are rectangular tensors for sparse data. When the data contains many empty (zero) values, sparse tensors are the objects of choice. Creating a sparse tensor takes a bit more effort than creating a regular tensor. Here is an example:

```
sparse_tensor = tf.sparse.SparseTensor(indices=[[0, 0], [2, 2], [4, 4]],
                                       values=[25, 50, 100],
                                       dense_shape=[5, 5])

# We can convert sparse tensors into dense tensors
print(tf.sparse.to_dense(sparse_tensor))
```

```
Output:
tf.Tensor(
[[ 25   0   0   0   0]
 [  0   0   0   0   0]
 [  0   0  50   0   0]
 [  0   0   0   0   0]
 [  0   0   0   0 100]], shape=(5, 5), dtype=int32)
```
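As a complementary sketch (not in the original article), tf.sparse.from_dense performs the opposite conversion, extracting the non-zero entries of a dense tensor into a SparseTensor:

```python
import tensorflow as tf

dense = tf.constant([[0, 0, 7],
                     [0, 0, 0],
                     [3, 0, 0]])

# tf.sparse.from_dense collects the non-zero entries
sparse = tf.sparse.from_dense(dense)
print(sparse.indices.numpy())  # positions of the non-zero entries
print(sparse.values.numpy())   # the non-zero values themselves
```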

### Conclusion

We have successfully covered the basics of TensorFlow's tensor objects.

This should give you a good deal of confidence, since you now know a lot more about the foundations of the TensorFlow framework.

Check out Part 1 of this tutorial series: https://link.medium.com/yJp16uPoqab

