# Algorithm | sorting

Time: 2020-11-27

## Bubble sort

Sorts by exchanging: if the current element is larger than the next one, the two are swapped. After each scan, the largest element of the scanned range settles into its final position.

Java code:

```java
public class Sort_Bubble {
    public static void bubbleSort(int[] arr) {
        if (arr == null || arr.length < 2) return;

        // After each pass, the largest element of arr[0..len] is in place
        for (int len = arr.length - 1; len > 0; len--) {
            for (int i = 0; i < len; i++) {
                if (arr[i] > arr[i + 1]) {
                    swap(arr, i, i + 1);
                }
            }
        }
    }

    public static void swap(int[] arr, int i, int j) {
        int temp = arr[i];
        arr[i] = arr[j];
        arr[j] = temp;
    }
}
```
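A quick usage sketch (the demo class name `BubbleDemo` and the sample array are made up for illustration; the sort body mirrors the snippet above):

```java
import java.util.Arrays;

public class BubbleDemo {
    // Inline copy of the bubble sort above, for a self-contained demo
    static void bubbleSort(int[] arr) {
        if (arr == null || arr.length < 2) return;
        for (int len = arr.length - 1; len > 0; len--) {
            for (int i = 0; i < len; i++) {
                if (arr[i] > arr[i + 1]) {
                    int t = arr[i]; arr[i] = arr[i + 1]; arr[i + 1] = t;
                }
            }
        }
    }

    public static void main(String[] args) {
        int[] a = {5, 1, 4, 2, 8};
        bubbleSort(a);
        System.out.println(Arrays.toString(a)); // [1, 2, 4, 5, 8]
    }
}
```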

Time complexity: O(n^2), space complexity: O(1)
Stability: stable

## Selection sort

Each pass selects the smallest element from the unordered part and moves it to the front of that part. Java code:

```java
public class Sort_Selection {
    public static void selectionSort(int[] arr) {
        if (arr == null || arr.length < 2) return;
        for (int start = 0; start < arr.length; start++) {
            // Find the index of the smallest element in arr[start..]
            int minIndex = start;
            for (int i = start; i < arr.length; i++) {
                minIndex = arr[i] < arr[minIndex] ? i : minIndex;
            }
            swap(arr, start, minIndex);
        }
    }

    public static void swap(int[] arr, int i, int j) {
        int temp = arr[i];
        arr[i] = arr[j];
        arr[j] = temp;
    }
}
```

Time complexity: O(n^2), space complexity: O(1)
Stability: unstable

## Insertion sort

Each element is inserted into its place in the already-sorted prefix. Java code:

```java
public class Sort_Insertion {
    public static void insertionSort(int[] arr) {
        if (arr == null || arr.length < 2) return;
        for (int i = 1; i < arr.length; i++) {
            // While the current element is smaller than its predecessor, swap it backward
            for (int curr = i; curr > 0 && arr[curr - 1] > arr[curr]; curr--) {
                swap(arr, curr - 1, curr);
            }
        }
    }

    public static void swap(int[] arr, int i, int j) {
        int temp = arr[i];
        arr[i] = arr[j];
        arr[j] = temp;
    }
}
```

Time complexity: worst case O(n^2), best case O(n) (already-sorted input); space complexity: O(1)
Stability: stable
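The best case can be made concrete by counting comparisons: on an already-sorted array the inner loop exits immediately, so only n-1 comparisons happen. A minimal sketch (the class `InsertionCompareDemo` is hypothetical and uses an early-exit variant of the insertion sort above):

```java
public class InsertionCompareDemo {
    // Number of element comparisons made by the last sort (for illustration only)
    static long comparisons;

    static void insertionSort(int[] arr) {
        comparisons = 0;
        for (int i = 1; i < arr.length; i++) {
            for (int curr = i; curr > 0; curr--) {
                comparisons++;
                if (arr[curr - 1] <= arr[curr]) break; // already in place
                int t = arr[curr - 1]; arr[curr - 1] = arr[curr]; arr[curr] = t;
            }
        }
    }

    public static void main(String[] args) {
        insertionSort(new int[]{1, 2, 3, 4, 5}); // sorted input: n-1 comparisons
        System.out.println(comparisons); // 4
        insertionSort(new int[]{5, 4, 3, 2, 1}); // reversed input: n(n-1)/2 comparisons
        System.out.println(comparisons); // 10
    }
}
```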

## Merge sort

Divide and conquer strategy:

- Split the sequence in half // O(1)
- Recursively sort each half // 2T(n/2)
- Merge the two sorted halves // O(n)

Java code:

```java
public class Sort_Merge {
    public static void mergeSort(int[] arr) {
        if (arr == null || arr.length < 2) return;
        sortProcess(arr, 0, arr.length - 1);
    }

    // Sort arr[L..R]
    public static void sortProcess(int[] arr, int L, int R) {
        if (L == R) return;
        int mid = L + ((R - L) >> 1);
        sortProcess(arr, L, mid);
        sortProcess(arr, mid + 1, R);
        merge(arr, L, mid, R);
    }

    // Merge the sorted halves arr[L..mid] and arr[mid+1..R]
    public static void merge(int[] arr, int L, int mid, int R) {
        int[] help = new int[R - L + 1];
        int i = L;
        int j = mid + 1;
        int k = 0;
        while (i <= mid && j <= R) {
            help[k++] = arr[i] <= arr[j] ? arr[i++] : arr[j++];
        }
        // At most one of the halves still has elements left
        while (i <= mid)
            help[k++] = arr[i++];
        while (j <= R)
            help[k++] = arr[j++];
        // Copy the merged result back into arr
        for (i = 0; i < help.length; i++)
            arr[L + i] = help[i];
    }
}
```

Time complexity: T(n) = 2T(n/2) + O(n) = O(n log n)
Space complexity: helper array + recursion stack = O(n) + O(log n) = O(n)
Stability: stable

## Quick sort

###### 1. Classic quick sort

Each pass takes the last element x of the unsorted range as the pivot and partitions the range into two parts:
left: elements less than or equal to x;
right: elements greater than x. Java code:

```java
public class Sort_Quick {
    /************* Classic quick sort *************/
    public static void quickSort1(int[] arr, int L, int R) {
        if (L < R) {
            int posi = position1(arr, L, R); // position of the pivot after partitioning
            quickSort1(arr, L, posi - 1);    // sort the left part
            quickSort1(arr, posi + 1, R);    // sort the right part
        }
    }

    public static int position1(int[] arr, int L, int R) {
        int less = L - 1; // boundary of the "<= pivot" region
        int more = R;     // boundary of the "> pivot" region; arr[R] is the pivot
        int i = L;
        while (i < more) {
            if (arr[i] <= arr[R])
                swap1(arr, ++less, i++); // grow the "<= pivot" region and advance
            else
                swap1(arr, --more, i);   // grow the "> pivot" region; the swapped-in
                                         // element has not been examined, so keep i
        }
        swap1(arr, more, R); // put the pivot between the two regions
        return more;
    }

    public static void swap1(int[] arr, int i, int j) {
        int temp = arr[i];
        arr[i] = arr[j];
        arr[j] = temp;
    }
}
```

Time complexity: worst case O(n^2), average O(n log n)
Space complexity: worst case O(n), average O(log n)
Stability: unstable

###### 2. Improved quick sort

Unlike the classic version, the improved version keeps only elements strictly less than x in the left part and gathers all elements equal to x in the middle, so equal elements are never processed again. Java code:

```java
public class Sort_Quick {
    public static void quickSort2(int[] arr, int L, int R) {
        if (L < R) {
            int[] posi = position2(arr, L, R); // range occupied by elements equal to the pivot
            quickSort2(arr, L, posi[0] - 1);   // sort the left part (< pivot)
            quickSort2(arr, posi[1] + 1, R);   // sort the right part (> pivot)
        }
    }

    // Returns an array of size 2 holding the boundaries of the "== pivot" region
    public static int[] position2(int[] arr, int L, int R) {
        int less = L - 1;
        int more = R;
        int i = L;
        while (i < more) {
            if (arr[i] < arr[R])
                swap2(arr, ++less, i++); // smaller than the pivot
            else if (arr[i] > arr[R])
                swap2(arr, --more, i);   // larger than the pivot; the swapped-in
                                         // element has not been examined, so keep i
            else
                i++;                     // equal to the pivot: leave it in the middle
        }
        swap2(arr, more, R);
        return new int[]{less + 1, more};
    }

    public static void swap2(int[] arr, int i, int j) {
        int temp = arr[i];
        arr[i] = arr[j];
        arr[j] = temp;
    }
}
```

Time complexity: worst case O(n^2), average O(n log n), with a smaller constant factor than the classic version
Space complexity: worst case O(n), average O(log n)
Stability: unstable

###### 3. Random quick sort

When the array is completely reversed (or already sorted), each partition puts only one element in place and leaves everything else on one side, so the classic version takes O(n^2).
Unlike the classic version, random quick sort picks the pivot at a uniformly random position in the unsorted range; the expected running time is O(n log n).

Java code:

```java
public class Sort_Quick {
    /************* Random quick sort *************/
    public static void quickSort3(int[] arr, int L, int R) {
        if (L < R) {
            // Pick a random element as the pivot and move it to the end
            swap3(arr, L + (int) (Math.random() * (R - L + 1)), R);
            int[] posi = position3(arr, L, R); // range occupied by elements equal to the pivot
            quickSort3(arr, L, posi[0] - 1);   // sort the left part
            quickSort3(arr, posi[1] + 1, R);   // sort the right part
        }
    }

    // Returns an array of size 2 holding the boundaries of the "== pivot" region
    public static int[] position3(int[] arr, int L, int R) {
        int less = L - 1;
        int more = R;
        int i = L;
        while (i < more) {
            if (arr[i] < arr[R])
                swap3(arr, ++less, i++); // smaller than the pivot
            else if (arr[i] > arr[R])
                swap3(arr, --more, i);   // larger than the pivot; the swapped-in
                                         // element has not been examined, so keep i
            else
                i++;                     // equal to the pivot
        }
        swap3(arr, more, R);
        return new int[]{less + 1, more};
    }

    public static void swap3(int[] arr, int i, int j) {
        int temp = arr[i];
        arr[i] = arr[j];
        arr[j] = temp;
    }
}
```

Expected time complexity: T(n) = 2T(n/2) + O(n) = O(n log n)
Space complexity: O(log n) expected
Stability: unstable

## Heap sort

In a max-heap, the largest value sits at the root. Heap sort builds a max-heap from the unordered part, then repeatedly moves the root into the sorted suffix, shrinking the heap each time, until the whole sequence is ordered.

1. Build a max-heap from the unordered part.
2. Swap the root of the heap with its last element.
3. That last element now joins the sorted part.
4. Sift the new root down to restore the max-heap, and repeat.

Java code:

```java
public class Sort_Heap {
    public static void heapSort(int[] arr) {
        if (arr == null || arr.length < 2) return;

        // Build the max-heap by inserting elements one at a time
        for (int i = 0; i < arr.length; i++) {
            heapInsert(arr, i); // arr[0..i] is a max-heap
        }
        int heapSize = arr.length;
        swap(arr, 0, --heapSize); // swap the root with the last element

        // When the heap size reaches 0, the whole array is sorted
        while (heapSize > 0) {
            heapify(arr, 0, heapSize); // sift the new root down
            swap(arr, 0, --heapSize);  // swap the root with the last element of the heap
        }
    }

    /***** Insert into the max-heap *****/
    public static void heapInsert(int[] arr, int index) {
        // Stop when the current node is no larger than its parent, or is the root
        while (arr[index] > arr[(index - 1) / 2]) {
            swap(arr, index, (index - 1) / 2);
            index = (index - 1) / 2;
        }
    }

    /***** Sift down *****/
    public static void heapify(int[] arr, int index, int heapSize) {
        int left = index * 2 + 1;
        while (left < heapSize) {
            // Pick the larger child (the right child may not exist)
            int largest = left + 1 < heapSize && arr[left] < arr[left + 1]
                    ? left + 1
                    : left;
            // Compare the current node with the larger child
            largest = arr[largest] < arr[index] ? index : largest;
            if (largest == index) break; // current node is not smaller than its children
            swap(arr, index, largest);
            index = largest;
            left = index * 2 + 1;
        }
    }

    /***** Swap helper *****/
    public static void swap(int[] arr, int i, int j) {
        int temp = arr[i];
        arr[i] = arr[j];
        arr[j] = temp;
    }
}
```

Time complexity: building the heap with heapInsert costs log 1 + log 2 + … + log n = O(n log n).
heapify() is then called n times at O(log n) each, i.e. O(n log n).
Total: O(n log n) + O(n log n) = O(n log n)
Space complexity: O(1)
Stability: unstable
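A max-heap can also be built bottom-up in O(n) by sifting down every non-leaf node, since most nodes sit near the leaves and need little work. A minimal sketch (the class name `HeapBuildDemo` is made up for illustration; `heapify` mirrors the sift-down above):

```java
public class HeapBuildDemo {
    // Sift arr[index] down within arr[0..heapSize)
    static void heapify(int[] arr, int index, int heapSize) {
        int left = index * 2 + 1;
        while (left < heapSize) {
            int largest = left + 1 < heapSize && arr[left] < arr[left + 1] ? left + 1 : left;
            largest = arr[largest] < arr[index] ? index : largest;
            if (largest == index) break;
            int t = arr[index]; arr[index] = arr[largest]; arr[largest] = t;
            index = largest;
            left = index * 2 + 1;
        }
    }

    // Bottom-up build: heapify each non-leaf node, from the last one up to the root.
    // Total work is bounded by the sum over heights h of (n / 2^h) * h = O(n).
    static void buildMaxHeap(int[] arr) {
        for (int i = arr.length / 2 - 1; i >= 0; i--) {
            heapify(arr, i, arr.length);
        }
    }

    public static void main(String[] args) {
        int[] a = {3, 9, 2, 7, 5, 8};
        buildMaxHeap(a);
        System.out.println(a[0]); // the root now holds the maximum: 9
    }
}
```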

## Bucket sort

Not comparison-based: elements are distributed into buckets by value. Because its performance depends on the distribution of the data being sorted, it is not commonly used as a general-purpose sort.
Time complexity O(n), space complexity O(n). Stable.
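This section has no code above, so as an illustrative sketch, here is counting sort, the simplest bucket-style sort, assuming non-negative integer keys in a small known range (the class name `Sort_Counting` is made up):

```java
import java.util.Arrays;

public class Sort_Counting {
    // Counting sort: one bucket per possible key value.
    // Assumes every element lies in [0, maxValue].
    public static void countingSort(int[] arr, int maxValue) {
        if (arr == null || arr.length < 2) return;
        int[] buckets = new int[maxValue + 1];
        for (int v : arr) {
            buckets[v]++; // count how many times each value occurs
        }
        int k = 0;
        for (int v = 0; v <= maxValue; v++) {
            for (int c = 0; c < buckets[v]; c++) {
                arr[k++] = v; // write each value back, in ascending order
            }
        }
    }

    public static void main(String[] args) {
        int[] a = {3, 0, 2, 3, 1};
        countingSort(a, 3);
        System.out.println(Arrays.toString(a)); // [0, 1, 2, 3, 3]
    }
}
```

Time and space are O(n + k) for range size k, which reduces to O(n) when k = O(n).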
