Summary of computer foundation in autumn 2021 interview algorithm, data structure, design pattern, Linux
Summary of computer foundation in autumn 2021 interview – Java foundation, JVM, spring framework
Summary of computer foundation for 2021 autumn recruitment interview database, redis
Summary of computer foundation for 2021 autumn recruitment interview – operating system
Summary of computer foundation of 2021 autumn recruitment interview computer network foundation
Personal project related issues
Java achieves platform independence mainly through three aspects:
- Java language specification: the specification fixes the range and behavior of the values of the primitive data types; for example, int is always 4 bytes.
- Class file: all Java source files are compiled by javac (or another Java compiler) into a uniform class-file format.
- Java virtual machine:
- The JVM translates the bytecode file (.class) into the binary format of the target platform.
- The JVM itself is platform dependent: a matching virtual machine must be installed for each operating system.
- Compilation: Java source files are compiled into class bytecode files.
- Class loading: the class loader loads the bytecode into the method area of the virtual machine.
- Objects are created at runtime.
- On a method call, the JVM execution engine interprets the bytecode into machine code.
- The CPU executes the instructions.
- Threads switch context as needed.
- Encapsulation: data and the operations on it are combined in one unit; callers use the exposed methods without needing to know the internal details.
- Polymorphism: a variable of the parent type can refer to an object of a subclass, and the method to call is determined by dynamic binding at runtime.
- Inheritance: a class can be extended by a subclass, which inherits the properties and methods of its parent and can also add member variables and methods of its own. An interface can extend multiple interfaces, but a class can extend only one class.
- Overriding: the subclass defines a method with the same name and parameter list as a parent method. The return type must be the same as or a subtype of the parent's, the exceptions thrown must be no broader than the parent's, and the method's visibility must be no lower than the parent's. This is runtime polymorphism.
- It is runtime polymorphism because, when the program runs, the method is searched level by level up the inheritance chain starting from the object's actual class, which can only happen at runtime.
- Overloading: methods in the same class with the same name but different parameter lists; there is no requirement on the return type. This is compile-time polymorphism.
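The two kinds of polymorphism above can be sketched with hypothetical Animal/Dog classes (the names are illustrative, not from the original notes): overriding is resolved at runtime by the object's actual class, overloading at compile time by the parameter list.

```java
class Animal {
    String speak() { return "animal"; }                 // to be overridden
    String speak(String name) { return "hi " + name; }  // overload: different parameter list
}

class Dog extends Animal {
    @Override
    String speak() { return "dog"; }                    // override: same name and parameters
}

class PolyDemo {
    static String dynamicCall() {
        Animal a = new Dog();   // parent-type reference to a subclass object
        return a.speak();       // dynamic binding picks Dog.speak() at runtime
    }
}
```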
- Integer is the wrapper class of int: a variable of type Integer is an object, while a variable of type int is a primitive data type.
- valueOf wraps a primitive value into a wrapper-class object; intValue converts a wrapper-class object back into a primitive value.
- Wrapper classes are compared with equals, which is a comparison between objects.
- byte: 1 byte; short: 2 bytes
- int, float: 4 bytes
- long, double: 8 bytes
- boolean: the size is not fixed by the specification; used alone it is commonly counted as 4 bytes (it occupies an int slot), while in a boolean array each element takes 1 byte
- char is 2 bytes in Java (UTF-16); when encoded externally, an English character is 1 byte, a GBK Chinese character 2 bytes, and a UTF-8 Chinese character usually 3 bytes
- For primitive data types, the value of the variable is passed: changes to the copy do not affect the original variable.
- For object variables, what is passed is a copy of the reference (the object's address), not the object itself, so operations through the copy change the state of the original object.
- ==: if the operands are primitive data types, it compares the two values; if the operands are references, it checks whether the two refer to the same object address.
- equals: for String objects it checks whether the string contents are the same; for plain objects the default implementation compares reference addresses. You can define your own comparison rules by overriding the equals method, in which case you must also override the hashCode method.
Object is the parent class of all classes and defines the equals, hashCode and toString methods.
- equals by default compares whether two object addresses are the same.
- hashCode by default returns a hash value computed from the object's address.
- Why both should be overridden at the same time
- hashCode is cheaper to compute, so checking it first avoids calling the slower equals method every time and improves efficiency.
- Consistency: if equals is overridden but hashCode is not, objects that are equal according to equals may produce different hash codes, which breaks hash-based collections. Overriding hashCode as well avoids this.
- Objects with the same hash value are not necessarily equal by equals: two distinct objects can compute the same hashCode, which is a hash collision.
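A minimal sketch of the contract just described, using a hypothetical Point class: equals and hashCode are overridden together and derived from the same fields, so equal objects always share a hash code.

```java
import java.util.Objects;

class Point {
    final int x, y;
    Point(int x, int y) { this.x = x; this.y = y; }

    @Override public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Point)) return false;
        Point p = (Point) o;
        return x == p.x && y == p.y;            // value-based comparison
    }

    @Override public int hashCode() {
        return Objects.hash(x, y);              // derived from the same fields as equals
    }
}
```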
- The characteristic of a hash table: there is a definite relationship between a keyword and its position in the table.
- Resolve hash conflict:
- Open addressing method (linear probe hashing, secondary probing hashing, pseudo random probing hashing)
- Linear probing: on a conflict, scan forward until an empty slot is found.
- Quadratic probing: on a conflict, try (index + 1²); if that also conflicts, try (index − 1²); then (index + 2²), (index − 2²), and so on; negative positions wrap around.
- Pseudo-random probing: probe positions are generated by a pseudo-random sequence.
- Chaining (separate chaining): on a conflict, link the new element after the existing one to form a linked list. Java's HashMap uses this method.
- Rehashing: apply a second hash function and hash again.
- Common overflow area: the hash table is split into a basic table and an overflow table; elements that conflict with the basic table are placed in the overflow table.
clone: the clone method is declared protected in Object, so a class can clone only its own objects through it; for other classes to call it, it must be overridden as public. If an object's class does not implement the Cloneable interface, calling clone throws a CloneNotSupportedException. The default clone is a shallow copy. In general, to override clone you implement the Cloneable interface and declare the method public.
Shallow copy: new memory is created in the heap and the object is copied. Primitive fields of the original and the copy do not affect each other, but reference-type fields do, because they share the same memory. (A shallow copy produces a new object unrelated to the original; however, if a field is of reference type, the object that field points to is not regenerated — only the object being copied is recreated, not the objects its fields reference.)
Deep copy: a new area in heap memory is opened for the new object, and the objects referenced by its fields are regenerated as well.
- Implementation: recursively copy the referenced objects inside the copied object; or use serialization.
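Both strategies can be sketched with a hypothetical Person class holding a mutable int[] field (names are illustrative): super.clone() alone is shallow and shares the array; cloning the array too makes the copy deep.

```java
class Person implements Cloneable {
    int[] scores;
    Person(int[] scores) { this.scores = scores; }

    // Shallow copy: the new Person still shares the same scores array.
    Person shallowCopy() {
        try { return (Person) super.clone(); }
        catch (CloneNotSupportedException e) { throw new AssertionError(e); }
    }

    // Deep copy: the scores array is copied as well.
    Person deepCopy() {
        Person p = shallowCopy();
        p.scores = scores.clone();
        return p;
    }
}
```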
There are two main reasons to use inner classes: an inner class can be hidden from other classes in the same package, and its methods can access data in the enclosing scope, including private data. Inner classes are a compiler phenomenon, independent of the virtual machine: the compiler turns an inner class into a regular class file, joining the outer and inner class names with a dollar sign $, and the virtual machine knows nothing about it.
Static inner class: declared with static, it belongs to the outer class itself and is loaded only once. It can define static members, can access the outer class's static variables and methods, and is created with new Outer.Inner(). Prefer a static inner class whenever the inner class does not need to access an outer-class object.
Member inner class: belongs to each object of the outer class and is loaded with that object. It cannot define static members or methods, can access all contents of the outer class, and is created with outerObject.new Inner().
Local inner class: defined inside methods, constructors, code blocks or loops. It cannot declare access modifiers, can only define instance member variables and instance methods, and its scope is limited to the block that declares it.
Anonymous inner class: a local inner class without a name, used to simplify code. Creating one immediately produces an object whose type is effectively a subclass of the class (or an implementation of the interface) being instantiated. Anonymous inner classes are commonly used for event listeners and other callbacks.
- Parent class static variables and static blocks
- Subclass static variables and static blocks
- Parent class instance variables and instance initializer blocks
- Parent class constructor
- Subclass instance variables and instance initializer blocks
- Subclass constructor
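The order above can be verified with a small sketch: a hypothetical Parent/Child pair whose static blocks, instance blocks, and constructors all append to a shared log.

```java
import java.util.ArrayList;
import java.util.List;

class InitLog { static final List<String> LOG = new ArrayList<>(); }

class Parent {
    static { InitLog.LOG.add("parent static"); }   // runs once, at class load
    { InitLog.LOG.add("parent instance"); }        // runs before the constructor body
    Parent() { InitLog.LOG.add("parent ctor"); }
}

class Child extends Parent {
    static { InitLog.LOG.add("child static"); }
    { InitLog.LOG.add("child instance"); }
    Child() { InitLog.LOG.add("child ctor"); }
}
```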
- Both Comparable and Comparator are used to implement comparison and sorting of elements in a collection.
- Comparable defines the sort inside the element's own class and lives in java.lang; Comparator implements the sort outside the collection's elements and lives in java.util.
- Comparable is the interface an object implements when it already supports comparing itself; for example, String and Integer implement the Comparable interface themselves to support size comparison.
- Comparator is an external comparator: when an object does not support self-comparison, or the built-in comparison does not meet the requirements, a Comparator can be written to compare two objects. Comparator embodies the strategy design pattern: instead of changing the object itself, a strategy object changes its behavior.
- In short, Comparable completes the comparison by itself, while Comparator implements the comparison externally by supplying the comparison rules.
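The contrast can be sketched with a hypothetical Member class (names are illustrative): its natural order by age comes from Comparable, while a name order is supplied externally as a Comparator strategy.

```java
import java.util.ArrayList;
import java.util.List;

class Member implements Comparable<Member> {
    final String name;
    final int age;
    Member(String name, int age) { this.name = name; this.age = age; }

    // Natural ordering, defined inside the class: by age.
    @Override public int compareTo(Member o) { return Integer.compare(age, o.age); }
}

class SortDemo {
    static List<Member> members() {
        return new ArrayList<>(List.of(new Member("b", 20), new Member("a", 30)));
    }
}
```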
- Once created, a String cannot be modified; "modifying" a string variable actually creates a new String object and assigns it to the variable.
- Two ways to create one:
- Direct assignment of a literal puts the string into the constant pool, and the variable on the stack refers to that pooled string directly.
- With new, a String object is first created in the heap; then the constant pool is checked for the same literal. If it is found, it is reused; if not, space is opened to store the string. The variable refers to the heap object.
- Both inherit from the AbstractStringBuilder class and are mutable (mentioning this is a bonus point).
- StringBuilder is not thread safe, while StringBuffer ensures thread safety through synchronized locks.
- Therefore StringBuilder executes faster and StringBuffer slower.
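A small sketch of the difference: reassigning a String variable builds a new object and leaves other references untouched, while a StringBuilder mutates its internal buffer in place.

```java
class StrDemo {
    static String concatImmutable() {
        String s = "a";
        String alias = s;   // both refer to the same "a"
        s += "b";           // creates a NEW String; alias is unaffected
        return alias;
    }

    static String concatBuilder() {
        StringBuilder sb = new StringBuilder("a");
        sb.append("b");     // mutates the same underlying buffer
        return sb.toString();
    }
}
```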
public: visible to this package and to other packages.
protected: visible within this package and to subclasses in other packages.
default: visible only to classes in this package.
private: visible only within this class.
- If the modified variable is a primitive, its value cannot change and it is treated as a constant; if it is a reference, it cannot point to another object after initialization. A final variable must be explicitly initialized.
- A final class cannot be inherited, and its methods are implicitly final.
- A final method cannot be overridden, but it can be overloaded.
- A static code block is executed once when the class is loaded by the JVM; space is set aside for its contents at load time, its result lives in the method area, and it is shared among threads. Static members are associated with the class itself, not with any object, and can be used directly through the class name.
- Static (non-local) member variables behave like static blocks: because they are shared in JVM memory, they can cause thread-safety problems. Mitigations: declare them final; or use synchronization (or the volatile keyword).
- A static method is called through the class name and cannot directly access instance methods or instance variables.
The differences fall into four aspects:
- Member variables: interface fields are implicitly public static final.
- Member methods: before Java 8, interface methods were implicitly public abstract; Java 8 added static and default methods, and Java 9 added private methods. Abstract classes have no such restrictions.
- Constructors: neither an interface nor an abstract class can be instantiated, but an interface has no constructor at all.
- Inheritance: an interface can extend multiple interfaces, while an abstract class can extend only one class.
If you know a type should serve as a base, the first choice should be an interface; only when method bodies or member variables are required should an abstract class be chosen. When choosing between them, follow this principle: a behavior model should always be defined by an interface rather than an abstract class. Building behavior models with abstract classes causes problems: if product class A has two subclasses B and C, each with its own functions, and a new product needs both B's and C's functions, there is no solution, because Java does not allow multiple class inheritance. With interfaces, the new product simply implements both interfaces.
All exceptions inherit from the Throwable class, which is divided into Error and Exception.
- The Error class describes internal errors and resource-exhaustion errors of the Java runtime system; when such errors occur, there is generally nothing the program can do.
- Error and RuntimeException (with its subclasses) are unchecked exceptions; all other exceptions are checked exceptions.
Common RuntimeException exceptions:
- ClassCastException: a bad cast.
- ArrayIndexOutOfBoundsException: array access out of bounds.
- NullPointerException: null pointer access.
Common checked exceptions:
FileNotFoundException: trying to open a file that does not exist.
ClassNotFoundException: trying to load a class from a name string when no such class exists.
IOException: e.g. trying to read past the end of a file.
- Throwing an exception: on encountering an exception, instead of handling it, throw it to the caller, who handles it as appropriate. There are two forms: the throws keyword declares exceptions on the method signature, while the throw statement throws an exception directly inside the method body.
- Catching an exception: use try / catch. Exceptions raised in the try block are caught by the matching catch block. A finally block, if present, executes whether or not an exception occurred and is generally used to release resources. Since Java 7, resources can be defined in the try header and released automatically.
- finally is used to reclaim physical resources (the JVM garbage collector only reclaims the memory occupied by objects).
- If the release code is placed in catch, it will not run when no exception occurs; if it is placed in try and the exception occurs before it, it will not run either.
- Since Java 7, resources can be declared or initialized inside the try ( ) parentheses and are closed automatically, but the resource must implement the AutoCloseable interface.
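A sketch of try-with-resources with a hypothetical Resource class: anything implementing AutoCloseable declared in the try header is closed automatically when the block exits, with no explicit finally.

```java
class Resource implements AutoCloseable {
    static boolean closed = false;
    @Override public void close() { closed = true; }   // called automatically
}

class TwrDemo {
    static boolean useAndCheck() {
        try (Resource r = new Resource()) {
            // use the resource here
        }
        return Resource.closed;   // true: close() already ran
    }
}
```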
Java objects are created while the JVM runs and destroyed when the JVM exits. To persist objects and their state, serialization is needed: serialization saves an object as a byte stream through ObjectOutputStream; deserialization restores an object from a byte stream.
- Implement the Serializable interface to enable serialization.
- Serialization and deserialization must use the same serialVersionUID.
- Fields modified by static or transient are not serialized.
- Implementing Externalizable lets you decide which properties are serialized.
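A round-trip sketch with a hypothetical User class: transient (and static) fields are skipped by serialization, so the password comes back null after deserialization.

```java
import java.io.*;

class User implements Serializable {
    private static final long serialVersionUID = 1L;
    String name;
    transient String password;   // not written to the byte stream
    User(String name, String password) { this.name = name; this.password = password; }
}

class SerDemo {
    static User roundTrip(User u) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
                oos.writeObject(u);                       // serialize
            }
            try (ObjectInputStream ois =
                     new ObjectInputStream(new ByteArrayInputStream(bos.toByteArray()))) {
                return (User) ois.readObject();           // deserialize
            }
        } catch (IOException | ClassNotFoundException e) {
            throw new AssertionError(e);
        }
    }
}
```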
At runtime, for any class you can learn all of its properties and methods, and for any object you can call any of its methods and access any of its properties; this ability to obtain information and invoke methods dynamically is Java's reflection mechanism. The advantage is that all of a class's information can be obtained dynamically at runtime; the disadvantages are that it breaks the class's encapsulation and the constraints of generics.
- The Class class holds a type's runtime information. A Class instance can be obtained via ① ClassName.class, ② object.getClass(), or ③ Class.forName(fully qualified class name).
- In the Class class, getFields() returns the class's public fields, getMethods() its public methods, and getConstructors() its constructors (all including public members of parent classes).
- The getDeclaredXxx() variants return arrays of all fields, methods, and constructors, excluding members of the parent class.
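The three ways of obtaining a Class instance can be sketched directly; all three yield the same runtime Class object, which can then be inspected reflectively.

```java
class ReflectDemo {
    static String lengthMethodName() {
        try {
            Class<?> c1 = String.class;                      // ① class literal
            Class<?> c2 = "x".getClass();                    // ② from an instance
            Class<?> c3 = Class.forName("java.lang.String"); // ③ fully qualified name
            if (c1 != c2 || c2 != c3) throw new AssertionError("should be identical");
            return c1.getMethod("length").getName();         // inspect a public method
        } catch (ReflectiveOperationException e) {
            throw new AssertionError(e);
        }
    }
}
```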
Annotations attach additional information to classes, interfaces, methods and variables, helping the compiler and the JVM carry out specific functions.
- Meta-annotations: when defining a custom annotation, meta-annotations are used on it to declare some of its properties:
- @Target: constrains where the annotation may be applied: METHOD, FIELD, TYPE, PARAMETER, CONSTRUCTOR, LOCAL_VARIABLE, etc.
- @Retention: constrains the annotation's life cycle: source code, class bytecode, or runtime.
- @Documented: indicates that the annotation should be recorded by the Javadoc tool.
- @Inherited: indicates that the annotation type is automatically inherited by subclasses of an annotated class.
The purpose of generics is to write reusable code. The essence of generics is parameterized types: the data type being operated on is specified as a parameter.
I think Java generics serve three main purposes:
- Type checking: the ClassCastException that a runtime cast would raise is moved forward to compile time.
- Avoiding explicit casts.
- Generalizing algorithms, increasing code reusability.
Generics are implemented by type erasure: the compiler erases all type-parameter information at compile time, so none of it exists at runtime. For example, a List<String> and a List<Integer> are both represented simply as List at runtime. The purpose is to remain binary-compatible with class libraries developed before Java 5.
How does Java generics work? What is type erasure? How does it work?
1. Type checking: type checks are performed before the bytecode is generated.
2. Type erasure: all type parameters, in classes, variables, and methods, are replaced with their bounds (or Object if unbounded).
3. If type erasure conflicts with polymorphism, a bridge method is generated in the subclass.
4. If the return type of a generic method call is erased, a cast is inserted at the call site.
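Erasure can be observed directly: the type parameters exist only at compile time, so a List<String> and a List<Integer> share one runtime class.

```java
import java.util.ArrayList;
import java.util.List;

class ErasureDemo {
    static boolean sameRuntimeClass() {
        List<String> a = new ArrayList<>();
        List<Integer> b = new ArrayList<>();
        return a.getClass() == b.getClass();   // both are plain ArrayList at runtime
    }
}
```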
- List is a linear-list structure: elements are ordered and repeatable.
- ArrayList is implemented on top of a dynamic array; LinkedList is based on a linked-list structure.
- For random access, ArrayList reads an element directly by index (the get method), while LinkedList has to traverse the list to find the element.
- For insertions and deletions, LinkedList is more efficient, because ArrayList has to move data.
Advantages and disadvantages:
- For LinkedList, the cost of adding an element at the end is fixed. For ArrayList, appending normally just writes one slot of the internal array, but may occasionally trigger a reallocation of the array.
- When an element is inserted into or deleted from the middle, ArrayList must shift every element after that position, while LinkedList's cost for inserting or deleting is fixed.
- ArrayList's space waste is mainly the capacity reserved at the end of the list; LinkedList's space cost is that every element (node) carries considerable per-node overhead.
- Neither ArrayList nor LinkedList is thread safe.
- Use ArrayList when queries dominate and insertions/deletions are rare.
- Use LinkedList when queries are rare and insertions/deletions dominate.
- The bottom layer is an array supporting random access (ArrayList implements the RandomAccess marker interface): reads are fast and writes are slow, because a write involves moving elements.
- Three member variables:
- elementData is ArrayList's data array; it reserves some extra capacity for performance and is declared transient so it is not serialized directly;
- size is the actual number of elements in the list, private;
- modCount, inherited from AbstractList, records how many times the ArrayList has had elements added or removed; protected transient.
- The bottom layer is a linked list, which requires sequential access to elements: even with an index, traversal starts from one end, so writes are fast and reads are slow.
- LinkedList implements the Deque interface and so has queue behavior: it can add elements at the tail, take elements from the head, and operate on any element between head and tail.
- Its member variables and serialization approach are similar to ArrayList's.
Vector and Stack
- The implementation of Vector is basically the same as ArrayList, and the bottom layer is also an array. The differences:
(1) All public methods of Vector are synchronized, ensuring thread safety.
(2) The growth strategy differs: Vector adds a member variable capacityIncrement indicating the increment used when expanding.
- Stack is a subclass of Vector and its implementation is basically the same. On top of Vector it provides methods expressing stack semantics, such as push(), pop(), and peek().
- Elements in a HashSet are unordered and non-repetitive, and there can be at most one null value.
- HashSet is implemented on top of HashMap: each element stored in the HashSet becomes a key of the HashMap, and every key maps to the same value, a static final Object named PRESENT. All set operations simply delegate to the corresponding HashMap methods.
- HashMap is not thread safe, so HashSet is not thread safe either.
- De-duplication: primitive values are compared directly; reference types are compared via their hashCode and equals methods.
- Hashtable inherits from the Dictionary class.
- The bottom layer is array + linked list, and neither key nor value can be null. The put operation uses a synchronized lock, making it thread safe.
- The initial capacity is 11, and expansion follows oldSize * 2 + 1.
- Index computation: hash % table length; the modulus operation is relatively expensive.
By contrast, HashMap inherits from the AbstractMap class; since JDK 1.8 the bottom layer is array + linked list / red-black tree; both key and value can be null; it is not thread safe; the initial capacity is 16 and expansion doubles it (oldSize * 2); the storage index is computed as the hash value ANDed with (array length − 1).
TreeMap is a red-black-tree-based Map that provides ordered access. Unlike HashMap, its get, put and remove operations are all O(log n). The order can be determined by a supplied Comparator or by the natural ordering of the keys.
- HashMap inherits from AbstractMap and implements the Map, Cloneable and Serializable interfaces.
- The default initial capacity of HashMap is 16 and expansion doubles it; the capacity must be a power of 2, the maximum capacity is 2^30, and the default load factor is 0.75.
- Working principle:
HashMap stores key-value pairs in a static inner class that implements Map.Entry. It uses a hash algorithm, and in the put and get methods it relies on hashCode() and equals().
When we call put with a key-value pair, HashMap uses the key's hashCode() and the hash algorithm to find the index at which to store the pair. Entries at an index are kept in a linked list, so if an entry already exists there, equals() is used to check whether the passed key is already present: if it is, the old value is overwritten; if not, a new entry is created and saved. When the linked list's length reaches 8, a red-black tree is used to store the data instead.
When we call get with a key, hashCode() is used again to find the index in the array, then equals() is used to find the correct entry, and its value is returned.
Before JDK 8
The bottom implementation is array + linked list. The main member variables are the table array storing the data, the key-value pair count size, and the load factor loadFactor.
The table array records all of the HashMap's data. Each index corresponds to a linked list, and all entries whose hashes conflict are stored in the same list. Entry is the list's node type and holds four member variables: the key, the value, a pointer to the next node, and the element's hash value.
In a HashMap, data exists as key-value pairs, and the hash value derived from the key is used as the pair's index in the array. If the keys of two elements hash to the same value, a hash conflict occurs and both are placed on the linked list at the same index. (To keep HashMap lookups as fast as possible, keys' hash values should be dispersed as much as possible.)
put(K, V) method: adding an element (key points)
① If the key is null, the pair is stored directly in table[0].
② If the key is not null, the hash value corresponding to the key is computed first.
③ The indexFor method combines the key's hash with the array length to determine the element's index i.
④ The linked list at table[i] is traversed; if the key already exists, the value is updated and the old value is returned.
⑤ If the key does not exist, modCount is incremented by 1, a node is added with the addEntry method, and null is returned.
get method: obtaining an element's value from its key
① If the key is null, the getForNullKey method is called. If size is 0 the list is empty and null is returned. Otherwise the linked list at table[0] is traversed; if a node with a null key is found, its value is returned, else null.
② If the key is not null, the getEntry method is called. If size is 0, null is returned. Otherwise the key's hash is computed first, and the nodes of the matching list are traversed; the Entry node whose key and hash match the element being sought is returned.
③ If a matching Entry node is found, the getValue method obtains its value, which is returned; otherwise null is returned.
hash(Object key) method, computing the hash value: to disperse hash values and reduce collisions, XOR and unsigned right-shift operations are applied.
indexFor() computes the element's index: the hash value ANDed with (table array length − 1).
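The index computation can be sketched directly: when the table length is a power of two, hash & (length − 1) gives the same result as hash % length but avoids the slower modulus.

```java
class IndexDemo {
    // Assumes length is a power of 2, as HashMap guarantees.
    static int indexFor(int hash, int length) {
        return hash & (length - 1);
    }
}
```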
resize method: determines the new expansion threshold according to newCapacity
① If the current capacity has already reached the maximum, the threshold is set to Integer.MAX_VALUE, so expansion will never be triggered again.
② A new Entry array of capacity newCapacity is created, and the transfer method moves the old array's elements into it.
③ The threshold is set to the smaller of (newCapacity * loadFactor) and (maximum capacity + 1).
transfer: moving the old array into the new array
① While traversing all elements of the old array, the rehash flag determines whether hashes must be rebuilt; if so, the hash of each element's key is recalculated.
② The indexFor method computes each element's index i from the key's hash and the new array length, and elements are moved into the new array using head insertion.
From JDK 8
The structure becomes array + linked list / red-black tree, and the element type of the table array changes from Entry to its static implementation class Node.
put method: adding an element (key points)
① put calls the putVal method to add the element.
② If the table is empty or has no elements, it is expanded first; otherwise the element's index is computed, and if no node exists there a new node is created to hold it.
When the table slot is not empty, insertion falls into three cases:
③ If the first node has the same hash and key as the element to insert, its value is simply updated.
④ If the first node is a TreeNode, the putTreeVal method adds a tree node and the red-black tree is rebalanced.
⑤ If it is a linked-list node, the list is traversed; duplicates are detected by hash and key, and either the value is updated or a new node is appended at the end. If appending reaches the treeify threshold, the treeifyBin method converts the list into a red-black tree.
After the insertion completes:
⑥ modCount is incremented by 1; if the node count + 1 exceeds the expansion threshold, the table is expanded.
⑦ Return value: if an existing key's value was updated, the original value is returned; otherwise null.
The red-black tree adjustment works as follows:
Each inserted node is compared with the current node: if smaller, descend into the left subtree, otherwise into the right. Once a vacancy is found, two methods run: balanceInsertion inserts the node and rebalances the red-black tree; moveRootToFront is needed because balancing may change the root, so the node recorded in the table may no longer be the root and must be reset.
get method: obtaining an element's value from its key
① The getNode method is called to fetch the node; if it is not null, its value is returned, otherwise null.
② If the array is not empty, the hash and key of the first node are compared with the element being sought; if both match, it is returned directly.
③ If the first node is a TreeNode, the getTreeNode method searches the tree; otherwise the linked list is searched by hash and key. If nothing is found, null is returned.
Hash method: same as described earlier for Java 8.
Resize method: there is no separate transfer method; Java 8's HashMap implements the re-sizing logic inside resize itself.
① If the size exceeds the expansion threshold, double the table capacity.
② If the new table capacity is less than the default initialization capacity of 16, reset the table capacity to 16.
③ If the new table capacity is greater than or equal to the maximum capacity, set the threshold to Integer.MAX_VALUE and return, terminating the expansion. Since the size can never exceed that value, no further expansion will occur.
Rearranging data nodes
① If the bucket is null, it is skipped.
② If the bucket holds a single node (no next node), its index in the new table is recomputed from its hash and it is stored there.
③ If the node is a TreeNode, the split method is called to divide the red-black tree; if a resulting tree is too small, it degenerates back into a linked list.
④ If the node heads a linked list, the list is split into two lists depending on whether hash & oldCap == 0. Nodes for which the result is 0 keep their old index; the rest move to the new index = old index + old capacity.
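The index rule above can be sketched as follows. This is a simplified illustration of the `hash & oldCap` trick, not the JDK source; the helper method name is mine:

```java
public class ResizeIndexDemo {
    // Java 8 resize rule: a node stays at its old index when (hash & oldCap) == 0,
    // otherwise it moves to old index + old capacity.
    static int newIndex(int hash, int oldCap, int oldIndex) {
        return (hash & oldCap) == 0 ? oldIndex : oldIndex + oldCap;
    }

    public static void main(String[] args) {
        int oldCap = 16;
        // hash 5  -> old index 5 & 15 = 5;  5 & 16 == 0  -> stays at 5
        System.out.println(newIndex(5, oldCap, 5 & (oldCap - 1)));   // 5
        // hash 21 -> old index 21 & 15 = 5; 21 & 16 != 0 -> moves to 5 + 16
        System.out.println(newIndex(21, oldCap, 21 & (oldCap - 1))); // 21
    }
}
```

Because doubling the capacity adds exactly one high bit to the index mask, checking that single bit decides where each node lands without recomputing the hash.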
- In Java 7, the transfer method migrates entries during resize using head insertion, which reverses the order of each linked list; under multithreading this can create a circular linked list.
- After the table is expanded to newTable, the original data must be moved into it. Because the head-insertion transfer flips the list order, two threads resizing concurrently can form a ring, causing an infinite loop.
- Java 8's resize uses tail insertion instead, which avoids the dead-loop problem. However, under multithreading the put method can still cause data overwriting, so HashMap remains thread-unsafe.
- Example: threads A and B compute the same insertion index; thread A is suspended just after its hash-collision check, thread B completes its insertion, then thread A resumes without rechecking and overwrites B's entry.
- Storage structure (optimized):
- 1.7: array + singly linked list. An Entry array stores the data, and Entry is a linked-list node; elements with hash conflicts are stored in the list hanging off the same array index. With many conflicts the list grows long, and put/get degrade to O(n).
- 1.8: array + singly linked list + red-black tree. A Node array whose buckets can hold either a list or a tree; when a list grows beyond 8 nodes the data is stored in a red-black tree, keeping put/get at O(log n).
- Insertion:
- 1.7 uses head insertion; because JDK 1.7 extends the singly linked list at the head, head insertion easily produces reversed order and, under concurrency, a circular list (dead loop).
- 1.8 uses tail insertion; together with the red-black tree this avoids the reversed-order and dead-loop problems. Data overwriting can still occur under multithreading.
- Different expansion methods:
- 1.7 resizes before inserting: a new array is created first, and hash values and index positions are recalculated.
- 1.8 inserts first and then resizes: each node's original hash is ANDed with the old capacity, and whether the node moves is decided by whether that newly significant bit of (new capacity - 1) is 1 or 0. (This point is still unclear to me.)
Why does JDK 1.7 resize before inserting, while JDK 1.8 inserts first and then resizes?
To be answered
- The threshold of 8 comes from the source authors' analysis of the Poisson distribution: the number of nodes falling into a hash bucket follows a Poisson distribution, and the probability of a bucket growing beyond 8 is extremely small.
- A tree converts back to a linked list at 6 or fewer nodes, and a list converts to a tree at 8 or more. The gap of 7 in between effectively prevents frequent back-and-forth conversion between list and tree.
- Compared with HashMap, a Segment array is added; each Segment has its own HashEntry array.
- Difference: the core data in the Segment, such as value and the linked-list HashEntry<K,V> next pointer, is volatile, which guarantees visibility on reads; the get() method always sees the latest value. The rest of the structure resembles HashMap.
- Segment locking is used: Segment extends ReentrantLock, so a thread locking one segment does not affect the others. This guarantees thread safety without locking the whole table the way Hashtable does (where both put and get must synchronize), so ConcurrentHashMap is more efficient.
- In theory, ConcurrentHashMap supports concurrencyLevel (the number of segments) concurrent writer threads.
- The volatile keyword on HashEntry cannot guarantee the atomicity of compound operations, so a lock must be acquired before a put. Segment extends ReentrantLock; while one thread holds a segment's lock during put, no other thread can modify that segment, achieving thread safety.
- The segment is located via the hash computed from the key, and the actual put is performed inside that Segment object.
- The lock is acquired before inserting; once obtained, the corresponding HashEntry is located and the data is inserted.
- First, the key and value may not be null.
- If the key already exists, the original value is overwritten.
- If it does not exist, it first checks whether the segment needs resizing, then inserts the data.
- After insertion completes, the lock is released.
- The segment is located through the hash computed from the key.
- The value is looked up in that segment's HashEntry array.
- This process needs no lock: value is volatile, so the read is guaranteed to see the latest value.
- For container safety, the 1.8 ConcurrentHashMap abandons the segmentation of JDK 1.7 and instead uses the CAS mechanism + synchronized to guarantee concurrent safety. The Segment definition is retained in the implementation only for serialization compatibility and plays no structural role.
- Instead of Segment, HashEntry is replaced by Node, which serves a similar purpose; its value and next (linked-list) fields are volatile. When a bucket's list grows beyond 8 nodes, the data is stored in a red-black tree.
- A no-argument constructor enables lazy loading, reducing initialization overhead.
- Initialization happens inside the put operation.
- initTable() performs the initialization using the CAS mechanism:
- If sizeCtl is less than 0, another thread is already initializing the ConcurrentHashMap, so the current thread waits a while; if the other thread fails to initialize, this one can take over.
- If sizeCtl is greater than or equal to 0, the thread uses CAS to set sizeCtl to -1, claiming the initialization; it then builds the table and updates sizeCtl.
- CAS compares the value in memory with an expected value and writes the new value only when the two are equal. CAS needs no lock, yet provides thread safety with good performance in concurrent scenarios.
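The sizeCtl claim described above can be illustrated with AtomicInteger's compareAndSet (a minimal sketch of the CAS idea, not the JDK's internal Unsafe-based code):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    public static void main(String[] args) {
        AtomicInteger sizeCtl = new AtomicInteger(0);

        // First attempt wins the right to initialize by flipping 0 -> -1.
        boolean first = sizeCtl.compareAndSet(0, -1);
        // A second attempt with the same expected value fails: memory now holds -1.
        boolean second = sizeCtl.compareAndSet(0, -1);

        System.out.println(first);  // true
        System.out.println(second); // false
    }
}
```

Exactly one thread can succeed in the 0 → -1 transition, which is why losers of the race fall into the wait-and-retry branch instead of building a second table.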
Put method (actually implemented by putVal):
- Check that the key and value are not null; if so, compute the hash.
- Determine whether the table needs initializing.
- Use the CAS strategy or a synchronized lock to add the data to the corresponding linked list or red-black tree.
- Locate the table index from the computed hash. If no node is stored there, i.e. there is no hash conflict, add the data with lock-free CAS and end the loop.
- Otherwise, check whether another thread is resizing the container; if so, help with the resize. (During the data migration a synchronized lock is taken on the head node to keep other threads' putVal from operating on the list, ensuring thread safety.)
- If neither applies, a hash conflict has occurred, so a linked-list or red-black-tree operation is performed. During that operation the head node is locked with synchronized so that only one thread can modify the list at a time, preventing the list from forming a ring.
- Finally the size is updated, and a resize is triggered if needed.
- Address the bucket from the computed hash; if the element sits right at the bucket head, its value is returned directly.
- If the bucket holds a red-black tree, the value is fetched through the tree.
- Otherwise, the bucket's linked list is traversed to find the value.
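A small usage sketch of the thread-safe behavior described above: two threads increment a counter in the same ConcurrentHashMap without external locking. `merge()` performs the read-modify-write atomically per key, so no update is lost:

```java
import java.util.concurrent.ConcurrentHashMap;

public class ChmDemo {
    public static void main(String[] args) throws InterruptedException {
        ConcurrentHashMap<String, Integer> counts = new ConcurrentHashMap<>();
        Runnable task = () -> {
            for (int i = 0; i < 1000; i++) {
                counts.merge("hits", 1, Integer::sum); // atomic per-key update
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(counts.get("hits")); // 2000
    }
}
```

A plain HashMap in the same scenario could lose increments through the data-overwriting race described earlier.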
Transfer (resize) method
- Step 1: compute how many buckets each thread handles per round. Based on the map length, the number of buckets (table slots) each thread (CPU) processes is calculated; by default each thread handles 16 buckets at a time, and any smaller value is forced up to 16.
- Step 2: initialize nextTab. If the new table nextTab is empty, it is initialized to twice the size of the original table.
- Step 3: the ForwardingNode, advance, and finishing variables assist the resize. A ForwardingNode marks a bucket as already processed and needing no further work; advance indicates whether the thread may move on to the next bucket (true: move on); finishing indicates whether the resize is complete (true: finished, false: not yet). The detailed logic is omitted here.
- Step 4: skipping some other details, data migration: while transferring data, a synchronized lock is taken on the head node so that a concurrent putVal cannot insert into the list being moved.
- Step 5: data migration. If the bucket holds a linked list or red-black tree, its nodes are split into low and high parts. The rule: AND (&) the node's hash with the table length before the resize; if the result is 0, the node goes to the low position of the new table (the same index i as in the current table); otherwise it goes to the high position (index i + the current table length).
- Step 6: if the bucket holds a red-black tree, the low and high node sets must not only be separated; each must also be checked to decide whether it is stored in the new table as a linked list or as a red-black tree.
Lambda expressions: allow a function to be passed into a method as a parameter; mainly used to simplify anonymous inner class code.
@FunctionalInterface: this annotation marks an interface with exactly one abstract method, which can be implicitly converted to a lambda expression.
Method references: methods or constructors of existing classes or objects can be referenced directly to further simplify lambda expressions. There are four forms: referencing a constructor, a class's static method, an arbitrary instance method of a particular class, and a method of a specific object.
Methods in interfaces: interfaces can define default methods (marked with the
default modifier), which reduces the complexity of upgrading an interface; static methods can also be defined.
Annotations: Java 8 introduces repeatable annotations, so the same annotation can be declared multiple times in the same place. The scope of annotations is also extended to cover local variables, generics, method exceptions, and more.
Type inference: the type inference mechanism is strengthened, making code more concise; for example, the generic parameters can be omitted when constructing a generic collection.
Optional class: used to handle possible null values and avoid NullPointerException, improving code readability.
Stream class: introduces a functional programming style into Java and provides many operations that make code more concise, including:
filter(): filter by a condition
limit(): take the first n elements
skip(): skip the first n elements
concat(): merge streams, etc.
Date and time: an enhanced date/time API; the new java.time package mainly covers dates, times, date-times, time zones, durations, and clocks.
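The features listed above can be combined in a short sketch: a lambda, a method reference, the Stream operations filter/map, and Optional for null handling (the example data is mine):

```java
import java.util.Arrays;
import java.util.List;
import java.util.Optional;
import java.util.stream.Collectors;

public class Java8Demo {
    public static void main(String[] args) {
        List<String> words = Arrays.asList("alpha", "beta", "gamma", "delta");

        // filter with a lambda, then transform via a method reference
        List<String> upper = words.stream()
                .filter(w -> w.length() == 5)     // keep 5-letter words
                .map(String::toUpperCase)         // method reference
                .collect(Collectors.toList());
        System.out.println(upper);                // [ALPHA, GAMMA, DELTA]

        // Optional replaces an explicit null check
        Optional<String> first =
                words.stream().filter(w -> w.startsWith("b")).findFirst();
        System.out.println(first.orElse("none")); // beta
    }
}
```

The same logic written with anonymous inner classes and manual loops would be several times longer, which is the simplification these features aim at.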
The program counter is a small memory area that can be regarded as an indicator of the bytecode being executed by the current thread; it is the only area that cannot overflow. The bytecode interpreter selects the next instruction to execute by changing the counter's value; branching, looping, jumping, thread recovery, and similar functions all depend on it.
If the thread is executing a Java method, the counter records the address of the virtual machine bytecode instruction being executed; if it is a native method, the counter value is undefined.
Java virtual machine stack
The Java virtual machine stack is thread-private and describes the memory model of Java method execution. When a new thread is created, a stack is allocated for it; the elements on the stack support the virtual machine's method calls. Each time a method is executed, a stack frame is created to store the method's local variable table, operand stack, and method exit. Each method, from invocation to completion of execution, corresponds to a stack frame being pushed onto and popped off the stack.
If a thread requests a stack depth greater than the virtual machine allows, a StackOverflowError is thrown; if the JVM stack supports dynamic expansion and the expansion cannot obtain enough memory, an OutOfMemoryError is thrown.
Native Method Stack
Its function and the exceptions it throws are similar to the virtual machine stack, but the native method stack serves native methods (methods implemented in non-Java code).
The Java heap is the largest block of memory managed by the virtual machine. The heap is a memory area shared by all threads, created when the virtual machine starts. Its purpose is to store object instances; almost all object instances in Java are allocated memory here.
The Java heap may occupy physically discontinuous memory as long as it is logically contiguous. For large objects (such as arrays), however, most virtual machine implementations require continuous memory for simplicity and storage efficiency.
If the heap has no memory left to complete an instance allocation and can no longer be expanded, the virtual machine throws an OutOfMemoryError.
The method area stores data loaded by the virtual machine: type information, constants, static variables, and the code cache of JIT-compiled code.
The virtual machine specification constrains the method area loosely: it need not be contiguous, may have a fixed size or be expandable like the heap, and may even omit garbage collection. Collection in the method area is rare and mainly targets the constant pool and type unloading.
If the method area cannot satisfy a new memory allocation, an OutOfMemoryError is thrown.
Runtime Constant Pool
The runtime constant pool is part of the method area. Besides description information such as a class's version, fields, methods, and interfaces, the class file also contains a constant pool table, which stores the literals and symbolic references generated by the compiler; after class loading these are placed in the runtime constant pool.
An important feature of the runtime constant pool relative to the class file constant pool is that it is dynamic: Java does not require constants to be generated only at compile time.
For example, String's
intern method is a native method: if the string constant pool already contains a string equal to this String object, the reference to the pooled string is returned; otherwise the string held by this String object is added to the pool and a reference to it is returned.
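A minimal demonstration of the intern() behavior just described: a string built at runtime is a separate heap object, so identity comparison with the literal is false until intern() returns the pooled reference.

```java
public class InternDemo {
    public static void main(String[] args) {
        String literal = "hello";                      // placed in the string pool
        String built = new String("hello");            // separate heap object
        System.out.println(literal == built);          // false: different objects
        System.out.println(literal == built.intern()); // true: pooled reference
    }
}
```

This is exactly the "dynamic" aspect of the runtime constant pool: a string created at run time can still end up referenced from the pool.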
Memory overflow and memory leak
- Memory overflow (OutOfMemoryError) means the program has no memory space left when it requests more.
- A memory leak means the program cannot release memory it has requested; leaks eventually lead to memory overflow.
① When the JVM encounters a bytecode new instruction, it first checks whether the instruction's argument can be resolved to a class symbolic reference in the constant pool and whether that class has been loaded.
② After the class-loading check passes, the virtual machine allocates memory for the new object.
③ After allocation, the virtual machine sets the member variables to zero values, ensuring the object's instance fields can be used without explicit initial values.
④ It then sets the object header, which includes the hash code, GC information, lock information, and the class metadata of the object's class.
⑤ Finally the init method runs: member variables are initialized, instance initializer blocks execute, the class's constructor is called, and the address of the heap object is assigned to the reference variable.
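The initialization order implied by step ⑤ can be observed directly (the logging field is mine, added for illustration): static initializers run once at class loading, while instance blocks and field assignments run before the constructor body on every `new`.

```java
public class InitOrderDemo {
    static StringBuilder log = new StringBuilder();

    static { log.append("static "); }          // class initialization (<clinit>), runs once
    { log.append("instance "); }               // instance initializer block, part of <init>
    InitOrderDemo() { log.append("ctor "); }   // constructor body runs last

    public static void main(String[] args) {
        new InitOrderDemo();
        new InitOrderDemo();                   // static block does not run again
        System.out.println(log);               // static instance ctor instance ctor
    }
}
```

The compiler weaves the instance block and field initializers into every constructor, which is why "instance" always precedes "ctor" regardless of how many constructors the class has.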
- Reasons for garbage collection:
- Java runtime data, such as objects and arrays, are stored in the Java heap. However, if the dynamically created objects are not recycled in time, resulting in continuous accumulation, the heap will be full and memory overflow will occur.
- To solve this problem, Java runs a daemon thread that, when memory is tight, reclaims the objects in the heap that are useless or have long gone unused, ensuring the program keeps running normally.
- Garbage collection is only responsible for releasing the memory occupied by those objects.
All object instances are stored in the heap. Before recycling the heap, the garbage collector should first judge whether the object is still alive.
Reference counting algorithm
A reference counter is attached to each object: each place that references it increments the counter by 1, and each reference that lapses decrements it by 1; when the counter reaches 0, the object is no longer in use. The algorithm is simple and efficient, but it is rarely used in Java because mutual (circular) references between objects prevent the counters from ever reaching zero.
Reachability analysis algorithm
The basic idea is to treat all referenced objects as a tree rooted at the GC Roots: starting from the roots, the reference chains are searched downward, traversing continuously to find every connected object. These objects are called "reachable", or "live"; the rest are considered "unreachable", that is, dead "garbage".
GC Roots must themselves be live. Objects that can serve as GC Roots:
- Objects referenced in the virtual machine stack, such as parameters and local variables in the method stack where the thread is called.
- In the method area, the object referenced by the class static property, such as the reference type static variable of the class.
- An object referenced by a constant in a method area, such as a reference in a string constant pool.
- In the native method stack, objects referenced by JNI (native methods).
- References within the JVM, such as class objects corresponding to basic data types, some resident exception objects, system class loaders, etc.
- All objects held by synchronized synchronization locks.
Whether judged by reference counting or reachability analysis, an object's liveness is closely tied to references. Before JDK 1.2, the definition of a reference was: if the value stored in reference-typed data represents the starting address of another block of memory, that data is said to be a reference to that block of memory or object. After JDK 1.2, Java extended the concept of reference into four types, ordered by strength:
Strong reference: the most traditional kind, the ordinary reference assignment in code. As long as a strong reference exists, the garbage collector will never reclaim the referenced object.
Soft reference: describes objects that are useful but not required. Only when the system is about to throw a memory-overflow exception are softly referenced objects added to the collection scope for a second collection; if memory is still insufficient afterwards, the OOM exception is thrown.
Weak reference: describes non-essential objects, weaker than soft references. Objects associated only with weak references survive at most until the next garbage collection: when the collector runs, weakly referenced objects are reclaimed regardless of whether memory is currently sufficient.
Phantom reference: the weakest relationship. Whether an object has a phantom reference does not affect its lifetime at all, and an object instance cannot be obtained through one. Its sole purpose is to receive a system notification when the object is collected by the garbage collector.
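The difference in strength can be sketched with the java.lang.ref classes. The strong reference always keeps its object alive; the weak reference becomes collectible as soon as no strong reference remains. (Whether System.gc() actually clears it is up to the JVM, so that outcome is not asserted here.)

```java
import java.lang.ref.SoftReference;
import java.lang.ref.WeakReference;

public class ReferenceDemo {
    public static void main(String[] args) {
        Object strong = new Object();                       // strong reference
        WeakReference<Object> weak = new WeakReference<>(strong);
        SoftReference<Object> soft = new SoftReference<>(strong);

        System.out.println(weak.get() != null);  // true: the strong ref keeps it alive
        strong = null;                           // drop the strong reference
        System.gc();                             // the weak ref is now collectible
        // weak.get() is typically null here; soft.get() usually survives
        // until memory actually runs short.
        System.out.println(soft.get() != null ? "soft alive" : "soft cleared");
    }
}
```

Soft references are the basis of memory-sensitive caches, while weak references underlie structures like WeakHashMap.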
The heap is divided into three parts:
- Young generation: newly created objects. Many local variables become unreachable soon after creation and die quickly, so this area is characterized by few surviving objects and much garbage.
- Old generation: objects created long ago that have survived. Characterized by many surviving objects and little garbage.
- Permanent generation: permanent objects, such as some static files. Characterized by not needing collection.
Mark-sweep runs in two stages: marking and sweeping. First, the heap is traversed using reachability analysis to mark which objects are alive and which must be collected; second, the memory of objects marked as garbage is cleared.
Marking is the process of judging whether an object is garbage. If the heap contains many objects and most must be collected, a large amount of marking and sweeping is required, which is inefficient.
Disadvantage: it leaves memory-space fragmentation, which easily triggers a full GC when allocating large objects.
- The available memory is divided by capacity into two equal blocks, only one of which is used at a time. When that block runs out, the surviving objects are copied to the other block and the current block is cleared in one go.
- It suits situations with much garbage and few live objects, minimizing the number of moves.
- Disadvantage: when the object survival rate is high, many copy operations are needed and efficiency is low; avoiding the wasted half requires an additional allocation guarantee, so the algorithm is not used for the old generation.
- Eden is the largest area and serves heap allocations. When Eden is nearly full, a minor GC runs; the surviving objects are moved into survivor area A, and Eden is cleared;
after Eden is cleared, it continues to serve allocations.
- When Eden fills again, a minor GC runs over both Eden and survivor A; surviving objects are moved into survivor B, and Eden and survivor A are cleared together.
- Eden continues to serve allocations and the process repeats: whenever Eden fills, the surviving objects from Eden and one survivor area are copied into the other survivor area.
- When a survivor area fills while objects remain uncopied, or when objects have survived roughly 15 collections, the remaining objects are promoted into the old generation;
when the old generation also fills up, a major GC runs to reclaim it.
- The marking process is the same as in mark-sweep, but collectible objects are not cleared directly; instead, all live objects are moved toward one end of the memory space, and the memory beyond that boundary is then cleared.
- It suits situations with many live objects and little garbage.
- Difference between mark-sweep and mark-compact: the former is a non-moving algorithm, the latter moving. Moving live objects, especially in the old generation where many objects survive each collection, is very costly, and user threads must pause during the move; not moving, however, leads to space fragmentation.
- Young-generation collection is a minor GC.
- Possible problem: old-generation objects may reference young-generation objects, so the old objects would also need scanning, which amounts to a full-heap scan.
- Solution: card table technology. A card table is maintained; a dirty card means an old-generation object may reference a young-generation object, so only the dirty cards are scanned. (Trading space for time.)
- Major GC collects the old generation's garbage. It generally follows minor GCs: when a survivor area in the heap fills, the overflow is promoted into the old generation; once the old generation fills up, a major GC runs.
- Full GC: collects all parts of the heap, including the young generation, the old generation, and the permanent generation (removed after JDK 1.8 and replaced by Metaspace).
- Serial is a single-threaded young-generation collector using the copying algorithm. When it performs garbage collection it must pause all other worker threads until collection ends ("Stop The World").
- Serial is the default young-generation collector for virtual machines running in client mode. Its advantage is being simple and efficient: in memory-constrained environments it has the smallest footprint of all collectors, and on a single-core processor (or one with few cores) it has no thread-interaction overhead, so it achieves the highest single-threaded collection efficiency.
- ParNew is a multithreaded version of Serial. Apart from using multiple threads for garbage collection, its behavior is completely consistent (all control parameters, the collection algorithm (copying), Stop The World, object allocation rules, collection strategy, and so on).
- ParNew is the default young-generation collector for virtual machines running in server mode, largely because, apart from Serial, it is the only one that can work with CMS. Since JDK 9, the ParNew + CMS combination is no longer the officially recommended server-side solution; the official hope is that G1 replaces it entirely.
- Parallel Scavenge is, like ParNew, a young-generation collector based on the mark-copy algorithm, multithreaded and able to run in parallel.
- Its distinguishing feature is its focus: collectors such as CMS aim to shorten user-thread pause time as much as possible, while Parallel Scavenge aims to achieve a controllable throughput, where throughput is the ratio of the time the processor spends running user code to the total time the processor consumes.
- High throughput uses CPU time efficiently and finishes the program's computation as quickly as possible; it mainly suits background computation without many interactive tasks.
- The adaptive adjustment strategy is another important feature distinguishing it from ParNew: based on performance monitoring of the running system, it dynamically tunes the garbage collection parameters (the Eden/survivor ratio, the promotion age for the old generation, and so on) to guarantee maximum throughput.
- Serial Old: the old-generation version of the Serial collector, using the mark-compact algorithm.
- Parallel Old: the old-generation version of the Parallel Scavenge collector, using multithreading and the mark-compact algorithm, throughput first.
- CMS is a collector whose goal is the minimum collection pause time; use it when system pauses should be as short as possible to give users a better experience.
- Based on the mark-sweep algorithm, it runs in four steps: initial mark, concurrent mark, remark, and concurrent sweep.
- Initial mark and remark still require STW (Stop The World); the initial mark only marks the objects directly reachable from GC Roots, which is very fast.
- Concurrent mark traverses the entire object graph starting from the objects directly reachable from GC Roots. It is the most time-consuming of all the steps, but it runs concurrently with the user threads without pausing them.
- Remark corrects the mark records of objects whose references changed while the user program ran during concurrent marking. Its pause is slightly longer than the initial mark but much shorter than concurrent marking.
- Since live objects need not be moved, the sweep phase can also run concurrently with user threads.
- Because the garbage collector works alongside the user threads in the most time-consuming phases (concurrent mark and concurrent sweep), the overall memory collection process of CMS executes concurrently with the user threads.
- It is a service-oriented collector. The original design goal is to replace CMS, and the corresponding speed is priority.
- G1 can form a collection set for any part of heap memory,The measure is no longer which generation it belongs toIt is the mixed GC mode of G1.
- Divide the Java heap into multipleIndependent regions of equal sizeEach region can act as the Eden space of the new generation, the survivor space or the elderly space as needed. The collector can adopt different strategies for regions that play different roles. In this way, both newly created objects and old objects that have survived for a period of time and survived multiple collections can obtain good collection results.
- Track the value of garbage accumulation in each region, that isThe empirical value of the space size obtained by recycling and the time required for recycling, maintain a priority list in the background, and set the allowed collection pause time according to the user each timePriority should be given to regions with the largest return on value recovery。 This recycling method ensures that G1 can obtain the highest collection efficiency in limited time.
- in order toAvoid full heap scanThe virtual machine maintains a corresponding registered set for each region in G1. When the virtual machine discoverer writes data of type reference, it generates a write Barrier temporarily interrupts the write operation and checks whether the object referenced by the reference is in a different region (in the case of generational generation, it is to check whether the object in the old generation refers to the object in the new generation). If so, it records the relevant reference information into the registered set of the region to which the referenced object belongs through the card table. When the memory is recycled, adding a remanembered set to the enumeration range of the GC root node can ensure that there is no full heap scan and no omission.
- As a whole, G1 is a collector based on the "mark-compact" algorithm, while locally (between two regions) it is based on the "copying" algorithm. This means that G1 produces no memory space fragmentation during operation and can provide regular, contiguous available memory after collection. This feature benefits long-running programs: when allocating large objects, the next GC will not be triggered early just because contiguous memory space cannot be found.
G1 operation process:
- Initial marking: mark the objects that GC Roots can reach directly, and modify the value of the TAMS (Top At Mark Start) pointer so that new objects can be correctly allocated in available regions while user threads run concurrently in the next phase. This phase requires STW but is short; it is done piggybacking on a Minor GC.
- Concurrent marking: starting from GC Roots, analyze the reachability of objects in the heap, recursively scanning the whole heap's object graph to find the objects that need to be reclaimed. This phase is time-consuming but can run concurrently with user threads. After scanning completes, the objects whose references changed during the concurrent phase, recorded by SATB (snapshot at the beginning), must be reprocessed.
- Final marking:A short pause is made on the user thread to handle a small number of SATB records left after the end of the concurrency phase.
- Screening and recycling: sort the regions by recycling value and cost, and make a collection plan according to the user's expected pause time; any number of regions can be freely chosen to form the collection set. Then copy the surviving objects of the regions chosen for collection into empty regions, and clean up the entire space of the old regions. This operation must pause user threads and is performed in parallel by multiple collector threads.
The user can specify the expected pause time, which is a powerful function of G1, but the value cannot be set too low. Generally, it is appropriate to set the value to 100 ~ 300 ms. G1 does not have the problem of memory space fragmentation, but G1’s memory consumption for garbage collection and the extra execution load of program runtime are higher than CMS.
CMS is HotSpot's first successful attempt at pursuing low pause times, but it still has three obvious shortcomings: ① it is very sensitive to processor resources; although it does not pause user threads in the concurrent phases, it reduces total throughput. ② It cannot handle floating garbage, and a concurrent mode failure may trigger another Full GC. ③ Because it uses the mark-sweep algorithm, it produces a lot of space fragmentation, which complicates the allocation of large objects.
- First, the javac compiler turns the .java source file into a .class bytecode file that the JVM can load. The compilation process is divided into:
① Lexical analysis: split the source into words, operators, delimiters and other tokens, forming a token stream that is passed to the parser.
② Syntax analysis: assemble the token stream into an abstract syntax tree according to Java grammar rules.
③ Semantic analysis: check whether keywords are used correctly, whether types match, whether scopes are correct, etc.
④ Bytecode generation: convert the information from the previous steps into bytecode.
- After that, hot bytecode is compiled into native machine code by the JIT compiler.
- To decide which code is worth compiling, the two main hot spot detection techniques are sample-based hot spot detection and counter-based hot spot detection (HotSpot uses invocation counters and back-edge counters).
- The information in the class file needs to be loaded into the virtual machine before it can be used. The JVM loads the data describing the class from the class file to the memory, verifies, parses and initializes the data, and finally forms the Java type that can be directly used by the virtual machine. This process is called the class loading mechanism of the virtual machine。
- The loading, connection and initialization of Java classes are completed during the runtime, which increases the performance overhead, but provides high scalability. The feature of Java dynamic extension depends on dynamic loading and connection at runtime.
- From being loaded to being unloaded, a type goes through a life cycle of seven phases: loading, verification, preparation, resolution, initialization, use, and unloading. Verification, preparation, and resolution are collectively called linking. The order of loading, verification, preparation, and initialization is fixed, while resolution is not: it may begin after initialization, in order to support Java's dynamic binding.
① When encountering the new, getstatic, putstatic, or invokestatic bytecode instructions, if the class has not been initialized: for example, instantiating an object with new, reading or setting a static field, or calling a static method.
② When making a reflective call on a class, if it has not been initialized.
③ When initializing a class whose parent class has not yet been initialized, the parent class is initialized first. (Initializing an interface does not require its parent interfaces to be initialized; a parent interface is initialized only when actually used, e.g. when one of its constants is referenced.)
④ When the virtual machine starts, the main class containing the main method is initialized.
⑤ The interface defines the default method. If the implementation class of the interface is initialized, the interface should be initialized before it.
All other ways of referencing types do not trigger initialization, which is called passive references. Examples of passive References:
① When a child class uses a static field of a parent class, only the parent class is initialized.
② Referencing a class only through an array definition (e.g. A[] arr = new A[10]) does not initialize the class.
③ Constants are stored in the constant pool of the calling class at compile time, and the class that defines the constant is not initialized.
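A minimal sketch of passive reference ① above, assuming invented class names (Parent, Child): accessing a static field through the subclass initializes only the class that declares the field.

```java
import java.util.ArrayList;
import java.util.List;

public class PassiveReferenceDemo {
    // Records which classes ran their static initializers.
    static final List<String> initialized = new ArrayList<>();

    static class Parent {
        static { initialized.add("Parent"); }
        static int value = 123;
    }

    static class Child extends Parent {
        static { initialized.add("Child"); }   // never runs in this demo
    }

    public static void main(String[] args) {
        int v = Child.value;            // resolves to Parent.value -> only Parent is initialized
        System.out.println(v);          // 123
        System.out.println(initialized); // [Parent]
    }
}
```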
Through the fully qualified name of a class, obtain the binary byte stream that defines it; transform the static storage structure represented by the stream into the runtime data structures of the method area; then generate a Class instance for it in memory, serving as the access entry for the class's data in the method area.
Ensure that the byte stream of the class file complies with the constraint.
- If the virtual machine does not check the input byte stream, the system may be attacked by loading the byte stream with error or malicious attempt. Verification mainly includes: file format verification, metadata verification, bytecode verification and symbol reference verification.
After the verification, there is no effect on the running time of the program. If the code has been repeatedly used and validated, you can consider turning off most of the validation in the production environment to shorten the class loading time.
- Allocate memory for class static variables and set a zero value.
The memory allocated in this stage covers only class variables, not instance variables. If a static variable is also declared final, javac generates a ConstantValue attribute for it at compile time, and the virtual machine sets the variable to the value given in the code already during the preparation stage.
Replace the symbolic reference in the constant pool with a direct reference.
- A symbolic reference describes the referenced target with a group of symbols, which can be any form of literal; as long as the target can be located unambiguously when used, the referenced target need not yet be loaded into the virtual machine's memory. A direct reference is a pointer, relative offset, or handle that points directly at the target, and the referenced target must already exist in the virtual machine's memory.
- It is not until this stage that the JVM begins to execute the code written in the class.
- In the preparation phase variables were assigned their zero values; in the initialization phase they are assigned according to the programmer's code, initializing class variables and other resources. The initialization phase executes the <clinit>() method, which is automatically generated by javac.
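A minimal sketch (class name ClinitDemo is invented) of what <clinit>() contains: javac merges static field assignments and static blocks into it in textual order.

```java
public class ClinitDemo {
    static StringBuilder order = new StringBuilder();
    static int a = record("a");          // runs first during <clinit>
    static { record("block"); }          // then the static block
    static int b = record("b");          // then b

    static int record(String s) {
        order.append(s).append(' ');
        return 0;
    }

    public static void main(String[] args) {
        System.out.println(order);       // "a block b "
    }
}
```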
Bootstrap class loader (BootStrap)
It is created when the JVM starts and is responsible for loading the core classes, such as Object and System. It cannot be referenced directly by a program; wherever a delegation needs to refer to the bootstrap loader, null is used instead, because the bootstrap loader is implemented in native C++ code as part of the JVM itself and does not exist as a Java object.
Platform class loader
From JDK 9 on, the extension class loader is replaced by the platform class loader, which loads some extended system classes, such as the XML, encryption, and compression related classes.
Application class loader
It is responsible for loading the class libraries on the user's classpath, which can be used directly in code. If there is no custom class loader, the application class loader is generally the default class loader. A custom class loader extends ClassLoader and overrides the findClass() method.
- The parent delegation model requires that all class loaders except the top-level bootloader should have their own parent loaders.
- When a specific class loader receives a request to load a class, it first delegates the loading task to the parent class loader, recursion in turn. If the parent class loader can complete the class loading task, it will return successfully. Only when the parent class loader cannot complete the loading task, it will load itself.
- A class gains a priority hierarchy together with its loaders, ensuring that the same class is one and the same in every class loader environment. Through this hierarchy, repeated loading of classes is avoided and the stability of the program is ensured.
The first class loader of the Java virtual machine, bootstrap, is very special: it is not a Java class, so it does not need to be loaded by anything else. It is embedded in the Java virtual machine kernel, meaning bootstrap is already running when the JVM starts; it is binary code written in C++ (not bytecode), and it loads other classes.
- That is why, when we test System.class.getClassLoader(), the result is null. It does not mean the System class has no class loader; rather, its loader is the special BootstrapClassLoader, which is not a Java class, so asking for a reference to it necessarily returns null.
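A minimal sketch (class name LoaderChainDemo is invented) that inspects the delegation chain at runtime, including the null result for a bootstrap-loaded class:

```java
public class LoaderChainDemo {
    public static void main(String[] args) {
        ClassLoader app = ClassLoader.getSystemClassLoader();   // application class loader
        System.out.println(app);
        System.out.println(app.getParent());                    // platform (or ext) class loader
        System.out.println(String.class.getClassLoader());      // null: loaded by bootstrap
    }
}
```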
Specific meaning of entrustment mechanism
When the Java virtual machine wants to load a class, which class loader is asked to load it?
- First, the class loader of the current thread loads the first class in the thread (assume it is class A).
Note: the current thread's class loader can be obtained through the Thread class's getContextClassLoader(), and a class loader can be set through setContextClassLoader().
- If class B is referenced in class A, the Java virtual machine will use the class loader that loads class A to load class B.
- You can also call ClassLoader.loadClass() to specify a class loader to load a class.
The meaning of delegation mechanism: prevent multiple copies of the same bytecode in memory
For example, classes A and B both need to load the System class:
- If each loader loaded on its own without delegating, class A's loader would load one copy of System's bytecode and class B's loader another, so two copies of System's bytecode would appear in memory.
- With the delegation mechanism, the request is recursively passed up to the parent loaders, that is, bootstrap gets the first attempt to load; only if it cannot does the request move down. Here System can be found and loaded by bootstrap. If class B then wants to load System, the request again starts from bootstrap; bootstrap finds that System is already loaded and directly returns the in-memory System without reloading. In this way, only one copy of System's bytecode exists in memory.
Any class's uniqueness in the virtual machine is established jointly by the class loader that loads it and the class itself. Two classes can only be equal if they were loaded by the same class loader; even if two classes come from the same class file and are loaded by the same JVM, as long as their class loaders differ, the two classes are not equal.
- New: new state, the state entered after the thread is created
- Runnable: ready state, that is, executable. When the start() method of the thread is called, the thread enters the ready state.
- Running: running state. The thread scheduler selects a thread from the runnable pool as the current thread after it obtains the CPU; this is the only way a thread enters the running state.
- Blocked: blocked state, entered after losing the CPU time slice, typically because a lock is held by another thread (the thread failed to acquire the synchronization lock).
- Waiting: waiting state. The thread in this state will not be allocated CPU time slice, and other threads are required to notify or interrupt. It may be because the wait and join methods without arguments are called.
- Timed waiting: a waiting state with a deadline, from which the thread can return by itself after a specified time. It is entered, for example, by calling the wait or join methods with a timeout argument, or sleep.
- Terminated: terminated status, indicating that the current thread has completed execution or exited abnormally.
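A minimal sketch (class name StateDemo is invented) observing two of the states above via Thread.getState():

```java
public class StateDemo {
    public static Thread.State[] observe() throws InterruptedException {
        Thread t = new Thread(() -> {});
        Thread.State before = t.getState();   // NEW: created but not started
        t.start();
        t.join();                             // wait until the thread finishes
        Thread.State after = t.getState();    // TERMINATED
        return new Thread.State[] { before, after };
    }

    public static void main(String[] args) throws InterruptedException {
        for (Thread.State s : observe()) System.out.println(s);
    }
}
```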
Creating a thread by inheriting the Thread class
(1) Define a subclass of Thread and override its run() method; the body of run() represents the task the thread performs.
(2) To create an instance of a thread subclass is to create a thread object.
(3) Call the start() method of the thread object to start the thread.
The implementation is simple, but it does not conform to the Liskov substitution principle, and the subclass cannot inherit from any other class.
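A minimal sketch of steps (1)–(3), assuming an invented class name MyThread:

```java
public class MyThread extends Thread {
    static volatile int result = 0;

    @Override
    public void run() {                 // the task this thread performs
        result = 42;
    }

    public static void main(String[] args) throws InterruptedException {
        MyThread t = new MyThread();    // (2) creating an instance creates a thread object
        t.start();                      // (3) start() launches the new thread
        t.join();                       // wait for it to finish
        System.out.println(result);     // 42
    }
}
```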
Creating a thread through the Runnable interface
(1) Define an implementation class of the Runnable interface and override its run() method; the body of run() is the thread's execution body.
(2) Create an instance of the Runnable implementation class and pass it as the target when constructing a Thread object; the Thread object is the real thread object.
(3) Call the start() method of the thread object to start the thread.
It avoids the limitation of single inheritance and realizes decoupling.
Creating threads through Callable and Future
(1) Create an implementation class of the Callable interface and override its call() method; call() serves as the thread execution body and has a return value.
(2) Create an instance of the Callable implementation class and wrap it in a FutureTask, which encapsulates the return value of the Callable object's call() method.
(3) Create and start a new thread using the FutureTask object as the target of the Thread object.
(4) Call the FutureTask object's get() method to obtain the return value after the child thread finishes executing.
You can get the return value of the thread execution result and throw an exception.
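A minimal sketch of the Callable/FutureTask steps above (class name CallableDemo is invented):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.FutureTask;

public class CallableDemo {
    public static int compute() throws Exception {
        Callable<Integer> task = () -> 1 + 2;               // call() has a return value
        FutureTask<Integer> future = new FutureTask<>(task); // wraps the Callable
        new Thread(future).start();                          // FutureTask is also a Runnable
        return future.get();                                 // blocks until the result is ready
    }

    public static void main(String[] args) throws Exception {
        System.out.println(compute());                       // 3
    }
}
```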
You can also create a thread pool
- wait(): keeps the current thread waiting until another thread calls this object's notify() or notifyAll() method, after which the current thread enters the ready state.
- notify() and notifyAll(): wake up one or all waiting threads.
- sleep(): puts the thread into the timed waiting state. Unlike wait(), it does not release the lock it holds.
- yield(): makes the current thread give up its CPU time slice to threads of the same or higher priority; it returns to the ready state and competes with the other threads for a CPU time slice again.
- join(): used to wait for another thread to terminate. If the current thread calls another thread's join() method, the current thread blocks; only when the other thread finishes does the current thread move from blocked back to ready, waiting to obtain a CPU time slice. Since the underlying implementation uses wait, the lock is also released.
- start(): its function is to start a new thread, which then executes the corresponding run() method. start() cannot be called repeatedly.
- run(): behaves like a normal member method and can be called repeatedly. If run() is called directly, it executes in the current thread, and no new thread is started!
Each object has a lock to control the synchronous access. The synchronized keyword can interact with the lock of the object to implement the synchronization method or block.
- The sleep() method: the executing thread actively gives up the CPU (which can then run other tasks) and resumes after the specified time. Note that sleep() only gives up the CPU; it does not release any synchronization lock the thread holds! The wait() method makes the current thread temporarily give up the synchronization lock, so that other threads waiting for the resource can acquire it and run; only when notify() is called is a thread that previously called wait() released from the waiting state, allowed to compete for the synchronization lock again, and then executed. (Note: notify() is like waking a sleeper without assigning him a task; it merely gives the thread that previously called wait() the right to rejoin thread scheduling.)
- sleep() can be used anywhere; wait() can only be used inside a synchronized method or synchronized block.
- sleep() is a Thread class method; calling it pauses this thread for the specified time but keeps the monitor, so the object lock is not released, and the thread resumes automatically when the time is up. wait() is an Object method; calling it gives up the object lock and enters the waiting queue; only when notify()/notifyAll() wakes the thread does it enter the lock pool, and it enters the running state only after re-acquiring the object lock.
- If a thread calls the wait() method of an object, the thread will be in the waiting pool of the object, and the thread in the waiting pool will not compete for the lock of the object.
- When a thread calls the notifyAll() method (waking all waiting threads) or the notify() method (waking a single waiting thread at random), the awakened threads enter the object's lock pool, where threads compete for the object's lock. In other words, after notify is called, one thread moves from the wait pool to the lock pool, while notifyAll moves all threads in the object's wait pool into the lock pool to contend for the lock.
- If a thread does not compete for the object lock, it will remain in the lock pool. Only if the thread calls the wait() method again, it will return to the waiting pool. The thread competing for the object lock will continue to execute until the synchronized code block is executed. It will release the object lock. At this time, the thread in the lock pool will continue to compete for the object lock.
- After notifyAll is called, all threads will be moved from the wait pool to the lock pool, and then participate in the lock competition. If the contention is successful, it will continue to execute. If it fails, it will stay in the lock pool and wait for the lock to be released to participate in the competition again. Notify wakes only one thread.
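A minimal wait/notify handoff between two threads, illustrating the wait pool and lock pool interaction described above (class name HandoffDemo is invented):

```java
public class HandoffDemo {
    private static final Object lock = new Object();
    private static boolean ready = false;
    static volatile int received = -1;

    public static void run() throws InterruptedException {
        Thread consumer = new Thread(() -> {
            synchronized (lock) {
                while (!ready) {                 // guard against spurious wakeups
                    try { lock.wait(); }         // releases the lock while waiting
                    catch (InterruptedException e) { return; }
                }
                received = 7;
            }
        });
        consumer.start();
        synchronized (lock) {
            ready = true;
            lock.notify();                       // moves the waiter into the lock pool
        }
        consumer.join();
    }

    public static void main(String[] args) throws InterruptedException {
        run();
        System.out.println(received);            // 7
    }
}
```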
If setDaemon(false) is set (the default), the thread is a user thread, the normal kind of thread that is created; if setDaemon(true) is set, it is a daemon thread, which exists to serve user threads.
The JVM exits when only daemon threads remain. The garbage collection thread is a daemon thread.
- setDaemon(true) must be called before the thread's start() method.
- Daemon threads should not access resources such as files (read and write operations), because they may be terminated abruptly when the JVM exits.
- A thread pool should not use daemon threads.
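A minimal sketch of the daemon rules above (class name DaemonDemo is invented): setDaemon(true) is called before start(), and the JVM can exit even while the daemon is still alive.

```java
public class DaemonDemo {
    public static boolean startDaemon() {
        Thread t = new Thread(() -> {
            try { Thread.sleep(Long.MAX_VALUE); }   // long-lived background "service"
            catch (InterruptedException ignored) {}
        });
        t.setDaemon(true);   // must precede start(), or IllegalThreadStateException is thrown
        t.start();
        return t.isDaemon();
    }

    public static void main(String[] args) {
        // Even though the daemon never finishes, the JVM exits when main ends.
        System.out.println(startDaemon()); // true
    }
}
```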
- Atomicity: in Java, the operations of reading and assigning values to basic data types are atomic operations. The so-called atomic operations refer to those operations that cannot be interrupted or divided, and must be completed or not executed.
- Visibility: Java uses volatile to provide visibility. When a variable is modified by volatile, the modification will be immediately refreshed to main memory. When other threads need to read the variable, it will read the new value in memory. Ordinary variables do not guarantee this.
- Ordering: JMM allows the compiler and processor to reorder instructions, but specifies as if serial semantics, that is, no matter how reordering, the execution result of the program cannot be changed.
What mechanism is used to exchange information? There are two types, shared memory and messaging.
- Shared memory:
Threads share the common state of the program, and communicate implicitly through the common state in write read memory.
Java concurrency uses shared memory model, the communication between threads is always implicit, and the whole communication process is completely transparent to programmers.
- Message passing:
There is no common state between threads; communication between threads must happen explicitly by sending messages.
- volatile: tells the program that every read of the variable must be fetched from main memory, and every write must be flushed back to main memory synchronously, ensuring the visibility of the variable to all threads.
- The locking mechanism (synchronized): ensures that among multiple threads, only one can be in a method or synchronized block at a time, guaranteeing atomicity, visibility, and ordering of the thread's access to variables.
- The wait/notify mechanism: via the join, wait, and notify methods.
Specifically: thread A calls an object's wait method and enters the waiting state; another thread B calls the same object's notify/notifyAll method; after receiving the notification, thread A leaves the blocked state and performs its subsequent operations. The object's notify/notifyAll completes the interaction between the waiting party and the notifying party.
If a thread executes another thread's join method, it blocks and waits for that thread to finish executing; this involves the wait/notify mechanism, since join is implemented with wait underneath. When a thread terminates, it calls its own notifyAll method to notify all threads waiting on that thread object.
- Piped I/O streams: used for data transmission between threads, with memory as the medium.
PipedOutputStream and PipedWriter are output streams, equivalent to producers, while PipedInputStream and PipedReader are input streams, equivalent to consumers. Pipe streams use a circular buffer array, 1KB by default. The input stream reads data from the buffer array, and the output stream writes data into it; when the array is full, the output stream's thread blocks, and when the array is empty, the input stream's thread blocks.
- ThreadLocal: a variable shared across threads by name, but one that creates a separate copy for each thread; each copy is private to its thread, and the threads do not affect one another.
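A minimal ThreadLocal sketch (class name ThreadLocalDemo is invented): each thread sees only its own copy of the value.

```java
public class ThreadLocalDemo {
    static final ThreadLocal<Integer> local = ThreadLocal.withInitial(() -> 0);
    static volatile int mainValue = -1;
    static volatile int otherValue = -1;

    public static void run() throws InterruptedException {
        local.set(1);                                        // this thread's private copy is now 1
        Thread t = new Thread(() -> otherValue = local.get()); // a new thread sees its own copy: 0
        t.start();
        t.join();
        mainValue = local.get();                             // still 1 in this thread
    }

    public static void main(String[] args) throws InterruptedException {
        run();
        System.out.println(mainValue + " " + otherValue);    // 1 0
    }
}
```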
Mutex (mutual exclusion lock): lock the mutex before accessing a shared resource and release it after the access completes.
After a thread locks the mutex, any other thread that tries to lock it again blocks until the current thread releases it.
Read write lock
Read write locks are similar to mutexes, but allow higher parallelism. Mutexes are either locked or unlocked, and only one thread at a time can lock them.
A read-write lock can be in three states: locked in read mode, locked in write mode, and unlocked. Only one thread at a time can hold the lock in write mode, but multiple threads can hold it in read mode simultaneously.
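A minimal sketch of the read/write split using the JDK's ReentrantReadWriteLock (class name RwLockDemo is invented):

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class RwLockDemo {
    static final ReentrantReadWriteLock rw = new ReentrantReadWriteLock();
    static int data = 0;

    static void write(int v) {
        rw.writeLock().lock();      // exclusive: one writer at a time
        try { data = v; }
        finally { rw.writeLock().unlock(); }
    }

    static int read() {
        rw.readLock().lock();       // shared: many readers may hold it together
        try { return data; }
        finally { rw.readLock().unlock(); }
    }

    public static void main(String[] args) {
        write(5);
        System.out.println(read()); // 5
    }
}
```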
The most lightweight synchronization mechanism provided by the JVM. Variables modified by volatile have two characteristics
Ensure that this variable is visible to all threads
Visibility means that when a thread changes the value of this variable, the new value is immediately known to other threads.
Disable instruction reorder optimization
A write to a volatile variable generates an assembly instruction with a lock prefix, which acts as a memory barrier: subsequent instructions cannot be reordered to positions before the memory barrier.
The instruction with lock prefix has two functions in multi-core processor
① Writes the data of the current processor cache row back to system memory.
② This write-back invalidates the cache lines of other CPUs that cache the same memory address. It is equivalent to performing a store-and-write on the cached variable, which makes changes to a volatile variable immediately visible to other processors.
Usage: as a status flag. Reading and writing the flag ensures modifications are immediately visible to other threads, with better efficiency than a synchronized lock. A typical use is double-checked locking (DCL) in the singleton pattern.
Instruction reordering means the compiler and the processor adjust the order in which instructions execute. It must obey as-if-serial semantics: however the instructions are reordered, the result of single-threaded program execution must not change. In multi-threaded programs, reordering can cause inconsistent results, so the volatile keyword is used to forbid reordering and guarantee ordering.
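The DCL singleton mentioned above can be sketched as follows; volatile on the instance field is what forbids the harmful reordering of "allocate / initialize / publish" that could let another thread see a half-constructed object.

```java
public class Singleton {
    // volatile forbids reordering inside "instance = new Singleton()"
    private static volatile Singleton instance;

    private Singleton() {}

    public static Singleton getInstance() {
        if (instance == null) {                 // first check, without the lock
            synchronized (Singleton.class) {
                if (instance == null) {         // second check, under the lock
                    instance = new Singleton();
                }
            }
        }
        return instance;
    }

    public static void main(String[] args) {
        System.out.println(getInstance() == getInstance()); // true: one shared instance
    }
}
```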
synchronized adopts a pessimistic locking mechanism: the thread obtains an exclusive lock, and other threads can only block and wait for the thread to release the lock.
- The access of different threads to the synchronization lock is mutually exclusive.In other words, at a certain point in time, the synchronization lock of an object can only be acquired by one thread. Through synchronous lock, we can realize mutual exclusive access to “object / method” in multithreading.
- When synchronized modifies a static method or a synchronized block on a class object, the lock is the class; to execute the corresponding synchronized code, a thread must obtain the class lock.
- When synchronized modifies an instance method, the lock is the object instance on which the method is called (the object lock).
- Volatile keyword is a lightweight implementation of thread synchronization, so volatile performance is definitely better than synchronized keyword.
- Multi-threaded access to a volatile variable does not block, while the synchronized keyword may cause blocking.
- Volatile keyword can guarantee the visibility of data, but it can’t guarantee the atomicity of data. The synchronized keyword guarantees both.
- Volatile keyword is mainly used to solve the visibility of variables among multiple threads, while synchronized keyword is used to solve the synchronization of accessing resources among multiple threads.
synchronized, when used to modify a method or a block of code, ensures that at most one thread executes the code at any one time.
Lock is an interface, synchronized is a keyword in Java, synchronized is a built-in language implementation;
When an exception occurs, synchronized automatically releases the lock held by the thread, so it does not cause deadlock. With Lock, if the lock is not released via unlock() when an exception occurs, deadlock is likely, so when using Lock you need to release the lock in a finally block;
Lock allows the thread waiting for the lock to respond to an interrupt, but synchronized does not,When synchronized is used, the waiting thread will always wait, unable to respond to interrupt;
Lock can be used to know whether the lock has been successfully acquired, but synchronized cannot.
In terms of performance, if the competition for resources is not fierce, the performance of both is similar. When the competition for resources is very fierce (that is, there are a large number of threads competing at the same time), the performance of lock is much better than that of synchronized.
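A minimal sketch of the lock-in-finally idiom mentioned above, using ReentrantLock (class name LockDemo is invented):

```java
import java.util.concurrent.locks.ReentrantLock;

public class LockDemo {
    static final ReentrantLock lock = new ReentrantLock();
    static int counter = 0;

    static void increment() {
        lock.lock();
        try {
            counter++;              // critical section
        } finally {
            lock.unlock();          // always released, even if an exception is thrown
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread a = new Thread(() -> { for (int i = 0; i < 1000; i++) increment(); });
        Thread b = new Thread(() -> { for (int i = 0; i < 1000; i++) increment(); });
        a.start(); b.start();
        a.join(); b.join();
        System.out.println(counter); // 2000
    }
}
```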
- CAS is to compare and replace. It uses three basic operands: memory address V, old expected value a, and new value B to be modified. If the value of memory location V is equal to the expected value of a, the location is updated to the new value B, otherwise nothing is done. Many CAS operations are spinning: if the operation is not successful, it will be retried until the operation succeeds.
- It uses an optimistic locking mechanism and does not block any thread, so in terms of efficiency it is better than synchronized; however, if the spin keeps retrying for a long time under heavy contention, it wastes CPU.
- Therefore, in the case of very high concurrency, we try to use synchronous lock, while in other cases, we can flexibly adopt CAS mechanism.
- Optimistic locking is the more efficient mechanism. Its principle: perform each operation without locking; if there is a conflict, fail and retry until success. In essence it is not really a lock, so in many places it is also called spinning; the main mechanism used by optimistic locking is CAS.
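A minimal CAS sketch using the JDK's AtomicInteger (class name CasDemo is invented): the update succeeds only when the expected value matches.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    public static int[] demo() {
        AtomicInteger v = new AtomicInteger(10);
        boolean first = v.compareAndSet(10, 11);  // expected value matches -> update to 11
        boolean second = v.compareAndSet(10, 12); // expected value is stale -> nothing happens
        return new int[] { first ? 1 : 0, second ? 1 : 0, v.get() };
    }

    public static void main(String[] args) {
        int[] r = demo();
        System.out.println(r[0] + " " + r[1] + " " + r[2]); // 1 0 11
    }
}
```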
Optimistic lock and pessimistic lock are two ideas, which are used to solve the problem of data competition in concurrent scenarios.
- Optimistic lock: optimistic lock is very optimistic when operating data, thinking that others will not modify the data at the same time. Therefore, the optimistic lock will not be locked. It is only used to judge whether other people have modified the data during the update: if others have modified the data, the operation will be abandoned, otherwise the operation will be executed.
- Pessimistic lock: pessimistic lock is more pessimistic when operating data, thinking that others will modify the data at the same time. Therefore, the data will be locked directly when the data is operated, and the lock will not be released until the operation is completed; during the locking period, no one else can modify the data.
- Pessimistic locking can be implemented by locking code blocks (such as Java’s synchronized keyword) or data (such as exclusive locks in MySQL).
- There are two ways to realize optimistic lock: CAS mechanism and version number mechanism
Servlet is a program that runs on Web servers such as Tomcat and jetty. It can respond to the request of HTTP protocol and implement the logic of users, and finally return the result to the client (browser).
- Loading and instantiation: when a client first requests a servlet, the servlet container reads web.xml to instantiate the servlet class.
- Initialization: after the servlet is instantiated, the servlet container calls each servlet's init method to initialize the instance. After the init method runs, the servlet is in the "initialized" state.
- Request processing: the servlet's service() method handles the client's requests. When there are multiple requests, the servlet container spawns multiple threads that call the service() method of the same servlet instance.
- Destruction: when the server no longer needs the servlet instance or reloads it, the destroy method is called; in this method the servlet can release all resources acquired in the init method.
- Garbage collection through JVM
- (important) there can be several threads in a thread pool. Reusing the created threads can reduce the resource consumption and improve the response speed of the program;
- (important) it can control the maximum number of concurrent threads and improve the manageability of threads.
- Some functions related to time are realized, such as timing execution, periodic execution, etc.
- The isolated thread environment can be configured with independent thread pool to isolate slower threads from faster ones to avoid mutual influence.
- The queue buffer policy and reject mechanism of task thread are implemented.
① corePoolSize: the number of resident core threads. Setting it too high wastes resources; too low causes frequent creation and destruction of threads.
② maximumPoolSize: the maximum number of threads that can run simultaneously in the pool; it must be greater than 0.
③ keepAliveTime: when the number of threads exceeds corePoolSize, how long an idle thread may wait for a task from the workQueue before being destroyed, to avoid wasting memory.
④ unit: the time unit of keepAliveTime.
⑤ workQueue: the work (blocking) queue; when the number of running threads is greater than or equal to corePoolSize, newly submitted tasks enter this queue.
⑥ threadFactory: the thread factory used to create threads; threads can be named to ease troubleshooting.
⑦ handler (RejectedExecutionHandler): the policy applied when the threads in the pool exceed maximumPoolSize and a task must be rejected.
By default, AbortPolicy is used: discard the task and throw a RejectedExecutionException.
CallerRunsPolicy: the calling thread (the thread that submitted the task) runs the task itself.
DiscardOldestPolicy: discard the task at the head of the queue, then resubmit the rejected task.
DiscardPolicy: discard the task without throwing an exception.
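The seven parameters above can be wired up as follows — a minimal sketch, with arbitrary illustrative values:

```java
import java.util.concurrent.*;

public class PoolDemo {
    public static ThreadPoolExecutor newPool() {
        return new ThreadPoolExecutor(
                2,                                // corePoolSize: resident core threads
                4,                                // maximumPoolSize
                60L, TimeUnit.SECONDS,            // keepAliveTime + unit for excess idle threads
                new ArrayBlockingQueue<>(10),     // workQueue: bounded blocking queue
                Executors.defaultThreadFactory(), // threadFactory: creates (and names) threads
                new ThreadPoolExecutor.AbortPolicy()); // rejection policy (the default)
    }

    public static void main(String[] args) throws Exception {
        ThreadPoolExecutor pool = newPool();
        Future<Integer> f = pool.submit(() -> 21 * 2);
        System.out.println(f.get()); // 42
        pool.shutdown();
    }
}
```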
First, check whether the core pool is full. If not, create a core thread directly to execute the task.
If the core threads are all busy, check whether the work queue is full. If not, submit the task to the work queue to wait for execution.
If the work queue is full, check whether the whole pool is full (maximumPoolSize). If it is, execute the rejection policy; otherwise create a new non-core thread to execute the task.
① Create the thread pool. Before any task is submitted, there are no threads in the pool by default. The prestartCoreThread method can be called to pre-create a core thread.
② While there are no threads in the pool, or the number of live threads is less than the number of core threads (workerCount < corePoolSize), the pool creates a new thread for each newly submitted task. These threads stay alive: even if idle longer than keepAliveTime they are not destroyed, but block waiting for tasks from the task queue.
③ Once the number of live threads is >= corePoolSize, a newly submitted task is put into the blocking queue to wait for execution. The previously created threads are not destroyed; they keep taking tasks from the blocking queue.
When the task queue is empty, a thread blocks until a task is enqueued; after getting a task it executes it, and when done it goes back for the next one. This is why thread pools use blocking queues.
④ When the number of live threads equals corePoolSize and the task queue is full (assuming maximumPoolSize > corePoolSize), the pool keeps creating new threads for new tasks until the thread count reaches maximumPoolSize.
After these newly created threads finish their current task, they are not destroyed while tasks remain in the queue; they keep taking tasks from it. Once the thread count exceeds corePoolSize, after each task the pool decides whether the thread should be destroyed: if it can get a task from the queue it continues; if it stays blocked (idle) longer than keepAliveTime, the timed poll returns null and the thread is destroyed. Threads are destroyed this way until the count drops back to corePoolSize.
⑤ If the thread count has reached maximumPoolSize and the task queue is full, new tasks go straight to the rejection policy. The default handler is AbortPolicy, which throws a RejectedExecutionException.
Four types of thread pools can be created through the static factory methods of Executors:
newFixedThreadPool: a fixed-size thread pool. The core thread count equals the maximum thread count, there are no idle non-core threads, and keepAliveTime = 0. The work queue is an unbounded LinkedBlockingQueue. Suitable for heavily loaded servers.
newSingleThreadExecutor: uses a single thread, equivalent to executing all tasks serially in the specified order (FIFO, LIFO, priority). Suitable for scenarios where tasks must execute sequentially.
newCachedThreadPool: creates a cacheable thread pool. If the pool grows beyond what is needed, idle threads are flexibly recycled; if no idle thread is available, a new one is created. If the main thread submits tasks faster than they are processed, the pool keeps creating new threads and in extreme cases exhausts CPU and memory. Suitable for small programs or lightly loaded servers executing many short asynchronous tasks.
newScheduledThreadPool: a fixed-size pool that supports scheduled and periodic task execution. Suitable for scenarios that need multiple background tasks with a bounded number of threads. Compared with Timer it is safer and more powerful; the difference from newCachedThreadPool is that its worker threads are not recycled.
RUNNING: running, receiving new tasks or processing tasks in the queue.
SHUTDOWN: closed, no new tasks are received, but tasks in the queue are processed.
STOP: stops, no longer receives new tasks, does not process tasks in the queue, and interrupts tasks in progress.
TIDYING: all tasks have ended and the worker count is 0; the pool then runs the terminated() hook and enters the TERMINATED state.
- The shutdown or shutdownNow method can be called to close the thread pool. The principle is to traverse the worker threads in the pool and call each one's interrupt method to interrupt the thread.
- The difference: shutdownNow first sets the pool state to STOP, then tries to stop all threads in the pool (including those executing tasks) and returns the list of tasks still waiting to be executed; shutdown only sets the state to SHUTDOWN — tasks already in the work queue continue to execute, and the pool sends an interrupt signal only to idle threads.
- Usually shutdown is called to close the pool; shutdownNow is called when the remaining tasks do not have to be completed.
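A small demonstration of the shutdownNow behaviour described above — the running task is interrupted and the queued task is returned un-executed (the latch is only there to make the timing deterministic):

```java
import java.util.List;
import java.util.concurrent.*;

public class ShutdownDemo {
    // Returns how many queued tasks shutdownNow() handed back un-executed.
    public static int demo() throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(1);
        CountDownLatch started = new CountDownLatch(1);
        pool.submit(() -> {
            started.countDown();
            try { Thread.sleep(5_000); }                  // simulated long task
            catch (InterruptedException e) { /* interrupted by shutdownNow */ }
        });
        started.await();            // make sure the first task is running, not queued
        pool.submit(() -> {});      // this one waits in the work queue
        List<Runnable> pending = pool.shutdownNow(); // STOP: interrupt workers, drain queue
        pool.awaitTermination(1, TimeUnit.SECONDS);
        return pending.size();      // the queued task comes back instead of running
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("un-executed tasks: " + demo()); // 1
    }
}
```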
If core threads are not allowed to time out, a problem can arise.
- Allowing core-thread timeout (allowCoreThreadTimeOut) determines whether a core thread waits for a task from the workQueue with a blocking take, or polls the queue with a timeout.
- If neither shutdown nor shutdownNow is called, each core thread blocks in BlockingQueue.take() waiting for a task. The core threads then stay blocked forever, causing a memory leak, and the main process cannot exit unless forcibly killed.
- Task nature: CPU-intensive, IO-intensive, or mixed.
Tasks of different natures should be handled by thread pools of different sizes: CPU-intensive tasks should be configured with as few threads as possible, IO-intensive tasks with as many as possible. A mixed task that can be split should be split into one CPU-intensive task and one IO-intensive task; as long as their execution times are not too different, throughput after decomposition is higher than serial execution. If the difference is too large, decomposition is not worthwhile.
- Task priority / execution time.
Use priority queues to let tasks with high priority or short execution time execute first.
- Task dependency: whether the task depends on other resources, such as a database connection.
For tasks that depend on a database connection pool, the thread must wait for the database to return results after submitting SQL; the longer the wait, the longer the CPU sits idle, so the thread count should be set larger to improve CPU utilization.
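The sizing advice can be turned into the common rules of thumb — note these formulas (Ncpu + 1 for CPU-bound work, Ncpu × (1 + wait/compute) for IO-bound work) are a widely used assumption, not something the text above prescribes:

```java
public class PoolSizing {
    // Rule of thumb (an assumption, not from the text above): CPU-bound pools
    // need about one thread per core; IO-bound pools scale with the ratio of
    // time spent waiting to time spent computing.
    public static int cpuBoundSize(int cores) {
        return cores + 1;
    }

    public static int ioBoundSize(int cores, double waitOverCompute) {
        return (int) (cores * (1 + waitOverCompute));
    }

    public static void main(String[] args) {
        int cores = Runtime.getRuntime().availableProcessors();
        System.out.println("CPU-bound pool size: " + cpuBoundSize(cores));
        System.out.println("IO-bound pool size:  " + ioBoundSize(cores, 9.0)); // ~90% wait
    }
}
```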
- Generally, after a task is submitted the thread pool creates a thread through the thread factory, unless the number of threads in the pool has already reached corePoolSize or maximumPoolSize.
- Core threads can be pre-created before any task is submitted via the prestartCoreThread or prestartAllCoreThreads method.
Once the pool has created a thread through the ThreadFactory, it wraps the thread in a Worker object and starts it. The newly created thread executes the task just submitted, then keeps fetching tasks from the workQueue. This continuous fetching of tasks from the workQueue is how thread reuse is achieved.
The Worker class extends AbstractQueuedSynchronizer to implement an exclusive lock: each time a task is run, the worker locks first and unlocks after the task finishes.
The thread pool provides three extension points: before and after the run (or call) method of a submitted task executes — the beforeExecute and afterExecute methods — and the terminated method, called when the pool's state changes from TIDYING to TERMINATED.
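The three extension points can be sketched by subclassing ThreadPoolExecutor — the per-task timing below is only an illustrative use of the hooks:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Sketch of the extension points: override beforeExecute/afterExecute
// (around each task) and terminated (TIDYING -> TERMINATED transition).
public class TimedPool extends ThreadPoolExecutor {
    private final ThreadLocal<Long> start = new ThreadLocal<>();

    public TimedPool() {
        super(1, 1, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<>());
    }

    @Override protected void beforeExecute(Thread t, Runnable r) {
        start.set(System.nanoTime());           // runs in the worker thread, before the task
    }

    @Override protected void afterExecute(Runnable r, Throwable t) {
        System.out.println("task took " + (System.nanoTime() - start.get()) + " ns");
        start.remove();                         // avoid dirty data on thread reuse
    }

    @Override protected void terminated() {
        System.out.println("pool terminated");  // called exactly once, at shutdown
    }

    public static void main(String[] args) throws InterruptedException {
        TimedPool pool = new TimedPool();
        pool.execute(() -> {});
        pool.shutdown();
        pool.awaitTermination(2, TimeUnit.SECONDS);
    }
}
```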
A blocking queue supports blocking insertion and removal: when the queue is full, producer threads block until it is no longer full; when the queue is empty, consumer threads block until it is no longer empty. Blocking is implemented mainly with the park method, whose underlying implementation differs across operating systems (such as Linux).
Blocking queues in Java:
- ArrayBlockingQueue: a bounded blocking queue backed by an array. By default it does not guarantee thread fairness.
- LinkedBlockingQueue: a blocking queue backed by a linked list; its default (and maximum) capacity is Integer.MAX_VALUE, making it effectively unbounded.
- PriorityBlockingQueue: an unbounded blocking queue supporting priorities. By default elements are sorted in ascending order; ordering can be customized with a compareTo method, or by supplying a Comparator at construction. The order of elements with equal priority is not guaranteed.
- DelayQueue: an unbounded blocking queue supporting delayed retrieval, implemented on top of a priority queue. When an element is created, a delay can be specified; the element can only be taken from the queue after its delay expires. Suitable for caching and scheduled tasks.
- SynchronousQueue: a blocking queue that stores no elements — every put must wait for a take. By default it uses a non-fair policy. Suitable for high-throughput hand-off scenarios.
- LinkedBlockingDeque: a doubly-linked blocking deque. Elements can be inserted and removed at both ends, reducing contention when multiple threads enqueue concurrently.
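A minimal producer–consumer sketch with ArrayBlockingQueue, showing the blocking put/take behaviour described above (the capacity and values are arbitrary):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class PcDemo {
    public static int run() throws InterruptedException {
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(2); // bounded, array-backed

        Thread producer = new Thread(() -> {
            try {
                for (int i = 1; i <= 5; i++) queue.put(i); // blocks while the queue is full
            } catch (InterruptedException ignored) { }
        });
        producer.start();

        int sum = 0;
        for (int i = 0; i < 5; i++) sum += queue.take(); // blocks while the queue is empty
        producer.join();
        return sum;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run()); // 1+2+3+4+5 = 15
    }
}
```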
ThreadLocal provides thread-local variables and is mainly used to pass data across classes and methods within a single thread. ThreadLocal has a static inner class, ThreadLocalMap, whose entries use the ThreadLocal object as the key and hold a single Object value. A ThreadLocal instance can be shared between threads, but each thread has its own private ThreadLocalMap. ThreadLocal has three main methods: set, get and remove.
- Set method
First get the current thread, then the ThreadLocalMap of that thread. If the map exists, set the value directly: the key is the current ThreadLocal object and the value is the argument passed in. If the map does not exist, the createMap method creates a ThreadLocalMap for the current thread and sets the value.
- Get method
First get the current thread, then the ThreadLocalMap of that thread. If the map exists, use the current ThreadLocal object as the key to obtain the Entry object e; if e exists, return its value attribute. If e does not exist, or the map does not exist, call the setInitialValue method, which creates a ThreadLocalMap for the current thread and returns the default initial value of null.
- Remove method
First get the ThreadLocalMap object m of the current thread. If m is not null, remove the entry for this ThreadLocal, disconnecting the key from its value.
- Existing problems
Thread reuse can produce dirty data. Since a thread pool reuses Thread objects, the ThreadLocal values bound to a thread are reused too. If remove is not called to clear a thread's ThreadLocal data, and the next task on that thread does not call set to initialize it, the task may read the previous task's data.
ThreadLocal can also leak memory. The entry's key is a weak reference to the ThreadLocal, but its value is a strong reference, so after the ThreadLocal is garbage-collected the value is not released. The remove method should therefore be called promptly to clean up.
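A minimal sketch of the usage pattern implied above — the helper name and the StringBuilder payload are illustrative — showing per-thread isolation and the remove-in-finally habit that prevents both dirty data and the value-side leak:

```java
public class ThreadLocalDemo {
    // Each thread sees its own copy. In the thread's ThreadLocalMap the key
    // is a weak reference to this ThreadLocal; the value is a strong reference.
    private static final ThreadLocal<StringBuilder> CTX =
            ThreadLocal.withInitial(StringBuilder::new);

    static String work(String user) {
        try {
            CTX.get().append(user);   // set/get within the same thread
            return CTX.get().toString();
        } finally {
            CTX.remove();             // always remove: avoids dirty data on
        }                             // pooled-thread reuse and the memory leak
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(work("alice"));                             // alice
        Thread t = new Thread(() -> System.out.println(work("bob")));  // own copy: bob
        t.start();
        t.join();
    }
}
```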
1. It reduces coupling between components and decouples the software layers.
2. Many services are readily available, such as transaction management and messaging.
3. The container provides singleton support.
4. The container provides AOP, making features such as permission interception and runtime monitoring easy to implement.
5. The container provides many helper classes that speed up application development.
6. Spring integrates with mainstream application frameworks such as Hibernate, JPA and Struts.
7. Spring's design is minimally intrusive; code pollution is very low.
8. It is independent of any particular application server.
9. Spring's DI mechanism reduces the complexity of replacing business objects.
10. Spring is highly open and does not force an application to depend on it completely; developers can freely adopt part or all of it.
IoC (inversion of control): object creation and dependency resolution are inverted to the container. You create a container plus configuration metadata that tells the container the dependencies between objects; Spring then manages objects and their dependencies through the IoC container. The main implementation of IoC is DI: an object does not look up its dependencies in the container — the container injects them when it instantiates the object.
IoC stands for inversion of control; DI stands for dependency injection.
- Inversion of control transfers control over component objects from the program code itself to an external container, which creates the objects and manages the dependencies between them.
- The basic principle of dependency injection is that application components should not be responsible for looking up resources or other collaborating objects. The container is responsible for configuring objects; resource-lookup logic is extracted from component code and handed to the container. DI is a more precise description of IoC: the dependencies between components are determined by the container at runtime, i.e., the container dynamically injects dependencies into the components.
- Constructor injection
The IoC container inspects the object's constructor and obtains its list of dependencies. When instantiation completes, the dependent properties have already been injected and can be used directly. The drawback is that many dependencies may require multiple constructors.
- Setter injection
Adding setter methods for the dependent properties is more descriptive than constructor injection, but the object is not ready immediately after construction. The IoC container first instantiates the bean, then injects the properties by calling the setters via reflection.
- Annotation injection
@Autowired: injects by type; if there are multiple matches, it falls back to looking up by the specified bean id.
@Resource: injects by bean id, or by type if no id match is found.
@Value: injects primitive data types and strings.
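The principle behind DI can be illustrated without Spring: the component declares its dependency and the "container" (here, plain code in main) supplies it, instead of the component creating or looking it up itself. All names below are hypothetical:

```java
public class DiDemo {
    public interface MessageService { String send(String to); }

    public static class EmailService implements MessageService {
        public String send(String to) { return "email to " + to; }
    }

    public static class Notifier {
        private final MessageService service;        // injected, not created here
        public Notifier(MessageService service) {    // constructor injection
            this.service = service;
        }
        public String notifyUser(String user) { return service.send(user); }
    }

    public static void main(String[] args) {
        // What an IoC container does via reflection and configuration, done by hand:
        Notifier n = new Notifier(new EmailService());
        System.out.println(n.notifyUser("alice")); // email to alice
    }
}
```

Swapping EmailService for another MessageService implementation requires no change to Notifier — that is the "reduced complexity of business object replacement" mentioned above.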
A dynamic proxy can wrap any delegate class at any time; in InvocationHandler#invoke you get runtime information and can do aspect-style processing. Behind the scenes, a proxy class ($Proxy.class) is generated dynamically; it implements the delegate class's interfaces and forwards every interface call to InvocationHandler#invoke, which finally calls the corresponding method of the real delegate.
The dynamic proxy mechanism separates the delegate class from the proxy class, improving extensibility.
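A self-contained JDK dynamic proxy sketch matching the description above — the Greeter interface and the logging around the call are illustrative:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

public class ProxyDemo {
    public interface Greeter { String greet(String name); }

    public static class RealGreeter implements Greeter {
        public String greet(String name) { return "hello " + name; }
    }

    // Every interface call on the proxy is forwarded to invoke(), where
    // cross-cutting logic can run before/after delegating to the real object.
    public static Greeter proxyOf(Greeter target) {
        InvocationHandler handler = (proxy, method, args) -> {
            System.out.println("before " + method.getName()); // aspect-style hook
            Object result = method.invoke(target, args);      // call the real delegate
            System.out.println("after " + method.getName());
            return result;
        };
        return (Greeter) Proxy.newProxyInstance(
                Greeter.class.getClassLoader(), new Class<?>[]{Greeter.class}, handler);
    }

    public static void main(String[] args) {
        System.out.println(proxyOf(new RealGreeter()).greet("world")); // hello world
    }
}
```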
In object-oriented programming languages, introducing a common concern to many objects produces a lot of duplicated code (crosscutting code).
AOP (aspect-oriented programming) solves this problem: the repeated parts of the code are extracted and, using dynamic proxy techniques, methods are enhanced without modifying the source code.
AOP reduces code duplication, lowers coupling between modules, and benefits future operability and maintainability.
If the target object implements an interface, the JDK dynamic proxy is used by default (CGLIB can be forced); if the target object implements no interface, CGLIB is used.
Common scenarios include authentication, automatic caching, error handling, logging, debugging and transactions.
Use case: JdbcTemplate is used to connect to the database, and the @Transactional annotation declaratively opens a transaction.
@Aspect: declares that the annotated class is an aspect bean.
@Before: before advice, executed before a join point.
@After: after (finally) advice, executed when a join point exits, whether by normal return or by exception.
@AfterReturning: after-returning advice, executed after a join point completes normally; the return value is received through the returning attribute.
@AfterThrowing: exception advice, executed when the method exits by throwing an exception; only one of @AfterThrowing and @AfterReturning runs, and the exception is received through the throwing attribute.
- Spring MVC is an application development framework based on the MVC architecture that simplifies web application development.
When the web container starts, it initializes the IoC container, loads the bean definitions and initializes all singleton beans; it traverses the beans in the container, obtains the URL of every method in each controller, and saves the URL-to-controller mappings in a map.
All requests are forwarded to the DispatcherServlet, which asks the HandlerMapping to find the beans annotated with @Controller and the methods and classes annotated with @RequestMapping, generates the Handler and HandlerInterceptors, and returns them as a HandlerExecutionChain.
The DispatcherServlet uses the Handler to find the corresponding HandlerAdapter, which calls the Handler's method, binds the request parameters to the method's formal parameters, executes the method to process the request, and obtains the logical view ModelAndView.
A ViewResolver parses the ModelAndView into the physical View; the view is rendered, the model data is filled in, and the result is returned to the client.
DispatcherServlet: front controller, the core of the whole process control, responsible for receiving requests and forwarding them to the appropriate processing components.
Handler: processor, complete the specific business logic.
HandlerMapping: processor mapper to complete URL to controller mapping.
HandlerInterceptor: processor interceptor, which can be implemented if the interception processing needs to be completed.
HandlerExecutionChain: processor execution chain, including handler and handlerinterceptor.
HandlerAdapter: processor adapter; the DispatcherServlet executes different Handlers through HandlerAdapters.
ModelAndView: logical view, carrying the model data.
ViewResolver: View parser, which parses logical views into physical views.
@RequestMapping: maps a URL request to a method; can be added on both class and method definitions.
- The value property specifies the URL of the request.
- The method property restricts the request type; if the URL is requested with a different method, a 405 error is returned.
- The params property restricts the parameters that must be provided.
@RequestParam: if a controller method's formal parameter name differs from the URL parameter name, this annotation binds them.
- The value property is the parameter name in the HTTP request.
- The required property specifies whether the parameter is mandatory; it defaults to true.
- The defaultValue property specifies the value used when the parameter is not supplied.
@PathVariable: Spring MVC supports RESTful URLs; parameter binding from URL path segments is completed through @PathVariable.
- Simplified development: Spring Boot's role is to quickly bootstrap the Spring framework.
- Simplified configuration: for example, to create a web project with plain Spring you add multiple dependencies to the POM file, while with Spring Boot you only add the spring-boot-starter-web dependency.
- Simplified deployment: with plain Spring you deploy Tomcat and package the project as a WAR; Spring Boot embeds Tomcat, so you only need to package the project as a JAR.
@SpringBootApplication: applies the necessary automatic configuration. It is equivalent to the combination of @EnableAutoConfiguration, @SpringBootConfiguration and @ComponentScan:
@EnableAutoConfiguration: enables Spring Boot auto-configuration; once enabled, Spring Boot configures beans according to the packages and classes on the current classpath.
@SpringBootConfiguration: equivalent to @Configuration; only the semantics differ.
- Spring MVC and Spring Boot both belong to the Spring family: Spring MVC is an MVC framework based on Spring, and Spring Boot is a rapid-development integration package built on Spring.
- Spring is like a large family with many derivatives — Boot, Security, JPA and so on. They are all based on Spring's IoC and AOP: IoC provides the dependency-injection container, AOP enables aspect-oriented programming, and the higher-level features of the other derivative products are built on these two. Spring MVC is a Servlet-based MVC framework that mainly solves web development problems. Because Spring's configuration is very complex and handling the various XML and properties files is cumbersome, the Spring community created Spring Boot to simplify things for developers: it follows convention over configuration, greatly lowering the barrier to using Spring without losing Spring's original flexibility and power.
- Load the driver: via the static forName method of the java.lang.Class class.
- Create a connection to the database: use the getConnection(String url, String username, String password) method, passing the URL of the database to connect to plus the database user name and password, to obtain a Connection object.
- Create a statement object: Statement for static SQL statements, PreparedStatement for parameterized SQL statements, CallableStatement for database stored procedures.
- Execute the SQL statement
- Traverse the result set
- Close the JDBC resources
- In this process, you need to handle exceptions
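The steps above can be sketched with plain java.sql; the URL, credentials and the `users` table are placeholders, and without a matching driver on the classpath getConnection simply throws a SQLException:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Sketch of the JDBC steps above. Step 1 (Class.forName("...Driver")) is
// optional since JDBC 4.0: drivers on the classpath register themselves.
public class JdbcDemo {
    public static int countActiveUsers(String url, String user, String pass)
            throws SQLException {
        // try-with-resources closes ResultSet, Statement and Connection (step 6)
        try (Connection conn = DriverManager.getConnection(url, user, pass);  // step 2
             PreparedStatement ps = conn.prepareStatement(                    // step 3
                     "SELECT COUNT(*) FROM users WHERE active = ?")) {
            ps.setBoolean(1, true);
            try (ResultSet rs = ps.executeQuery()) {                          // step 4
                rs.next();
                return rs.getInt(1);                                          // step 5
            }
        }
        // Exceptions (step 7) are declared via `throws SQLException` here;
        // callers may also handle them with try/catch.
    }
}
```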
This work adopts the CC agreement; reprints must credit the author and link to this article.