Java Engineer Interview Questions

Time:2021-10-17

The content covers: Java, MyBatis, ZooKeeper, Dubbo, Elasticsearch, Memcached, Redis, MySQL, Spring, Spring Boot, Spring Cloud, RabbitMQ, Kafka, Linux, etc.
MyBatis interview questions
1. What is MyBatis?
1. MyBatis is a semi-automatic ORM (Object Relational Mapping) framework. It encapsulates JDBC internally, so during development you only need to focus on the SQL statement itself, without spending effort on tedious steps such as loading drivers, creating connections, and creating statements. Programmers write the raw SQL directly, which allows strict control over SQL execution performance and offers high flexibility.
2. MyBatis can use XML or annotations to configure and map native information, mapping POJOs to records in the database, which avoids almost all JDBC code as well as manually setting parameters and retrieving result sets.
3. The statements to be executed are configured through XML files or annotations, and the final SQL is generated by mapping the Java objects to the dynamic SQL parameters in the statement. Finally, the MyBatis framework executes the SQL and maps the results into Java objects to return (the process from executing SQL to returning the result).
2. Advantages of MyBatis:
1. Programming directly against SQL statements is quite flexible and does not affect the existing design of the application or the database. SQL is written in XML, decoupling SQL from program code for unified management; XML tags are provided to support writing dynamic SQL statements, which can be reused.
2. Compared with JDBC, it reduces the amount of code by more than 50%, eliminates a large amount of redundant JDBC code, and there is no need to manually open and close connections;
3. It is well compatible with various databases (since MyBatis connects to the database through JDBC, it supports all databases that JDBC supports).
4. Good integration with Spring;
5. Provides mapping tags to support ORM mapping between objects and database fields; provides object relationship mapping tags to support maintenance of object relationships.
3. Disadvantages of mybatis framework:
1. The workload of writing SQL statements is large, especially when there are many fields and associated tables, which has certain requirements for developers to write SQL statements.
2. SQL statements depend on the database, resulting in poor database portability, and the database cannot be replaced at will.
4. Mybatis framework is applicable to:
1. Mybatis focuses on SQL itself and is a sufficiently flexible Dao layer solution.
2. Mybatis will be a good choice for projects with high performance requirements or more demand changes, such as Internet projects.
5. What are the differences between mybatis and Hibernate?
1. Unlike Hibernate, MyBatis is not entirely an ORM framework, because MyBatis requires programmers to write SQL statements themselves.
2. MyBatis writes raw SQL directly, which allows strict control over SQL execution performance and offers high flexibility. It is well suited to software whose demands on the relational data model are modest, because such software's requirements change frequently and, once they change, results must be delivered quickly. The price of this flexibility is that MyBatis cannot be database independent: software that must support multiple databases needs multiple sets of SQL mapping files, which is a heavy workload.
3. Hibernate has strong object/relational mapping ability and good database independence. For software with high demands on the relational model, developing with Hibernate can save a lot of code and improve efficiency.
6. What is the difference between #{} and ${}?

#{} is precompiled processing; ${} is plain string substitution.

When MyBatis processes #{}, it replaces #{} in the SQL with a ? placeholder and calls PreparedStatement's set methods to assign the value;
When MyBatis processes ${}, it simply replaces ${} with the value of the variable.
Using #{} can effectively prevent SQL injection and improve system security.
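As a minimal illustrative sketch (plain Java string handling, not MyBatis internals; the class and method names below are invented for illustration), the contrast between the two styles looks like this:

```java
// Sketch: #{} leaves a JDBC '?' placeholder and binds the value separately
// via PreparedStatement, so input cannot change the SQL structure;
// ${} pastes the raw text into the statement before it is parsed.
public class ParamStyleDemo {

    // #{} behaviour: the SQL keeps a '?' and the value travels separately.
    public static String precompiled(String template) {
        return template.replace("#{name}", "?");
    }

    // ${} behaviour: plain string substitution of whatever the user typed.
    public static String substituted(String template, String value) {
        return template.replace("${name}", value);
    }

    public static void main(String[] args) {
        String safe = "select * from users where name = #{name}";
        System.out.println(precompiled(safe));
        // → select * from users where name = ?

        String risky = "select * from users where name = '${name}'";
        // a malicious value rewrites the statement itself:
        System.out.println(substituted(risky, "x' or '1'='1"));
        // → select * from users where name = 'x' or '1'='1'
    }
}
```

The second print shows why ${} is the injection vector: the attacker-controlled text becomes part of the SQL grammar instead of a bound value.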
7. What happens when the attribute name in the entity class is different from the field name in the table?
The first method: define aliases for the column names in the query SQL so that the aliases match the attribute names of the entity class.
<select id="selectOrder" parameterType="int" resultType="me.gacl.domain.Order">
    select order_id id, order_no orderNo, order_price price from orders where order_id = #{id};
</select>
The second method: map the one-to-one correspondence between column names and entity class attribute names through <resultMap>.
<select id="getOrder" parameterType="int" resultMap="orderResultMap">
    select * from orders where order_id = #{id}
</select>
<resultMap type="me.gacl.domain.Order" id="orderResultMap">
    <!-- map the primary key column with the id tag -->
    <id property="id" column="order_id"/>
    <!-- map non-primary-key columns with the result tag: property is the
         attribute name of the entity class, column is the table column -->
    <result property="orderNo" column="order_no"/>
    <result property="price" column="order_price"/>
</resultMap>
8. How to write a fuzzy query like statement?
The first method: add the SQL wildcards in the Java code.
String wildcardName = "%smi%";
List<Name> names = mapper.selectLike(wildcardName);
<select id="selectLike">
    select * from foo where bar like #{value}
</select>
The second method: splice the wildcards inside the SQL statement (note that doing this with ${} instead of #{} would cause SQL injection):
String wildcardName = "smi";
List<Name> names = mapper.selectLike(wildcardName);
<select id="selectLike">
    select * from foo where bar like "%"#{value}"%"
</select>
9. An XML mapping file usually has a corresponding DAO interface. What is the working principle of this DAO interface? Can the methods in the DAO interface be overloaded when their parameters differ?
The DAO interface is the mapper interface. The fully qualified name of the interface is the value of the namespace in the mapping file; the method names of the interface are the id values of the mapper's statements in the mapping file; the parameters of the interface methods are the parameters passed to the SQL.
The mapper interface has no implementation class. When an interface method is called, the string "fully qualified interface name + method name" is used as the key to uniquely locate a MappedStatement. In MyBatis, every <select>, <insert>, <update>, and <delete> tag is parsed into a MappedStatement object.
For example: com.mybatis3.mappers.StudentDao.findStudentById uniquely locates the MappedStatement whose id is findStudentById under the namespace com.mybatis3.mappers.StudentDao.
Methods in a mapper interface cannot be overloaded, because the storage and lookup strategy is "fully qualified name + method name". The working principle of the mapper interface is JDK dynamic proxy: at runtime MyBatis uses JDK dynamic proxy to generate a proxy object for the mapper interface. The proxy intercepts the interface method, executes the SQL represented by the MappedStatement, and returns the SQL execution result.
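The proxy-plus-key-lookup idea can be sketched in a few lines of plain Java (illustrative only — the registry, interface, and SQL below are invented stand-ins, not MyBatis internals):

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.HashMap;
import java.util.Map;

// Sketch: the mapper interface has no implementation class; a JDK dynamic
// proxy intercepts each call and looks up the statement keyed by
// "fully qualified interface name + '.' + method name".
public class MapperProxyDemo {

    public interface StudentDao {
        String findStudentById();
    }

    // Builds a proxy backed by a tiny statement registry (a stand-in for
    // MyBatis's MappedStatement map).
    public static StudentDao defaultDao() {
        Map<String, String> statements = new HashMap<>();
        statements.put(StudentDao.class.getName() + ".findStudentById",
                "select * from students where id = ?");

        InvocationHandler handler = (proxy, method, args) -> {
            // Two overloads would produce the same key here, which is
            // exactly why mapper methods cannot be overloaded.
            String key = method.getDeclaringClass().getName() + "." + method.getName();
            return statements.get(key); // real MyBatis would execute the SQL here
        };

        return (StudentDao) Proxy.newProxyInstance(
                StudentDao.class.getClassLoader(),
                new Class<?>[]{StudentDao.class},
                handler);
    }

    public static void main(String[] args) {
        System.out.println(defaultDao().findStudentById());
        // → select * from students where id = ?
    }
}
```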
10. How does mybatis perform paging? What is the principle of paging plug-in?
MyBatis uses the RowBounds object for paging, which performs memory paging against the ResultSet rather than physical paging. You can also write physical-paging parameters directly in the SQL to achieve physical paging, or use a paging plug-in.
The basic principle of a paging plug-in is to implement a custom plug-in using the plug-in interface provided by MyBatis, intercept the SQL to be executed in the plug-in's interception method, rewrite the SQL, and append the physical paging statement and parameters according to the configured dialect.
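What "memory paging" means can be shown with a short sketch (illustrative, not the RowBounds source): the whole result set is already in hand, and rows before the offset are skipped while at most `limit` rows are kept.

```java
import java.util.Arrays;
import java.util.List;

// Sketch of memory paging: page selection happens on the full in-memory
// result list, unlike physical paging where the database itself applies
// something like "limit #{offset}, #{limit}".
public class MemoryPagingDemo {

    public static <T> List<T> page(List<T> fullResult, int offset, int limit) {
        int from = Math.min(offset, fullResult.size());
        int to = Math.min(offset + limit, fullResult.size());
        return fullResult.subList(from, to);
    }

    public static void main(String[] args) {
        List<String> rows = Arrays.asList("r1", "r2", "r3", "r4", "r5");
        System.out.println(page(rows, 2, 2)); // → [r3, r4]
    }
}
```

The cost is clear from the sketch: all five rows were fetched even though only two were wanted, which is why physical paging is preferred for large tables.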
11. How does MyBatis encapsulate the SQL execution result into target objects and return them? What are the mapping forms?
The first is to use the <resultMap> tag to define the mapping between database column names and object property names one by one.
The second is to use SQL column aliases, writing the column alias as the object property name.
Once the mapping between column names and property names exists, MyBatis creates the object through reflection and assigns values to the object's properties one by one before returning it. Properties for which no mapping can be found are left unassigned.
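The reflection step can be sketched as follows (a simplified illustration, not MyBatis's actual reflector; the `Order` class and helper are invented for the example):

```java
import java.lang.reflect.Field;
import java.util.HashMap;
import java.util.Map;

// Sketch: given a column -> property mapping, create the object by
// reflection and copy each mapped column value into the matching field;
// columns without a mapping are simply skipped.
public class ReflectionMappingDemo {

    public static class Order {
        public long id;
        public String orderNo;
    }

    public static Order mapRow(Map<String, Object> row, Map<String, String> columnToProperty) {
        try {
            Order order = Order.class.getDeclaredConstructor().newInstance();
            for (Map.Entry<String, Object> col : row.entrySet()) {
                String property = columnToProperty.get(col.getKey());
                if (property == null) {
                    continue; // no mapping -> the property stays unassigned
                }
                Field field = Order.class.getField(property);
                field.set(order, col.getValue());
            }
            return order;
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        Map<String, Object> row = new HashMap<>();
        row.put("order_id", 7L);
        row.put("order_no", "A-001");
        Map<String, String> mapping = new HashMap<>();
        mapping.put("order_id", "id");
        mapping.put("order_no", "orderNo");
        Order o = mapRow(row, mapping);
        System.out.println(o.id + " " + o.orderNo); // → 7 A-001
    }
}
```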
12. How do I perform a batch insert?
First, create a simple insert statement:
<insert id="insertName">
    insert into names (name) values (#{value})
</insert>
Then perform a batch insert in the Java code as follows:
List<String> names = new ArrayList<>();
names.add("fred");
names.add("barney");
names.add("betty");
names.add("wilma");

// note ExecutorType.BATCH
SqlSession sqlSession = sqlSessionFactory.openSession(ExecutorType.BATCH);
try {
    NameMapper mapper = sqlSession.getMapper(NameMapper.class);
    for (String name : names) {
        mapper.insertName(name);
    }
    sqlSession.commit();
} catch (Exception e) {
    e.printStackTrace();
    sqlSession.rollback();
    throw e;
} finally {
    sqlSession.close();
}
13. How to get automatically generated (Master) key values?
The insert method always returns an int, which is the number of rows inserted.
If an auto-increment strategy is used, the automatically generated key value can be set onto the parameter object that was passed in after the insert method executes.
Example:
<insert id="insertName" useGeneratedKeys="true" keyProperty="id">
    insert into names (name) values (#{name})
</insert>

Name name = new Name();
name.setName("fred");
int rows = mapper.insertName(name);
// after the insert completes, the id has been set on the object
System.out.println("rows inserted = " + rows);
System.out.println("generated key value = " + name.getId());
14. How to pass multiple parameters in mapper?
1. First:
Method in the DAO layer:
public User selectUser(String name, String area);
In the corresponding XML, #{0} represents the first parameter received by the DAO layer, #{1} the second, and further parameters can be referenced in the same way.
<select id="selectUser" resultMap="BaseResultMap">
    select * from user_user_t where user_name = #{0} and user_area = #{1}
</select>
2. The second method: use the @Param annotation:
public interface UserMapper {
    User selectUser(@Param("username") String username,
                    @Param("hashedPassword") String hashedPassword);
}
Then, it can be used in XML as follows (it is recommended to encapsulate it as a map and pass it to mapper as a single parameter):
<select id="selectUser" resultType="User">
    select id, username, hashedPassword
    from some_table
    where username = #{username} and hashedPassword = #{hashedPassword}
</select>
3. The third method is to encapsulate multiple parameters into a map
try {
    // the namespace of the mapping file plus the id of the SQL fragment
    // locates the SQL to call;
    // since there are more than two parameters and the method takes only
    // one object parameter, use a Map collection to carry the parameters
    Map<String, Object> map = new HashMap<>();
    map.put("start", start);
    map.put("end", end);
    return sqlSession.selectList("StudentID.pagination", map);
} catch (Exception e) {
    e.printStackTrace();
    sqlSession.rollback();
    throw e;
} finally {
    MybatisUtil.closeSqlSession();
}
15. What is the use of mybatis dynamic SQL? How does it work? What are the dynamic SQL?
MyBatis dynamic SQL allows writing dynamic SQL in the XML mapping file in the form of tags. The execution principle is to evaluate the expression values, complete the logical judgment, and dynamically splice the SQL accordingly.
Mybatis provides nine dynamic SQL Tags: trim | where | set | foreach | if | choose | when | otherwise | bind.
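What `<if>` and `<where>` amount to at runtime can be sketched in plain Java (illustrative only — the table and parameter names are invented, and real MyBatis does this splicing internally from the XML):

```java
// Sketch: each <if test="..."> is an expression evaluated against the
// parameter; its fragment is appended only when true, and <where> trims a
// leading AND/OR (and omits itself entirely when nothing was appended).
public class DynamicSqlDemo {

    public static String buildSelect(String name, Integer age) {
        StringBuilder where = new StringBuilder();
        if (name != null) {                      // <if test="name != null">
            where.append(" and name = #{name}");
        }
        if (age != null) {                       // <if test="age != null">
            where.append(" and age = #{age}");
        }
        String clause = where.length() == 0
                ? ""                             // <where> omits itself when empty
                : " where" + where.toString().replaceFirst(" and", "");
        return "select * from user_t" + clause;
    }

    public static void main(String[] args) {
        System.out.println(buildSelect("fred", null));
        // → select * from user_t where name = #{name}
        System.out.println(buildSelect(null, null));
        // → select * from user_t
    }
}
```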
16. In an XML mapping file, what other tags are there besides the common select | insert | update | delete tags?
Answer: <resultMap>, <parameterMap>, <sql>, <include>, and <selectKey>, plus the 9 dynamic SQL tags. <sql> defines a reusable SQL fragment, which is brought in through the <include> tag, and <selectKey> is the primary-key generation strategy tag for databases that do not support auto-increment.
17. In the XML Mapping file of mybatis, can the IDs of different XML mapping files be repeated?
For different XML mapping files, if a namespace is configured, the ids can repeat; if no namespace is configured, the ids cannot repeat.
The reason is that namespace + id is used as the key of Map<String, MappedStatement>. Without a namespace there is only the id, and a repeated id causes entries to overwrite each other. With a namespace, ids can naturally repeat: as long as the namespaces differ, namespace + id differs too.
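A tiny HashMap sketch makes the key behaviour concrete (the mapper names and SQL here are invented for illustration):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: statements are registered under "namespace.id". Two mappers may
// reuse the same id because the namespace keeps the keys distinct; without
// namespaces, the second put silently overwrites the first.
public class StatementKeyDemo {

    public static int withNamespaceCount() {
        Map<String, String> registry = new HashMap<>();
        registry.put("com.example.UserMapper.selectUser", "select * from user_t");
        registry.put("com.example.AdminMapper.selectUser", "select * from admin_t");
        return registry.size();
    }

    public static int withoutNamespaceCount() {
        Map<String, String> registry = new HashMap<>();
        registry.put("selectUser", "select * from user_t");
        registry.put("selectUser", "select * from admin_t"); // overwrites the first
        return registry.size();
    }

    public static void main(String[] args) {
        System.out.println(withNamespaceCount());    // → 2: ids may repeat
        System.out.println(withoutNamespaceCount()); // → 1: second put clobbered the first
    }
}
```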
18. Why is MyBatis called a semi-automatic ORM mapping tool? What is the difference from a fully automatic one?
Hibernate is a fully automatic ORM mapping tool. When using Hibernate to query associated objects or associated collection objects, they can be fetched directly according to the object relationship model, so it is fully automatic. When MyBatis queries associated objects or associated collection objects, the SQL must be written by hand. That is why it is called a semi-automatic ORM mapping tool.
19. One to one and one to many association queries?
<mapper namespace="com.lcb.mapping.userMapper">
    <!-- association: one-to-one query -->
    <select id="getClass" parameterType="int" resultMap="ClassesResultMap">
        select * from class c, teacher t where c.teacher_id = t.t_id and c.c_id = #{id}
    </select>
    <resultMap type="com.lcb.user.Classes" id="ClassesResultMap">
        <!-- column-to-property mapping between the entity class and the table -->
        <id property="id" column="c_id"/>
        <result property="name" column="c_name"/>
        <association property="teacher" javaType="com.lcb.user.Teacher">
            <id property="id" column="t_id"/>
            <result property="name" column="t_name"/>
        </association>
    </resultMap>

    <!-- collection: one-to-many query -->
    <select id="getClass2" parameterType="int" resultMap="ClassesResultMap2">
        select * from class c, teacher t, student s
        where c.teacher_id = t.t_id and c.c_id = s.class_id and c.c_id = #{id}
    </select>
    <resultMap type="com.lcb.user.Classes" id="ClassesResultMap2">
        <id property="id" column="c_id"/>
        <result property="name" column="c_name"/>
        <association property="teacher" javaType="com.lcb.user.Teacher">
            <id property="id" column="t_id"/>
            <result property="name" column="t_name"/>
        </association>
        <collection property="student" ofType="com.lcb.user.Student">
            <id property="id" column="s_id"/>
            <result property="name" column="s_name"/>
        </collection>
    </resultMap>
</mapper>
20. How many ways can mybatis implement one-to-one? How did you do it?
There are joint (join) queries and nested queries. A joint query joins several tables and completes in a single pass by configuring the association node in the resultMap to map the one-to-one class;
A nested query queries one table first, then queries another table according to the foreign key id found in the result. It is also configured through association, but the query against the other table is configured through the select attribute.
21. There are several ways for mybatis to realize one to many. How do you operate it?
There are federated queries and nested queries. Joint query is a joint query of several tables. It can be completed only once by configuring one to many classes in the collection node in resultmap; Nested query is to query a table first and then query data in another table according to the foreign key ID of the result in this table. It is also through configuring collection, but the query of another table is configured through the select node.
22. Does MyBatis support lazy loading? If so, what is its implementation principle?
A: MyBatis only supports lazy loading of association objects (association refers to one-to-one queries) and collection objects (collection refers to one-to-many queries). In the MyBatis configuration file you can configure whether to enable lazy loading: lazyLoadingEnabled=true|false.
Its principle is to use CGLIB to create a proxy of the target object. When a target method is called, the interceptor method runs: for example, when a.getB().getName() is called, the interceptor's invoke() method finds that a.getB() is null. It then separately sends the previously saved SQL that queries the B object, queries B, and calls a.setB(b), so that a's property b has a value; the a.getB().getName() call then completes. This is the basic principle of lazy loading.
Of course, not only MyBatis: almost all frameworks, including Hibernate, support lazy loading on the same principle.
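The interception idea can be sketched with a hand-rolled stand-in (MyBatis actually generates CGLIB/Javassist bytecode proxies; the classes and the "deferred query" below are invented for illustration):

```java
// Sketch of lazy loading: the association is left null until first access,
// at which point the interceptor runs the saved query, sets the property,
// and lets the original call proceed.
public class LazyLoadDemo {

    public static class B {
        public String getName() {
            return "loaded-B";
        }
    }

    public static class A {
        private B b; // the association is not loaded up front

        // stand-in for the proxy interceptor around getB()
        public B getB() {
            if (b == null) {
                b = loadBFromDatabase(); // first access fires the deferred SQL
            }
            return b;
        }

        private B loadBFromDatabase() {
            System.out.println("executing deferred query for B");
            return new B();
        }
    }

    public static void main(String[] args) {
        A a = new A();
        // no query has run yet; it fires on first navigation:
        System.out.println(a.getB().getName()); // → loaded-B
    }
}
```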
23. Mybatis L1 and L2 cache:
1) L1 cache: a HashMap local cache based on PerpetualCache. Its scope is the session; when the session is flushed or closed, all caches in that session are cleared. The L1 cache is enabled by default.
2) The L2 cache has the same mechanism as the L1 cache: by default it also uses PerpetualCache / HashMap storage. The difference is that its scope is the mapper (namespace), and the storage source can be customized, for example Ehcache. The L2 cache is not enabled by default. To enable it, the entity classes placed in the L2 cache must implement the Serializable interface (used to save the object's state), and <cache/> must be configured in the mapping file.
3) As for the cache update mechanism: after a C/U/D operation in a given scope (L1 cache: session; L2 cache: namespace), by default all caches of the select statements under that scope are cleared.
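The session-scope behaviour of the L1 cache can be sketched as follows (illustrative, not the PerpetualCache source; the key format and "database" are invented):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the first-level (session) cache: query results live in a
// HashMap, and any insert/update/delete in the session clears the map.
public class SessionCacheDemo {

    private final Map<String, Object> cache = new HashMap<>();

    public Object selectOne(String key) {
        return cache.computeIfAbsent(key, k -> {
            System.out.println("cache miss, hitting database for " + k);
            return "row-for-" + k; // stand-in for the real query result
        });
    }

    public void update(String sql) {
        // any C/U/D in this session invalidates every cached select
        cache.clear();
    }

    public int cachedEntries() {
        return cache.size();
    }

    public static void main(String[] args) {
        SessionCacheDemo session = new SessionCacheDemo();
        session.selectOne("user:1"); // miss: hits the database
        session.selectOne("user:1"); // hit: no database call
        session.update("update user_t set name = 'x'");
        session.selectOne("user:1"); // miss again after invalidation
    }
}
```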
24. What is the interface binding of mybatis? What are the implementation methods?
Interface binding means defining an interface in MyBatis and binding the methods in the interface to SQL statements. We can then call the interface methods directly, which gives more flexible choices and settings than the methods provided by SqlSession.
There are two implementations of interface binding. One is binding through annotations: add @Select, @Update, and similar annotations, which contain the SQL statements, on the interface methods. The other is binding by writing SQL in XML; in this case the namespace in the XML mapping file must be the full path name of the interface. Annotation binding is used when the SQL is relatively simple, XML binding when it is relatively complex; in general XML binding is the more common choice.
25. What are the requirements when calling the mapper interface of mybatis?
1. The mapper interface method name is the same as the id of the corresponding SQL defined in mapper.xml;
2. The input parameter type of a mapper interface method is the same as the parameterType of the corresponding SQL defined in mapper.xml;
3. The output parameter type of a mapper interface method is the same as the resultType of the corresponding SQL defined in mapper.xml;
4. The namespace in the mapper.xml file is the classpath (fully qualified name) of the mapper interface.
26. What are the ways to write mapper?
The first: the interface implementation class extends SqlSessionDaoSupport. With this method you need to write the mapper interface, the mapper interface implementation class, and the mapper.xml file.
1. Configure the location of mapper.xml in SqlMapConfig.xml:
<mappers>
    <mapper resource="address of the mapper.xml file"/>
    <mapper resource="address of the mapper.xml file"/>
</mappers>
2. Define the mapper interface.
3. The implementation class extends SqlSessionDaoSupport; inside the mapper methods you can perform create, delete, update, and query operations through this.getSqlSession().
4. Spring configuration:
<bean id="" class="mapper interface implementation">
    <property name="sqlSessionFactory" ref="sqlSessionFactory"/>
</bean>
The second: use org.mybatis.spring.mapper.MapperFactoryBean.
1. Configure the location of mapper.xml in SqlMapConfig.xml. If mapper.xml and the mapper interface have the same name and sit in the same directory, this can be omitted:
<mappers>
    <mapper resource="address of the mapper.xml file"/>
    <mapper resource="address of the mapper.xml file"/>
</mappers>
2. Define the mapper interface:
   1. The namespace in mapper.xml is the address (fully qualified name) of the mapper interface.
   2. The method names in the mapper interface are consistent with the ids of the statements defined in mapper.xml.
3. Define it in Spring:
<bean id="" class="org.mybatis.spring.mapper.MapperFactoryBean">
    <property name="mapperInterface" value="mapper interface address"/>
    <property name="sqlSessionFactory" ref="sqlSessionFactory"/>
</bean>
The third: use a mapper scanner.
1. Prepare the mapper.xml file:
The namespace in mapper.xml is the address of the mapper interface; the method names in the mapper interface are consistent with the ids of the statements defined in mapper.xml; if mapper.xml and the mapper interface have the same name, they do not need to be configured in SqlMapConfig.xml.
2. Define the mapper interface:
Note that the mapper.xml file name must be consistent with the mapper interface name, and they must be placed in the same directory.
3. Configure the mapper scanner:
<bean class="org.mybatis.spring.mapper.MapperScannerConfigurer">
    <property name="basePackage" value="mapper interface package address"/>
    <property name="sqlSessionFactoryBeanName" value="sqlSessionFactory"/>
</bean>
4. After using the scanner, get the mapper implementation object from the spring container.
27. Briefly describe the operation principle of mybatis plug-in and how to write a plug-in.
A: MyBatis can only write plug-ins for ParameterHandler, ResultSetHandler, StatementHandler, and Executor. MyBatis uses JDK dynamic proxy to generate proxy objects for the interfaces to be intercepted, implementing interface method interception: whenever a method of one of these four interface objects executes, it enters the interception method, specifically the invoke() method of InvocationHandler; of course, only the methods you specify to be intercepted will be intercepted.
Writing a plug-in: implement MyBatis's Interceptor interface and override the intercept() method, then annotate the plug-in to specify which methods of which interface to intercept. Finally, remember to configure the plug-in you wrote in the configuration file.
ZooKeeper interview questions

  1. What is ZooKeeper?

Zookeeper is an open source distributed coordination service. It is the manager of the cluster. It monitors the status of each node in the cluster and makes the next reasonable operation according to the feedback submitted by the node. Finally, the simple and easy-to-use interface and the system with efficient performance and stable function are provided to users.
Distributed applications can implement functions such as data publish / subscribe, load balancing, naming service, distributed coordination / notification, cluster management, master election, distributed lock and distributed queue based on zookeeper.
Zookeeper ensures the following distributed consistency features:
1. Sequential consistency
2. Atomicity
3. Single system image
4. Reliability
5. Real-time (eventual consistency)
A client's read request can be processed by any machine in the cluster; if a read request registers a listener on a node, that listener is also handled by the ZooKeeper machine the client is connected to. A write request is sent to the other ZooKeeper machines as well, and returns success only after agreement is reached. Therefore, as ZooKeeper's cluster machines increase, read throughput rises while write throughput falls.
Ordering is a very important feature of ZooKeeper. All updates are globally ordered, and each update carries a unique timestamp called the zxid (ZooKeeper Transaction Id). Read requests are only ordered relative to updates, that is, the return result of a read request carries the latest zxid of that ZooKeeper server.
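The global-ordering idea can be sketched in a few lines (illustrative only — real zxids are issued by the leader and have an epoch component; this stand-in just shows the monotonicity that the ordering guarantee relies on):

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch: every update is stamped with a monotonically increasing zxid,
// so any two updates are globally ordered by comparing their stamps.
public class ZxidDemo {

    private final AtomicLong zxid = new AtomicLong();

    public long update(String path) {
        // each write gets a unique, strictly increasing stamp
        return zxid.incrementAndGet();
    }

    public static void main(String[] args) {
        ZxidDemo zk = new ZxidDemo();
        long first = zk.update("/a");
        long second = zk.update("/b");
        System.out.println(first < second); // → true: later update, larger zxid
    }
}
```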

  2. What does ZooKeeper provide?

1. File system 2. Notification mechanism

  3. The ZooKeeper file system

ZooKeeper provides a multi-level node namespace (nodes are called znodes). Unlike an ordinary file system, every one of these nodes can carry associated data, whereas in a file system only file nodes can store data, not directory nodes. To guarantee high throughput and low latency, ZooKeeper maintains this tree-like directory structure in memory. This means ZooKeeper cannot store large amounts of data: the data stored in each node is capped at 1MB.

  4. The ZAB protocol?

The ZAB protocol (ZooKeeper Atomic Broadcast) is an atomic broadcast protocol designed specifically for the distributed coordination service ZooKeeper, and it supports crash recovery.
The ZAB protocol includes two basic modes: crash recovery and message broadcasting.
When the whole ZooKeeper cluster has just started, or when more than half of the servers lose normal communication with the leader because of leader downtime, restart, or network failure, all processes (servers) enter crash-recovery mode. First a new leader is elected; then the follower servers in the cluster synchronize data with the new leader. When more than half of the machines in the cluster have finished synchronizing with the leader, the cluster exits recovery mode and enters message-broadcasting mode, where the leader begins to accept client transaction requests and generate transaction proposals for them.

  5. The four types of znode data nodes

1. PERSISTENT: a persistent node remains on ZooKeeper unless it is manually deleted.
2. EPHEMERAL: a temporary node whose life cycle is bound to the client session. Once the client session becomes invalid (note: session invalidation, not merely the client disconnecting from ZooKeeper), all temporary nodes created by that client are removed.
3. PERSISTENT_SEQUENTIAL: a persistent sequential node has the same basic characteristics as a persistent node, with a sequential attribute added: a monotonically increasing integer maintained by the parent node is appended to the node name.
4. EPHEMERAL_SEQUENTIAL: a temporary sequential node has the same basic characteristics as a temporary node, with the same sequential attribute added: a monotonically increasing integer maintained by the parent node is appended to the node name.

  6. The ZooKeeper watcher mechanism — data change notification

ZooKeeper allows a client to register a watcher on a znode with the server. When specified events on the server trigger the watcher, the server sends an event notification to that client, implementing the distributed notification function; the client then makes business changes according to the watcher notification's state and event type.
Working mechanism:
1. The client registers the watcher. 2. The server processes the watcher. 3. The client calls back the watcher
Summary of watcher features:
1. Once a watcher is triggered, zookeeper will remove it from the corresponding storage, whether it is a server or a client. This design effectively reduces the pressure on the server. Otherwise, for nodes that update very frequently, the server will continue to send event notifications to the client, which is very stressful for both the network and the server.
2. The process that the client executes the watcher callback is a process of serial synchronization.
3. Light weight
3.1 Watcher notification is very simple: it only tells the client that an event occurred, not the specific content of the event.
3.2 When the client registers a watcher with the server, it does not transfer the client's real watcher object entity to the server; it only marks the registration with a boolean attribute in the client request.
4. Watcher events are sent asynchronously from the server to the client. This has an implication: clients and servers communicate through sockets, so because of network delay or other factors the client may learn of the event late. ZooKeeper itself provides an ordering guarantee: after the client hears the event, it will observe that the znode it watches has changed. Therefore we cannot expect to observe every single change of a node when using ZooKeeper: ZooKeeper can only guarantee eventual consistency, not strong consistency.
5. Operations that register watchers: getData, exists, getChildren.
6. Operations that trigger watchers: create, delete, setData.
7. When a client connects to a new server, the watch is triggered along with a session event. While the connection to a server is lost, watches cannot be received; when the client reconnects, previously registered watches are re-registered if necessary. Usually this is completely transparent. Watches may be lost in only one special case: for an exists watch on a znode that has not yet been created, if the znode is created and then deleted during the client's disconnection, the watch event may be lost.
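The one-shot semantics from point 1 above can be sketched with a toy watch table (illustrative only — this is not the ZooKeeper server code, and the path and event string are invented):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;

// Sketch of one-shot watchers: the watcher is removed from the table the
// moment it fires, so the next change on the same znode notifies nobody
// unless the client re-registers.
public class OneShotWatcherDemo {

    private final Map<String, Consumer<String>> watchTable = new HashMap<>();

    public void register(String path, Consumer<String> watcher) {
        watchTable.put(path, watcher);
    }

    public void setData(String path, String data) {
        Consumer<String> w = watchTable.remove(path); // removed before firing
        if (w != null) {
            // the event names only the type and path, not the new data
            w.accept("NodeDataChanged:" + path);
        }
    }

    public int watcherCount() {
        return watchTable.size();
    }

    public static void main(String[] args) {
        OneShotWatcherDemo server = new OneShotWatcherDemo();
        server.register("/app/config", event -> System.out.println("got " + event));
        server.setData("/app/config", "v1"); // fires and consumes the watcher
        server.setData("/app/config", "v2"); // silent: the watcher is gone
        System.out.println(server.watcherCount()); // → 0
    }
}
```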

  7. Client watcher registration implementation

1. Call the getData() / getChildren() / exists() APIs, passing in the watcher object. 2. Mark the request and encapsulate the watcher into a WatchRegistration. 3. Encapsulate it into a Packet object and send the request to the server. 4. After receiving the server's response, register the watcher in the ZKWatchManager for management. 5. Return the request, completing the registration.

  8. Server-side watcher processing implementation

1. The server receives and stores the watcher: on receiving the client request, it processes the request and judges whether a watcher needs to be registered. If so, it pairs the data node's path with the ServerCnxn (ServerCnxn represents one connection between client and server and implements the watcher's process interface, so it can be regarded as a watcher object here) and stores the pair in the watchTable and watch2Paths of the WatcherManager.
2. Watcher trigger: take the NodeDataChanged event triggered by a setData() transaction request received by the server as an example:
2.1 Encapsulate a WatchedEvent: wrap the notification state (SyncConnected), the event type (NodeDataChanged), and the node path into a WatchedEvent object.
2.2 Look up the watcher: find the watcher in the watchTable according to the node path.
2.3 If not found: no client has registered a watcher on this data node.
2.4 If found: extract it and delete the corresponding watcher from watchTable and watch2Paths (which shows that the server-side watcher is one-shot and becomes invalid after a single trigger).
3. Call the process method to trigger the watcher. Here process mainly sends the watcher event notification through the TCP connection represented by the ServerCnxn.

  9. Client watcher callback

The client's SendThread receives the event notification and hands it to the EventThread to call back the watcher. The client-side watcher mechanism is also one-shot: once triggered, the watcher becomes invalid.

  10. The ACL permission control mechanism

UGO(User/Group/Others)
At present, it is used in Linux / Unix file system and is also the most widely used permission control method. Is a coarse-grained file system permission control mode.
ACL (access control list) access control list
It includes three aspects:
Permission mode (scheme)
1. ip: permission control at the granularity of IP address. 2. digest: the most commonly used; configures permissions with a permission ID of the form username:password, so different applications can be distinguished for access control. 3. world: the most open permission control mode, a special digest scheme with only one permission ID, "world:anyone". 4. super: super user.
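A small sketch of how the digest scheme derives the stored permission ID. The assumption (based on the released client's DigestAuthenticationProvider.generateDigest) is that "user:password" is hashed with SHA-1 and the password part is replaced by the Base64-encoded digest:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Base64;

// Sketch of the "digest" scheme's permission-ID derivation, modeled on
// DigestAuthenticationProvider.generateDigest. Treat this as illustrative,
// not a drop-in replacement for the real class.
public class DigestIdSketch {
    public static String generateDigest(String idPassword) {
        try {
            String user = idPassword.split(":", 2)[0]; // "user:password"
            byte[] sha1 = MessageDigest.getInstance("SHA-1")
                    .digest(idPassword.getBytes(StandardCharsets.UTF_8));
            // The stored ACL id keeps the user name but replaces the plain
            // password with the Base64-encoded SHA-1 digest.
            return user + ":" + Base64.getEncoder().encodeToString(sha1);
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("SHA-1 unavailable", e);
        }
    }
}
```

The resulting string is what appears in the ACL instead of the clear-text password.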
Authorized object
An authorization object is the user or specified entity to which a permission is granted, such as an IP address or a machine.
Permission
1. Create: data node creation permission, which allows authorized objects to create child nodes under this znode
2. Delete: child node deletion permission, allowing the authorized object to delete child nodes of the data node. 3. Read: read permission, allowing the authorized object to access the data node and read its data content or child node list. 4. Write: update permission, allowing the authorized object to update the data node's data. 5. Admin: management permission, allowing the authorized object to perform ACL-related settings on the data node.
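In the client these five permissions are plain bit flags. A sketch modeled on ZooDefs.Perms (the concrete values are an assumption based on the released client code):

```java
// Sketch of ZooKeeper's permission bits, modeled after ZooDefs.Perms.
// The exact values are an assumption of this sketch.
public class ZkPerms {
    public static final int READ   = 1 << 0; // read data / list children
    public static final int WRITE  = 1 << 1; // set data
    public static final int CREATE = 1 << 2; // create child nodes
    public static final int DELETE = 1 << 3; // delete child nodes
    public static final int ADMIN  = 1 << 4; // set ACLs
    public static final int ALL = READ | WRITE | CREATE | DELETE | ADMIN;

    // Check whether a granted mask contains all required permission bits.
    public static boolean hasPerm(int granted, int required) {
        return (granted & required) == required;
    }
}
```

Combining bits with `|` is how an ACL grants several permissions at once.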

  1. Chroot feature

The chroot feature was added in version 3.2.0; it allows each client to set a namespace for itself. If a client has chroot set, every operation that client performs on the server is confined to its own namespace.
By setting chroot, a client can be confined to a subtree of the ZooKeeper server. In scenarios where multiple applications share one ZooKeeper cluster, this is very helpful for isolating different applications.

  1. session management

Bucket splitting strategy: similar sessions are managed in the same block to facilitate zookeeper’s isolation of different blocks and unified processing of the same block.
Allocation principle: the “expiration time” of each session
Calculation formula:
ExpirationTime_ = currentTime + sessionTimeout
ExpirationTime = (ExpirationTime_ / ExpirationInterval + 1) * ExpirationInterval. ExpirationInterval is ZooKeeper's session timeout check interval; it defaults to tickTime.
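The two formulas above can be checked with a tiny helper (the method names are mine, not ZooKeeper's):

```java
// Sketch of the session-expiration bucketing formula quoted above.
// Helper names are illustrative, not ZooKeeper's own.
public class SessionBuckets {
    // Raw expiration: currentTime + sessionTimeout.
    public static long rawExpiration(long currentTime, long sessionTimeout) {
        return currentTime + sessionTimeout;
    }

    // Round the raw expiration UP to the next multiple of
    // expirationInterval, so all sessions expiring within the same
    // interval land in the same bucket and can be processed together.
    public static long bucketExpiration(long raw, long expirationInterval) {
        return (raw / expirationInterval + 1) * expirationInterval;
    }
}
```

Sessions whose raw expirations differ slightly still map to the same bucket boundary, which is exactly what makes batch expiration checks cheap.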

  1. Server role

Leader
1. The sole scheduler and handler of transaction requests, ensuring that cluster transactions are processed in order. 2. The scheduler of the various services inside the cluster.
Follower
1. Processes the client's non-transaction requests and forwards transaction requests to the Leader server. 2. Participates in voting on transaction request proposals. 3. Participates in Leader election voting.
Observer
1. A server role introduced in version 3.3.0 that improves the cluster's non-transaction processing capacity without affecting its transaction processing capacity. 2. Processes the client's non-transaction requests and forwards transaction requests to the Leader server. 3. Does not participate in any form of voting.

  1. Working status of zookeeper server

The server has four states: LOOKING, FOLLOWING, LEADING, and OBSERVING.
1. LOOKING: searching for a Leader. In this state the server believes the cluster currently has no Leader, so it needs to enter Leader election. 2. FOLLOWING: the current server's role is Follower. 3. LEADING: the current server's role is Leader. 4. OBSERVING: the current server's role is Observer.

  1. Data synchronization

After the whole cluster completes Leader election, the Learners (the collective term for Followers and Observers) register with the Leader server. Once a Learner server has completed registration with the Leader, the data synchronization phase begins.
Data synchronization process: (all in the way of message transmission)
Learner registers with the Leader
Data synchronization
Synchronization confirmation
Zookeeper’s data synchronization is generally divided into four categories:
1. Direct differential synchronization (diff synchronization) 2. Rollback first and then differential synchronization (TRUNC + diff synchronization) 3. Rollback synchronization only (TRUNC synchronization) 4. Full synchronization (SNAP synchronization)
Before data synchronization, the leader server will complete data synchronization initialization:
peerLastZxid:
extracted from the ACKEPOCH message sent when the Learner server registers; it is the lastZxid, the last zxid processed by the Learner server
minCommittedLog:
the minimum zxid in the Leader server's proposal cache queue committedLog
maxCommittedLog:
the maximum zxid in the Leader server's proposal cache queue committedLog
Direct differential synchronization (DIFF synchronization)
Scenario: peerLastZxid is between minCommittedLog and maxCommittedLog
Rollback first and then differential synchronization (TRUNC + DIFF synchronization)
Scenario: the new Leader server finds that a Learner contains transaction records that it does not have itself; the Learner must first roll back to the zxid that exists on the Leader server and is closest to peerLastZxid
Rollback synchronization only (TRUNC synchronization)
Scenario: peerLastZxid is greater than maxCommittedLog
Full synchronization (SNAP synchronization)
Scenario 1: peerLastZxid is less than minCommittedLog. Scenario 2: the Leader server has no proposal cache queue and peerLastZxid does not equal its lastProcessedZxid
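The scenarios above can be summarized in one decision helper. This is a simplification: the real TRUNC+DIFF case needs the Leader's transaction history, so it is deliberately left out of this sketch:

```java
// Simplified sketch of choosing a ZooKeeper data-sync mode from zxids.
// The TRUNC+DIFF case depends on the Leader's proposal history and is
// intentionally omitted here.
public class SyncModeChooser {
    public enum Mode { DIFF, TRUNC, SNAP }

    public static Mode choose(long peerLastZxid,
                              long minCommittedLog,
                              long maxCommittedLog) {
        if (peerLastZxid > maxCommittedLog) {
            return Mode.TRUNC;  // Learner is ahead of the Leader: roll back
        } else if (peerLastZxid >= minCommittedLog) {
            return Mode.DIFF;   // inside the cache window: send the diff
        } else {
            return Mode.SNAP;   // too far behind: send a full snapshot
        }
    }
}
```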

  1. How does zookeeper ensure the order consistency of transactions?

Zookeeper uses a globally increasing transaction ID (zxid) to identify all proposals. A zxid is a 64-bit number: the upper 32 bits are the epoch, identifying the Leader period (whenever a new Leader is elected, the epoch is incremented), and the lower 32 bits are an increasing counter. When a new proposal is generated, the Leader, following a two-phase-commit-like process, first sends a transaction execution request to the other servers; only if more than half of the machines acknowledge that they can execute it successfully does the commit proceed.
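Splitting a zxid into its epoch and counter is plain bit arithmetic:

```java
// Decompose a 64-bit ZooKeeper zxid into its epoch (upper 32 bits)
// and counter (lower 32 bits), and recombine them.
public class Zxid {
    public static long epoch(long zxid) {
        return zxid >>> 32;            // unsigned shift: upper half
    }

    public static long counter(long zxid) {
        return zxid & 0xFFFFFFFFL;     // mask: lower half
    }

    public static long make(long epoch, long counter) {
        return (epoch << 32) | (counter & 0xFFFFFFFFL);
    }
}
```

Because the epoch occupies the high bits, any zxid from a newer Leader period compares greater than every zxid from an older one, which is what gives proposals a total order.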

  1. Why is there a master in a distributed cluster?

In the distributed environment, some business logic only needs to be executed by one machine in the cluster, and other machines can share the result, which can greatly reduce repeated calculation and improve performance. Therefore, leader election is required.

  1. How to handle ZK node downtime?

Zookeeper itself is a cluster; configuring no fewer than 3 servers is recommended, so that when one node goes down the other nodes continue to provide service. If a Follower goes down, the remaining servers still serve requests; since the data on ZooKeeper has multiple replicas, no data is lost. If the Leader goes down, ZooKeeper elects a new Leader. The rule of a ZK cluster is that as long as more than half of the nodes are healthy, the cluster serves normally; only when so many nodes have failed that half or fewer remain does the whole cluster become unavailable. Therefore a 3-node cluster tolerates one node failure (the Leader can still get 2 votes > 1.5), while a 2-node cluster tolerates none (the Leader would get only 1 vote <= 1).
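The "more than half" rule is easy to encode:

```java
// ZooKeeper's majority (quorum) rule: the cluster serves as long as
// strictly more than half of its members are alive.
public class Quorum {
    public static boolean canServe(int clusterSize, int aliveNodes) {
        return aliveNodes > clusterSize / 2;
    }

    // How many failures an N-node ensemble tolerates: N - (N/2 + 1).
    public static int tolerableFailures(int clusterSize) {
        return clusterSize - (clusterSize / 2 + 1);
    }
}
```

This also shows why even-sized ensembles buy nothing: 4 nodes tolerate only 1 failure, the same as 3 nodes, but cost one more machine.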

  1. The difference between zookeeper load balancing and nginx load balancing

ZK's load balancing is fully controllable, whereas nginx can only adjust weights; anything else that needs to be controllable requires writing your own plug-in. However, nginx's throughput is much larger than ZK's. Which approach to choose should depend on the business.

  1. What are the deployment modes of zookeeper?

Deployment mode: stand-alone mode, pseudo cluster mode and cluster mode.

  1. How many machines are required for the cluster? What are the cluster rules?

The cluster rule is 2n + 1 machines with n > 0, so a minimum of 3.

  1. Does the cluster support dynamic addition of machines?

In fact, it is a horizontal expansion. Zookeeper is not very good in this regard. There are two ways:
Restart all: shut down all ZooKeeper services, modify the configuration, then start them again. This does not affect previous client sessions.
Restart one by one: under the more-than-half-available principle, restarting a single machine does not affect the service the whole cluster provides externally. This is the commonly used way.
Dynamic capacity expansion is supported in version 3.5.
23. Is zookeeper’s watch listening notification to the node permanent? Why?
No. Official statement: a watch event is a one-time trigger; when the data it watches changes, the server sends the change to the clients that set the watch, to notify them.
Why is it not permanent? If the server changed frequently and every change had to be pushed to all listening clients, it would put great pressure on the network and the server. Typically a client executes getData("/nodeA", true); if node A is changed or deleted, the client gets its watch event, but for a subsequent change to node A, if the client has not set the watch event again, it is no longer notified. In practical applications, in many cases our client does not need to know about every change on the server; it just needs the latest data.

  1. What are the Java clients of zookeeper?

Java clients: the zkclient provided alongside ZK, and Apache's open-source Curator.

  1. What is chubby? What do you think compared with zookeeper?

Chubby is Google's; it fully implements the Paxos algorithm and is not open source. Zookeeper is an open-source implementation of Chubby, using the ZAB protocol, a variant of the Paxos algorithm.

  1. Say a few commands commonly used by zookeeper.

Common commands: ls, get, set, create, delete, etc.

  1. The relationship and difference between Zab and Paxos algorithm?

Similarities: 1. Both have a role similar to a Leader process that coordinates the operation of multiple Follower processes. 2. The Leader process waits for more than half of the Followers to give correct feedback before committing a proposal. 3. In the ZAB protocol, each proposal carries an epoch value representing the current Leader period; in Paxos the corresponding name is ballot.
difference:
Zab is used to build a highly available distributed data master-slave system (zookeeper), and Paxos is used to build a distributed consistent state machine system.

  1. Typical application scenarios of zookeeper

Zookeeper is a typical publish / subscribe distributed data management and coordination framework. Developers can use it to publish and subscribe distributed data.
By cross using the rich data nodes in zookeeper and cooperating with the watcher event notification mechanism, it is very convenient to build a series of core functions involved in distributed applications, such as:
1. Data publish/subscribe 2. Load balancing 3. Naming service 4. Distributed coordination/notification 5. Cluster management 6. Master election 7. Distributed lock 8. Distributed queue

  1. Data publish / subscribe

Introduction
Data publish / subscribe system, the so-called configuration center, as its name implies, is that publishers publish data for subscribers to subscribe to.
Objective
Dynamically obtain data (configuration information) to realize centralized management of data (configuration information) and dynamic update of data
Design pattern
Push mode / pull mode
Data (configuration information) properties
1. The amount of data is usually small. 2. The data content will be dynamically updated during operation. 3. All machines in the cluster share the same configuration
Such as machine list information, runtime switch configuration, database configuration information, etc
Implementation based on zookeeper
Data storage: store the data (configuration information) in a data node on ZooKeeper. Data acquisition: at startup, the application reads the data from the ZooKeeper data node during initialization and registers a data-change Watcher on the node. Data change: when the data is changed, update the data of the corresponding ZooKeeper node; ZooKeeper sends a data-change notification to each client, and each client re-reads the changed data after receiving the notification.

  1. load balancing

ZK naming service
Naming service means obtaining the address of a resource or service by a specified name. With ZK you create a global path, and that path can serve as a name pointing to a cluster, the address of a provided service, or a remote object.
Distributed notification and coordination
For system scheduling, the operator sends a notification to change the state of a node through the console, and then ZK sends these changes to all clients of the watcher registered with the node.
For performance reporting: each worker process creates a temporary node under a directory and carries its work progress data, so the summarizing process can monitor changes of the directory's children and obtain a real-time global view of work progress.
ZK naming service (file system)
Naming service means obtaining the address of a resource or service by a specified name. With ZK you create a global, unique path; that path can serve as a name pointing to a cluster, the address of a provided service, or a remote object.
ZK configuration management (file system, notification mechanism)
The program is distributed and deployed on different machines. The configuration information of the program is placed under the znode of ZK. When the configuration changes, that is, when the znode changes, the configuration can be changed by changing the content of a directory node in ZK and notifying each client with the watcher.
Zookeeper cluster management (file system, notification mechanism)
Cluster management cares about two points: whether machines join or exit, and electing a master. For the first point, all machines agree to create a temporary directory node under a parent directory and listen for child-node change messages on that parent. Once a machine goes down, its connection to ZooKeeper breaks, the temporary node it created is deleted, and all other machines are notified that a sibling directory was deleted, so everyone knows that a peer left. A new machine joining works similarly: all machines receive a notification that a new sibling directory was added, and the current node count is updated accordingly. For the second point, change it slightly: have all machines create temporary sequentially numbered directory nodes, and each time elect the machine with the lowest number as the master.
Zookeeper distributed lock (file system, notification mechanism)
With zookeeper’s consistent file system, the problem of locks becomes easier. Lock services can be divided into two categories: one is to keep exclusive, and the other is to control timing.
For the first category, we treat a znode on ZooKeeper as the lock, implemented by creating an ephemeral znode. All clients try to create the /distribute_lock node, and the client that succeeds owns the lock; it deletes the /distribute_lock node it created when finished, releasing the lock. For the second category, /distribute_lock already exists in advance, and all clients create temporary sequentially numbered directory nodes under it; as with master election, the one with the smallest number obtains the lock and deletes its node when finished.
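The second scheme's "smallest sequence number wins" step can be sketched as a pure function over the child-node names (the "lock-NNNNNNNNNN" naming is an assumption of this sketch):

```java
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

// Sketch of the "smallest sequence number holds the lock" rule used by
// ZooKeeper sequential-node locks. Child names like "lock-0000000001"
// are illustrative; a real lock also needs watches on the predecessor.
public class LockChooser {
    // Given the children of /distribute_lock, return the lock holder.
    public static Optional<String> holder(List<String> children) {
        return children.stream()
                .min(Comparator.comparingLong(LockChooser::sequenceOf));
    }

    // Parse the numeric suffix appended by ZooKeeper to sequential nodes.
    private static long sequenceOf(String name) {
        int dash = name.lastIndexOf('-');
        return Long.parseLong(name.substring(dash + 1));
    }
}
```

In a real implementation each client watches only the node immediately before its own, so a release wakes exactly one waiter instead of the whole herd.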
Zookeeper queue management (file system, notification mechanism)
There are two types of queues:
1. Synchronize queues. When all the members of a queue gather, the queue is available. Otherwise, it waits for all the members to arrive. 2. The queue performs queue in and queue out operations in FIFO mode.
First, create temporary directory nodes in the agreed directory and listen to whether the number of nodes is the number we require.
The second category is consistent with the basic principle of the timing-control scenario in the distributed lock service: entries are numbered on enqueue and dequeued in number order. Create PERSISTENT_SEQUENTIAL nodes in an agreed directory; when creation succeeds, the Watcher notifies the waiting queue, and the queue consumes by deleting the node with the lowest sequence number. In this scenario ZooKeeper's znodes serve as message storage: the data stored in a znode is the message content, and the sequence number is the message ID, so messages can be taken out in order. Since the created nodes are persistent, there is no need to worry about losing queue messages.
Dubbo interview questions
1. Why Dubbo?
With the further development of service orientation there are more and more services, and the invocation and dependency relationships between services become more and more complex; this gave birth to service-oriented architecture (SOA).
Therefore, a series of corresponding technologies have been derived, such as service framework encapsulating service provision, service invocation, connection processing, communication protocol, serialization mode, service discovery, service routing, log output and so on.
In this way, a service governance framework for distributed systems emerged, and Dubbo also emerged.
2. What are the layers of Dubbo’s overall architecture design?
Interface service layer (Service): related to business logic; interfaces and implementations are designed according to the business of provider and consumer.
Configuration layer (Config): external configuration interface, centered on ServiceConfig and ReferenceConfig.
Service proxy layer (Proxy): transparent proxies of service interfaces, generating the client stub and server skeleton of a service; centered on ServiceProxy, with ProxyFactory as the extension interface.
Service registration layer (Registry): encapsulates registration and discovery of service addresses, centered on the service URL; extension interfaces are RegistryFactory, Registry and RegistryService.
Routing layer (Cluster): encapsulates routing and load balancing across multiple providers and bridges the registry; centered on Invoker, with Cluster, Directory, Router and LoadBalance as extension interfaces.
Monitoring layer (Monitor): monitors RPC call counts and call durations, centered on Statistics; extension interfaces are MonitorFactory, Monitor and MonitorService.
Remote call layer (Protocol): encapsulates RPC calls, centered on Invocation and Result; extension interfaces are Protocol, Invoker and Exporter.
Information exchange layer (Exchange): encapsulates the request-response pattern, turning synchronous into asynchronous; centered on Request and Response, with Exchanger, ExchangeChannel, ExchangeClient and ExchangeServer as extension interfaces.
Network transport layer (Transport): abstracts Mina and Netty behind a unified interface, centered on Message; extension interfaces are Channel, Transporter, Client, Server and Codec.
Data serialization layer (Serialize): reusable tools; extension interfaces are Serialization, ObjectInput, ObjectOutput and ThreadPool.
3. What communication framework is used by default? Are there any other options?
Netty is used by default; the Mina framework is also an option.
4. Is the service call blocked?
It is blocking by default, but asynchronous calls are possible; calls without a return value can simply be made asynchronously.
Dubbo is based on NIO's non-blocking parallel calls: the client can call multiple remote services in parallel without starting extra threads, with little overhead compared with multithreading. An asynchronous call returns a Future object.
5. What registry do you usually use? Are there any other options?
Zookeeper is recommended. Redis, Multicast and the Simple registry are also available, but not recommended for production.
6. What serialization framework is used by default, and what else do you know?
Hessian serialization is the recommended default; there are also Dubbo serialization, fastjson, and Java's built-in serialization.
7. What is the principle that service providers can implement failure kickout?
Service failure kick-out is based on ZooKeeper's ephemeral (temporary) node principle.
8. Why doesn’t the service launch affect the old version?
Adopt multi version development without affecting the old version.
9. How to solve the problem of long service call chain?
Distributed service tracking can be implemented in combination with Zipkin.
10. What are the core configurations?
Configuration tags and what they configure:
dubbo:service: service configuration
dubbo:reference: reference configuration
dubbo:protocol: protocol configuration
dubbo:application: application configuration
dubbo:module: module configuration
dubbo:registry: registry configuration
dubbo:monitor: monitoring center configuration
dubbo:provider: provider defaults configuration
dubbo:consumer: consumer defaults configuration
dubbo:method: method-level configuration
dubbo:argument: argument configuration
11. What protocol does Dubbo recommend?
dubbo:// (recommended), rmi://, hessian://, http://, webservice://, thrift://, memcached://, redis://, rest://
12. Can a service be directly connected when there are multiple registrations for the same service?
Yes: point-to-point direct connection is possible by modifying the configuration, or a service can be invoked directly via telnet.
13. Draw a flow chart of service registration and discovery?
14. How many schemes are there for Dubbo cluster fault tolerance?
Cluster fault tolerance schemes:
Failover Cluster: automatic failover, automatically retries another server (default)
Failfast Cluster: fail fast, report an error immediately, only one call is made
Failsafe Cluster: fail safe, exceptions are ignored directly
Failback Cluster: automatic recovery after failure, failed requests are recorded and resent periodically
Forking Cluster: call multiple servers in parallel, return as soon as one succeeds
Broadcast Cluster: broadcast the call to all providers one by one; if any reports an error, the call reports an error
15. Dubbo service is degraded and failed. How to retry?
You can set mock="return null" on dubbo:reference. The mock value can also be set to true, with a Mock class then implemented in the same package path as the interface, named by the rule "interface name + Mock"; you implement your own degradation logic in that Mock class.
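A minimal configuration sketch of both variants (the interface names are made up for illustration):

```xml
<!-- Variant 1: return null directly when the call fails -->
<dubbo:reference id="userService"
                 interface="com.example.UserService"
                 mock="return null" />

<!-- Variant 2: delegate to a hand-written mock class; by the naming rule
     Dubbo then looks for com.example.UserServiceMock on the consumer side -->
<dubbo:reference id="orderService"
                 interface="com.example.OrderService"
                 mock="true" />
```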
16. What problems have Dubbo encountered during its use?
The corresponding service cannot be found in the registry: check whether the service implementation class has the @Service annotation. Cannot connect to the registry: check whether the registry IP in the configuration file is correct.
17. How does Dubbo monitor work?
The consumer side will go through the filter chain before initiating the call; When the provider receives the request, it also goes through the filter chain first, and then carries out the real business logic processing.
By default there is a MonitorFilter in the filter chain of both consumer and provider.
1. MonitorFilter sends data to DubboMonitor. 2. DubboMonitor aggregates the data (statistics within 1 min are aggregated by default) and caches it in ConcurrentMap&lt;Statistics, AtomicReference&gt; statisticsMap; then every 1 min a thread pool with three threads (thread name: DubboMonitorSendTimer) calls SimpleMonitorService to traverse and send the statistics in statisticsMap, resetting the current statistics' AtomicReference after each send. 3. SimpleMonitorService inserts the aggregated data into a BlockingQueue (queue capacity 100,000).
4. SimpleMonitorService uses a background thread (thread name: DubboMonitorAsyncWriteLogThread) to write the data in the queue to a file (this thread writes in an endless loop). 5. SimpleMonitorService also uses a thread pool with one thread (thread name: DubboMonitorTimer) to draw charts of the statistical data in the file every 5 min.
18. What design patterns does Dubbo use?
Dubbo framework uses a variety of design patterns in the process of initialization and communication, which can flexibly control class loading, permission control and other functions.
Factory mode: when the provider exports a service, the export method of ServiceConfig is called. ServiceConfig has the field:
private static final Protocol protocol = ExtensionLoader.getExtensionLoader(Protocol.class).getAdaptiveExtension();
There’s a lot of this code in Dubbo. This is also a factory mode, but the JDK SPI mechanism is used to obtain the implementation class. The advantage of this implementation is strong scalability. If you want to extend the implementation, you only need to add a file under the classpath, with zero code intrusion. In addition, like the above adaptive implementation, it can dynamically decide which implementation to call when calling. However, due to the use of dynamic proxy, code debugging will be more troublesome, so it is necessary to analyze the implementation class actually called.
Decorator mode: Dubbo makes extensive use of the decorator pattern in both the startup and invocation phases. Take the provider-side call chain as an example: the concrete chain is built in buildInvokerChain of ProtocolFilterWrapper, which takes the filters annotated with group = provider and sorts them by order. The final call order is:
EchoFilter -> ClassLoaderFilter -> GenericFilter -> ContextFilter -> ExecuteLimitFilter -> TraceFilter -> TimeoutFilter -> MonitorFilter -> ExceptionFilter
More precisely, this is a mix of the decorator and chain-of-responsibility patterns. For example, EchoFilter decides whether the request is an echo test and, if so, returns the content directly, which embodies the chain of responsibility; ClassLoaderFilter merely adds one function around the main one, switching the current thread's class loader, which is a typical decorator.
Observer mode: when a Dubbo provider starts, it interacts with the registry: it first registers its own service, then subscribes to it. Subscribing uses the observer pattern and starts a listener. The registry checks for service updates every 5 seconds; if there is an update, it sends a notify message to the provider, and the provider, upon receiving it, runs the notify method of NotifyListener to execute the listener logic.
Dynamic proxy mode: the adaptive implementation in ExtensionLoader, Dubbo's extension of JDK SPI, is a typical dynamic proxy. Dubbo needs to control implementation classes flexibly, that is, decide at call time which implementation class to invoke based on parameters, so it generates a proxy class to make such calls flexible. The code that generates the proxy class is ExtensionLoader's createAdaptiveExtensionClassCode method; the proxy's main logic is to read the value of a designated parameter from the URL and use it as the key to look up the implementation class.
19. How is the Dubbo configuration file loaded into spring?
When the Spring container starts, it reads Spring's default schemas and Dubbo's custom schema. Each schema corresponds to its own NamespaceHandler, and the NamespaceHandler uses a BeanDefinitionParser to parse the configuration information and convert it into the bean objects to be loaded.
20. What is the difference between Dubbo SPI and Java SPI?
JDK SPI: the standard JDK SPI loads all extension implementations at once. If some extensions are expensive to initialize but never used, resources are wasted; and it cannot load just the single implementation you want.
Dubbo SPI: 1. Extends Dubbo without changing Dubbo's source code. 2. Lazy loading: only the extension implementation you want is loaded at a time. 3. Adds IoC and AOP support for extension points: one extension point can be setter-injected into another. 4. Dubbo's extension mechanism supports third-party IoC containers well; Spring beans are supported by default.
21. Does Dubbo support distributed transactions?
Not currently supported; it can be implemented with the TCC transaction framework.
Introduction: tcc-transaction is an open-source TCC compensating distributed transaction framework.
Git address: https://github.com/changmingx…
TCC transaction avoids its own intrusion into business code through Dubbo’s implicit parameter passing function.
22. Can Dubbo cache the results?
To speed up data access, Dubbo provides declarative result caching to reduce users' workload.
<dubbo:reference cache="true" />
In fact, this is just one more attribute, cache="true", compared with an ordinary reference configuration.
23. How is the service online compatible with the old version?
You can use version number to transition. Multiple services with different versions are registered in the registry. Services with different version numbers do not refer to each other. This is similar to the concept of service grouping.
24. What packages does Dubbo have to rely on?
Dubbo must rely on JDK, others are optional.
25. What can the Dubbo telnet command do?
After Dubbo service is released, we can use telnet command for debugging and management. Dubbo 2.0.5 and above provide ports and support telnet commands
Connect the service telnet localhost 20880. / / press enter to enter the Dubbo command mode.
View service list
dubbo>ls com.test.TestService
dubbo>ls com.test.TestService create delete query
ls (list services and methods): displays the list of services. ls -l: displays the list of services with details. ls XxxService: displays the list of a service's methods. ls -l XxxService: displays the list of a service's methods with details.
26. Does Dubbo support service degradation?
Yes. Set mock="return null" on dubbo:reference. The mock value can also be set to true, with a Mock class implemented in the same package path as the interface, named by the rule "interface name + Mock"; then implement your own degradation logic in the Mock class.
27. How does Dubbo stop gracefully?
Dubbo completes graceful shutdown through the JDK's shutdown hook. Therefore, if a forced kill command such as kill -9 pid is used, graceful shutdown is not executed; it only runs on a plain kill pid.
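A minimal sketch of the plain JDK shutdown-hook mechanism Dubbo builds on (this is not Dubbo's own code):

```java
// Minimal demonstration of the JDK shutdown-hook mechanism behind Dubbo's
// graceful shutdown. A hook registered this way runs on normal JVM exit
// (or a plain `kill pid`), but NOT on `kill -9`.
public class ShutdownHookDemo {
    public static boolean registerAndUnregister() {
        Thread hook = new Thread(() ->
                System.out.println("releasing resources before exit"));
        Runtime.getRuntime().addShutdownHook(hook);
        // removeShutdownHook returns true only if the hook was previously
        // registered, which proves the registration took effect.
        return Runtime.getRuntime().removeShutdownHook(hook);
    }
}
```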
28. What is the difference between Dubbo and dubbox?
Dubbox is an extension project made by Dangdang on top of Dubbo after Dubbo paused maintenance, for example adding RESTful service invocation and updating open-source components.
29. What is the difference between Dubbo and spring cloud?
Comparing the elements of a microservice architecture across dimensions, see what support Spring Cloud and Dubbo each provide.
Dubbo vs Spring Cloud:
Service registry: ZooKeeper / Spring Cloud Netflix Eureka
Service call mode: RPC / REST API
Service gateway: none / Spring Cloud Netflix Zuul
Circuit breaker: imperfect / Spring Cloud Netflix Hystrix
Distributed configuration: none / Spring Cloud Config
Service tracing: none / Spring Cloud Sleuth
Message bus: none / Spring Cloud Bus
Data stream: none / Spring Cloud Stream
Batch tasks: none / Spring Cloud Task
......
The microservice architecture built with Dubbo is like assembling a PC: we have great freedom of choice at every step, but the result may still fail because of one poor-quality memory stick, which is always worrying; of course, if you are an expert, none of this is a problem. Spring Cloud is like a brand-name machine: with Spring Source's integration, a lot of compatibility testing has been done, guaranteeing higher stability; but if you want to use something other than the stock components, you need a sufficient understanding of its internals.
30. Do you know any other distributed frameworks?
In addition, there are Spring's Spring Cloud, Facebook's Thrift, Twitter's Finagle, etc.
Elasticsearch interview questions
1. How much do you know about elasticsearch? Tell me about your company's ES cluster architecture, index data size, number of shards, and some tuning methods.
Interviewer: wants to know the ES usage scenarios and scale the candidate has worked with, and whether they have done large-scale index design, planning and tuning. Answer: answer truthfully based on your own practice. For example: the ES cluster architecture has 13 nodes; there are 20+ indices organized by channel; indices roll daily by date, adding 20+ new indices per day; each index has 10 shards; 100 million+ documents are added per day; each channel's daily index size is kept within 150 GB.
Index level tuning only:
1.1 optimization in design stage
1. Based on business growth requirements, create indices from date-based templates and roll them with the rollover API;
2. Use aliases for index management;
3. Run a force_merge on indices in the early morning each day to free up space;
4. Store hot data on SSD to improve retrieval efficiency; regularly shrink cold data to reduce storage;
5. Use Curator for index lifecycle management;
6. Set analyzers (tokenizers) reasonably, and only for the fields that need analysis;
7. In the mapping stage, fully consider each field's attributes: whether it needs to be indexed, whether it needs to be stored, etc.
1.2. Write tuning
1. Set the number of replicas to 0 before writing;
2. Before writing, disable the refresh mechanism by setting refresh_interval to -1;
3. During writing, use bulk batch writes;
4. After writing, restore the number of replicas and the refresh interval;
5. Try to use automatically generated document IDs.
1.3 query tuning
1. Disable wildcard queries;
2. Avoid terms queries with batches of hundreds of values;
3. Make full use of the inverted index mechanism; use the keyword type wherever possible;
4. When the data volume is large, narrow the index set by time range before retrieval;
5. Set up a reasonable routing mechanism.
1.4 other tuning
Deployment optimization, business optimization, etc.
With the answers above, the interviewer has basically evaluated your prior practice or operations experience.
2. What is the inverted index of elasticsearch
Interviewer: wants to test your understanding of basic concepts. Answer: explain it clearly.
Traditional retrieval scans each article one by one to find the positions of the keyword. An inverted index, built through an analysis (word segmentation) strategy, maintains a mapping from terms to the articles that contain them. This dictionary plus mapping table is the inverted index. With an inverted index, a term lookup approaches O(1) time complexity, greatly improving retrieval efficiency.
Academic definition:
An inverted index, in contrast to "which words does an article contain", starts from the words and records which documents each word appears in. It consists of two parts: the term dictionary and the postings (inverted) list.
Bonus: the underlying implementation of the term dictionary is based on the FST (finite state transducer) data structure, which Lucene has used since version 4. FST has two advantages:
1. Small space footprint: by sharing word prefixes and suffixes within the dictionary, storage space is compressed;
2. Fast query speed: O(len(str)) query time complexity.
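The term-to-documents mapping described above can be sketched in a few lines of Java. This is a toy model for illustration only; Lucene's real term dictionary is FST-based and its postings lists are far more compact:

```java
import java.util.*;

// Toy inverted index: term -> sorted set of document IDs (postings list).
// A sketch of the concept only; Lucene's real dictionary uses an FST.
public class InvertedIndex {
    private final Map<String, TreeSet<Integer>> postings = new HashMap<>();

    // "Analyze" a document by splitting on whitespace and record each term.
    public void addDocument(int docId, String text) {
        for (String term : text.toLowerCase().split("\\s+")) {
            postings.computeIfAbsent(term, t -> new TreeSet<>()).add(docId);
        }
    }

    // A single hash lookup returns the postings list for a term.
    public Set<Integer> search(String term) {
        return postings.getOrDefault(term.toLowerCase(), new TreeSet<>());
    }
}
```

For example, after indexing document 1 as "quick brown fox" and document 2 as "quick red fox", searching for "fox" returns the postings list {1, 2} without scanning either document.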
3. What to do if there is too much elasticsearch index data? How do you optimize and deploy?
Interviewer: wants to assess your ability to operate on large data volumes. Answer: index data planning should be done well up front, the so-called "design first, code later", to avoid a sudden surge of data exceeding the cluster's processing capacity and impacting online retrieval or other business. How to tune? As mentioned in question 1, here are some details:
3.1 dynamic index level
Create indices by rolling based on template + time + rollover API. For example, in the design stage, define the blog index template in the form blog_index_timestamp, with data added daily.
The benefit: when data volume surges, the document count of any single shard stays away from the hard limit of about 2^31 - 1 documents, even as index storage reaches TB+ or more.
Once a single index becomes too large, storage and other risks follow, so this should be considered and avoided as early as possible.
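As a small illustration of the date-template naming above (the "blog_index_<date>" prefix and the date format here are assumptions for the example, not the exact template from the text):

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

// Sketch of date-template naming for rolling indices; the pattern
// "blog_index_<yyyy.MM.dd>" is assumed for illustration.
public class RollingIndexName {
    private static final DateTimeFormatter FMT = DateTimeFormatter.ofPattern("yyyy.MM.dd");

    // Build the daily index name a write for the given day would target.
    public static String indexFor(String prefix, LocalDate day) {
        return prefix + "_" + day.format(FMT);
    }
}
```

Because each day's writes land in their own index, old indices can be force-merged, shrunk, or dropped wholesale without touching current data.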
3.2 storage level
Hot and cold data are stored separately: hot data is, for example, the last 3 days' or week's data; the rest is cold data. Cold data is no longer written to, so periodic force_merge plus shrink operations can be considered to save storage space and improve retrieval efficiency.
3.3 deployment level
When there was no prior planning, this is an emergency strategy. ES supports dynamic scaling, so dynamically adding machines can relieve cluster pressure. Note: if the master nodes were planned reasonably, nodes can be added dynamically without restarting the cluster.
4. How does elasticsearch implement master election
Interviewer: wants to test understanding of ES cluster internals rather than just the business level. Answer: preconditions:
1. Only master-eligible nodes (node.master: true) can become the master node.
2. The purpose of discovery.zen.minimum_master_nodes is to prevent split-brain.
Having read various online analyses and source-code walkthroughs and checked the code: the core entry point is findMaster. If a master node is elected successfully, the corresponding master is returned; otherwise null is returned. The election process is roughly as follows:
Step 1: confirm that the number of master-eligible candidates reaches the quorum, i.e., the discovery.zen.minimum_master_nodes value set in elasticsearch.yml;
Step 2: compare: first determine master eligibility; master-eligible candidates rank first. If both nodes are master candidates, the one with the smaller ID becomes the master. Note that the ID here is of type string.
Aside: how to get the node ID:
GET /_cat/nodes?v&h=ip,port,heapPercent,heapMax,id,name
ip port heapPercent heapMax id name
5. Describe in detail the process of elasticsearch indexing documents
Interviewer: wants to test understanding of ES internals rather than just the business level. Answer: "indexing a document" here should be understood as the process of writing a document to ES and creating the index. Document writing includes single-document writes and bulk writes; only the single-document write flow is explained here.
Remember this figure from the official documentation.
Step 1: the client sends a write request to one node of the cluster. (If no routing/coordinating node is specified, the node receiving the request acts as the coordinating node.)
Step 2: after receiving the request, node 1 uses the document _id to determine that the document belongs to shard 0, so the request is forwarded to another node, say node 3, because the primary of shard 0 is allocated on node 3.
Step 3: node 3 performs the write on the primary shard. If it succeeds, it forwards the request in parallel to the replica shards on node 1 and node 2 and waits for the results. Once all replica shards report success, node 3 reports success to the coordinating node (node 1), and node 1 reports success to the client.
If the interviewer asks: how is the document's shard determined in step 2? Answer: by the routing algorithm, which computes the target shard ID from the routing value and the document ID:
shard = hash(_routing) % (num_of_primary_shards)
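A minimal Java sketch of this routing formula (elasticsearch actually hashes with murmur3; String.hashCode stands in here, so the computed shard numbers will not match a real cluster's):

```java
// Sketch of shard = hash(_routing) % num_of_primary_shards.
// Real ES uses murmur3; String.hashCode is a stand-in for illustration.
public class ShardRouter {
    public static int shardFor(String routing, int numPrimaryShards) {
        // Math.floorMod keeps the result non-negative even for negative hashes.
        return Math.floorMod(routing.hashCode(), numPrimaryShards);
    }
}
```

This formula is also why the number of primary shards cannot be changed after index creation: changing the divisor would route existing documents to different shards.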
6. Describe the elasticsearch search process in detail?
Interviewer: wants to test understanding of ES search internals rather than just the business level. Answer: search is split into two phases: query then fetch. The purpose of the query phase is to locate documents without retrieving them. The steps are:
1. Suppose an index has 5 primary shards with 1 replica each, 10 shards in total; a request hits either the primary or the replica of each shard.
2. Each shard runs the query locally and puts the results into a local sorted priority queue.
3. The results of step 2 are sent to the coordinating node, which merges them into a globally sorted list.
The purpose of the fetch phase is to retrieve the data: the coordinating node fetches the full documents and returns them to the client.
7. When elasticsearch is deployed, what are the optimization methods for Linux settings
Interviewer: wants to assess ES cluster operations and maintenance capability. Answer:
1. Disable memory swapping (swap);
2. Set heap memory to min(half of node memory, 32 GB);
3. Raise the maximum number of file handles;
4. Adjust thread pool and queue sizes according to business needs;
5. Disk RAID mode: use RAID 10 where conditions allow, to increase single-node performance and tolerate single-disk storage failure.
8. What is the internal structure of Lucene?
Interviewer: wants to gauge the breadth and depth of your knowledge. Answer:
Lucene has two core processes, indexing and search, covering index creation, index maintenance, and querying. You can expand on this outline.
Recently I interviewed with some companies and was asked about elasticsearch and search engines; the following are my summarized answers.
9. How does elasticsearch achieve master election?
1. Elasticsearch's master election is the responsibility of the ZenDiscovery module, which mainly includes Ping (nodes discover each other through this RPC) and Unicast (the unicast module contains a host list controlling which nodes need to be pinged);
2. All master-eligible nodes (node.master: true) are sorted by nodeId in dictionary order. In each election, every node ranks the nodes it knows about and votes for the first one (position 0), tentatively considering it the master node.
3. If the number of votes for a node reaches a quorum (the number of master-eligible nodes n/2 + 1) and the node also votes for itself, that node is the master. Otherwise, re-election continues until the condition is met.
4. Supplement: the master node's responsibilities mainly cover cluster, node and index management, not document-level management; data nodes can have the HTTP function turned off.
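The election rule in points 2 and 3 can be sketched as follows. This is a deliberate simplification under stated assumptions: real ZenDiscovery also involves pinging, join requests and cluster-state publishing, none of which appear here:

```java
import java.util.*;

// Sketch of the Zen-style election described above: sort master-eligible
// nodes by nodeId and pick the first, valid only when a quorum of
// candidates (n/2 + 1) is visible.
public class MasterElection {
    // Returns the elected nodeId, or null when the quorum is not met.
    public static String elect(List<String> visibleCandidateIds, int totalCandidates) {
        int quorum = totalCandidates / 2 + 1;  // minimum_master_nodes
        if (visibleCandidateIds.size() < quorum) {
            return null;                       // not enough votes: no master
        }
        return Collections.min(visibleCandidateIds); // smallest nodeId wins
    }
}
```

With three candidates, a node that can only see one other candidate cannot elect a master, which is exactly the quorum check that prevents split-brain.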
10. Among the nodes in elasticsearch (say 20 in total), if 10 choose one master and the other 10 choose another master, what happens?
1. When the number of master candidates is at least 3, split-brain can be prevented by setting the minimum number of votes (discovery.zen.minimum_master_nodes) to more than half of all candidate nodes;
2. When there are only two candidates, keep only one node master-eligible and let the other serve as a data node, avoiding split-brain.
11. When the client connects to the cluster, how to select a specific node to execute the request?
1. The TransportClient uses the transport module to connect remotely to an elasticsearch cluster. It does not join the cluster; it simply obtains one or more initial transport addresses and communicates with them in a round-robin manner.
12. Describe in detail the process of elasticsearch indexing documents.
By default, the coordinating node uses the document ID in the calculation (custom routing is also supported) in order to route the document to the appropriate shard:
shard = hash(document_id) % (num_of_primary_shards)
1. When the node holding the shard receives the coordinating node's request, it writes the request into the memory buffer and then periodically (every 1 second by default) writes it to the filesystem cache. The move from the memory buffer to the filesystem cache is called refresh;
2. Of course, data that is only in the memory buffer or filesystem cache can be lost. ES ensures data reliability through the translog mechanism: after a request is received, it is also written to the translog, and when the data in the filesystem cache is flushed to disk, the translog is cleared. This process is called flush;
3. During flush, the in-memory buffer is cleared and its content is written to a new segment; the segment is fsynced, a new commit point is created and the content is persisted to disk; the old translog is deleted and a new one is started;
4. Flush is triggered on a timer (every 30 minutes by default) or when the translog becomes too large (512 MB by default);
Supplement: about Lucene segments:
1. Lucene index is composed of multiple segments. The segment itself is a fully functional inverted index.
2. Segments are immutable, allowing Lucene to incrementally add new documents to the index without rebuilding the index from scratch.
3. For each search request, all segments in the index are searched, and each segment consumes CPU clock cycles, file handles, and memory. This means that the more segments, the lower the search performance.
4. To solve this problem, elasticsearch will merge small segments into a larger segment, submit new merged segments to disk, and delete those old segments.
13. Describe in detail the process of updating and deleting documents in elasticsearch.
1. Deletion and update are also write operations, but documents in elasticsearch are immutable, so they cannot be deleted or changed in place to reflect their changes;
2. Each segment on disk has a corresponding .del file. When a delete request is sent, the document is not really deleted but marked as deleted in the .del file. It can still match a query, but is filtered out of the results. When segments are merged, documents marked as deleted in the .del file are not written to the new segment.
3. When a new document is created, elasticsearch assigns it a version number. On update, the old version of the document is marked as deleted in the .del file and the new version is indexed into a new segment. The old version can still match a query, but is filtered out of the results.
14. Describe the elasticsearch search process in detail.
1. Search is performed as a two-stage process, which we call query then fetch;
2. In the initial query phase, the query is broadcast to one shard copy (primary or replica) of every shard in the index. Each shard executes the search locally and builds a priority queue of matching documents of size from + size.
PS: the search consults the filesystem cache, but some data may still be in the memory buffer, so search is near-real-time.
3. Each shard returns the IDs and sort values of all documents in its own priority queue to the coordinating node, which merges them into its own priority queue to produce a globally sorted result list.
4. Next comes the fetch phase. The coordinating node identifies which documents need to be retrieved and submits multiple GET requests to the relevant shards. Each shard loads and enriches the documents and returns them to the coordinating node. Once all documents are retrieved, the coordinating node returns the results to the client.
5. Supplement: the query-then-fetch search type scores document relevance using each shard's local statistics, which may be inaccurate when document counts are small. DFS query then fetch adds a pre-query pass that gathers term and document frequencies; the score is more accurate, but performance is worse.
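The coordinating node's query-phase merge in step 3 can be sketched like this (a simplification: real ES merges per-shard priority queues rather than concatenating full lists, and carries sort values beyond a single score):

```java
import java.util.*;

// Sketch of the query-phase merge: each shard returns its local top
// (from + size) {docId, score} pairs; the coordinator merges them into
// one globally sorted list of IDs that then drives the fetch phase.
public class QueryThenFetch {
    // Each hit is {String docId, Double score}; higher score ranks first.
    public static List<String> globalTopK(List<List<Object[]>> perShardHits, int k) {
        List<Object[]> all = new ArrayList<>();
        for (List<Object[]> shardHits : perShardHits) {
            all.addAll(shardHits);          // only IDs + scores travel, not documents
        }
        all.sort((a, b) -> Double.compare((Double) b[1], (Double) a[1]));
        List<String> ids = new ArrayList<>();
        for (int i = 0; i < Math.min(k, all.size()); i++) {
            ids.add((String) all.get(i)[0]); // these IDs are fetched in phase two
        }
        return ids;
    }
}
```

Note that only lightweight IDs and sort values cross the network in the query phase; the full documents are pulled only for the final global top-k, which is the whole point of splitting query from fetch.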
15. In elasticsearch, how do you find the corresponding inverted index according to a word?
See: Lucene index file format (1) and Lucene index file format (2).
16. When elasticsearch is deployed, what are the optimization methods for Linux settings?
1. 64 GB memory machines are ideal, but 32 GB and 16 GB machines are also common. Less than 8 GB is counterproductive.
2. If you want to choose between faster CPUs and more cores, it’s better to choose more cores. The extra concurrency provided by multiple cores is far better than a slightly faster clock frequency.
3. If you can afford SSDs, they far outperform any spinning media and improve both query and indexing performance.
4. Even if data centers are close at hand, avoid clustering across multiple data centers. It is absolutely necessary to avoid clusters spanning large geographical distances.
5. Make sure that the JVM running your application is exactly the same as the JVM of the server. In several places in elasticsearch, Java’s native serialization is used.
6. By setting gateway.recover_after_nodes, gateway.expected_nodes and gateway.recover_after_time, you can avoid excessive shard shuffling when the cluster restarts, which may shorten data recovery from hours to seconds.
7. Elasticsearch is configured to use unicast discovery by default to prevent nodes from inadvertently joining the cluster. Only nodes running on the same machine will automatically form a cluster. It is better to use unicast instead of multicast.
8. Do not arbitrarily modify the size of the garbage collector (CMS) and each thread pool.
9. Set the heap to half of your memory (but no more than 32 GB!) via the ES_HEAP_SIZE environment variable, leaving the other half to Lucene (the filesystem cache).
10. Memory swapping to disk is fatal to server performance. If memory is swapped to disk, a 100-microsecond operation may become 10 milliseconds; add up all those delays and it is easy to see how terrible swapping is for performance.
11. Lucene uses a large number of files. At the same time, elasticsearch uses a large number of sockets to communicate between nodes and with HTTP clients. All of this requires enough file descriptors, so you should raise your file descriptor limit to a large value, such as 64,000.
Supplement: performance improvement method in index stage
1. Use batch requests and resize them: 5 – 15 MB of data per batch is a good starting point.
2. Storage: using SSDs
3. Segments and merging: elasticsearch throttles merging to 20 MB/s by default, which should be a good setting for mechanical disks. With SSDs, consider increasing it to 100-200 MB/s. If you are doing a bulk import and don't care about search at the time, you can turn off merge throttling entirely. In addition, you can increase
index.translog.flush_threshold_size from the default 512 MB to a larger value, such as 1 GB, which lets larger segments accumulate in the translog before a flush is triggered.
4. If your search results don't need near-real-time accuracy, consider setting each index's index.refresh_interval to 30s.
5. If you are doing a bulk import, consider setting index.number_of_replicas: 0 to turn off replicas.
17. For GC, what should I pay attention to when using elasticsearch?
1. See: https://elasticsearch.cn/arti…
2. The index of the inverted (term) dictionary must be resident in memory and cannot be GC'd; monitor the growth trend of segment memory on data nodes.
3. Set all kinds of caches, such as field cache, filter cache, indexing cache and bulk queue, to reasonable sizes, and check whether the heap is sufficient in the worst case, that is, when all the caches are full, is there still heap space for other tasks? Avoid "self-deceiving" ways of freeing memory such as clear cache.
4. Avoid searches and aggregations that return a large number of result sets. Scenarios that really need to pull a large amount of data can be implemented using the scan & scroll API.
5. Cluster stats are resident in memory and cannot scale horizontally; super-large clusters can be split into multiple clusters connected through tribe nodes.
6. To know whether the heap is sufficient, you must combine the actual application scenario with continuous monitoring of the cluster's heap usage.
18. How does elasticsearch realize the aggregation of large amounts of data (hundreds of millions of magnitude)?
The first approximate aggregation provided by elasticsearch is the cardinality metric. It provides the cardinality of a field, that is, the number of distinct or unique values, and is based on the HLL (HyperLogLog) algorithm: HLL hashes the input, then estimates the probability from the bits of the hash result to obtain the cardinality. It features configurable precision, which controls memory usage (more precision = more memory); very high accuracy on small data sets; and fixed, configurable memory usage for deduplication: whether there are thousands or billions of unique values, memory usage depends only on the precision you configure.
19. Under concurrency, how does elasticsearch ensure consistent reading and writing?
1. Optimistic concurrency control can be used through the version number to ensure that the new version will not be overwritten by the old version, and the application layer will handle specific conflicts;
2. In addition, for write operations, the consistency level supports quorum/one/all and defaults to quorum, that is, writes are allowed only when most shards are available. Note that even if most shards are available, a write to a replica may still fail for network or other reasons; the replica is then considered faulty, and the shard is rebuilt on a different node.
3. For read operations, you can set replication to sync (the default), so the operation returns only after both the primary and replica shards have completed; if replication is set to async, you can also set the request parameter _preference to primary to query the primary shard, ensuring the document is the latest version.
20. How to monitor elasticsearch cluster status?
Marvel allows you to easily monitor elasticsearch through kibana. You can view the health status and performance of your cluster in real time, and analyze the past cluster, index and node indicators.
21. Introduce the overall technical architecture of your e-commerce search.
22. Introduce your personalized search scheme?
See: implementing personalized search based on word2vec and elasticsearch.
23. Do you know the dictionary tree?
Common dictionary data structures are as follows:
The core idea of the trie is to trade space for time, using common string prefixes to reduce query-time overhead and improve efficiency. It has three basic properties:
1. The root node does not contain characters. Except for the root node, each node contains only one character.
2. From the root node to a node, the characters passing through the path are connected to the corresponding string of the node.
3. All child nodes of each node contain different characters.
1. The number of nodes in level i of the trie can reach 26^i. To save space, we can use a dynamic linked list or an array to simulate dynamic allocation; the space cost will not exceed the number of words × the word length.
2. Implementation: open an alphabet-sized array for each node, hang a linked list off each node, or represent the tree using left-child right-sibling;
3. For a Chinese dictionary tree, the children of each node are stored in a hash table, so no space is wasted and lookups keep the O(1) hash complexity.
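The hash-table-children variant suggested in point 3 can be written as a minimal Java trie:

```java
import java.util.*;

// Minimal trie with HashMap children per node, the variant suggested
// above for large alphabets such as Chinese.
public class Trie {
    private final Map<Character, Trie> children = new HashMap<>();
    private boolean isWord;

    public void insert(String word) {
        Trie node = this;
        for (char c : word.toCharArray()) {
            node = node.children.computeIfAbsent(c, k -> new Trie());
        }
        node.isWord = true; // mark the end of a complete word
    }

    public boolean contains(String word) {
        Trie node = find(word);
        return node != null && node.isWord;
    }

    public boolean hasPrefix(String prefix) {
        return find(prefix) != null;
    }

    // Walk the path spelled by s; null when the path breaks off.
    private Trie find(String s) {
        Trie node = this;
        for (char c : s.toCharArray()) {
            node = node.children.get(c);
            if (node == null) return null;
        }
        return node;
    }
}
```

Each lookup costs one hash access per character, that is, time proportional to the key length regardless of how many words the dictionary holds.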
24. How is spelling correction implemented?
1. Spelling correction is based on edit distance. Edit distance is a standard method representing the minimum number of insertion, deletion and substitution operations needed to convert one string into another;
2. Computing the edit distance: for example, to compute the edit distance between batyu and beauty, first create a 7 × 8 table (batyu has length 5, beauty has length 6, plus 2 each), then fill in the base row and column. Every other cell takes the minimum of the following three values:
the upper-left value if the topmost character equals the leftmost character, otherwise the upper-left value + 1 (0 for cell (3,3)); the left value + 1 (2 for cell (3,3)); the upper value + 1 (2 for cell (3,3)).
Finally, the value in the lower-right corner is the edit distance: 3.
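The grid computation above is the classic dynamic-programming edit distance; a compact Java version (the batyu/beauty example from the text indeed yields 3):

```java
// Dynamic-programming edit (Levenshtein) distance, matching the grid
// above: each cell is the minimum of upper-left (+1 on mismatch),
// left + 1, and upper + 1.
public class EditDistance {
    public static int distance(String a, String b) {
        int[][] dp = new int[a.length() + 1][b.length() + 1];
        for (int i = 0; i <= a.length(); i++) dp[i][0] = i; // delete all of a
        for (int j = 0; j <= b.length(); j++) dp[0][j] = j; // insert all of b
        for (int i = 1; i <= a.length(); i++) {
            for (int j = 1; j <= b.length(); j++) {
                int sub = dp[i - 1][j - 1] + (a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1);
                dp[i][j] = Math.min(sub, Math.min(dp[i - 1][j] + 1, dp[i][j - 1] + 1));
            }
        }
        return dp[a.length()][b.length()];
    }
}
```

distance("batyu", "beauty") returns 3, as in the worked grid above.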
For spelling correction, we consider constructing a metric space where any relationship satisfies the following three basic conditions:
d(x, y) = 0: if the distance between x and y is 0, then x = y; d(x, y) = d(y, x): the distance from x to y equals the distance from y to x (symmetry); d(x, y) + d(y, z) >= d(x, z): the triangle inequality.
1. By the triangle inequality, if another word B is within distance n of the query, then B's distance to a word A lies between d - n and d + n, where d is the distance from the query to A.
2. The BK-tree is constructed as follows: each node may have any number of children, and each edge carries a value representing an edit distance; the label n on the edge from a child to its parent means their edit distance is exactly n. For example, we have a tree:
The parent node of the tree is "book", with two children "cake" and "books". The edge label from "book" to "books" is 1, and from "book" to "cake" it is 4. After constructing the tree from the dictionary, whenever you want to insert a new word, compute the edit distance d between the word and the root, and look for the edge labeled d(newword, root). Recursively compare with each child until there is no matching child, then create a new child node there and store the new word. For example, to insert "boo" into the tree above, we first check the root, follow the edge with d("book", "boo") = 1 to reach the child "books", compute d("books", "boo") = 2, and insert the new word under "books" with an edge labeled 2.
3. Querying similar words works as follows: compute the edit distance d between the word and the root, then recursively search every child whose edge label lies between d - n and d + n (inclusive). If the distance between a visited node and the search word is at most n, return that node and continue the query. For example, for the input "cape" with a maximum tolerated distance of 1, first compute d("book", "cape") = 4 from the root, then follow edges labeled 3 to 5. That reaches the "cake" node; d("cake", "cape") = 1 satisfies the condition, so "cake" is returned, and the search continues from "cake" along edges labeled 0 to 2, checking nodes such as "cape" and "cart" where present, finally yielding all results within distance 1 of "cape".
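The insert and query procedures above can be sketched as a small BK-tree in Java (a simplified illustration; the edit-distance helper is the plain dynamic-programming version):

```java
import java.util.*;

// BK-tree sketch following the flow described above: the edge from a
// child to its parent is labeled with their exact edit distance, and a
// query only descends edges labeled within [d - n, d + n].
public class BKTree {
    private final String word;
    private final Map<Integer, BKTree> children = new HashMap<>();

    public BKTree(String root) { this.word = root; }

    public void insert(String newWord) {
        int d = distance(newWord, word);
        BKTree child = children.get(d);
        if (child == null) children.put(d, new BKTree(newWord)); // no edge d: new leaf
        else child.insert(newWord);                              // recurse down edge d
    }

    // Collect all words within edit distance n of the query.
    public List<String> query(String q, int n) {
        List<String> out = new ArrayList<>();
        int d = distance(q, word);
        if (d <= n) out.add(word);
        for (int label = Math.max(0, d - n); label <= d + n; label++) {
            BKTree child = children.get(label); // triangle-inequality pruning
            if (child != null) out.addAll(child.query(q, n));
        }
        return out;
    }

    // Plain dynamic-programming edit distance.
    static int distance(String a, String b) {
        int[][] dp = new int[a.length() + 1][b.length() + 1];
        for (int i = 0; i <= a.length(); i++) dp[i][0] = i;
        for (int j = 0; j <= b.length(); j++) dp[0][j] = j;
        for (int i = 1; i <= a.length(); i++)
            for (int j = 1; j <= b.length(); j++)
                dp[i][j] = Math.min(
                        dp[i - 1][j - 1] + (a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1),
                        Math.min(dp[i - 1][j] + 1, dp[i][j - 1] + 1));
        return dp[a.length()][b.length()];
    }
}
```

With the tree from the text (root "book", children "books" and "cake", plus "boo" under "books"), querying "cape" with tolerance 1 visits only the subtree under the edge labeled 4 and returns "cake".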
Memcached interview questions
1. What is memcached and what is its function?
Memcached is an open-source, high-performance, memory-caching software. From the name, "mem" means memory and "cache" means cache. Role of memcached: by temporarily storing all kinds of database data in a pre-planned memory space, it reduces direct, highly concurrent business access to the database, improving database performance and speeding up the dynamic application services of a website cluster.
What are the application scenarios of memcached service in enterprise cluster architecture?
1. As a front-end cache for the database:
a. Full cache (easy) / static cache: data such as commodity categories (e.g. jd.com) and commodity information can be loaded into memory in advance and then served. Loading data into the cache first is called preheating. Users then read only from the memcached cache, not from the database.
b. Hotspot caching (harder) requires cooperation from the front-end web program and caches only hot, frequently accessed data. First preheat the basic data from the database, then read the cache on access; if the cache has no corresponding data, the program reads the database and puts the newly read data into the cache.
Special instructions:
- For high-concurrency businesses such as e-commerce flash sales, you must preheat the cache in advance or adopt another approach, e.g. the "seckill" only obtains a qualification rather than grabbing the goods instantly. What is a qualification? Marking 0 as 1 in the database qualifies the user; the goods order is then processed slowly, because a full flash-sale transaction takes too long and occupies server resources.
- If the data is updated, trigger a cache update at the same time to prevent serving stale data to users.
- Persistent cache storage systems such as redis can replace part of the database storage for simple data services: voting, statistics, friend follows, commodity categories, etc. NoSQL = not only SQL.
2. Shared session storage for a web cluster.
- Workflow of the memcached service in different enterprise application scenarios:
- When a web program needs data from the back-end database, it first accesses the memcached memory cache. On a hit, the data is returned directly to the front-end service and user. On a miss, the request goes to the back-end database server; after obtaining the data, besides returning it to the front-end service and user, the program also puts it into the memcached memory cache for the next request. Memcache memory is always the database's shield, greatly reducing database access pressure, improving the response speed of the whole website architecture and the user experience.
- When the program updates, modifies or deletes existing data in the database, it also sends a request notifying memcached that the cached old data with the same ID is invalid, keeping the data in memcache consistent with the database.
- Under high concurrency, besides notifying the memcached process of the cache invalidation, related mechanisms push the updated data into memcache via the program before users access it, reducing database access pressure and increasing the memcached hit rate.
- A database plug-in can automatically push updated database writes to MC for caching, so the database does not cache by itself.
2. How to implement memcached service distributed cluster?
Special note: the memcached cluster is different from the web service cluster. The sum of all memcached data is the data of the database. Each memcached is part of the data. (the data of a memcached is part of the data of the MySQL database)
a. Client-side implementation: the program loads the IP list of all memcached servers and hashes the key against it (consistent hash algorithm).
For example, web1(key) ===> maps to one of several servers A, B, C, D, E, F, G (chosen by the hash algorithm).
b. The load balancer hashes the key (consistency hash algorithm)
The purpose of the consistent hash algorithm is not only to ensure that each object always maps to the same server, but also to minimize the proportion of cached data that has to be reallocated when a node goes down.
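As a minimal sketch of the idea (an MD5-based ring with virtual nodes is one common choice; the class and method names here are illustrative, not from any particular client library):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.List;
import java.util.SortedMap;
import java.util.TreeMap;

// Minimal consistent-hash ring: a key maps to the first node clockwise
// from its hash position; removing a node only remaps that node's keys.
class ConsistentHashRing {
    private final TreeMap<Long, String> ring = new TreeMap<>();
    private final int virtualNodes;

    ConsistentHashRing(List<String> nodes, int virtualNodes) {
        this.virtualNodes = virtualNodes;
        for (String n : nodes) addNode(n);
    }

    void addNode(String node) {
        for (int i = 0; i < virtualNodes; i++) ring.put(hash(node + "#" + i), node);
    }

    void removeNode(String node) {
        for (int i = 0; i < virtualNodes; i++) ring.remove(hash(node + "#" + i));
    }

    String nodeFor(String key) {
        SortedMap<Long, String> tail = ring.tailMap(hash(key));
        return tail.isEmpty() ? ring.firstEntry().getValue() : tail.get(tail.firstKey());
    }

    private static long hash(String s) {
        try {
            byte[] d = MessageDigest.getInstance("MD5").digest(s.getBytes(StandardCharsets.UTF_8));
            long h = 0;
            // fold the first 8 digest bytes into a position on the ring
            for (int i = 0; i < 8; i++) h = (h << 8) | (d[i] & 0xFF);
            return h;
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }
}
```

Because lookups walk clockwise to the nearest position on the ring, removing one node only remaps the keys that hashed to that node; all other keys keep their original server.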
3. What are the features and working principles of memcached service?
a. Completely memory-based caching.
b. Nodes are independent of each other.
c. C/S architecture, written in C, with about 2000 lines of code in total.
d. Asynchronous I/O model, using libevent as the event notification mechanism.
e. Cached data exists in the form of key/value pairs.
f. All data is stored in memory; by design there is no persistent storage, so restarting the server loses all data in memory.
g. When the cached data reaches the memory limit set at startup, the LRU algorithm automatically removes expired cached data.
h. An expiration time can be set on stored data, so expired data is cleared automatically; the service itself does not monitor expiry but checks the key's timestamp on access to decide whether it has expired.
i. Memcached pre-allocates memory in blocks, groups the blocks, and then serves from them.
4. Briefly describe the principle of memcached memory management mechanism?
The early memcached memory management used malloc to allocate memory and free to release it after use. This approach easily produces memory fragmentation, reduces the efficiency with which the operating system manages memory, and burdens the OS memory manager; in the worst case, the OS becomes slower than the memcached process itself. The slab allocation mechanism was created to solve this problem.
Now memcached uses the slab allocation mechanism to allocate and manage memory.
The principle of the slab allocation mechanism is to divide the memory allocated to memcached into blocks of a predetermined size, cut each block into chunks of a specific length, and group chunks of the same size into slab classes. These memory blocks are never released and can be reused.
Moreover, slab allocator has the purpose of reusing allocated memory. In other words, the allocated memory will not be released, but reused.
Main terms of slab allocation
Page
The memory space allocated to a slab class, 1MB by default. After being assigned to a slab, it is cut into chunks according to that slab's chunk size.
Chunk
Memory space used to cache records.
SlabClass
A group of chunks of a specific size.
Cluster architecture issues
5. How does memcached work?
The magic of memcached comes from two-stage hashing. Memcached behaves like a huge hash table storing many <key, value> pairs; through the key, you can store or query any data.
The client can store data on multiple memcached servers. When querying, the client first computes the key's hash value (first-stage hash) against the node list and selects a node; the client sends the request to the selected node, and the memcached node then finds the actual data (item) through an internal hash algorithm (second-stage hash).
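A minimal sketch of the first stage, assuming the simple remainder-hash variant (the pickNode method is only illustrative; real clients typically use consistent hashing instead, for the reasons given in question 11 below):

```java
import java.util.List;

// First-stage hash (client side): pick one memcached node from the list.
// The second-stage hash happens inside the chosen node's own hash table.
class TwoStageHash {
    // Remainder hashing: simple, but remapping-heavy when the node list changes.
    static String pickNode(List<String> nodes, String key) {
        int idx = Math.floorMod(key.hashCode(), nodes.size());
        return nodes.get(idx);
    }
}
```

The drawback is visible in the formula: adding or removing a node changes nodes.size(), so most keys suddenly map to different servers.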
6. What is the biggest advantage of memcached?
The biggest advantage of memcached is its excellent horizontal scalability, especially in huge systems. Since the client does the hashing itself, it is easy to add many memcached servers to the cluster. The memcached servers do not communicate with each other, so adding servers does not increase memcached's load; with no multicast protocol, network traffic will not explode either. Memcached clusters are easy to use: not enough memory? Add a few memcached servers. Not enough CPU? Add a few more. Have spare memory? Add a few more still; don't waste it.
Based on the basic principles of memcached, it is quite easy to build different types of cache architectures. In addition to this FAQ, it is easy to find detailed information elsewhere.
7. What are the advantages and disadvantages of memcached compared with MySQL's query cache?
Introducing memcached into an application still requires a lot of work, whereas MySQL has a convenient query cache that automatically caches the results of SQL queries, so cached queries can be re-executed quickly and repeatedly. How does memcached compare? MySQL's query cache is centralized: all clients connected to a MySQL server with the query cache enabled benefit from it.
 When you modify a table, MySQL's query cache is flushed immediately. Storing a memcached item takes only a little time, but when writes are frequent, MySQL's query cache constantly invalidates all of its cached data.
 On multi-core CPUs, MySQL's query cache runs into scalability problems: it holds a global lock, and because more cached data has to be refreshed, it becomes slower.
 In MySQL's query cache you cannot store arbitrary data (only SQL query results). With memcached we can build all kinds of efficient caches; for example, you can run several independent queries, build a user object, and cache that user object in memcached. The query cache works at the SQL-statement level, where this is impossible. On small websites the query cache helps, but as the site grows, its disadvantages outweigh its advantages.
 The memory the query cache can use is limited by the free memory of the MySQL server. Adding more memory to the database server to cache data is good, but with memcached, any free memory anywhere can be used to grow the memcached cluster and cache more data.
8. What are the advantages and disadvantages of memcached compared with a server's local cache (such as PHP's APC, mmap files, etc.)?
First, local cache shares many of the problems above (as the query cache does). The memory a local cache can use is limited by the free memory of a (single) server. However, local cache beats both memcached and the query cache in one respect: it can store arbitrary data, and it has no network access latency.
 Local-cache data queries are faster. Consider putting highly common data in the local cache; if each page needs to load only a small amount of data, consider placing it in the local cache.
 Local cache lacks group invalidation. In a memcached cluster, deleting or updating a key is noticed by all clients; with local cache, we can only notify all servers to refresh their caches (slow and unscalable) or rely purely on the cache timeout mechanism.
 Local cache faces severe memory constraints, as mentioned above.
9. What is the cache mechanism of memcached?
The main cache mechanism of memcached is the LRU (least recently used) algorithm plus timeout invalidation. When you save data to memcached, you can specify how long it may stay in the cache: forever, or until some time in the future. If memcached runs out of memory, expired items are replaced first, followed by the least recently used items.
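A toy sketch of LRU-plus-timeout behavior, using LinkedHashMap's access order (class and method names are illustrative; real memcached implements this per slab class, in C):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// LRU + timeout sketch: LinkedHashMap in access order evicts the least
// recently used entry once capacity is exceeded; a stored deadline
// emulates memcached's lazy expiry check on read.
class LruTtlCache<K, V> {
    private static final class Item<V> {
        final V value;
        final long expiresAt;
        Item(V value, long expiresAt) { this.value = value; this.expiresAt = expiresAt; }
    }

    private final int capacity;
    private final LinkedHashMap<K, Item<V>> map;

    LruTtlCache(int capacity) {
        this.capacity = capacity;
        // accessOrder = true makes get() move entries to the "recently used" end
        this.map = new LinkedHashMap<K, Item<V>>(16, 0.75f, true) {
            @Override protected boolean removeEldestEntry(Map.Entry<K, Item<V>> eldest) {
                return size() > capacity;
            }
        };
    }

    void set(K key, V value, long ttlMillis) {
        long deadline = ttlMillis <= 0 ? Long.MAX_VALUE : System.currentTimeMillis() + ttlMillis;
        map.put(key, new Item<>(value, deadline));
    }

    V get(K key) {
        Item<V> e = map.get(key);
        if (e == null) return null;
        if (System.currentTimeMillis() > e.expiresAt) { map.remove(key); return null; } // lazy expiry
        return e.value;
    }
}
```

With capacity 2, after set a, set b, get a, set c, the least recently used entry b is evicted while a survives.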
10. How does memcached implement redundancy mechanism?
No! We are surprised by this question. Memcached should be the application's cache layer; its design includes no redundancy mechanism at all. If a memcached node loses all its data, you should be able to fetch the data again from a data source (such as a database). Pay special attention that your application must tolerate node failure. Don't write bad query code and then hope memcached will guarantee everything! If you worry that node failure will greatly increase the burden on the database, you can take some measures: for example, add more nodes (to reduce the impact of losing one), use hot-standby nodes (which take over the IP when other nodes go down), and so on.
11. How does memcached handle fault tolerance?
No processing! When a memcached node fails, the cluster does no fault-tolerance work of its own; the response to node failure is entirely up to the user. When a node fails, the following options are available:
 Ignore it! Before the failed node is restored or replaced, plenty of other nodes can absorb the impact of the failure.
 Remove the failed node from the node list. Be careful doing this! By default (remainder hashing), adding or removing nodes on the client makes all cached data unavailable, because the node list the hash references has changed and most keys will map to different nodes than before.
 Start a hot-standby node that takes over the IP of the failed node. This prevents hashing chaos.
 If you want to add and remove nodes without disturbing the original hash results, use consistent hashing. You can look up the consistent hashing algorithm; clients supporting it are mature and widely used. Try it!
 Hash twice. When the client accesses data and finds a node down, it hashes again (with a different hash algorithm than before) and selects another node (note that the client does not remove the down node from the node list, so it may still be hashed to next time). If a node flaps between good and bad, this twice-hashing method is risky: dirty data may exist on both the good and the bad node.
12. How to batch import and export items from memcached?
You shouldn't do that! Memcached is a non-blocking server. Any operation that might cause memcached to pause or momentarily deny service should be considered carefully. Batch-importing data into memcached is often not what you really want! Imagine that the cached data changes between export and import: you would then need to deal with dirty data.
13. What if the cached data expires between export and import?
Therefore, batch exporting and importing data is not as useful as you might think. But it is useful in one scenario: if you have a large amount of data that never changes and want the cache to warm up quickly, batch-importing cached data is very helpful. Although this scenario is not typical, it occurs often enough that we will consider implementing batch export and import in the future.
If a memcached node going down gets you into many other troubles, your system is too fragile and needs optimizing. For example, handle the "thundering herd" problem (a memcached node fails and the repeated queries overwhelm your database; this has been mentioned elsewhere in this FAQ), or fix poorly optimized queries. Remember, memcached is not an excuse for avoiding query optimization.
14. How does memcached do authentication?
No identity authentication mechanism! Memcached is the software running on the lower layer of the application (authentication should be the responsibility of the upper layer of the application). Part of the reason why the client and server sides of memcached are lightweight is that there is no authentication mechanism at all. In this way, memcached can quickly create new connections without any configuration on the server side.
If you want to restrict access, you can use a firewall or let memcached listen to UNIX domain sockets.
15. What is memcached multithreading? How do I use them?
Threads rule! Thanks to the efforts of Steven Grimm and Facebook, memcached 1.2 and later versions have a multi-threaded mode. Multi-threaded mode lets memcached make full use of multiple CPUs and share all cached data among them. Memcached uses a simple locking mechanism to guarantee mutual exclusion of data-update operations. This approach handles multi-gets more effectively than running multiple memcached instances on the same physical machine.
If your system is not heavily loaded, you may not need to enable multithreading. If you are running a large website with large-scale hardware, you will see the benefits of multithreading.
To sum up, command parsing (memcached spent most of the time here) can run in multithreaded mode. The internal data operations of memcached are based on many global locks (so this part of the work is not multi-threaded). Future improvements to the multithreading mode will remove a large number of global locks and improve the performance of memcached in extremely high load scenarios.
16. What is the maximum length of a key that memcached can accept?
The maximum length of a key is 250 characters. It should be noted that 250 is an internal limitation of the memcached server. If the client you use supports “key prefix” or similar features, the maximum length of the key (prefix + original key) can exceed 250 characters. We recommend using a shorter key because it can save memory and bandwidth.
What restrictions does memcached have on the expiration time of items?
The maximum expiration time can reach 30 days. After memcached interprets the incoming expiration time (time period) as a time point, once it reaches this time point, memcached sets the item to the invalid state. This is a simple but obscure mechanism.
17. How large can memcached store a single item?
1MB. If your data is larger than 1MB, consider compressing it on the client or splitting it across multiple keys.
Why is the size of a single item limited to 1MB?
Ah… This is a question we often ask!
Simple answer: because this is the algorithm of the memory allocator.
Detailed answer: memcached's memory storage engine (the engine will become pluggable...) uses slabs to manage memory. Memory is divided into slabs with chunks of different sizes: pages of equal size are allocated first, then each page is cut into chunks of equal size, and chunk sizes differ between slabs. Chunk sizes start from a minimum value and grow by a fixed factor until they reach the maximum possible value.
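A sketch of how that chunk-size ladder could be computed (88 bytes and the 1.25 growth factor mirror memcached's historical defaults; the class name is illustrative):

```java
// Chunk sizes in slab classes grow geometrically from a base size by a
// growth factor (memcached's -f option, 1.25 by default), capped at the
// page size (1MB) -- which is why a single item cannot exceed 1MB.
class SlabSizes {
    static int[] chunkSizes(int baseSize, double growthFactor, int maxSize) {
        java.util.List<Integer> sizes = new java.util.ArrayList<>();
        double size = baseSize;
        while (size <= maxSize) {
            sizes.add((int) size);   // one slab class per size step
            size *= growthFactor;
        }
        int[] out = new int[sizes.size()];
        for (int i = 0; i < out.length; i++) out[i] = sizes.get(i);
        return out;
    }
}
```

With a base of 88 bytes and factor 1.25, the second class is 110 bytes, then 137, and so on up to the 1MB page size.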
18. Can memcached use memory more efficiently?
The Memcache client only determines the node on which a key is stored according to the hash algorithm, regardless of the memory size of the node. Therefore, you can use caches of varying sizes on different nodes. However, this is generally done: multiple memcached instances can be run on nodes with more memory, and each instance uses the same memory as the instances on other nodes.
19. What is binary protocol? Should I pay attention to it?
The best information about the binary protocol is, of course, the binary protocol specification.
Binary protocol tries to provide a more effective and reliable protocol for the client / server, and reduce the CPU time caused by processing the protocol on the client / server.
According to Facebook’s test, parsing ASCII protocol is the most CPU consuming link in memcached. So why don’t we improve the ASCII protocol?
20. How does the memcached memory allocator work? Why not use malloc/free? Why use slabs?
In fact, this is a compile-time option. The internal slab allocator is used by default, and you really should use it. In the earliest days, memcached used only malloc/free to manage memory, but this approach does not cooperate well with OS memory management: repeated malloc/free causes memory fragmentation, and the OS ends up spending lots of time hunting for contiguous memory blocks to satisfy malloc requests rather than running the memcached process. If you disagree, you can of course use malloc! Just don't complain about it on the mailing list :)
The slab allocator is designed to solve this problem. Memory is allocated and divided into chunks, which are reused all the time. Because the memory is divided into different sizes of slabs, if the size of the item is not very suitable for the slab selected to store it, some memory will be wasted. Steven Grimm has made effective improvements in this regard.
21. Is memcached atomic?
All single commands sent to memcached are completely atomic. If you send a set command and a get command for the same data at the same time, they will not affect each other. They will be serialized and executed successively. Even in multithreading mode, all commands are atomic unless there is a bug in the program:)
The command sequence is not atomic. If you obtain an item through the get command, modify it, and then want to set it back to memcached, we do not guarantee that the item has not been operated by other processes (processes, not necessarily processes in the operating system). In the case of concurrency, you may also overwrite an item set by other processes.
Memcached 1.2.5 and later versions provide the gets and cas commands, which solve the problem above. If you query an item with the gets command, memcached returns the unique ID of the item's current value. If you overwrite the item and want to write it back, you send that unique ID to memcached with the cas command. If the unique ID stored in memcached still matches the one you provided, your write succeeds. If another process modified the item in the meantime, the unique ID stored in memcached has changed and your write fails.
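The check-and-set idea can be sketched as follows (a toy in-process store where gets returns a version token and cas rejects stale writes; the names are illustrative, not the memcached client API):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// gets/cas sketch: gets returns the value plus a version token; cas only
// writes if the stored version still matches, so a concurrent writer
// makes the stale write fail instead of being silently overwritten.
class CasStore {
    static final class Versioned {
        final String value;
        final long version;
        Versioned(String value, long version) { this.value = value; this.version = version; }
    }

    private final ConcurrentHashMap<String, Versioned> store = new ConcurrentHashMap<>();
    private final AtomicLong versions = new AtomicLong();

    void set(String key, String value) {
        store.put(key, new Versioned(value, versions.incrementAndGet()));
    }

    Versioned gets(String key) { return store.get(key); }

    // Returns true only if nobody changed the item since the matching gets.
    boolean cas(String key, String newValue, long expectedVersion) {
        Versioned current = store.get(key);
        if (current == null || current.version != expectedVersion) return false;
        return store.replace(key, current, new Versioned(newValue, versions.incrementAndGet()));
    }
}
```

After one successful cas, the old token from gets is stale, so a second cas with it fails instead of silently overwriting the newer value.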
22. How to implement session shared storage in the cluster?
In a single-server deployment, a session lives on that one server and all requests reach it, so we can look up a session by the sessionid the client sends, or create a new one when no matching session exists (the session lifetime has expired / the user logs in for the first time). In a cluster, however, suppose we have two servers A and B and user requests are forwarded by an nginx server (other load-balancing schemes behave the same). When the user logs in, nginx forwards the request to server A, which creates a new session and returns the sessionid to the client. When the user browses other pages, the client sends the sessionid to verify login state, and nginx may forward the request to server B. Since B has no session for that sessionid, it creates a new session and returns a new sessionid to the client. As a result, every user operation has roughly a 1/2 probability of requiring a fresh login. This not only makes for a terrible user experience but also causes sessions to pile up on the servers and increases their load.
To solve the problem of session sharing in a cluster environment, there are four solutions:
1. Sticky session
Sticky session means that nginx forwards all requests of the same user to the same server every time, i.e. it binds the user to one server.
2. Server session replication
That is, each time a session is created or modified, it is broadcast to all servers in the cluster to make the sessions on all servers the same.
3. Session sharing
The session is cached in redis or memcached.
4. Session persistence
Store the session in the database and operate on sessions just like ordinary data.
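Solution 3 (a shared store) can be sketched like this; a ConcurrentHashMap stands in for the redis/memcached backend, and all class and method names are illustrative:

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Shared session storage sketch: every app server reads and writes
// sessions through one external store, so any node can serve any
// sessionid. The ConcurrentHashMap below stands in for redis/memcached.
class SharedSessionStore {
    private final Map<String, Map<String, Object>> backend = new ConcurrentHashMap<>();

    // Called on login by whichever server nginx happened to pick.
    String createSession(String user) {
        String sessionId = UUID.randomUUID().toString();
        Map<String, Object> session = new ConcurrentHashMap<>();
        session.put("user", user);
        backend.put(sessionId, session);
        return sessionId;
    }

    // Any other server in the cluster resolves the same sessionid.
    Map<String, Object> getSession(String sessionId) {
        return backend.get(sessionId);
    }
}
```

Because every server resolves sessionids through the same backend, server B in the scenario above finds the session created by server A instead of forcing a re-login.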
23. What is the difference between memcached and redis?
1. Redis supports not only simple key/value data but also storage of list, set, zset, hash, and other data structures, while memcache supports only simple data types and requires the client to handle complex objects itself.
2. Redis supports data persistence: it can save in-memory data to disk and load it again for use after a restart (PS: persisted via RDB and AOF).
3. Because memcache has no persistence mechanism, all cached data becomes invalid after downtime. Redis, configured with persistence, automatically loads the data from the time of the outage back into the cache after a restart, giving a better disaster recovery mechanism.
4. Memcache can be made distributed by using magent to do consistent hashing on the client side, while redis supports distribution on the server side (PS: twemproxy / codis / redis cluster).
5. Memcached has simple limits on keys and values: the maximum key length is 250 characters, and stored data cannot exceed 1MB (the configuration can be modified to allow more), because that is the maximum size of a typical slab. Redis keys and string values can each be up to 512MB.
6. Redis uses a single-threaded model to guarantee that data is committed in order. With memcache, CAS is needed to guarantee data consistency. CAS (check and set) is a mechanism for ensuring consistency under concurrency and belongs to the category of optimistic locking. The principle is simple: read the version number, perform the operation, compare version numbers, commit if they match, and abandon the operation if they do not.
CPU utilization: since redis uses only a single core while memcached can use multiple cores, redis on average performs better than memcached per core when storing small data; for data above 100KB, memcached outperforms redis.
7. Memcache memory management: it uses slab allocation. The principle is quite simple: a series of fixed-size groups is allocated in advance, and the best-fitting block is chosen according to the data size. This avoids memory fragmentation (disadvantage: the fixed sizes cannot stretch, so some space is wasted). By default, the maximum chunk size of each memcached slab is 1.25 times that of the previous one.
8. Redis memory management: redis records all memory allocations by defining an array, and uses wrapped malloc/free, which is much simpler than memcached's memory management. Malloc searches the managed memory for available space in the form of a linked list, which produces more memory fragmentation.
Redis interview questions
1. What is redis?
Redis is a completely open-source, free, high-performance key-value database that complies with the BSD license.
Redis and other key-value caching products have the following three characteristics:
Redis supports data persistence: it can save in-memory data to disk and load it again for use on restart. Redis supports not only simple key-value data but also storage of list, set, zset, hash, and other data structures. Redis supports data backup, i.e. data backup in master-slave mode.
Redis advantages
Extremely high performance – redis can read 110,000 times/s and write 81,000 times/s.
Rich data types – redis supports string, list, hash, set, and sorted-set operations on binary-safe values.
Atomic – all individual redis operations are atomic, meaning they either execute fully or not at all; multiple operations can also be made transactional, i.e. atomic, by wrapping them with the MULTI and EXEC instructions.
Rich features – redis also supports publish/subscribe, notifications, key expiration, and more.
How is redis different from other key value stores?
Redis has more complex data structures and provides atomic operations on them, which is an evolutionary path different from other databases. Redis’s data types are based on basic data structures and are transparent to programmers without additional abstraction.
Redis runs in memory but can persist to disk. So when reading and writing different data sets at high speed, memory has to be weighed, because the amount of data cannot exceed the hardware's memory. Another advantage of an in-memory database is that, compared with the same complex data structures on disk, operating in memory is very simple, so redis can do many things of high internal complexity. At the same time, its disk formats are compact and generated append-only, because they do not need random access.
2. Redis data type?
Answer: redis supports five data types: string, hash, list, set, and zset (sorted set).
String and hash are the ones we use most in real projects. If you are an advanced redis user, also mention the following data structures: HyperLogLog, Geo, and Pub/Sub.
If you mention that you have played with redis modules such as BloomFilter, RediSearch, and Redis-ML, the interviewer's eyes will light up.
3. What are the benefits of using redis?
1. It is fast, because the data is stored in memory. Like a HashMap, its advantage is that the time complexity of lookup and modification is O(1).
2. Support rich data types, string, list, set, Zset, hash, etc
3. Transactions are supported, and operations are atomic. The so-called atomicity means that changes to data are either executed or not executed
4. Rich features: it can be used for caching and messaging; an expiration time can be set per key, and keys are deleted automatically after they expire.
4. What are the advantages of redis over memcached?
1. All memcached values are simple strings; redis, as its replacement, supports richer data types. 2. Redis is faster than memcached. 3. Redis can persist its data.
5. What are the differences between Memcache and redis?
1. Memcache stores all data in memory; it loses everything after a power failure, and the data cannot exceed the memory size. Redis stores some of its data on disk, which guarantees data durability.
2. Data types Memcache supports data types relatively simply. Redis has complex data types.
3. They use different underlying models: their underlying implementations and the application protocols used to communicate with clients differ. Redis builds its own VM mechanism directly, because a general-purpose system wastes some time on moving data and making requests when calling system functions.
6. Is redis single process and single thread?
A: redis is single process and single thread. Redis uses queue technology to change concurrent access into serial access, eliminating the overhead of traditional database serial control.
7. What is the maximum storage capacity of a string type value?
Answer: 512MB
8. What is the persistence mechanism of redis? Their advantages and disadvantages?
Redis provides two persistence mechanisms RDB and AOF:
1. RDB (Redis DataBase) persistence: recording all key-value pairs of the redis database as a dataset snapshot (semi-persistent mode), writing the data to a temporary file at certain points in time and, once persistence completes, replacing the previous persisted file with this temporary file, thereby enabling data recovery.
Advantages:
1. There is only one file dump.rdb, which is convenient for persistence.
2. Disaster tolerance is good. A file can be saved to a safe disk.
3. To maximize performance, redis forks a child process to complete the write while the main process keeps handling commands, so IO is maximized. (A single child process does the persistence and the main process performs no IO at all, which preserves redis's high performance.)
4. Compared with AOF, startup from a large data set is more efficient.
Disadvantages:
1. Lower data security: RDB persists at intervals, so if redis fails between persistence runs, data is lost. This method is therefore better suited to cases where data requirements are not strict.
2. AOF (Append Only File) persistence: all command-line records are saved as an AOF file in the format of the redis command request protocol.
Advantages:
1. Better data security: AOF persistence can be configured through the appendfsync attribute; set to always, every command operation is recorded in the AOF file.
2. Files are written in append mode, so even if the server goes down midway, data consistency problems can be resolved with the redis-check-aof tool.
3. The AOF mechanism has a rewrite mode. Before an AOF file is rewritten (commands are merged and rewritten when the file grows too large), some commands can be deleted from it (such as a mistakenly executed FLUSHALL).
Disadvantages:
1. AOF files are larger than RDB files, and recovery is slower. 2. With large data sets, startup is less efficient than with RDB.
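For reference, the two mechanisms are toggled in redis.conf with directives along these lines (the values shown are the historical defaults; adjust to your own durability needs):

```
# RDB: snapshot if >=1 key changed in 900s, >=10 in 300s, >=10000 in 60s
save 900 1
save 300 10
save 60 10000

# AOF: log every write command; appendfsync picks the durability/speed trade-off
appendonly yes
appendfsync everysec   # alternatives: always (safest, slowest), no (OS decides)
```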
9. Redis common performance problems and solutions:
1. The master should not write memory snapshots: the SAVE command schedules the rdbSave function, which blocks the main thread's work. When the snapshot is large, it has a great impact on performance and the service pauses intermittently.
2. If the data is important, have a slave enable AOF backup with the policy set to sync once per second.
3. For the speed and connection stability of master-slave replication, master and slave should be on the same LAN.
4. Avoid adding slaves to a master that is already under heavy pressure.
5. Master-slave replication should not use a graph structure; a one-way linked-list structure is more stable, i.e. master <- slave1 <- slave2 <- slave3... This makes single-point failures easy to resolve: if the master goes down, slave1 can immediately be promoted to master while everything else stays unchanged.
10. Redis expiration key deletion policy?
1. Timed deletion: when setting a key's expiration time, also create a timer that deletes the key the moment its expiration time arrives.
2. Lazy deletion: let keys expire in place, and check on every read whether the fetched key has expired; if so, delete it, otherwise return its value.
3. Periodic deletion: every so often, the program scans the database and deletes expired keys. How many expired keys to delete, and how many databases to scan, is decided by the algorithm.
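Lazy and periodic deletion can be sketched together (a toy in-process map; expireCycle loosely mirrors the idea behind redis's periodic sampling, and all names are illustrative):

```java
import java.util.Iterator;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Lazy + periodic expiry sketch: reads check the deadline on access, and a
// periodic task scans a bounded sample of keys and deletes the expired ones.
class ExpiringMap {
    private final ConcurrentHashMap<String, Long> expiresAt = new ConcurrentHashMap<>();
    private final ConcurrentHashMap<String, String> data = new ConcurrentHashMap<>();

    void set(String key, String value, long ttlMillis) {
        data.put(key, value);
        expiresAt.put(key, System.currentTimeMillis() + ttlMillis);
    }

    // Lazy deletion: an expired key is removed the moment it is read.
    String get(String key) {
        Long deadline = expiresAt.get(key);
        if (deadline != null && deadline <= System.currentTimeMillis()) {
            data.remove(key);
            expiresAt.remove(key);
            return null;
        }
        return data.get(key);
    }

    // Periodic deletion: run from a timer, scan at most sampleSize keys.
    int expireCycle(int sampleSize) {
        int removed = 0;
        long now = System.currentTimeMillis();
        Iterator<Map.Entry<String, Long>> it = expiresAt.entrySet().iterator();
        while (it.hasNext() && sampleSize-- > 0) {
            Map.Entry<String, Long> e = it.next();
            if (e.getValue() <= now) {
                data.remove(e.getKey());
                it.remove();
                removed++;
            }
        }
        return removed;
    }
}
```

Reads delete an expired key the moment it is touched, while expireCycle reclaims expired keys nobody reads, bounded by the sample size so it never blocks for long.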
11. Redis’s recycling strategy (elimination strategy)?
volatile-lru: from the data set with an expiration time set (server.db[i].expires), evict the least recently used data
volatile-ttl: from the data set with an expiration time set (server.db[i].expires), evict the data that is closest to expiring
volatile-random: from the data set with an expiration time set (server.db[i].expires), evict arbitrary data
allkeys-lru: from the whole data set (server.db[i].dict), evict the least recently used data
allkeys-random: from the whole data set (server.db[i].dict), evict arbitrary data
no-eviction: never evict data
Note the six mechanisms here. volatile and allkeys specify whether to evict from the data set with expiration times or from the whole data set; lru, ttl, and random are the three different eviction strategies; plus no-eviction, which never reclaims anything.
Use policy rules:
1. If the data follows a power-law distribution, i.e. some data is accessed frequently and some rarely, use allkeys-lru
2. If the data is uniformly distributed, i.e. all data is accessed with the same frequency, use allkeys-random
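In redis.conf, the eviction policy and the memory ceiling it kicks in at are set together; a sketch matching the power-law case above (the 2gb value is only an example):

```
maxmemory 2gb
# power-law access pattern -> keep the hottest keys across the whole keyspace
maxmemory-policy allkeys-lru
# uniform access pattern -> allkeys-random; to evict only expirable keys -> volatile-lru
```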
12. Why does redis need to put all the data in memory?
A: to achieve the fastest read and write speed, redis reads data from memory and writes data to disk asynchronously, which gives redis both speed and data persistence. If the data were not kept in memory, slow disk I/O would seriously hurt redis's performance. With memory getting cheaper and cheaper today, redis will become more and more popular. If a maximum memory is set, no new values can be inserted once the existing data records reach the memory limit.
13. Do you understand the synchronization mechanism of redis?
A: Redis supports master-slave synchronization and slave-slave synchronization. During the first synchronization, the master node performs a bgsave while recording subsequent modifications in an in-memory buffer. When the bgsave completes, the master sends the full RDB file to the replica node; the replica loads the RDB image into memory and, once loading finishes, notifies the master, which then streams the buffered modification records to the replica for replay, completing the synchronization.
14. What are the advantages of pipeline? Why use pipeline?
A: multiple I/O round trips can be reduced to a single one, provided there is no causal dependency between the pipelined commands. When benchmarking with redis-benchmark, you will find that an important factor affecting Redis's peak QPS is the number of commands batched per pipeline.
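A toy simulation makes the saving concrete. The FakeConnection class below is a hypothetical stand-in that just counts round trips; it is not a real Redis client:

```python
class FakeConnection:
    """Counts network round trips to illustrate why pipelining helps."""

    def __init__(self):
        self.round_trips = 0
        self.store = {}

    def execute(self, *commands):
        # One round trip can carry any number of commands.
        self.round_trips += 1
        replies = []
        for op, key, *args in commands:
            if op == "SET":
                self.store[key] = args[0]
                replies.append("OK")
            elif op == "GET":
                replies.append(self.store.get(key))
        return replies

# Without pipelining: one round trip per command.
conn = FakeConnection()
for i in range(100):
    conn.execute(("SET", f"k{i}", i))
print(conn.round_trips)   # 100

# With pipelining: all 100 commands batched into a single round trip.
pipelined = FakeConnection()
pipelined.execute(*[("SET", f"k{i}", i) for i in range(100)])
print(pipelined.round_trips)  # 1
```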
15. Have you ever used redis cluster? What is the principle of cluster?
1) Redis Sentinel focuses on high availability: when the master goes down, it automatically promotes a slave to master and continues providing service.
2) Redis Cluster focuses on scalability: when a single Redis instance runs short of memory, a cluster is used for sharded storage.
16. Under what circumstances will the redis cluster scheme make the whole cluster unavailable?
A: in a cluster with three nodes A, B and C, if node B fails and there is no replication model, the whole cluster becomes unavailable because the slots in the range 5501-11000 are missing.
17. What are the Java clients supported by redis? Which is the official recommendation?
A: Redisson, Jedis, Lettuce, etc.; Redisson is officially recommended.
18. What are the advantages and disadvantages of jedis compared with redisson?
A: Jedis is the Java implementation of a Redis client, and its API provides comprehensive support for Redis commands. Redisson implements distributed and scalable Java data structures; compared with Jedis its feature set is simpler: it does not support string operations, sorting, transactions, pipelines, partitioning and other Redis features. Redisson's goal is to separate users' concerns from Redis itself, so that users can focus more on business logic.
19. How does redis set and verify passwords?
Set password: config set requirepass 123456
Authenticate with the password: auth 123456
20. Talk about the concept of redis hash slot?
A: Redis Cluster does not use consistent hashing; instead it introduces the concept of hash slots. A Redis cluster has 16384 hash slots; each key is run through CRC16 and the result taken modulo 16384 to decide which slot it belongs to. Each node of the cluster is responsible for part of the hash slots.
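The slot computation is easy to reproduce. The sketch below implements the CRC16 variant (XModem) that Redis Cluster uses, plus the {...} hash-tag rule that lets related keys land in the same slot:

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC16-CCITT (XModem), the checksum Redis Cluster applies to keys."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = (crc << 1) ^ 0x1021 if crc & 0x8000 else crc << 1
            crc &= 0xFFFF
    return crc

def key_slot(key: str) -> int:
    # If the key contains a non-empty {...} hash tag, only the tag is
    # hashed, so related keys can be forced into the same slot.
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end != -1 and end != start + 1:
            key = key[start + 1:end]
    return crc16_xmodem(key.encode()) % 16384

print(key_slot("user:1000"))   # a slot number in 0..16383
print(key_slot("{user:1000}.followers") == key_slot("{user:1000}.following"))  # True
```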
21. What is the master-slave replication model of redis cluster?
A: so that the cluster remains available when some nodes fail or most nodes cannot communicate, the cluster uses a master-slave replication model in which each master node has N-1 replicas.
22. Will write operations be lost in redis cluster? Why?
A: redis cannot guarantee strong data consistency, which means that in practice, the cluster may lose write operations under specific conditions.
23. How are redis clusters replicated?
Answer: asynchronous replication
24. What is the maximum number of nodes in redis cluster?
Answer: 16384.
25. How do redis clusters select databases?
A: the redis cluster cannot select a database at present. The default database is 0.
26. How to test the connectivity of redis?
Answer: use the ping command.
27. How to understand redis transactions?
Answer:
1) A transaction is a separate isolation operation: all commands in the transaction are serialized and executed sequentially. During the execution of the transaction, it will not be interrupted by command requests sent by other clients.
2) A transaction is an atomic operation: all commands in the transaction are either executed or not executed at all.
28. What are the commands related to redis transactions?
Answer: multi, exec, discard, watch
29. How to set the expiration time and permanent validity of redis key respectively?
Answer: the expire and persist commands, respectively.
30. How does redis optimize memory?
A: use hashes whenever possible. Small hash tables (hashes storing only a small number of fields) use very little memory, so you should abstract your data model into hashes as much as possible. For example, if you have a user object in your web system, do not set separate keys for the user's first name, last name, email and password; instead store all of the user's information in one hash table.
31. How does the redis recycling process work?
A: a client runs a new command, adding new data. Redis checks the memory usage; if it exceeds the maxmemory limit, keys are evicted according to the configured policy, then the new command executes, and so on. So we are constantly crossing the memory-limit boundary: reaching it, then evicting back below it. If the result of a single command uses a large amount of memory (for example, saving the intersection of two large sets to a new key), memory usage can quickly exceed the limit.
32. What are the ways to reduce the memory usage of redis?
A: if you use a 32-bit Redis instance, and make good use of hash, list, sorted set, set and other collection types, memory usage drops, because many small key-value pairs can usually be stored together in a more compact way.
33. What happens when redis runs out of memory?
A: if the configured upper limit is reached, Redis write commands return an error message (read commands still return normally). Alternatively, you can configure an eviction policy and use Redis as a cache: when Redis reaches its memory limit, it flushes out old content.
34. How many keys can a redis instance store at most? List、Set、
Sorted set how many elements can they store at most?
A: theoretically, redis can handle up to 232 keys, and has been tested in practice. Each instance stores at least 250 million keys. We are testing some larger values. Any list, set, and sorted set can hold 232 elements. In other words, the storage limit of redis is the available memory value in the system.
35. MySQL holds 20 million rows but Redis only holds 200,000. How do you ensure that the data in Redis is all hot data?
A: when the Redis in-memory data set grows to a certain size, the data eviction policy kicks in.
Related knowledge: Redis provides six data eviction policies:
volatile-lru: from the data set with an expire set (server.db[i].expires), evict the least recently used key
volatile-ttl: from the data set with an expire set (server.db[i].expires), evict the key closest to expiring
volatile-random: from the data set with an expire set (server.db[i].expires), evict a random key
allkeys-lru: from the whole data set (server.db[i].dict), evict the least recently used key
allkeys-random: from the whole data set (server.db[i].dict), evict a random key
noeviction: never evict data
36. What is the most suitable scenario for redis?
1. Session cache
The most commonly used scenario for Redis is session caching. The advantage of using Redis to cache sessions over other stores (such as Memcached) is that Redis provides persistence. When maintaining a cache that does not strictly require consistency, most users would be unhappy if all of their shopping cart information were lost; with Redis, it does not have to be. Fortunately, as Redis has improved over the years, it is easy to find documentation on how to properly use Redis for session caching. Even Magento, the well-known e-commerce platform, provides a Redis plug-in.
2. Full page cache (FPC)
In addition to basic session tokens, Redis also provides a very simple full-page-cache platform. Returning to the consistency problem: thanks to disk persistence, users will not see a drop in page loading speed even if the Redis instance is restarted. This is a great improvement, similar to PHP's local FPC. Taking Magento as an example again, Magento provides a plug-in for using Redis as the full-page cache backend. In addition, for WordPress users, Pantheon has a very good plug-in, wp-redis, which helps you load the pages you have visited as quickly as possible.
3. Queue
One of the advantages of Redis as an in-memory storage engine is that it provides list and set operations, which makes Redis a good message queue platform. Using Redis as a queue is similar to the push/pop operations that local programming languages (such as Python) provide on lists. If you quickly search for "redis queues" on Google, you will immediately find a large number of open source projects whose purpose is to build very good backend tools on top of Redis for all kinds of queue needs. For example, Celery has a backend that uses Redis as a broker.
4. Leaderboard / counter
Redis implements in-memory increment and decrement operations very well, and the set and sorted set types make these operations very simple to perform; Redis conveniently provides exactly these two data structures. Say we want to get the top 10 users from a sorted set called "user_scores", assuming the set ranks users by score in ascending order. To return the users together with their scores, you would run: ZRANGE user_scores 0 10 WITHSCORES. Agora Games is a good example: implemented in Ruby, its leaderboards use Redis to store the data.
5. Publish / subscribe
Last (but certainly not least) is redis’s publish / subscribe function. There are indeed many usage scenarios for publish / subscribe. I’ve seen people use it in social network connections, as a script trigger based on publish / subscribe, and even use redis’s publish / subscribe function to establish a chat system!
37. If there are 100 million keys in Redis, and 100,000 of them start with a fixed, known prefix, how do you find all of them?
A: use the keys command to scan out the list of keys matching the given pattern.
The interviewer then asks: if this Redis instance is serving online business, what is the problem with using the keys command?
At this point you should mention a key feature of Redis: it is single threaded. The keys instruction causes the thread to block for a while, and the online service pauses until the instruction finishes. Instead, the scan instruction can be used: scan extracts the list of keys matching a pattern without blocking, but there is a certain probability of duplicates. It is fine to deduplicate once on the client, although the overall time will be somewhat longer than using keys directly.
38. If a large number of keys need to expire at the same time, what should you pay attention to?
A: if the expiration times of a large number of keys are set too densely, Redis may briefly stall at the moment they all expire. Generally you should add a random value to the expiration time to spread the expirations out.
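The usual fix is a small random jitter on the TTL; a sketch (BASE_TTL and JITTER are arbitrary illustrative values):

```python
import random

BASE_TTL = 3600   # intended expiry: one hour
JITTER = 300      # spread expirations over an extra 5 minutes

def ttl_with_jitter():
    # A random offset prevents a mass of keys from expiring (and being
    # deleted) at exactly the same moment.
    return BASE_TTL + random.randint(0, JITTER)

ttls = [ttl_with_jitter() for _ in range(1000)]
print(min(ttls) >= BASE_TTL and max(ttls) <= BASE_TTL + JITTER)  # True
# With a client such as redis-py this would be used as, e.g.:
#   r.set("cache:item", value, ex=ttl_with_jitter())
```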
39. Have you ever used redis as an asynchronous queue? How do you use it?
A: generally the list structure is used as a queue: rpush produces messages and lpop consumes them. When lpop finds no message, sleep for a moment and retry later.
If the other party asks, can I not use sleep?
The list type also has an instruction called blpop, which blocks until a message arrives when the queue is empty. If the interviewer asks whether a message can be produced once and consumed many times: use the pub/sub topic-subscriber pattern, which implements a 1:N message queue.
If the interviewer asks what the disadvantages of pub/sub are:
Messages produced while a consumer is offline are lost; for that case you have to use a professional message queue such as RabbitMQ.
If the other party asks how redis implements the delay queue?
By now you probably want to hit the interviewer with a baseball bat, if only you had one: how can he ask in such detail? But you restrain yourself and reply with a calm look: use a sorted set, with the timestamp as the score and the message content as the member, and call zadd to produce messages. The consumer uses the zrangebyscore instruction to poll for messages whose score is at or before the current time and processes them. At this point the interviewer has secretly given you a thumbs up; what he doesn't know is that behind the chair you are raising your middle finger.
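Stripping away the banter, the sorted-set delay queue can be simulated in plain Python (a dict stands in for the sorted set; against a real server these would be ZADD and ZRANGEBYSCORE calls):

```python
import time

class DelayQueue:
    """Sorted-set style delay queue sketch: score = ready timestamp,
    member = message."""

    def __init__(self):
        self.scores = {}   # member -> score, modelling one sorted set

    def zadd(self, score, member):
        self.scores[member] = score

    def poll_ready(self, now=None):
        # Equivalent of ZRANGEBYSCORE queue -inf <now>, then removing
        # the returned members.
        now = time.time() if now is None else now
        ready = sorted(m for m, s in self.scores.items() if s <= now)
        for m in ready:
            del self.scores[m]
        return ready

q = DelayQueue()
now = time.time()
q.zadd(now + 0.05, "send-email:42")   # deliver in 50 ms
q.zadd(now - 1, "retry:7")            # already due

print(q.poll_ready())                 # ['retry:7']
time.sleep(0.1)
print(q.poll_ready())                 # ['send-email:42']
```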
40. Have you ever used redis distributed locks? What’s going on?
First use setnx to contend for the lock, then add an expiration time to the lock with expire, to prevent the lock from never being released if the holder forgets to release it.
At this point the interviewer will tell you that your answer is good, and then ask: what happens if the process crashes unexpectedly, or is restarted for maintenance, after setnx succeeds but before expire executes?
Here you should react with surprise: ah, right, in that case the lock would never be released. Then you scratch your head and pretend to think for a moment, as if reaching the conclusion on your own, and answer: I remember the set instruction has rather complex parameters; it should be possible to combine setnx and expire into a single atomic instruction. At this point the interviewer smiles and starts to think: hmm, this candidate is not bad.
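The pattern under discussion, sketched with an in-memory stand-in: against a real server this would be a single SET key token NX EX seconds, and the release would use a Lua script so the ownership check and DEL are atomic.

```python
import time
import uuid

class FakeRedis:
    """In-memory stand-in used only to illustrate the lock pattern."""

    def __init__(self):
        self.data = {}   # key -> (value, expiry timestamp)

    def set_nx_ex(self, key, value, ex):
        entry = self.data.get(key)
        if entry is not None and entry[1] > time.time():
            return False                       # lock already held, not expired
        self.data[key] = (value, time.time() + ex)
        return True                            # value and TTL set in one step

    def release(self, key, token):
        # Only the owner (matching token) may delete; real code wraps
        # this check-and-DEL in a Lua script so it runs atomically.
        entry = self.data.get(key)
        if entry is not None and entry[0] == token:
            del self.data[key]
            return True
        return False

r = FakeRedis()
token = str(uuid.uuid4())
print(r.set_nx_ex("lock:order:9", token, ex=10))    # True: acquired
print(r.set_nx_ex("lock:order:9", "other", ex=10))  # False: already held
print(r.release("lock:order:9", token))             # True: released by owner
```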
MySQL interview questions
1. What kinds of locks are there in MySQL?
1. Table level lock: low overhead and fast locking; No deadlock; The locking granularity is large, the probability of lock conflict is the highest, and the concurrency is the lowest.
2. Row level lock: high overhead and slow locking; Deadlock will occur; The locking granularity is the smallest, the probability of lock conflict is the lowest, and the concurrency is the highest.
3. Page lock: the overhead and locking time are between those of table locks and row locks; deadlocks can occur; the locking granularity is between table locks and row locks, and concurrency is moderate.
2. What are the different tables in MySQL?
There are 5 types of tables:
1、MyISAM
2、Heap
3、Merge
4、INNODB
5、ISAM
3. Briefly describe the difference between MyISAM and InnoDB in MySQL database
MyISAM:
Transaction is not supported, but each query is atomic;
Table level locks are supported, that is, the whole table is locked for each operation;
Total rows of the storage table;
A MyISAM table has three files: index file, table structure file and data file;
A non-clustered index is used: the data field of the index file stores a pointer to the data file. Secondary indexes are basically the same as the primary index, except that a secondary index does not guarantee uniqueness.
InnoDb:
Support acid transactions and four isolation levels of transactions;
Support row level locks and foreign key constraints: therefore, write concurrency can be supported;
Does not store the total number of rows;
An InnoDB table may be stored in a shared tablespace (the table size is not limited by the operating system, and a table may span multiple files) or in independent tablespaces (one file per table, where the table size is limited by the operating system's file size, generally 2 GB);
The primary key index is a clustered index (the data field of the index stores the data itself), while the data field of a secondary index stores the primary key's value. Therefore, to find data through a secondary index you first look up the primary key value via the secondary index, then access the primary index. It is best to use an auto-increment primary key, to avoid large adjustments to the data file for maintaining the B+ tree structure when inserting rows.
4. Name the four transaction isolation levels supported by InnoDB in MySQL, and the differences between the levels.
The four isolation levels defined by the SQL standard are:
1. Read uncommitted: uncommitted data can be read (dirty reads)
2. Read committed: no dirty reads, but non-repeatable reads can occur
3. Repeatable read: reads are repeatable
4. Serializable: fully serialized execution
5. What is the difference between char and varchar?
1. The char and varchar types differ in how values are stored and retrieved.
2. A char column's length is fixed to the length declared when the table is created, in the range 1 to 255. When char values are stored they are right-padded with spaces to the declared length; trailing spaces are removed when char values are retrieved. A varchar column, by contrast, stores variable-length strings, using only as much space as the value requires.
6. What is the difference between a primary key and a candidate key?
Each row of a table is uniquely identified by a primary key. A table has only one primary key.
Primary keys are also candidate keys. By convention, candidate keys can be specified as primary keys and can be used for any foreign key reference.
7. What is myisamchk used for?
It is used to check, repair and optimize MyISAM tables (compressing MyISAM tables, which reduces disk or memory usage, is done by the separate myisampack tool).
What is the difference between MyISAM static and MyISAM dynamic?
All fields on MyISAM static have a fixed width. The dynamic MyISAM table will have fields such as text and blob to adapt to data types of different lengths.
MyISAM static is easier to recover in case of damage.
8. What happens if a table has a column defined as timestamp?
Whenever a row is changed, the timestamp field gets the current timestamp.
When the column is set to auto increment, what happens if the maximum value is reached in the table?
It will stop incrementing and any further insertion will produce an error because the key has been used.
How can you find out which auto-increment value was allocated by the last insert?
LAST_INSERT_ID() returns the last value assigned by AUTO_INCREMENT; there is no need to specify a table name.
9. How do you see all the indexes defined for the table?
Indexes are defined for tables in the following ways:
SHOW INDEX FROM <tablename>;
10. What do % and _ mean in a LIKE statement?
% matches zero or more characters, and _ matches exactly one character, in a LIKE statement.
How do you convert between UNIX and MySQL timestamps?
UNIX_TIMESTAMP converts a MySQL timestamp to a UNIX timestamp; FROM_UNIXTIME converts a UNIX timestamp to a MySQL timestamp.
11. What is the column comparison operator?
Use the =, <>, <=, <, >=, >, <<, >>, <=>, AND, OR or LIKE operators in column comparisons in a SELECT statement.
12. What’s the difference between blob and text?
A blob is a binary large object that can hold a variable amount of data; text is, in effect, a case-insensitive blob.
The only difference between the blob and text types is that blob values are case sensitive when sorted and compared, while text values are not.
13. What is the difference between mysql_fetch_array and mysql_fetch_object?
The following are the differences between mysql_fetch_array and mysql_fetch_object:
mysql_fetch_array() – returns a result row from the database as an associative array and/or a regular array.
mysql_fetch_object() – returns a result row from the database as an object.
14. Where will the MyISAM table be stored and provide its storage format?
Each MyISAM table is stored on disk as three files:
·The ".frm" file stores the table definition
·The data file has a ".MYD" (MYData) extension
·The index file has a ".MYI" (MYIndex) extension
15. How does MySQL optimize distinct?
Distinct is converted to group by on all columns and used in conjunction with the order by clause.
SELECT DISTINCT t1.a FROM t1,t2 where t1.a=t2.a;
16. How do I display the first 50 lines?
In mysql, use the following code to query and display the first 50 lines:
SELECT * FROM <table> LIMIT 0,50;
17. How many columns can be used to create an index?
Any standard table can create up to 16 index columns.
18. What is the difference between NOW() and CURRENT_DATE()?
NOW() returns the current year, month, day, hour, minute and second.
CURRENT_DATE() returns only the current year, month and day.
19. What is a nonstandard string type?
1、TINYTEXT
2、TEXT
3、MEDIUMTEXT
4、LONGTEXT
20. What are general SQL functions?
1. CONCAT(A, B) – concatenates two string values to create a single string output. Typically used to combine two or more fields into one.
2. FORMAT(X, D) – formats the number X to D significant digits.
3. CURRENT_DATE(), CURRENT_TIME() – return the current date or time.
4. NOW() – returns the current date and time as one value.
5. MONTH(), DAY(), YEAR(), WEEK(), WEEKDAY() – extract the given part from a date value.
6. HOUR(), MINUTE(), SECOND() – extract the given part from a time value.
7. DATEDIFF(A, B) – determines the difference between two dates; commonly used to calculate age.
8. SUBTIME(A, B) – determines the difference between two times.
9. FROM_DAYS(N) – converts an integer number of days into a date value.
21. Does MySQL support transactions?
In the default mode, MySQL runs with autocommit on, and every database update is committed immediately; in that sense MySQL does not use transactions by default.
However, if your table type is InnoDB or BDB, you can use transaction processing. SET AUTOCOMMIT=0 puts MySQL into non-autocommit mode, in which you must use commit to make your changes permanent or rollback to undo them.
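The commit/rollback behavior can be demonstrated with Python's built-in sqlite3 module standing in for MySQL; the semantics shown are the same idea as SET AUTOCOMMIT=0 ... COMMIT / ROLLBACK:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (name TEXT, balance INTEGER)")
conn.execute("INSERT INTO account VALUES ('alice', 100), ('bob', 0)")
conn.commit()

def transfer(conn, amount, fail=False):
    try:
        conn.execute("UPDATE account SET balance = balance - ? WHERE name = 'alice'", (amount,))
        if fail:
            raise RuntimeError("simulated crash between debit and credit")
        conn.execute("UPDATE account SET balance = balance + ? WHERE name = 'bob'", (amount,))
        conn.commit()      # both updates become permanent together
    except RuntimeError:
        conn.rollback()    # the debit is undone: all-or-nothing

transfer(conn, 70, fail=True)
balances = dict(conn.execute("SELECT name, balance FROM account"))
print(balances)   # {'alice': 100, 'bob': 0}: the partial update was rolled back

transfer(conn, 70)
print(dict(conn.execute("SELECT name, balance FROM account")))  # {'alice': 30, 'bob': 70}
```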
22. What is the best field type for recording currency in MySQL?
The numeric and decimal types are implemented as the same type by MySQL, which the SQL92 standard permits. They are used to store values where exactness is critical, such as monetary data. When declaring a column as one of these types, the precision and scale can be (and usually are) specified.
For example:
salary DECIMAL(9,2)
In this example, 9 (the precision) is the total number of digits used to store the value, and 2 (the scale) is the number of digits after the decimal point.
Therefore, in this case, the range of values that can be stored in the salary column is -9999999.99 to 9999999.99.
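A quick illustration of why exact decimal types matter for money (Python's Decimal plays the role of MySQL's DECIMAL here):

```python
from decimal import Decimal

# Binary floats cannot represent most decimal fractions exactly, which is
# why money columns use DECIMAL rather than FLOAT/DOUBLE.
print(0.10 + 0.20 == 0.30)                                    # False
print(Decimal("0.10") + Decimal("0.20") == Decimal("0.30"))   # True

# DECIMAL(9,2) semantics: 9 digits in total, 2 after the decimal point.
salary = Decimal("9999999.99")   # largest value a DECIMAL(9,2) column holds
print(salary + Decimal("0.01"))  # 10000000.00, which would overflow the column
```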
23. What are the MySQL permission tables?
The MySQL server controls user access to databases through the permission tables, which are stored in the mysql database and initialized by the mysql_install_db script. These permission tables are user, db, tables_priv, columns_priv and host.
24. What can be the string type of the column?
The string type is:
1、SET
2、BLOB
3、ENUM
4、CHAR
5、TEXT
25. A MySQL database is used as the storage for a publishing system, growing by more than 50,000 records a day, and is expected to be operated and maintained for three years. How would you optimize it?
1. A well-designed database structure allows partial data redundancy, avoids join queries as much as possible, and improves efficiency.
2. Select the appropriate table field data type and storage engine, and add the appropriate index.
3. Mysql database master-slave read-write separation.
4. Split tables according to data patterns to reduce the amount of data in a single table and improve query speed.
5. Add caching mechanisms, such as memcached and APC.
6. Generate static pages for infrequently changed pages.
7. Write efficient SQL. For example, change SELECT * FROM TABLE to SELECT field_1, field_2, field_3 FROM TABLE.
26. Lock optimization strategy
1. Read write separation
2. Sectional locking
3. Reduce lock holding time
4. Multiple threads try to obtain resources in the same order
The granularity of locks should not be too fine, otherwise threads may spend too much time acquiring and releasing locks, and efficiency can end up worse than taking one large lock.
27. Underlying implementation principle and optimization of index
A B+ tree (an optimized B+ tree).
The main optimization is that every leaf node carries a pointer to the next leaf node, which speeds up range scans. This is also why InnoDB recommends using the default auto-increment primary key as the primary index for most tables.
28. When the index is set but cannot be used
1. LIKE statements beginning with '%' (fuzzy matching)
2. When the columns before and after an OR are not both indexed, the index is not used
3. Implicit data type conversion (e.g. a varchar column compared without quotes may be automatically converted to int)
29. How to optimize MySQL in practice
It is best to optimize in the following order:
1. Optimization of SQL statement and index
2. Optimization of database table structure
3. Optimization of system configuration
4. Hardware optimization
30. Methods of optimizing database
1. Select the most appropriate field attributes: keep defined field widths as small as possible and set fields NOT NULL where possible. For example, 'province' and 'gender' are best defined as enum.
2. Use joins instead of subqueries
3. Use union to replace manually created temporary tables
4. Transactions
5. Locking tables, optimizing transactions
6. Use foreign keys, and optimize the locking of tables
7. Indexing
8. Optimize query statements
31. Briefly describe the differences between indexes, primary keys, unique indexes and joint indexes in MySQL, and their impact on database performance (in terms of both reading and writing).
An index is a special file (the index on the InnoDB data table is a part of the table space), which contains reference pointers to all records in the data table.
The only task of a normal index (an index defined by the keyword key or index) is to speed up access to data.
A normal index allows the indexed data column to contain duplicate values. If you can determine that a data column will only contain different values from each other, you should define it as a unique index with the keyword unique when creating an index for this data column. In other words, a unique index can ensure the uniqueness of data records.
Primary key is a special unique index. Only one primary key index can be defined in a table. The primary key is used to uniquely identify a record and is created using the keyword primary key.
An index can cover multiple data columns; for example, INDEX(columnA, columnB) is a joint index.
Index can greatly improve the query speed of data, but it will reduce the speed of inserting, deleting and updating tables, because the index file must be operated during these write operations.
32. What are the transactions in the database?
A transaction is an ordered set of database operations treated as a single unit. The transaction succeeds only if every operation in the group succeeds; if even one operation fails, the transaction fails. When all operations complete, the transaction is committed and its modifications become visible to all other database processes. If any operation fails, the transaction is rolled back and all of its effects on the data are cancelled.
Transaction characteristics:
1. Atomicity: that is, indivisibility. Transactions are either executed or not executed.
2. Consistency (serializability). The execution of a transaction takes the database from one consistent state to another.
3. Isolation. Before a transaction is correctly committed, none of its changes to the data may be exposed to any other transaction.
4. Persistence. After the transaction is correctly committed, its results will be permanently saved in the database. Even if there are other failures after the transaction is committed, the transaction processing results will be saved.
Or understand it this way:
A transaction is a group of SQL statements bound together as a logical unit of work. If any statement fails, the whole unit fails and the operations are rolled back to the state before the transaction began (or to a savepoint). To guarantee all-or-nothing execution, use transactions. For a group of statements to be treated as a transaction, it must pass the ACID test: atomicity, consistency, isolation and durability.
33. What are the causes of SQL injection vulnerabilities? How to prevent?
Cause of SQL injection: during development, SQL statements are written without filtering special characters, allowing the client to submit SQL fragments through the global variables POST and GET that then execute as part of the statement.
How to prevent SQL injection: enable the magic_quotes_gpc and magic_quotes_runtime settings in the configuration file;
When executing SQL statements, use addslashes to escape the input used in the SQL statement.
When writing SQL statements, try not to omit double quotation marks and single quotation marks.
Filter out some keywords in SQL statements: update, insert, delete, select, *.
Improve the naming skills of database tables and fields, name some important fields according to the characteristics of the program, and take those that are not easy to guess.
34. Select the appropriate data type for the fields in the table
Field type priority: integer > date/time > enum, char > varchar > blob, text. Prefer numeric types first, then date or binary types, and string types last. Among data types of the same level, prefer the one that occupies less space.
35. Storing dates and times
DATETIME: stores the date and time in YYYY-MM-DD HH:MM:SS format, accurate to the second, occupying 8 bytes of storage. The DATETIME type is independent of the time zone.
TIMESTAMP: stored as a timestamp, occupying 4 bytes, with a smaller range (1970-01-01 to 2038-01-19). Its display depends on the specified time zone; by default, the first TIMESTAMP column's value can be updated automatically when the row is modified.
DATE: (e.g. a birthday) takes fewer bytes than a string, DATETIME or INT; only 3 bytes are needed to store year, month and day, and the date/time functions can be used to calculate intervals between dates.
TIME: stores the time-of-day portion of the data.
Note: do not use string types to store date/time data (date types usually take less storage and allow the date functions in searches and filters), and prefer the TIMESTAMP type over storing date-times as INT.
36. For relational databases, the index is a very important concept. Please answer some questions about indexes:
1. What is the purpose of the index?
To quickly access specific information in a data table and improve retrieval speed; a unique index additionally guarantees the uniqueness of each row of data in the table. Indexes also accelerate joins between tables, and when grouping and sorting clauses are used in data retrieval, they can significantly reduce the time spent grouping and sorting in queries.
2. What is the negative impact of indexes on database systems?
Negative effects:
Creating and maintaining indexes takes time, and this cost grows with the amount of data. Indexes occupy physical space: besides the data space used by the table itself, each index needs its own physical space. When rows in the table are added, deleted or modified, the indexes must also be maintained dynamically, which slows down data maintenance.
3. What are the principles for indexing data tables?
Index on the most frequently used fields to narrow the query.
Index frequently used fields that need to be sorted
4. Under what circumstances should an index not be established?
For columns rarely involved in the query or columns with many duplicate values, it is not appropriate to establish an index.
For some special data types, indexes should not be established, such as text fields
37. Explain the difference between MySQL outer joins, inner joins and cross joins
First, what is a cross join: a cross join, also called a Cartesian product, directly matches every record in one table with every record in another table, without any condition.
An inner join is a cross join with a condition: records are filtered according to the condition, and records that do not satisfy it do not appear in the result set; that is, an inner join returns only the matching rows. The result set of an outer join contains not only the rows that satisfy the join condition, but also all rows of the left table, the right table, or both tables; these three cases are called left outer join, right outer join, and full outer join respectively.
A left outer join (left join) takes the left table as the main table: all records of the left table appear in the result set, and for records with no match in the right table, the right table's fields are filled with NULL. A right outer join (right join) takes the right table as the main table, and all records of the right table appear in the result set. A right join can be rewritten as a left join by swapping the two tables. MySQL does not currently support full outer joins.
38. Overview of the transaction rollback mechanism in MySQL
A transaction is a user-defined sequence of database operations: either all of them are done or none of them is. It is an indivisible unit of work. Transaction rollback means undoing the update operations that a transaction has already applied to the database.
Suppose you want to modify two different tables in the database at the same time. If the modifications are not wrapped in a transaction, an exception may occur while modifying the second table after the first table has already been modified; the second table then remains in its pre-modification state while the first table has been changed. If the two modifications are placed in one transaction, then when the first table has been modified but the modification of the second table fails, both tables must return to their unmodified state. This is transaction rollback.
39. What parts does SQL language include? What are the operation keywords in each part?
SQL language includes data definition (DDL), data manipulation (DML), data control (DCL) and data query (DQL).
Data definition: create table, alter table, drop table, create/drop index, etc.
Data manipulation: select, insert, update, delete
Data control: Grant, revoke
Data query: Select
40. What are the integrity constraints?
Data integrity refers to the accuracy and reliability of data.
It is divided into the following four categories:
1. Entity integrity: Specifies that each row of the table is a unique entity in the table.
2. Domain integrity: refers to that the columns in the table must meet certain data type constraints, including value range, precision, etc.
3. Referential integrity: it means that the data of the primary and external keywords of two tables should be consistent, which ensures the consistency of data between tables and prevents data loss or meaningless data from spreading in the database.
4. User defined integrity: different relational database systems often need some special constraints according to their application environments. User defined integrity is the constraint condition for a specific relational database, which reflects the semantic requirements that a specific application must meet.
Table related constraints: including column constraints (not null constraints) and table constraints (primary key, foreign key, check, unique).
41. What is a lock?
A: A database is a shared resource used by multiple users. When multiple users access data concurrently, multiple transactions may access the same data at the same time. If these concurrent operations are not controlled, incorrect data may be read or stored and the consistency of the database destroyed.
Locking is a very important technology to realize database concurrency control. Before a transaction operates on a data object, it sends a request to the system to lock it. After locking, the transaction has certain control over the data object. Before the transaction releases the lock, other transactions cannot update the data object.
Basic lock types: locks include row-level locks and table-level locks.
42. What is view? What is a cursor?
A: A view is a virtual table that functions like a physical table. A view can be queried and, with restrictions, rows can be inserted, updated, and deleted through it. A view is usually a subset of the rows or columns of one or more tables. Changing the view definition does not affect the base tables, and views make it easier to obtain data than writing a multi-table query each time.
Cursor: used to process a queried result set effectively, row by row, as a unit. A cursor can be positioned on a specific row of the result set and retrieve one or more rows starting from the current row; it can also modify the current row. Cursors are generally avoided, but they are very important when data must be processed item by item.
43. What is a stored procedure? How is it called?
A: A stored procedure is a set of precompiled SQL statements. Its advantage is that it allows modular design: it only needs to be created once and can then be called many times from programs. If an operation needs to execute SQL several times, using a stored procedure is faster than sending plain SQL statements. A stored procedure can be called with a command object.
44. How do you understand the three normal forms?
A: First normal form: 1NF is an atomicity constraint on attributes; it requires attributes to be atomic and indecomposable.
Second normal form: 2NF is a uniqueness constraint on records; it requires every record to have a unique identifier, i.e. entity uniqueness.
Third normal form: 3NF is a constraint on field redundancy: no field may be derivable from other fields; it requires that fields contain no redundancy.
Advantages and disadvantages of normalized design:
Advantages:
Data redundancy is reduced as much as possible, so updates are fast and the data volume is small.
Disadvantages: queries often need to join multiple tables, which lowers query efficiency and makes index optimization harder.
Denormalization:
Advantages: it reduces table joins and allows indexes to be optimized better.
Disadvantages: data redundancy and possible data anomalies; modifying data costs more.
45. What is a basic table? What is a view?
A: A basic table is an independently existing table; in SQL, a relation corresponds to a table. A view is a table derived from one or more basic tables. The view itself is not stored independently in the database; it is a virtual table.
46. What are the advantages of view?
A: (1) Views can simplify users' operations; (2) views enable users to view the same data from multiple perspectives; (3) views provide a degree of logical independence for the database; (4) views provide security protection for confidential data.
47. What does NULL mean?
A: The value NULL means unknown; it is not the same as "" (an empty string). Any comparison with NULL produces NULL: you cannot compare a value with NULL and logically expect an answer.
Use IS NULL to test for NULL.
48. What is the difference between primary key, foreign key and index?
Differences between primary key, foreign key and index
definition:
Primary key – uniquely identifies a record; it cannot be duplicated and cannot be NULL
Foreign key – a table's foreign key is the primary key of another table; a foreign key can be duplicated and can be NULL
Index (unique index) – the indexed field has no duplicate values, but can contain NULL
effect:
Primary key – used to ensure data integrity
Foreign key – a key used to establish relationships with other tables
Index – is to improve the speed of query sorting
number:
Primary key – there can only be one primary key
Foreign keys – a table can have multiple foreign keys
Index – a table can have multiple unique indexes
49. What can you do to ensure that the fields in the table only accept values in a specific range?
A: A CHECK constraint, which is defined on the database table to limit the values that can be entered in that column.
Triggers can also be used to limit the acceptable values of fields in database tables, but this method requires triggers to be defined in tables, which may affect performance in some cases.
50. What are the methods to optimize SQL statements? (select several)
1. In the WHERE clause: the join conditions between tables must be written before the other WHERE conditions, and the conditions that can filter out the largest number of records should be written at the end of the WHERE clause; prefer filtering in WHERE over HAVING.
2. Replace in with exists and not in with not exists.
3. Avoid using calculations on index columns
4. Avoid using is null and is not null on index columns
5. To optimize the query, try to avoid full table scanning. First, consider establishing indexes on the columns involved in where and order by.
6. Try to avoid judging the null value of the field in the where clause, otherwise the engine will give up using the index and scan the whole table
7. Try to avoid expression operations on fields in the where clause, which will cause the engine to abandon the use of indexes and perform a full table scan
Java Concurrent Programming (I)
1. What is the difference between a daemon thread and a user thread in Java?
There are two kinds of threads in Java: daemon and user.
Any thread can be set as a daemon thread or a user thread via the method Thread.setDaemon(boolean on): true sets the thread as a daemon thread, otherwise it is a user thread. Thread.setDaemon() must be called before Thread.start(); otherwise an exception is thrown at runtime.
The difference between the two: the only difference lies in deciding when the JVM exits. Daemon threads provide services to other threads; once all user threads have terminated, the daemons have no threads left to serve, and the JVM exits. Daemon threads are typically (but not necessarily) threads created by the JVM itself, while user threads are threads created by the program. For example, the JVM's garbage collection thread is a daemon thread: when all user threads have terminated, no more garbage is generated and the GC thread naturally has nothing to do; when daemon threads are the only threads left in the JVM, the JVM exits automatically.
Extension: the daemon threads printed by a thread dump include the service daemon, compilation daemons, the Ctrl+Break daemon under Windows, the Finalizer daemon, the Reference Handler daemon, and GC daemons.
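As a minimal, runnable sketch of the rules above (the class name DaemonDemo is invented for illustration):

```java
public class DaemonDemo {
    public static void main(String[] args) throws Exception {
        // A background task: as a daemon it will not keep the JVM alive.
        Thread daemon = new Thread(() -> {
            while (true) {
                try { Thread.sleep(100); } catch (InterruptedException e) { return; }
            }
        });
        daemon.setDaemon(true); // must be called BEFORE start()
        daemon.start();
        if (!daemon.isDaemon()) throw new AssertionError();

        // Calling setDaemon() on a thread that is already alive fails.
        Thread user = new Thread(() -> {
            try { Thread.sleep(300); } catch (InterruptedException e) { }
        });
        user.start();
        boolean tooLate = false;
        try {
            user.setDaemon(true);
        } catch (IllegalThreadStateException e) {
            tooLate = true; // thrown because the thread has already started
        }
        if (!tooLate) throw new AssertionError();
        user.join();
        // When main (a user thread) and 'user' finish, the JVM can exit
        // even though the daemon thread is still looping.
        System.out.println("user threads done; JVM may now exit");
    }
}
```

Note that setDaemon(true) succeeds only before start(); once the thread is alive, an IllegalThreadStateException is thrown.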
2. What is the difference between threads and processes?
Process is the smallest unit that the operating system allocates resources, and thread is the smallest unit that the operating system schedules.
A program has at least one process, and a process has at least one thread.
3. What is context switching in multithreading?
Multiple threads share the CPUs of a machine. When the number of threads is greater than the number of CPUs available to the program, the CPU must be rotated among the threads so that each thread gets a chance to execute. Context switching is the process of saving the execution state of the thread currently using the CPU and restoring the state of the next thread when the CPU switches from one thread to another.
4. The difference between deadlock and livelock, and between deadlock and starvation?
Deadlock: the phenomenon in which two or more processes (or threads) wait for each other because they compete for resources during execution; without external intervention, none of them can proceed. Necessary conditions for deadlock: 1. Mutual exclusion: a process holds a resource exclusively at a given time. 2. Hold and wait: a process blocked while requesting resources holds on to the resources it has already obtained. 3. No preemption: resources a process has obtained cannot be forcibly taken away before the process finishes using them. 4. Circular wait: several processes form a circular chain in which each waits for a resource held by the next.
Livelock: the task or executor is not blocked, but because some condition is never satisfied, it repeatedly tries and fails, tries and fails.
The difference between livelock and deadlock is that the entities in a livelock are constantly changing state, hence "live", while the entities in a deadlock just wait; a livelock may resolve itself, but a deadlock cannot.
Starvation: the state in which one or more threads cannot obtain the resources they need, for various reasons, and thus can never execute. Causes of starvation in Java: 1. High-priority threads consume all the CPU time of low-priority threads. 2. A thread is permanently blocked waiting to enter a synchronized block because other threads always manage to enter it before it. 3. A thread waits permanently on an object (for example by calling its wait() method) because other waiting threads are always the ones being awakened instead of it.
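Among the four conditions, "circular wait" is the one most often broken in practice: if every thread acquires the locks in the same global order, a deadlock cannot form. A minimal sketch (class and lock names invented for illustration):

```java
public class LockOrdering {
    private static final Object LOCK_A = new Object();
    private static final Object LOCK_B = new Object();
    static int counter = 0;

    // Every thread acquires LOCK_A before LOCK_B, so no circular wait can form.
    static void task() {
        for (int i = 0; i < 1000; i++) {
            synchronized (LOCK_A) {
                synchronized (LOCK_B) {
                    counter++;
                }
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(LockOrdering::task);
        Thread t2 = new Thread(LockOrdering::task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        // Both threads finished — had they taken the locks in opposite
        // orders, this program could hang forever instead.
        System.out.println(counter);
    }
}
```

If the two threads instead locked in opposite orders (one A-then-B, the other B-then-A), all four deadlock conditions could hold simultaneously and the program could hang.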
5. What is the thread scheduling algorithm used in Java?
Time-slice round-robin scheduling is used. You can set a thread's priority, which is mapped to a priority of the underlying operating system. Unless it is really necessary, avoid relying on thread priorities, to prevent thread starvation.
6. What is a thread group and why is it not recommended in Java?
The ThreadGroup class lets you assign threads to a thread group. A thread group can contain thread objects and also other thread groups; the organizational structure is somewhat like a tree.
Why is it not recommended? Because using it carries many pitfalls (the details are not examined here). If this kind of functionality is needed, it is recommended to use a thread pool instead.
7. Why use the executor framework?
Creating a thread with new Thread() every time a task is executed consumes performance: creating a thread costs time and resources. A thread created by calling new Thread() lacks management (a so-called "wild" thread) and can be created without limit; competition between threads then leads to excessive use of system resources and can paralyze the system, and frequent switching between threads also consumes many system resources. Threads started with new Thread() are also not conducive to extension, such as scheduled execution, periodic execution, scheduled periodic execution, thread interruption, and so on.
8. What is the difference between Executor and Executors in Java?
The different methods of the Executors utility class create different thread pools according to our requirements. An object of the Executor interface can execute our thread tasks. The ExecutorService interface extends the Executor interface and provides more methods, through which we can obtain the execution status of a task and its return value. Use ThreadPoolExecutor to create a custom thread pool.
Future represents the result of an asynchronous computation. It provides methods to check whether the computation is complete and to wait for its completion, and the get() method can be used to obtain the computed result.
9. How do you find which thread uses the most CPU time on Windows and Linux?
Reference: http://daiguahub.com/2016/07/… (using jstack to find the thread code that consumes the most CPU)
10. What is an atomic operation? What are the atomic classes in the Java concurrency API?
An atomic operation is "an operation, or a series of operations, that cannot be interrupted". On multiprocessors, the processor implements atomic operations based on cache locking or bus locking. In Java, atomic operations can be implemented with locks or with spinning CAS. A CAS operation is compare-and-set (also called compare-and-swap); almost all CPU instruction sets now support CAS atomic operations.
An atomic operation is a unit of work that is not affected by other operations; atomic operations are a necessary means of avoiding data inconsistency in a multithreaded environment. int++ is not an atomic operation, so after one thread reads the value and adds 1, another thread may still read the old value, causing an error. To solve this problem, the increment must be made atomic. Before JDK 1.5 we could use synchronization for this; since JDK 1.5, the java.util.concurrent.atomic package provides atomic wrapper classes for types such as int and long, which guarantee that their operations are atomic without explicit synchronization.
The java.util.concurrent package provides a set of atomic classes. Their basic characteristic is that in a multithreaded environment, when multiple threads execute the methods of these class instances at the same time, the execution is exclusive: once a thread enters a method and starts executing its instructions, it is not interrupted by other threads, while the other threads wait, as with a spin lock, until the method completes and the JVM selects another thread from the waiting set. (This is only a logical model of the behavior.)
Atomic classes: AtomicBoolean, AtomicInteger, AtomicLong, AtomicReference.
Atomic arrays: AtomicIntegerArray, AtomicLongArray, AtomicReferenceArray.
Atomic field updaters: AtomicLongFieldUpdater, AtomicIntegerFieldUpdater, AtomicReferenceFieldUpdater.
Atomic classes for the ABA problem: AtomicMarkableReference (introduces a boolean flag to record whether the value has been changed in between), AtomicStampedReference (introduces an int stamp that is incremented to record whether the value has been changed in between).
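A small sketch of AtomicInteger, showing both the lost-update problem that a plain int++ would have and compareAndSet (class name invented for illustration):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicDemo {
    public static void main(String[] args) throws InterruptedException {
        AtomicInteger atomic = new AtomicInteger(0);

        // 10 threads, each incrementing 1000 times; incrementAndGet() is a
        // CAS-based atomic operation, so no updates are lost. With a plain
        // "int counter; counter++" the final value could be less than 10000.
        Thread[] threads = new Thread[10];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 1000; j++) {
                    atomic.incrementAndGet();
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) t.join();

        if (atomic.get() != 10000) throw new AssertionError();
        System.out.println(atomic.get()); // always 10000

        // compareAndSet: succeeds only if the current value matches the expected one
        System.out.println(atomic.compareAndSet(10000, 42)); // true
        System.out.println(atomic.compareAndSet(10000, 7));  // false, value is now 42
        if (atomic.get() != 42) throw new AssertionError();
    }
}
```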
11. What is the Lock interface in the Java concurrency API? What are its advantages over synchronized?
The Lock interface provides more extensible locking operations than synchronized methods and synchronized blocks. It allows more flexible structures, can have quite different properties, and can support multiple associated Condition objects.
Its advantages are:
It can make locking fairer; it allows a thread to respond to interruption while waiting for a lock; it allows a thread to try to acquire a lock and either return immediately or wait for a period of time when the lock cannot be acquired; and locks can be acquired and released in different scopes and in different orders.
Overall, Lock is an extended version of synchronized. Lock provides unconditional, polled (the tryLock method), timed (tryLock with a timeout), and interruptible (lockInterruptibly) lock acquisition, as well as multiple condition queues (the newCondition method). In addition, Lock implementations generally support both non-fair locks (the default) and fair locks, while synchronized supports only non-fair locking. In most cases, of course, a non-fair lock is the more efficient choice.
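A minimal sketch of the polled and timed acquisition modes that synchronized lacks (names invented for illustration):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {
    static volatile boolean immediateFailed, timedFailed;

    public static void main(String[] args) throws InterruptedException {
        ReentrantLock lock = new ReentrantLock(); // new ReentrantLock(true) would be a fair lock

        lock.lock(); // main thread holds the lock for the whole demo
        try {
            Thread other = new Thread(() -> {
                // Polled attempt: returns false immediately instead of blocking
                immediateFailed = !lock.tryLock();
                try {
                    // Timed attempt: give up after 100 ms (also responds to interruption)
                    timedFailed = !lock.tryLock(100, TimeUnit.MILLISECONDS);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            other.start();
            other.join(); // main still holds the lock, so both attempts must fail
        } finally {
            lock.unlock();
        }
        if (!immediateFailed || !timedFailed) throw new AssertionError();
        System.out.println("both attempts failed while the lock was held");
    }
}
```

With synchronized, the second thread would simply block with no way to give up or time out.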
12. What is the Executor framework?
The Executor framework is a framework for invoking, scheduling, executing, and controlling asynchronous tasks according to a set of execution policies.
Unlimited thread creation can cause an application to run out of memory, so a thread pool is a better solution: it limits the number of threads and reuses them. With the Executors framework it is very convenient to create a thread pool.
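A minimal sketch of creating a bounded pool with the Executors factory and collecting results (names invented for illustration):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class PoolDemo {
    public static void main(String[] args) throws Exception {
        // A bounded pool: at most 3 threads, reused across all 5 tasks
        ExecutorService pool = Executors.newFixedThreadPool(3);
        List<Future<Integer>> results = new ArrayList<>();
        for (int i = 1; i <= 5; i++) {
            final int n = i;
            results.add(pool.submit(() -> n * n)); // each task computes n squared
        }
        int sum = 0;
        for (Future<Integer> f : results) sum += f.get(); // blocks until each task is done
        pool.shutdown();
        if (sum != 55) throw new AssertionError();
        System.out.println(sum); // 1 + 4 + 9 + 16 + 25 = 55
    }
}
```

Compared with new Thread() per task, the pool bounds resource usage and reuses threads; remember to call shutdown() so the pool's threads do not keep the JVM alive.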
13. What is a blocking queue? What is the implementation principle of a blocking queue? How do you use a blocking queue to implement the producer-consumer model?
A blocking queue is a queue that supports two additional operations.
These two additional operations are: when the queue is empty, a thread getting an element waits until the queue becomes non-empty; when the queue is full, a thread storing an element waits until space becomes available.
Blocking queues are often used in the scenario of producers and consumers. Producers are threads that add elements to the queue, and consumers are threads that take elements from the queue. A blocking queue is a container where producers store elements, and consumers only get elements from the container.
JDK 7 provides seven blocking queues, namely:
ArrayBlockingQueue: a bounded blocking queue backed by an array.
LinkedBlockingQueue: an optionally bounded blocking queue backed by a linked list.
PriorityBlockingQueue: an unbounded blocking queue that supports priority ordering.
DelayQueue: an unbounded blocking queue implemented on top of a priority queue.
SynchronousQueue: a blocking queue that does not store elements.
LinkedTransferQueue: an unbounded blocking queue backed by a linked list.
LinkedBlockingDeque: a double-ended blocking queue backed by a linked list.
Before Java 5, synchronized access could be implemented with an ordinary collection plus thread cooperation and synchronization, mainly using the keywords wait, notify, notifyAll, and synchronized, to build the producer-consumer pattern. Since Java 5 the same pattern can be implemented with a blocking queue, which greatly reduces the amount of code, makes multithreaded programming easier, and guarantees safety. The BlockingQueue interface is a sub-interface of Queue, but its main purpose is not to serve as a container but as a tool for thread synchronization. It has a distinctive feature: when a producer thread tries to put an element into a full BlockingQueue, the thread blocks; when a consumer thread tries to take an element from an empty BlockingQueue, the thread blocks. Thanks to this feature, multiple threads in a program alternately putting elements into and taking elements out of a BlockingQueue can control the communication between threads very well.
The most classic scenario of blocking queue is the reading and parsing of socket client data. The thread reading the data keeps putting the data into the queue, and then the parsing thread keeps fetching data from the queue for parsing.
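A minimal producer-consumer sketch built on ArrayBlockingQueue; put() and take() do all the blocking, so no explicit wait/notify is needed (the names and the -1 "poison pill" convention are invented for illustration):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ProducerConsumer {
    static volatile int total;

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(2); // capacity 2

        Thread producer = new Thread(() -> {
            try {
                for (int i = 1; i <= 5; i++) {
                    queue.put(i); // blocks while the queue is full
                }
                queue.put(-1);    // "poison pill" telling the consumer to stop
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread consumer = new Thread(() -> {
            try {
                int sum = 0;
                while (true) {
                    int v = queue.take(); // blocks while the queue is empty
                    if (v == -1) break;
                    sum += v;
                }
                total = sum;
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
        producer.join();
        consumer.join();
        if (total != 15) throw new AssertionError();
        System.out.println("consumed sum = " + total); // 1+2+3+4+5 = 15
    }
}
```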
14. What are callable and future?
The Callable interface is similar to Runnable, as the names suggest, but Runnable does not return a result and cannot throw a checked exception. Callable is more powerful: after being executed by a thread it can return a value, and this return value can be obtained through a Future; that is, Future can obtain the return value of an asynchronously executed task. Callable can be thought of as a Runnable with a result.
The Future interface represents an asynchronous task: the future result of a task that has not yet completed. So Callable is used to produce a result, and Future is used to obtain it.
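A minimal sketch of submitting a Callable and reading its result through a Future (names invented for illustration):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class CallableDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();

        // Unlike Runnable, a Callable returns a value and may throw a checked exception
        Callable<Integer> job = () -> {
            Thread.sleep(50);  // simulate some work
            return 6 * 7;
        };

        Future<Integer> future = pool.submit(job);
        // isDone() is usually still false here, since the task sleeps 50 ms
        System.out.println("done yet? " + future.isDone());
        int result = future.get(); // blocks until the result is ready
        if (result != 42) throw new AssertionError();
        System.out.println(result); // 42

        pool.shutdown();
    }
}
```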
15. What is FutureTask? How do you start a task with ExecutorService?
In Java concurrent programs, FutureTask represents an asynchronous operation that can be cancelled. It has methods for starting and cancelling the operation, querying whether it has completed, and retrieving its result. The result can be retrieved only when the operation has completed; if it has not, the get method blocks. A FutureTask object can wrap a Callable or Runnable object. Because FutureTask also implements the Runnable interface, it can be submitted to an Executor for execution.
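A minimal sketch of a FutureTask being run by a plain Thread, which works precisely because FutureTask implements Runnable (names invented for illustration):

```java
import java.util.concurrent.FutureTask;

public class FutureTaskDemo {
    public static void main(String[] args) throws Exception {
        // FutureTask wraps a Callable; because it also implements Runnable,
        // it can be run by a plain Thread or submitted to an Executor.
        FutureTask<String> task = new FutureTask<>(() -> "hello".toUpperCase());
        new Thread(task).start();
        String result = task.get(); // blocks until the computation finishes
        if (!"HELLO".equals(result)) throw new AssertionError();
        System.out.println(result);        // HELLO
        System.out.println(task.isDone()); // true
    }
}
```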
16. What is the implementation of a concurrent container?
What a synchronized container is: it can be simply understood as a container that achieves synchronization through synchronized; if multiple threads call a synchronized container's methods, they execute serially. Examples are Vector, Hashtable, and the containers returned by methods such as Collections.synchronizedSet and Collections.synchronizedList. Looking at the implementation of Vector, Hashtable, and the other synchronized containers, you can see that they achieve thread safety by encapsulating their state and adding the synchronized keyword to the methods that need synchronization.
A concurrent container uses a completely different locking strategy from a synchronized container to provide higher concurrency and scalability. For example, ConcurrentHashMap adopts a finer-grained locking mechanism, known as segmented locking (lock striping). Under this mechanism any number of reader threads may access the map concurrently, readers and writers may access the map concurrently, and a certain number of writer threads may modify the map concurrently, so it achieves much higher throughput in a concurrent environment.
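A small sketch of the contrast: ConcurrentHashMap allows many threads to update the map concurrently without external locking, and per-key operations such as merge() are atomic; a plain HashMap could lose updates or corrupt its internal state here (names invented for illustration):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ConcurrentMapDemo {
    public static void main(String[] args) throws InterruptedException {
        Map<String, Integer> counts = new ConcurrentHashMap<>();

        // 4 threads updating the same key; merge() on ConcurrentHashMap is
        // performed atomically per key, so no increments are lost.
        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 1000; j++) {
                    counts.merge("hits", 1, Integer::sum);
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) t.join();

        if (counts.get("hits") != 4000) throw new AssertionError();
        System.out.println(counts.get("hits")); // 4000
    }
}
```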
17. There are several ways to implement multithread synchronization and mutex. What are they?
Thread synchronization refers to a constraint relationship between threads: the execution of one thread depends on a message from another thread, and when that message has not arrived, the thread waits until it does. Thread mutual exclusion refers to exclusive access to shared system resources by individual threads: when several threads need to use a shared resource, at most one thread is allowed to use it at any time, and the other threads that want the resource must wait until its holder releases it. Thread mutual exclusion can be regarded as a special kind of thread synchronization.
The synchronization methods between threads can be roughly divided into two categories: user mode and kernel mode. As the names suggest, kernel mode uses the kernel objects provided by the operating system for synchronization, which requires switching between user state and kernel state; user-mode methods complete the operation entirely in user state, without such a switch. The user-mode methods are atomic operations (e.g. on a single global variable) and critical sections. The kernel-mode methods are events, semaphores, and mutexes.
18. What is a race condition? How do you find and solve race conditions?
When multiple processes or threads access shared data concurrently and the final result depends on the order in which they run, a race condition occurs. Race conditions are usually found through careful code review, stress testing, and analysis tools, and are solved by synchronizing access to the shared data (for example with locks or atomic classes).
19. How will you use a thread dump? How will you analyze a thread dump?
New status (New)
A thread created with the new statement is in the new state; at this point, like any other Java object, it has merely been allocated memory in the heap. Ready state (Runnable)
After a thread object is created and another thread calls its start() method, the thread enters the ready state, and the JVM creates a method call stack and a program counter for it. A thread in this state is in the runnable pool, waiting for CPU time.
Running status
Threads in this state occupy CPU and execute program code. Only threads in the ready state have the opportunity to go to the running state. Blocked status
The blocked state means that a thread gives up the CPU for some reason and temporarily stops running. While a thread is blocked, the JVM does not allocate CPU to it; only after the thread re-enters the ready state does it again have a chance to move to the running state.
Blocking states can be divided into the following three types:
Blocked in object’s wait pool:
When the thread is running, if the wait () method of an object is executed, the Java virtual machine will put the thread into the waiting pool of the object, which involves the content of “thread communication”.
Blocked in object’s lock pool:
When a thread is running and trying to obtain the synchronization lock of an object, if the synchronization lock of the object has been occupied by other threads, the Java virtual machine will put the thread into the lock pool of the object, which involves the content of “thread synchronization”.
Otherwise blocked:
When the current thread executes the sleep () method, or calls the join () method of other threads, or makes an I / O request, it will enter this state.
Dead state: when a thread exits the run() method, it enters the dead state, and the thread's life cycle ends. We run the earlier deadlock code, SimpleDeadLock.java, and examine the output:
/* Time and JVM information */
2017-11-01 17:36:28
Full thread dump Java HotSpot(TM) 64-Bit Server VM (25.144-b01 mixed mode):

/* Thread name: DestroyJavaVM; number: #13; priority: 5; OS priority: 0; JVM-internal thread id: 0x0000000001c88800; corresponding native thread id: 0x1c18; thread state: waiting on condition [0x0000000000000000]; detailed state: java.lang.Thread.State: RUNNABLE — and similarly for all the following entries */
"DestroyJavaVM" #13 prio=5 os_prio=0 tid=0x0000000001c88800 nid=0x1c18 waiting on condition [0x0000000000000000] java.lang.Thread.State: RUNNABLE

"Thread-1" #12 prio=5 os_prio=0 tid=0x0000000018d49000 nid=0x17b8 waiting for monitor entry [0x0000000019d7f000] /* thread state: BLOCKED (on object monitor); code location: at com.leo.interview.SimpleDeadLock$B.run(SimpleDeadLock.java:56); waiting for lock: 0x00000000d629b4d8; lock held: 0x00000000d629b4e8 */ java.lang.Thread.State: BLOCKED (on object monitor) at com.leo.interview.SimpleDeadLock$B.run(SimpleDeadLock.java:56)

– waiting to lock <0x00000000d629b4d8> (a java.lang.Object)
– locked <0x00000000d629b4e8> (a java.lang.Object)

"Thread-0" #11 prio=5 os_prio=0 tid=0x0000000018d44000 nid=0x1ebc waiting for monitor entry [0x000000001907f000] java.lang.Thread.State: BLOCKED (on object monitor) at com.leo.interview.SimpleDeadLock$A.run(SimpleDeadLock.java:34) – waiting to lock <0x00000000d629b4e8> (a java.lang.Object) – locked <0x00000000d629b4d8> (a java.lang.Object)
“Service Thread” #10 daemon prio=9 os_prio=0 tid=0x0000000018ca5000 nid=0x1264 runnable [0x0000000000000000] java.lang.Thread.State: RUNNABLE
“C1 CompilerThread2” #9 daemon prio=9 os_prio=2 tid=0x0000000018c46000 nid=0xb8c waiting on condition [0x0000000000000000] java.lang.Thread.State: RUNNABLE
“C2 CompilerThread1” #8 daemon prio=9 os_prio=2 tid=0x0000000018be4800 nid=0x1db4 waiting on condition [0x0000000000000000] java.lang.Thread.State: RUNNABLE
“C2 CompilerThread0” #7 daemon prio=9 os_prio=2 tid=0x0000000018be3800 nid=0x810 waiting on condition [0x0000000000000000] java.lang.Thread.State: RUNNABLE
“Monitor Ctrl-Break” #6 daemon prio=5 os_prio=0 tid=0x0000000018bcc800 nid=0x1c24 runnable [0x00000000193ce000] java.lang.Thread.State: RUNNABLE at java.net.SocketInputStream.socketRead0(Native Method) at java.net.SocketInputStream.socketRead(SocketInputStream.java:116) at java.net.SocketInputStream.read(SocketInputStream.java:171) at java.net.SocketInputStream.read(SocketInputStream.java:141) at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:284) at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:326) at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:178) – locked <0x00000000d632b928> (a java.io.InputStreamReader) at java.io.InputStreamReader.read(InputStreamReader.java:184) at java.io.BufferedReader.fill(BufferedReader.java:161) at java.io.BufferedReader.readLine(BufferedReader.java:324) – locked <0x00000000d632b928> (a java.io.InputStreamReader) at java.io.BufferedReader.readLine(BufferedReader.java:389) at com.intellij.rt.execution.application.AppMainV2$1.run(AppMainV2.java:6 4)
“Attach Listener” #5 daemon prio=5 os_prio=2 tid=0x0000000017781800 nid=0x524 runnable [0x0000000000000000] java.lang.Thread.State: RUNNABLE
“Signal Dispatcher” #4 daemon prio=9 os_prio=2 tid=0x000000001778f800 nid=0x1b08 waiting on condition [0x0000000000000000] java.lang.Thread.State: RUNNABLE
“Finalizer” #3 daemon prio=8 os_prio=1 tid=0x000000001776a800 nid=0xdac in Object.wait() [0x0000000018b6f000] java.lang.Thread.State: WAITING (on object monitor) at java.lang.Object.wait(Native Method) – waiting on <0x00000000d6108ec8> (a java.lang.ref.ReferenceQueue$Lock) at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:143) – locked <0x00000000d6108ec8> (a java.lang.ref.ReferenceQueue$Lock) at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:164) at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:209)
“Reference Handler” #2 daemon prio=10 os_prio=2 tid=0x0000000017723800 nid=0x1670 in Object.wait() [0x00000000189ef000] java.lang.Thread.State: WAITING (on object monitor) at java.lang.Object.wait(Native Method) – waiting on <0x00000000d6106b68> (a java.lang.ref.Reference$Lock) at java.lang.Object.wait(Object.java:502) at java.lang.ref.Reference.tryHandlePending(Reference.java:191) – locked <0x00000000d6106b68> (a java.lang.ref.Reference$Lock) at java.lang.ref.Reference$ReferenceHandler.run(Reference.java:153)
“VM Thread” os_prio=2 tid=0x000000001771b800 nid=0x604 runnable
“GC task thread#0 (ParallelGC)” os_prio=0 tid=0x0000000001c9d800 nid=0x9f0 runnable
“GC task thread#1 (ParallelGC)” os_prio=0 tid=0x0000000001c9f000 nid=0x154c runnable
Page 171 of 485
“GC task thread#2 (ParallelGC)” os_prio=0 tid=0x0000000001ca0800 nid=0xcd0 runnable
“GC task thread#3 (ParallelGC)” os_prio=0 tid=0x0000000001ca2000 nid=0x1e58 runnable
“VM Periodic Task Thread” os_prio=2 tid=0x0000000018c5a000 nid=0x1b58 waiting on condition
JNI global references: 33
/* The deadlock-related information is shown here: */
Found one Java-level deadlock:
=============================
"Thread-1":
  waiting to lock monitor 0x0000000017729fc8 (object 0x00000000d629b4d8, a java.lang.Object),
  which is held by "Thread-0"
"Thread-0":
  waiting to lock monitor 0x0000000017727738 (object 0x00000000d629b4e8, a java.lang.Object),
  which is held by "Thread-1"

Java stack information for the threads listed above:
===================================================
"Thread-1":
        at com.leo.interview.SimpleDeadLock$B.run(SimpleDeadLock.java:56)
        - waiting to lock <0x00000000d629b4d8> (a java.lang.Object)
        - locked <0x00000000d629b4e8> (a java.lang.Object)
"Thread-0":
        at com.leo.interview.SimpleDeadLock$A.run(SimpleDeadLock.java:34)
        - waiting to lock <0x00000000d629b4e8> (a java.lang.Object)
        - locked <0x00000000d629b4d8> (a java.lang.Object)

Found 1 deadlock.

/* Memory usage; see a JVM reference for details: */
Heap
 PSYoungGen      total 37888K, used 4590K [0x00000000d6100000, 0x00000000d8b00000, 0x0000000100000000)
  eden space 32768K, 14% used [0x00000000d6100000,0x00000000d657b968,0x00000000d8100000)
  from space 5120K, 0% used [0x00000000d8600000,0x00000000d8600000,0x00000000d8b00000)
  to   space 5120K, 0% used [0x00000000d8100000,0x00000000d8100000,0x00000000d8600000)
 ParOldGen       total 86016K, used 0K [0x0000000082200000, 0x0000000087600000, 0x00000000d6100000)
  object space 86016K, 0% used [0x0000000082200000,0x0000000082200000,0x0000000087600000)
 Metaspace       used 3474K, capacity 4500K, committed 4864K, reserved 1056768K
  class space    used 382K, capacity 388K, committed 512K, reserved 1048576K
20. Why does calling the start() method execute the run() method? Why can't we call the run() method directly?
Calling the start() method creates a new thread, and that new thread executes the code in the run() method. If you call run() directly, no new thread is created: run() is simply executed as an ordinary method on the calling thread.
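A minimal sketch of the difference (the class name is illustrative): run() called directly executes on the caller's thread, while start() spawns a new one.

```java
// Demonstrates start() vs run(): run() is an ordinary call on the
// current thread; start() makes a new thread execute run().
public class StartVsRun {
    static volatile String runCaller;

    static Thread newWorker() {
        return new Thread(() -> runCaller = Thread.currentThread().getName());
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = newWorker();
        t1.run();                        // ordinary method call: no new thread
        System.out.println(runCaller);   // prints "main"

        Thread t2 = newWorker();
        t2.start();                      // a new thread executes run()
        t2.join();
        System.out.println(runCaller);   // prints the new thread's name, e.g. "Thread-1"
    }
}
```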
21. How do you wake up a blocked thread in Java?
Historically, Java provided the suspend() and resume() methods to pause and wake threads, but they have many problems, deadlock being the typical one. The recommended solution is monitor-based blocking, i.e. using the wait() and notify() methods of the Object class. First, wait and notify operate on objects: calling wait() on any object blocks the current thread and releases that object's lock at the same time; correspondingly, calling notify() on the object wakes one of the threads blocked on it, but that thread must re-acquire the object's lock before it can continue. Second, wait and notify must be called inside a synchronized block or method, and the lock object of that block or method must be the same object on which wait and notify are called. This guarantees that the current thread holds the object's lock before calling wait, and releases that lock when it blocks in wait.
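The pattern above can be sketched as follows (class and field names are illustrative); note the condition is checked in a loop and both threads synchronize on the same monitor.

```java
// Minimal wait()/notify() sketch: the waiter releases the lock while
// waiting and is woken by notify() on the same monitor object.
public class WaitNotifyDemo {
    static final Object lock = new Object();
    static volatile boolean ready = false;

    public static void main(String[] args) throws InterruptedException {
        Thread waiter = new Thread(() -> {
            synchronized (lock) {
                while (!ready) {              // loop guards against spurious wakeups
                    try {
                        lock.wait();          // releases the lock while waiting
                    } catch (InterruptedException e) {
                        return;
                    }
                }
            }
            System.out.println("woken up");
        });
        waiter.start();
        Thread.sleep(100);                    // give the waiter time to block
        synchronized (lock) {                 // must hold the same lock
            ready = true;
            lock.notify();                    // wakes one waiting thread
        }
        waiter.join();
    }
}
```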
22. What is the difference between CyclicBarrier and CountDownLatch in Java?
CyclicBarrier can be reused, but CountDownLatch cannot.
CountDownLatch, in Java's concurrency package, can be regarded as a counter whose operations are atomic: only one thread at a time can decrement its value. You initialize a CountDownLatch with a count; any call to await() on that object blocks until other threads reduce the count to 0. Once the count reaches zero, all waiting threads are released and all subsequent calls to await() return immediately. This happens only once — the count cannot be reset. If you need a resettable count, consider using CyclicBarrier instead. A typical CountDownLatch scenario: one task must run only after several other tasks have finished. The waiting task calls await() on a CountDownLatch object; each of the other tasks calls countDown() on the same object when it completes its own work; the task that called await() stays blocked until the count drops to 0.
CyclicBarrier is a synchronization helper class that allows a group of threads to wait for each other until they all reach a common barrier point. CyclicBarrier is useful in programs involving a fixed-size group of threads that must wait for each other from time to time. Because the barrier can be reused after releasing the waiting threads, it is called a cyclic barrier.
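A small sketch of both helpers under the assumptions above (the class name and thread counts are illustrative): main awaits a latch counted down by workers, then two parties meet at a reusable barrier.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.CyclicBarrier;

// CountDownLatch: one-shot countdown awaited by main.
// CyclicBarrier: a rendezvous point for a fixed group, reusable afterwards.
public class LatchVsBarrier {
    static final CountDownLatch latch = new CountDownLatch(2);

    public static void main(String[] args) throws Exception {
        for (int i = 0; i < 2; i++) {
            new Thread(latch::countDown).start();  // each worker counts down once
        }
        latch.await();                             // blocks until the count hits 0
        System.out.println("all workers done");

        // Barrier for 2 parties; the barrier action runs when both arrive.
        CyclicBarrier barrier =
            new CyclicBarrier(2, () -> System.out.println("barrier tripped"));
        Runnable party = () -> {
            try { barrier.await(); } catch (Exception ignored) { }
        };
        Thread a = new Thread(party), b = new Thread(party);
        a.start(); b.start();
        a.join(); b.join();                        // the barrier could now be reused
    }
}
```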
23. What is an immutable object and how does it help write concurrent applications?
An immutable object is one whose state (its data, i.e. its attribute values) cannot be changed once it has been created; objects whose state can change are mutable objects. The class of an immutable object is an immutable class. The Java platform library contains many immutable classes, such as String, the wrapper classes of the primitive types, BigInteger and BigDecimal. Immutable objects are inherently thread safe: their fields are set in the constructor, and since their state cannot be modified, those fields never change.
Immutable objects are always thread safe. An object is immutable only when all of the following hold: its state cannot be modified after creation; all of its fields are final; and it is properly constructed (the this reference does not escape during construction).
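A minimal immutable class satisfying these conditions might look like this (the Point name is illustrative); "mutation" returns a new object instead of changing the existing one.

```java
// A minimal immutable class: final class, final fields, no setters,
// state fully established in the constructor, 'this' does not escape.
final class ImmutablePoint {
    private final int x;
    private final int y;

    ImmutablePoint(int x, int y) {
        this.x = x;
        this.y = y;
    }

    int getX() { return x; }
    int getY() { return y; }

    // Returns a new object rather than modifying this one.
    ImmutablePoint translate(int dx, int dy) {
        return new ImmutablePoint(x + dx, y + dy);
    }

    public static void main(String[] args) {
        ImmutablePoint p = new ImmutablePoint(1, 2).translate(3, 4);
        System.out.println(p.getX() + "," + p.getY()); // 4,6
    }
}
```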
24. What is context switching in multithreading?
During a context switch, the CPU stops processing the currently running program and saves that program's exact position so it can resume later. From this perspective, context switching is a bit like reading several books at the same time: switching back and forth, we need to remember the page number of each book. In a program, this "page number" information is stored in the process control block (PCB), often also called the "switchframe". The saved context is kept in memory until it is used again. Context switching is the process of storing and restoring CPU state, which lets a thread resume execution from the point of interruption. It is a basic feature of multitasking operating systems and multithreaded environments.
25. What is the thread scheduling algorithm used in Java?
A computer usually has only one CPU, which can execute only one machine instruction at any moment, and a thread can execute instructions only while it holds the CPU. So-called concurrent multithreading means that, viewed macroscopically, the threads take turns obtaining the CPU and each executes its own task. In the runnable pool there are multiple threads in the ready state waiting for the CPU; one job of the Java virtual machine is thread scheduling, i.e. allocating the right to use the CPU among multiple threads according to a specific mechanism.
There are two scheduling models: the time-sharing model and the preemptive model. The time-sharing model lets all threads take turns obtaining the CPU, allocating each thread an equal time slice, which is easy to understand.
The Java virtual machine adopts the preemptive scheduling model: the highest-priority thread in the runnable pool gets the CPU first, and if several runnable threads share the same priority, one of them is chosen at random. A running thread runs until it has to give up the CPU.
26. What is a thread group and why is it not recommended in Java?
Thread groups and thread pools are two different concepts with completely different roles: the former exists to make thread management convenient, while the latter manages thread life cycles, reuses threads, and reduces the overhead of creating and destroying threads. ThreadGroup is not recommended mainly because it is a legacy API: its control methods such as stop(), suspend() and resume() are deprecated and unsafe, and it offers little beyond grouping threads by name, so thread pools via the Executor framework are preferred.
27. Why is it better to use the Executor framework than to create and manage threads in the application?
Why use the Executor thread-pool framework? 1. Creating a new Thread for every task costs performance; thread creation is time- and resource-consuming. 2. Threads created by calling new Thread() lack management — they are called "wild threads" — and can be created without limit; competition between them can exhaust system resources and paralyze the system, and frequent alternation between threads also consumes many system resources. 3. Threads started directly with new Thread() are hard to extend with features such as scheduled execution, periodic execution, or thread interruption.
Advantages of the Executor thread-pool framework:
1. It reuses existing idle threads, reducing the creation of Thread objects and the overhead of discarded threads. 2. It can effectively cap the maximum number of concurrent threads, improving system resource utilization and avoiding excessive resource contention. 3. The framework already provides scheduling, periodic execution, single-threaded execution, concurrency control and other facilities. In short, the Executor thread-pool framework manages threads better and improves system resource utilization.
28. In how many ways can you create a thread in Java?
Extend the Thread class; implement the Runnable interface; or implement the Callable interface and its call() method (typically run via a FutureTask or a thread pool).
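The three classic forms side by side, as a sketch (class names are illustrative):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.FutureTask;

// 1. extend Thread; 2. implement Runnable; 3. implement Callable,
// which — unlike the other two — can return a value.
public class ThreadCreation {
    static class MyThread extends Thread {
        public void run() { System.out.println("extends Thread"); }
    }

    public static void main(String[] args) throws Exception {
        new MyThread().start();                                   // way 1

        Runnable r = () -> System.out.println("implements Runnable");
        new Thread(r).start();                                    // way 2

        Callable<Integer> c = () -> 42;                           // way 3
        FutureTask<Integer> task = new FutureTask<>(c);
        new Thread(task).start();
        System.out.println("callable returned " + task.get());    // blocks for the result
    }
}
```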
29. How to stop a running thread?
Use a shared variable. The shared variable serves as a signal, visible to every thread executing the task, telling the thread that it should stop its work.
Use the interrupt() method to terminate the thread.
What if the thread is blocked waiting for some event? This happens frequently — for example, when a thread blocks waiting for keyboard input, or calls Thread.join(), Thread.sleep(), ServerSocket.accept(), or DatagramSocket.receive(). A blocked thread cannot check a loop flag, so even if the main program sets the shared variable to true, the thread cannot be interrupted promptly. The suggestion here is not to use the stop() method but Thread's interrupt() method: although interrupt() does not stop a running thread, it causes a blocked thread to throw an InterruptedException, letting the thread leave the blocking state early and exit the blocking code.
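Both techniques as a small sketch (class and field names are illustrative): a volatile flag stops a loop-bound worker, and interrupt() wakes a thread blocked in sleep().

```java
// Cooperative stopping: a volatile flag for loop-bound work, and
// interrupt() to break out of a blocking call such as sleep().
public class StopDemo {
    static volatile boolean running = true;
    static volatile boolean sawInterrupt = false;

    public static void main(String[] args) throws InterruptedException {
        Thread looper = new Thread(() -> {
            while (running) { /* do work, check the flag each pass */ }
        });
        looper.start();
        running = false;                 // signal via the shared variable
        looper.join();

        Thread sleeper = new Thread(() -> {
            try {
                Thread.sleep(60_000);    // blocked thread
            } catch (InterruptedException e) {
                sawInterrupt = true;     // woken early by interrupt()
            }
        });
        sleeper.start();
        Thread.sleep(100);
        sleeper.interrupt();             // raises InterruptedException in the sleeper
        sleeper.join();
        System.out.println("interrupted: " + sawInterrupt);
    }
}
```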
30. What is the difference between notify() and notifyall()?
When a thread enters wait(), it must wait for another thread to call notify()/notifyAll(). notifyAll() wakes all threads in the wait state so they re-enter the lock contention queue, while notify() wakes only one of them.
If in doubt, notifyAll() is recommended, to prevent notify() from losing a signal and causing program anomalies.
31. What is daemon thread? What does it mean?
A daemon thread is a thread that provides a general background service while the program runs; it is not an indispensable part of the program. When all non-daemon threads end, the program terminates and kills all daemon threads in the process. Conversely, as long as any non-daemon thread is still running, the program does not terminate. setDaemon(true) must be called before the thread is started. Note: when the JVM exits, a daemon thread's run() method is abandoned without executing its finally clauses.
For example, the JVM's garbage collection thread is a daemon thread, and so is the finalizer thread.
32. How does Java realize communication and cooperation between threads?
Interrupts, shared variables (with proper synchronization or volatile), and the wait()/notify() mechanism.
33. What is a reentrant lock?
Consider, for example, the reentrancy of an explicit lock:
public class UnReentrant {
    Lock lock = new Lock();  // a hypothetical non-reentrant lock
    public void outer() {
        lock.lock();
        inner();
        lock.unlock();
    }
    public void inner() {
        lock.lock();  // blocks forever: the lock is already held by this thread
        // do something
        lock.unlock();
    }
}
inner() is called from outer(), and outer() has already locked lock, so inner() cannot acquire it again: the thread calling outer() holds the lock, but cannot reuse that lock inside inner(). Such a lock is called non-reentrant. Reentrancy means a thread can enter any block of code synchronized on a lock it already owns.
synchronized and ReentrantLock are both reentrant locks, which considerably simplifies concurrent programming.
34. When a thread enters a synchronized instance method of an object, can other threads enter other methods of the same object?
If other methods are not synchronized, other threads can enter.
Therefore, to make an object thread safe, you must ensure that every method is thread safe.
35. Understanding and implementation of optimistic lock and pessimistic lock, and what are the implementation methods?
Pessimistic lock: always assume the worst case — every time you read the data, you assume someone else will modify it, so you lock it every time; anyone else who wants the data then blocks until they obtain the lock. Traditional relational databases use many such locking mechanisms, such as row locks, table locks, read locks and write locks, all acquired before the operation. Java's synchronized keyword is another implementation of this approach.
Optimistic lock: as the name suggests, be optimistic — every time you read the data, assume nobody will modify it, so do not lock; only when updating do you check whether anyone else has updated the data in the meantime, for example using a version number. Optimistic locking suits read-heavy applications and can improve throughput; the write_condition mechanism that some databases provide is in fact an optimistic lock. In Java, the atomic variable classes under java.util.concurrent.atomic are implemented with CAS, an implementation of optimistic locking.
Implementations of optimistic locking: 1. Use a version identifier to decide whether the data read is consistent with the data being committed, and update the version after the commit; on inconsistency, adopt a discard-and-retry strategy. 2. Compare-and-swap (CAS) in Java: when multiple threads try to update the same variable with CAS simultaneously, only one succeeds; the others fail, but a failed thread is not suspended — it is told it lost the race and may retry. A CAS operation contains three operands: the memory location to read and write (V), the expected original value (A), and the new value to write (B). If the value at location V matches the expected original value A, the processor atomically updates the location to the new value B; otherwise, the processor does nothing.
CAS disadvantages:
1. The ABA problem: for example, thread one reads A from memory location V; meanwhile thread two also reads A, performs some operations that turn it into B, and then turns the value at V back into A. Thread one's CAS then finds A still in memory and succeeds. Although thread one's CAS succeeded, a problem may be hidden. Starting from Java 1.5, the JDK's atomic package provides the class AtomicStampedReference to solve the ABA problem.
2. Long spin time, high overhead: under heavy resource contention (severe thread conflict), CAS has a high probability of spinning repeatedly, wasting CPU resources, and efficiency can then be lower than synchronized. 3. Atomicity is guaranteed for only one shared variable: for a single shared variable, looping CAS ensures an atomic operation, but looping CAS cannot ensure atomicity across multiple shared variables — use locks in that case.
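The CAS retry loop described above can be sketched with AtomicInteger (class and method names are illustrative): compareAndSet succeeds only if the current value still matches the expected one, otherwise the caller retries.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Optimistic-lock flavour: read the expected value, compute the new
// value, and retry the compareAndSet until no other thread interferes.
public class CasDemo {
    public static int incrementWithCas(AtomicInteger counter) {
        int oldValue, newValue;
        do {
            oldValue = counter.get();          // expected original value (A)
            newValue = oldValue + 1;           // new value (B)
        } while (!counter.compareAndSet(oldValue, newValue)); // retry on conflict
        return newValue;
    }

    public static void main(String[] args) {
        AtomicInteger c = new AtomicInteger(0);
        System.out.println(incrementWithCas(c));   // 1
        System.out.println(c.compareAndSet(5, 9)); // false: the value is 1, not 5
    }
}
```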
36. What is the difference between SynchronizedMap and ConcurrentHashMap?
SynchronizedMap locks the entire table for every operation to ensure thread safety, so only one thread can access the map at a time.
ConcurrentHashMap uses segment locking to preserve performance under multithreading. It locks one bucket at a time, dividing the hash table into 16 buckets by default; common operations such as get, put and remove lock only the bucket currently needed. Where previously only one writer could enter, 16 writer threads can now execute concurrently — an obvious improvement in concurrent performance. In addition, ConcurrentHashMap uses a different iteration strategy: if the collection changes after an iterator is created, it no longer throws ConcurrentModificationException; instead, changes are made on new copies of the data so the original data is unaffected. When the change completes, the head pointer is replaced with the new data, so the iterating thread keeps using the original old data while writer threads modify the map concurrently.
37. What application scenarios can CopyOnWriteArrayList be used for?
One benefit of CopyOnWriteArrayList (a lock-free container) is that when multiple iterators traverse the list while it is being modified, no ConcurrentModificationException is thrown. In CopyOnWriteArrayList, a write creates a copy of the entire underlying array while the source array stays in place, so reads can proceed safely while the copy is being modified.
Drawbacks: 1. Writes copy the array, which consumes memory; if the original array is large this may trigger young GC or full GC. 2. It cannot be used where reads must be real-time: because a write copies the array before adding elements, a read after a set operation may still return old data. CopyOnWriteArrayList offers eventual consistency, not real-time consistency.
Ideas CopyOnWriteArrayList embodies: 1. Read/write separation. 2. Eventual consistency. 3. Resolving concurrency conflicts by working on a separate copy of the data.
38. What is thread safety? Are servlets thread safe?
Thread safety is a term in programming, which means that when a function or function library is called in a multithreaded environment, it can correctly handle the shared variables between multiple threads and make the program function complete correctly.
Servlets are not thread safe: a servlet is a single instance serving multiple threads, and when multiple threads access the same method simultaneously, the thread safety of shared variables cannot be guaranteed. A Struts 2 action is multi-instance and multithreaded, and thread safe: each request is assigned a new action, which is destroyed when the request completes. Is a Spring MVC controller thread safe? No — the situation is similar to a servlet's.
Struts 2's advantage is that thread safety need not be considered; servlets and Spring MVC must consider thread safety, but perform better. Rather than creating many short-lived objects for the GC to deal with, ThreadLocal can be used to handle the multithreading.
39. What is volatile used for? Can you describe a typical application scenario of volatile in one sentence?
volatile guarantees memory visibility and prevents instruction reordering.
volatile is suited to a single operation (a single read or a single write) on a variable in a multithreaded environment.
40. Why do codes reorder?
During program execution, to improve performance, processors and compilers often reorder instructions — but not arbitrarily. Two conditions must hold:
The result of the program must not change in a single-threaded environment; and reordering is not allowed where data dependencies exist. Note that reordering does not affect results in a single-threaded environment, but it can break the execution semantics of multithreaded code.
41. What are the differences between wait and sleep methods in Java?
The biggest difference: while waiting, wait() releases the lock, whereas sleep() keeps holding any lock. wait() is usually used for inter-thread interaction; sleep() is usually used to pause execution.
Let’s get to know more directly:
In Java, thread states are divided into six types:
Initial state: New
When a thread object is created, but start() has not been called to start the thread, the thread is in the initial state.
Runnable state: in Java, the runnable state includes both the ready state and the running state. Ready state: a thread in this state has obtained all the resources needed for execution and can run as soon as the CPU grants it execution rights; all ready threads are stored in the ready queue. Running state: the thread that has obtained the CPU and is executing. Since one CPU can execute only one thread at a time, each CPU has only one running thread at any moment.
Blocking state
When an executing thread fails to obtain a resource it requests, it enters the blocked state. In Java, blocked specifically refers to the state entered when a lock request fails. A blocking queue stores all blocked threads. A blocked thread keeps requesting the resource; once the request succeeds, it enters the ready queue and waits for execution. PS: resources include locks, IO, sockets, etc.
Waiting state
When the current thread calls wait, join or park, it enters the waiting state; a waiting queue stores all waiting threads. A thread in the waiting state needs instructions from other threads before it can continue running. A thread entering the waiting state releases its CPU execution rights and resources (such as locks).
Timeout waiting state
When a running thread calls sleep(time), wait(timeout), join(timeout), parkNanos or parkUntil, it enters this state. Like the waiting state, it is entered actively — not because a resource could not be obtained — and other threads are needed to wake it. On entering this state, the thread releases its CPU execution rights and held resources. The difference from the waiting state: after the timeout elapses, the thread automatically enters the blocking queue and starts competing for the lock.
Termination state
The state of a thread after its execution ends.
Note:
wait() releases both the CPU execution rights and the lock. sleep(long) releases only the CPU; the lock is still held, and the thread is placed in the timed-waiting queue — compared with yield, this prevents the thread from running again for a while. yield() releases only the CPU execution rights; the lock (if any) is still held, and the thread is put into the ready queue and will be executed again shortly. wait and notify must be used together, i.e. called on the same lock object; both must appear inside a synchronized block, and the objects on which wait and notify are called must be the lock objects of the synchronized blocks they are in.
42. Implement a blocking queue in Java
Refer to the JDK's built-in blocking queues; a direct implementation is a little tedious: http://www.infoq.com/cn/artic…
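A minimal bounded blocking queue can nevertheless be sketched with wait()/notifyAll(), illustrating the wait-in-a-loop pattern (class name is illustrative; this is not production code):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Minimal bounded blocking queue: put() blocks while full, take()
// blocks while empty; both wait in a loop to guard spurious wakeups.
public class SimpleBlockingQueue<T> {
    private final Deque<T> items = new ArrayDeque<>();
    private final int capacity;

    public SimpleBlockingQueue(int capacity) { this.capacity = capacity; }

    public synchronized void put(T item) throws InterruptedException {
        while (items.size() == capacity) wait(); // loop, not if
        items.addLast(item);
        notifyAll();                             // wake any waiting consumers
    }

    public synchronized T take() throws InterruptedException {
        while (items.isEmpty()) wait();
        T item = items.removeFirst();
        notifyAll();                             // wake any waiting producers
        return item;
    }

    public static void main(String[] args) throws InterruptedException {
        SimpleBlockingQueue<Integer> q = new SimpleBlockingQueue<>(1);
        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 3; i++) q.put(i); // blocks when the queue is full
            } catch (InterruptedException ignored) { }
        });
        producer.start();
        for (int i = 0; i < 3; i++) System.out.println("took " + q.take());
        producer.join();
    }
}
```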
43. What happens if an exception occurs while a thread is running?
If the exception is not caught, the thread stops executing. Thread.UncaughtExceptionHandler is a built-in interface for handling abrupt thread termination caused by uncaught exceptions. When an uncaught exception is about to terminate a thread, the JVM looks up the thread's handler via Thread.getUncaughtExceptionHandler() and passes the thread and the exception as arguments to the handler's uncaughtException() method.
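Installing such a handler is a one-liner — a sketch (class, field and message names are illustrative):

```java
// An exception that escapes run() is routed to the thread's
// UncaughtExceptionHandler instead of silently killing the thread.
public class HandlerDemo {
    static volatile String caughtMessage;

    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(() -> { throw new IllegalStateException("boom"); });
        t.setUncaughtExceptionHandler((thread, ex) ->
            caughtMessage = thread.getName() + ": " + ex.getMessage());
        t.start();
        t.join();   // the handler runs on the dying thread before join() returns
        System.out.println(caughtMessage); // e.g. "Thread-0: boom"
    }
}
```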
44. How to share data between two threads?
Variables can be shared between two threads. Generally speaking, shared variables require that the variables themselves are thread safe. When used in threads, if there are composite operations on shared variables, the thread safety of composite operations must also be guaranteed.
45. What is the difference between notify and notifyAll in Java?
The notify() method cannot wake a specific thread, so it is only appropriate when a single thread is waiting. notifyAll() wakes all waiting threads and lets them compete for the lock, ensuring that at least one thread can continue to run.
46. Why are the wait, notify and notifyAll methods not in the Thread class?
An obvious reason is that the locks Java provides are object-level, not thread-level: every object has a lock, which threads acquire. Since wait, notify and notifyAll are lock-level operations, they are defined in the Object class, because the lock belongs to the object.
47. What is ThreadLocal variable?
ThreadLocal is a special kind of variable in Java: each thread has its own independent copy, so race conditions are completely eliminated. It is a good way to achieve thread safety for objects that are expensive to create; for example, ThreadLocal can make SimpleDateFormat usage thread safe. Since that class is expensive to create and each call would otherwise need a new instance, it is not worth using in a purely local scope. Giving each thread its own copy greatly improves efficiency: first, reuse reduces the number of expensive objects created; second, you obtain thread safety without expensive synchronization or immutability.
48. What is the difference between interrupted and isinterrupted methods in Java?
interrupt
The interrupt() method interrupts a thread: it sets the target thread's state to "interrupted". Note: thread interruption only sets the thread's interrupt status bit and does not stop the thread; the user must monitor the thread's status and handle it. Methods that support interruption (i.e. methods that throw InterruptedException when the thread is interrupted) monitor the thread's interrupt status; once the interrupt status is set, they throw an InterruptedException.
interrupted
Queries the interrupt status of the current thread and clears it. If a thread has been interrupted, the first call to interrupted() returns true, and the second and subsequent calls return false.
isInterrupted
Only queries the interrupt status of the thread, without clearing it.
49, what is the reason why wait and notify methods are invoked in synchronous blocks?
The Java API enforces this: if you do not, your code throws an IllegalMonitorStateException. Another reason is to avoid race conditions between wait and notify.
50. Why should you check the waiting condition in the loop?
A thread in the waiting state may be woken spuriously or by a notification meant for a different condition. If the waiting condition is not rechecked in a loop, the thread may continue even though the condition it was waiting for is not actually satisfied.
51. What is the difference between synchronized collections and concurrent collections in Java?
Both synchronized collections and concurrent collections provide suitable thread-safe collections for multithreading, but concurrent collections scale better. Before Java 1.5, programmers had only synchronized collections, which cause contention under concurrent multithreading and hinder system scalability. Java 5 introduced concurrent collections such as ConcurrentHashMap, which not only provide thread safety but also improve scalability with techniques such as lock striping and internal partitioning.
52. What is a thread pool? Why use it?
Creating a thread costs expensive resources and time; if each arriving task created its own thread, response times would suffer, and the number of threads a process can create is limited. To avoid these problems, several threads are created when the program starts to handle incoming work: this is a thread pool, and the threads in it are called worker threads. Since JDK 1.5, the Java API provides the Executor framework so you can create different kinds of thread pools.
53. How to detect whether a thread has a lock?
java.lang.Thread has a method called holdsLock(Object) that returns true if and only if the current thread holds the lock of the specified object.
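A two-line check shows the behaviour (class and field names are illustrative):

```java
// Thread.holdsLock() is true only while the current thread holds the
// monitor of the given object.
public class HoldsLockDemo {
    static final Object monitor = new Object();

    public static boolean[] check() {
        boolean before = Thread.holdsLock(monitor);   // false outside synchronized
        boolean inside;
        synchronized (monitor) {
            inside = Thread.holdsLock(monitor);       // true inside
        }
        return new boolean[] { before, inside };
    }

    public static void main(String[] args) {
        boolean[] r = check();
        System.out.println("before=" + r[0] + " inside=" + r[1]);
    }
}
```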
54. How do you get the thread stack in Java?
kill -3 [java pid]: the dump is not printed at the current terminal; it goes to wherever the process writes its output (e.g. kill -3 on a Tomcat pid dumps the stacks into the log directory). jstack [java pid]: simpler — it prints on the current terminal and can also be redirected to a file. jvisualvm: take a thread dump through the GUI after opening jvisualvm; the process is straightforward.
55. Which JVM parameter is used to control the stack size of a thread?
-Xss: the stack size per thread.
56. What is the role of the yield method in the thread class?
Change the current thread from execution state (running state) to executable state (ready state).
The current thread moves to the ready state; which thread then moves from ready to running may be the current thread or another one — it depends on the system's allocation.
57. What is the concurrency level of ConcurrentHashMap in Java?
ConcurrentHashMap divides the underlying map into several segments to achieve scalability and thread safety. The number of segments is determined by the concurrency level, an optional parameter of the ConcurrentHashMap constructor. Its default value is 16, which keeps contention low under multithreaded access.
Since JDK 8 the segment concept has been abandoned; the implementation instead uses CAS operations directly on the buckets and adds more auxiliary fields to improve concurrency. See the source code for details.
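A minimal sketch of both points above: the three-argument constructor still accepts a concurrency-level hint (a sizing hint since JDK 8), and methods such as merge() update an entry atomically; the key name and counts are arbitrary:

```java
import java.util.concurrent.ConcurrentHashMap;

public class ChmDemo {
    public static void main(String[] args) {
        // Third constructor argument is the concurrency level (16, matching the old default)
        ConcurrentHashMap<String, Integer> counts =
            new ConcurrentHashMap<>(16, 0.75f, 16);
        // merge() performs an atomic read-modify-write, safe under contention
        counts.merge("hits", 1, Integer::sum);
        counts.merge("hits", 1, Integer::sum);
        System.out.println(counts.get("hits")); // 2
    }
}
```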
58. What is semaphore in Java?
Semaphore in Java is a synchronization class that is, conceptually, a counting semaphore: it maintains a set of permits. Each acquire() blocks if necessary until a permit is available and then takes one; each release() adds a permit, possibly releasing a blocked acquirer. However, no actual permit objects are used; the Semaphore simply keeps a count of available permits and acts accordingly. Semaphores are often used in multithreaded code, for example in database connection pools.
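A minimal sketch of the connection-pool-style use mentioned above; the permit count, thread count and sleep duration are arbitrary illustration values:

```java
import java.util.concurrent.Semaphore;

public class SemaphoreDemo {
    // At most 3 threads may "hold a connection" at once
    private static final Semaphore permits = new Semaphore(3);

    public static void main(String[] args) {
        for (int i = 0; i < 5; i++) {
            new Thread(() -> {
                try {
                    permits.acquire();  // blocks if no permit is available
                    System.out.println(Thread.currentThread().getName() + " acquired");
                    Thread.sleep(100);  // simulate using the connection
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                } finally {
                    permits.release();  // return the permit
                }
            }).start();
        }
    }
}
```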
59. What is the difference between the submit () and execute () methods in the java thread pool?
Both methods submit tasks to the thread pool. The return type of the execute() method is void; it is defined in the Executor interface.

The submit() method returns a Future object holding the result of the computation; it is defined in the ExecutorService interface, which extends Executor. Thread pool classes such as ThreadPoolExecutor and ScheduledThreadPoolExecutor have both methods.
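A minimal sketch of the difference; the computed value is arbitrary:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SubmitVsExecute {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        // execute(): fire-and-forget, no handle on the result
        pool.execute(() -> System.out.println("ran via execute()"));
        // submit(): returns a Future carrying the result (or the exception)
        Future<Integer> f = pool.submit(() -> 21 * 2);
        System.out.println("submit() result: " + f.get()); // blocks until done
        pool.shutdown();
    }
}
```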
60. What is a blocking method?
A blocking method is one the program must wait on until it completes, without doing anything else in the meantime; ServerSocket's accept() method, for example, blocks waiting for a client to connect. Blocking here means the current thread is suspended until the call's result is available, and the call does not return before then. There are also asynchronous and non-blocking methods that return before the task is completed.
61. What is ReadWriteLock in Java?
A read-write lock is the result of the lock-separation technique, used to improve the performance of concurrent programs.
62. What is the difference between volatile variables and atomic variables?
A volatile variable guarantees the happens-before relationship, i.e. a write happens before any subsequent read, but it does not guarantee atomicity. For example, even if the count variable is declared volatile, the count++ operation is not atomic.

The atomic methods provided by the AtomicInteger class make such operations atomic; for example, the getAndIncrement() method atomically adds one to the current value. Similar operations exist for other data types and for reference variables.
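A minimal sketch contrasting the two: several threads increment an AtomicInteger, which never loses an update, whereas volatile count++ could; the thread and iteration counts are arbitrary:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicDemo {
    public static void main(String[] args) throws InterruptedException {
        AtomicInteger count = new AtomicInteger(0);
        Thread[] threads = new Thread[4];
        for (int i = 0; i < 4; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 1000; j++) {
                    // atomic read-modify-write, unlike a volatile count++
                    count.getAndIncrement();
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) t.join();
        System.out.println(count.get()); // always 4000
    }
}
```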
63. Can I call the run() method of the Thread class directly?
Certainly. However, if we call Thread's run() method directly, it behaves like an ordinary method call and executes in the current thread. To execute our code in a new thread, we must use the Thread.start() method.
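A minimal sketch of the difference; the thread name is an arbitrary illustration value:

```java
public class RunVsStart {
    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(
            () -> System.out.println("executing in: " + Thread.currentThread().getName()),
            "worker");
        t.run();   // ordinary method call: prints "executing in: main"
        t.start(); // new thread: prints "executing in: worker"
        t.join();
    }
}
```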
64. How to pause a running thread for a period of time?
We can use the sleep() method of the Thread class to pause a thread for a period of time. Note that this does not terminate the thread; once the thread wakes from sleep, its state changes back to runnable and it is executed according to thread scheduling.
65. What is your understanding of thread priority?
Every thread has a priority. Generally speaking, a high-priority thread gets preference at runtime, but this depends on the thread-scheduling implementation, which is OS dependent. We can set the priority of a thread, but this does not guarantee that high-priority threads execute before low-priority ones. Thread priority is an int (from 1 to 10); 1 is the lowest priority and 10 the highest.

Java delegates thread-priority scheduling to the operating system, so it is tied to the specific OS's priority scheme. Unless there is a special requirement, there is generally no need to set thread priorities.
66. What are thread scheduler and time slicing?
The thread scheduler is an operating-system service responsible for allocating CPU time to threads in the runnable state. Once we create and start a thread, its execution depends on the implementation of the thread scheduler. As noted above, thread scheduling is not controlled by the Java virtual machine, so it is better for the application to stay in control (that is, do not let your program depend on thread priorities).
Time slicing refers to the process of allocating available CPU time to available runnable threads. CPU time can be allocated based on thread priority or thread waiting time.
67. How do you ensure that the thread running the main() method is the last thread to finish in a Java program?
We can use the join () method of the thread class to ensure that all threads created by the program end before the main () method exits.
68. How do threads communicate?
When resources can be shared among threads, inter-thread communication is an important means of coordinating them. The wait()/notify()/notifyAll() methods in the Object class can be used for inter-thread communication about the lock status of a resource.
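A minimal sketch of such communication: one thread waits on a flag, another sets it and notifies. The flag, sleep and thread names are arbitrary illustration choices; the while loop guards against spurious wakeups:

```java
public class WaitNotifyDemo {
    private static final Object lock = new Object();
    private static boolean ready = false;

    public static void main(String[] args) throws InterruptedException {
        Thread waiter = new Thread(() -> {
            synchronized (lock) {
                while (!ready) {            // guard against spurious wakeups
                    try {
                        lock.wait();        // releases the lock while waiting
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return;
                    }
                }
                System.out.println("resource is ready");
            }
        });
        waiter.start();
        Thread.sleep(100);                  // let the waiter block first (demo only)
        synchronized (lock) {
            ready = true;
            lock.notify();                  // wake one thread waiting on this monitor
        }
        waiter.join();
    }
}
```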
69. Why are the thread communication methods wait(), notify() and notifyAll() defined in the Object class?
Every object in Java has a lock (a monitor), and methods such as wait() and notify() are used to wait for an object's lock or to notify other threads that the object's monitor is available. Java threads have no locks or synchronizers of their own apart from objects. That is why these methods are part of the Object class: every Java class thereby has the basic methods for inter-thread communication.
70. Why must wait(), notify() and notifyAll() be called in a synchronized method or synchronized block?
When a thread calls an object's wait() method, it must hold that object's lock; it then releases the lock and enters the waiting state until another thread calls notify() on the object. Similarly, when a thread calls an object's notify() method, it releases the object's lock so that waiting threads can acquire it. Since all these methods require the thread to hold the object's lock, which can only be achieved through synchronization, they may only be called from a synchronized method or synchronized block.
71. Why are the sleep() and yield() methods of the Thread class static?
The sleep() and yield() methods of the Thread class operate on the currently executing thread, so it makes no sense to call them on some other, waiting thread. That is why these methods are static: they act on the currently executing thread and keep programmers from mistakenly thinking they can call them on other, non-running threads.
72. How to ensure thread safety?
In Java there are many ways to ensure thread safety: synchronization, atomic concurrent classes, concurrent locks, the volatile keyword, immutable classes and thread-safe classes.
73. Which is the better choice, synchronized method or synchronized block?
A synchronized block is the better choice because it does not have to lock the whole object (although you can make it do so). A synchronized method locks the entire object; if the class contains several unrelated synchronized blocks, this usually makes other threads stop and wait to acquire the lock on the object.

Synchronized blocks also follow the open-call principle: only the code that actually needs the lock on the corresponding object is locked. As a side benefit, this also helps avoid deadlock.
74. How to create a daemon thread?
Using the setDaemon(true) method of the Thread class, you can mark a thread as a daemon thread. Note that this method must be called before the start() method; otherwise an IllegalThreadStateException is thrown.
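A minimal sketch; the thread body is a placeholder for background housekeeping:

```java
public class DaemonDemo {
    public static void main(String[] args) {
        Thread t = new Thread(() -> {
            while (true) { /* background housekeeping placeholder */ }
        });
        t.setDaemon(true); // must come before start(), else IllegalThreadStateException
        t.start();
        System.out.println("daemon? " + t.isDaemon()); // true
        // main exits here; the JVM does not wait for daemon threads
    }
}
```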
75. What is the Java Timer class? How do you create a task that runs at a specific time interval?
java.util.Timer is a utility class used to schedule a thread to execute at a particular time in the future. The Timer class can schedule one-off tasks or periodic tasks. java.util.TimerTask is an abstract class implementing the Runnable interface; we extend it to create our own scheduled task and use a Timer to schedule its execution.
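A minimal sketch of a periodic task; the delay, period and run duration are arbitrary illustration values:

```java
import java.util.Timer;
import java.util.TimerTask;

public class TimerDemo {
    public static void main(String[] args) throws InterruptedException {
        Timer timer = new Timer();
        TimerTask task = new TimerTask() { // TimerTask implements Runnable
            @Override
            public void run() {
                System.out.println("tick");
            }
        };
        // First run after 0 ms, then repeat every 1000 ms
        timer.scheduleAtFixedRate(task, 0, 1000);
        Thread.sleep(3500); // let the task fire a few times (demo only)
        timer.cancel();     // stops the Timer's scheduling thread
    }
}
```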
Java Concurrent Programming (2)
1. Three elements of concurrent programming?
1. Atomicity: one or more operations either all execute, without being interrupted by other operations, or none of them execute. 2. Visibility: when multiple threads operate on a shared variable, once one thread modifies it, the other threads can immediately see the modified result. 3. Ordering: the program executes in the order in which the code is written.
2. What are the ways to achieve visibility?
Synchronized or Lock: guarantee that only one thread at a time acquires the lock and executes the code, and flush the latest values to main memory before the lock is released, achieving visibility. Volatile: a write to a volatile-modified shared variable is flushed to main memory immediately, so other threads read the latest value.
3. The value of multithreading?
1. Exploiting multi-core CPUs: multithreading can truly exploit the advantage of multiple cores, making full use of the CPU and completing several things at the same time without them interfering with one another.
2. Program efficiency: a single-core CPU gains no raw efficiency from multithreading; on the contrary, the thread context switching caused by running multiple threads on a single core reduces the program's overall efficiency. But even on a single core we still need multithreading to prevent blocking. Imagine a single-core CPU running a single thread: if that thread blocks, say reading data remotely where the peer never returns and no timeout is set, the whole program stops until the data comes back. With multiple threads running, even if one thread's code execution or data reading blocks, the other tasks keep executing.
3. Easier modelling: a less obvious advantage. Suppose there is a large task A; with single-threaded programming there is a great deal to consider, and modelling the whole program is troublesome. It becomes much easier to decompose the large task A into several small tasks, task B, task C and task D, model each separately, and run them via multithreading.
4. What are the ways to create threads?
1. Inherit the Thread class. 2. Implement the Runnable interface. 3. Use Callable and Future. 4. Use a thread pool.
5. Comparison of three ways to create threads?
1. Creating multithreading by implementing the Runnable or Callable interface. Advantages: the thread class only implements the Runnable or Callable interface and can still extend other classes; multiple threads can share the same target object, so this approach suits multiple identical threads processing the same resource, separating CPU, code and data into a clear model that better reflects object-oriented thinking. Disadvantage: programming is slightly more complex, and to access the current thread you must use the Thread.currentThread() method.
2. Creating multithreading by inheriting the Thread class. Advantage: it is easy to write, and to access the current thread you can simply use this rather than Thread.currentThread(). Disadvantage: the thread class already extends Thread, so it cannot extend any other parent class.
3. Differences between Runnable and Callable: 1. The method to implement (override) for Callable is call(), while for Runnable it is run(). 2. A Callable task can return a value after execution; a Runnable task cannot. 3. The call() method can throw exceptions; the run() method cannot. 4. Running a Callable task yields a Future object representing the result of the asynchronous computation. It provides methods to check whether the computation is complete, to wait for completion, and to retrieve the result. Through the Future object you can inspect the task's execution, cancel the task, and obtain the result.
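A minimal sketch of the three direct approaches above; the printed messages are arbitrary:

```java
import java.util.concurrent.FutureTask;

public class CreateThreadDemo {
    public static void main(String[] args) throws Exception {
        // 1. Subclass Thread
        Thread t1 = new Thread() {
            @Override public void run() { System.out.println("via Thread subclass"); }
        };
        t1.start();

        // 2. Implement Runnable (no return value, run() cannot throw checked exceptions)
        Thread t2 = new Thread(() -> System.out.println("via Runnable"));
        t2.start();

        // 3. Implement Callable and wrap it in a FutureTask (returns a value, can throw)
        FutureTask<String> task = new FutureTask<>(() -> "via Callable");
        new Thread(task).start();
        System.out.println(task.get()); // blocks until call() returns

        t1.join();
        t2.join();
    }
}
```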
6. Thread state flow diagram
Thread life cycle and the five basic states (figure IMG_2.png omitted).
7. Java threads have five basic states
1. New: after the thread object is created, it is in the new state, e.g. Thread t = new MyThread();
2. Runnable (ready): when the thread object's start() method is called (t.start();), the thread enters the ready state. A thread in the ready state is merely ready to be scheduled onto the CPU at any time; it does not execute immediately after t.start() runs.
3. Running: when the CPU schedules a thread in the ready state, that thread actually executes, i.e. enters the running state. Note: the ready state is the only entrance to the running state; for a thread to reach the running state, it must first be in the ready state.
4. Blocked: a thread in the running state temporarily gives up the CPU and stops executing for some reason; it then enters the blocked state, and only after returning to the ready state does it get a chance to be scheduled onto the CPU again. By cause, blocking falls into three kinds: 1. Wait blocking: a running thread executes the wait() method, putting it into the waiting-blocked state. 2. Synchronization blocking: a thread that fails to acquire a synchronized lock (because the lock is held by another thread) enters the synchronization-blocked state. 3. Other blocking: when a thread calls sleep() or join(), or issues an I/O request, it enters the blocked state; when sleep() times out, join() sees the target thread terminate or times out, or the I/O completes, the thread returns to the ready state.
5. Dead: when the thread finishes executing, or exits the run() method due to an exception, the thread ends its life cycle.
8. What is a thread pool? What are the creation methods?
A thread pool creates several threads in advance. When tasks arrive, the threads in the pool process them; after processing, the threads are not destroyed but wait for the next task. Because creating and destroying threads consumes system resources, consider a thread pool to improve system performance whenever threads would otherwise be created and destroyed frequently. Java provides implementations of the java.util.concurrent.Executor interface for creating thread pools.
9. Creation of four thread pools:
1. newCachedThreadPool creates a cacheable thread pool. 2. newFixedThreadPool creates a fixed-size thread pool, which bounds the maximum number of concurrent threads. 3. newScheduledThreadPool creates a fixed-size thread pool that supports scheduled and periodic task execution. 4. newSingleThreadExecutor creates a single-threaded pool that uses only one worker thread to execute tasks.
10. What are the advantages of thread pooling?
1. Reusing existing threads reduces the overhead of creating and destroying them. 2. It effectively bounds the maximum number of concurrent threads, improving the utilization of system resources and avoiding excessive resource contention and congestion. 3. It provides features such as scheduled execution, periodic execution, single threading and concurrency control.
11. What are the commonly used concurrency tool classes?
1. CountDownLatch 2. CyclicBarrier 3. Semaphore 4. Exchanger
12. The difference between CyclicBarrier and CountDownLatch

1. CountDownLatch, simply put, lets one thread wait until all the other threads it is waiting on have finished and have signalled it by calling the countDown() method; only then does the waiting thread continue to execute.
2. CyclicBarrier means all threads wait until every thread is ready and has entered the await() method; then all the threads begin to execute at the same time. 3. The CountDownLatch counter can only be used once, whereas the CyclicBarrier counter can be reset with the reset() method, so CyclicBarrier can handle more complex business scenarios; for example, if a calculation error occurs, the counter can be reset and the threads run again. 4. CyclicBarrier also provides other useful methods, such as getNumberWaiting() to get the number of threads blocked at the barrier, and isBroken() to learn whether a blocked thread has been interrupted: true if interrupted, false otherwise.
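A minimal sketch of CountDownLatch as described in point 1; the worker count is arbitrary:

```java
import java.util.concurrent.CountDownLatch;

public class LatchDemo {
    public static void main(String[] args) throws InterruptedException {
        CountDownLatch latch = new CountDownLatch(3);
        for (int i = 0; i < 3; i++) {
            new Thread(() -> {
                System.out.println(Thread.currentThread().getName() + " done");
                latch.countDown();  // decrement the counter
            }).start();
        }
        latch.await();              // main blocks until the count reaches 0
        System.out.println("all workers finished, main continues");
    }
}
```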
13. The role of synchronized?
In Java, the synchronized keyword is used for thread synchronization, i.e. to ensure that in a multithreaded environment a synchronized code section is not executed by multiple threads at the same time. Synchronized can be applied either to a block of code or to a method.
14. Role of volatile keyword
For visibility, Java provides the volatile keyword. When a shared variable is modified by a volatile write, the modified value is guaranteed to be flushed to main memory immediately, and when other threads need to read it, they read the new value from memory. From a practical point of view, an important use of volatile is in combination with CAS to guarantee atomicity; for details see the classes in the java.util.concurrent.atomic package, such as AtomicInteger.
15. What is CAS
CAS is the abbreviation of compare and swap, i.e. comparing and exchanging. CAS is a lock-free operation, a form of optimistic locking. In Java, locks are divided into optimistic locks and pessimistic locks. Pessimistic locking locks the resource: only after the thread that previously obtained the lock releases it can the next thread access the resource. Optimistic locking takes a relaxed attitude and processes the resource without locking, through some other means, for example attaching a version to the record when fetching data; its performance is greatly improved compared with pessimistic locking. A CAS operation involves three operands: the memory location (V), the expected original value (A) and the new value (B). If the value at the memory address equals A, the value in memory is updated to B. CAS retrieves data through a loop: if the value thread A read from the address is modified by thread B during the first round of the loop, thread A must spin, and it may not succeed until a later iteration. Most classes in the java.util.concurrent.atomic package are implemented with CAS operations (AtomicInteger, AtomicBoolean, AtomicLong).
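A minimal sketch of the V/A/B semantics above, using AtomicInteger's compareAndSet(); the values are arbitrary:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    public static void main(String[] args) {
        AtomicInteger value = new AtomicInteger(10);
        // Succeeds: the memory value (10) matches the expected value A (10), so it becomes B (11)
        System.out.println(value.compareAndSet(10, 11)); // true
        // Fails: the memory value is now 11, not the expected 10; nothing is written
        System.out.println(value.compareAndSet(10, 12)); // false
        System.out.println(value.get());                 // 11
    }
}
```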
16. Problems with CAS
1. CAS is prone to the ABA problem: thread A sees a value, another thread changes it to B and then back to A; CAS then believes nothing changed, even though it did. This can be solved by adding a version number that is incremented on each operation; since Java 5, AtomicStampedReference has been provided to solve it. 2. It cannot guarantee the atomicity of a code block. The CAS mechanism only guarantees the atomicity of an operation on a single variable, not of a whole code block; for example, if three variables must be updated atomically together, you have to use synchronized. 3. CAS increases CPU usage. As mentioned, CAS works as a loop of checks; a thread that keeps failing to succeed keeps occupying CPU resources.
17. What is future?
In concurrent programming we often use a non-blocking model. Of the earlier three ways of implementing multithreading, neither inheriting the Thread class nor implementing the Runnable interface gives any guarantee of obtaining the preceding execution result. By implementing the Callable interface and using Future, you can receive the execution results of multiple threads. Future represents the result of an asynchronous task that may not have completed yet; a callback can be attached to this result so that corresponding action is taken when the task succeeds or fails.
18. What is AQS
AQS is short for AbstractQueuedSynchronizer, a low-level synchronization utility class provided by Java. It uses a variable of type int to represent the synchronization state and provides a series of CAS operations to manage that state. AQS is a framework for building locks and synchronizers. Using AQS, a wide range of commonly used synchronizers can be constructed easily and efficiently, such as ReentrantLock and Semaphore; others such as ReentrantReadWriteLock, SynchronousQueue and FutureTask are also based on AQS.
19. AQS supports two synchronization modes:
1. Exclusive mode 2. Shared mode. This makes it convenient for users to implement different types of synchronization components: exclusive ones such as ReentrantLock, shared ones such as Semaphore and CountDownLatch, and combined ones such as ReentrantReadWriteLock. In short, AQS provides the underlying support; how to assemble and implement things on top of it is up to the user.
20. What is readwritelock
First of all, it is not that ReentrantLock is bad; it just has limitations in some situations. ReentrantLock might be used to prevent the data inconsistency caused by thread A writing data while thread B reads it. But if thread C is reading data and thread D is also only reading, reading does not change the data, so there is no need to lock; yet the lock is still taken, which lowers the program's performance. Because of this, ReadWriteLock was born. ReadWriteLock is a read-write lock interface, and ReentrantReadWriteLock is a concrete implementation of it. The read lock is shared and the write lock is exclusive: read/read is not mutually exclusive, while read/write, write/read and write/write are all mutually exclusive, which improves the performance of reading and writing.
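A minimal sketch of the shared-read/exclusive-write pattern described above; the guarded data field and values are arbitrary:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class RwLockDemo {
    private static final ReentrantReadWriteLock rw = new ReentrantReadWriteLock();
    private static int data = 0;

    static int read() {
        rw.readLock().lock();   // shared: many readers may hold this at once
        try {
            return data;
        } finally {
            rw.readLock().unlock();
        }
    }

    static void write(int v) {
        rw.writeLock().lock();  // exclusive: blocks all readers and writers
        try {
            data = v;
        } finally {
            rw.writeLock().unlock();
        }
    }

    public static void main(String[] args) {
        write(42);
        System.out.println(read()); // 42
    }
}
```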
21. What is futuretask
As mentioned earlier, FutureTask represents a task performing an asynchronous computation. A FutureTask can be constructed from a concrete implementation of Callable, and it can wait for the result of the asynchronous task, check whether the task has completed, cancel the task, and so on. Since FutureTask is also an implementation of the Runnable interface, it can be submitted to a thread pool as well.
22. Difference between synchronized and reentrantlock
Synchronized is a keyword in the same sense as if, else, for and while, whereas ReentrantLock is a class; that is the essential difference between the two. Being a class, ReentrantLock offers more numerous and more flexible features than synchronized: it has methods and instance state. The extensibility of ReentrantLock over synchronized shows in several points: 1. ReentrantLock can set a waiting time for acquiring the lock, thereby avoiding deadlock. 2. ReentrantLock can report various information about the lock. 3. ReentrantLock can flexibly implement multi-way notification. In addition, the locking mechanisms of the two are actually different: ReentrantLock's underlying layer calls Unsafe's park method to block, while synchronized should operate on the mark word in the object header; I am not sure about this.
23. What are optimistic locks and pessimistic locks
1. Optimistic lock: as its name suggests, it is optimistic about the thread-safety problems raised by concurrent operations. An optimistic lock assumes contention does not always happen, so it does not need to hold a lock; it combines the compare and replace steps into one atomic operation that tries to modify the variable in memory. If that fails, there was a conflict, and there should be corresponding retry logic. 2. Pessimistic lock: again as its name suggests, it is pessimistic about the thread-safety problems raised by concurrent operations and assumes contention will always happen, so it holds an exclusive lock every time a resource is operated on, just as synchronized does: no matter what, it locks first and then operates on the resource directly.
24. How does thread B know that thread a has modified the variable
1. Volatile-modified variables 2. Synchronized-modified methods that modify the variable 3. wait()/notify() 4. while polling
25. Synchronized, volatile, CAS comparison
1. Synchronized is a pessimistic lock: it is preemptive and causes other threads to block. 2. Volatile provides visibility of shared variables across threads and forbids instruction-reordering optimizations. 3. CAS is an optimistic (non-blocking) lock based on conflict detection.
26. What is the difference between sleep method and wait method?
This question is often asked. Both the sleep method and the wait method can give up the CPU for some time. The difference is that if the thread holds an object's monitor, the sleep method does not give up that monitor, while the wait method does.
27. What is ThreadLocal? What’s the usage?
ThreadLocal is a utility class for thread-local copies of variables. It mainly maps a private thread to the copy of the object stored by that thread; the variables do not interfere between threads, and in high-concurrency scenarios it enables stateless calls. It is particularly suited to scenarios in which each thread's work depends on variable values that cannot be shared to complete its operations. In short, ThreadLocal trades space for time: each thread maintains a ThreadLocal.ThreadLocalMap, implemented with open addressing, to isolate data; with no shared data, there is no thread-safety problem.
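A minimal sketch of per-thread isolation; the buffer contents and thread names are arbitrary:

```java
public class ThreadLocalDemo {
    // Each thread gets its own copy, initialized lazily on first access
    private static final ThreadLocal<StringBuilder> buffer =
        ThreadLocal.withInitial(StringBuilder::new);

    public static void main(String[] args) throws InterruptedException {
        Runnable job = () -> {
            buffer.get().append(Thread.currentThread().getName());
            // Only this thread's appends are visible here
            System.out.println(buffer.get());
        };
        Thread a = new Thread(job, "A");
        Thread b = new Thread(job, "B");
        a.start();
        b.start();
        a.join();
        b.join();
    }
}
```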
28. Why must the wait() and notify()/notifyAll() methods be called in a synchronized block?
This is mandated by the JDK: the wait() and notify()/notifyAll() methods must acquire the object's lock before they can be called.
29. What are the methods of multithreading synchronization?
The synchronized keyword, Lock implementations, distributed locks, and so on.
30. Thread scheduling strategy
The thread scheduler selects the thread with the highest priority to run, but it will stop running that thread if any of the following happens: 1. the yield method is called in the thread body, giving up its claim on the CPU; 2. the thread body calls the sleep method, putting the thread to sleep; 3. the thread is blocked by an IO operation; 4. another thread with a higher priority appears; 5. in a system that supports time slices, the thread's time slice runs out.
31. What is the concurrency of concurrenthashmap
The concurrency level of ConcurrentHashMap is the number of segments, 16 by default, which means that up to 16 threads can operate on a ConcurrentHashMap at the same time. This is also ConcurrentHashMap's greatest advantage over Hashtable: with Hashtable, can two threads ever fetch data from it at the same time?
32. How to find which thread uses the longest CPU in Linux environment
1. Get the project's pid: jps or ps -ef | grep java, as mentioned earlier. 2. top -H -p pid; the order of the options cannot be changed.
33. Java deadlock and how to avoid it?
A deadlock in Java is a programming situation in which two or more threads are permanently blocked. A Java deadlock involves at least two threads and two or more resources. The root cause of a Java deadlock is a crossed, closed loop in lock acquisition.
34. Causes of deadlock
1. Multiple threads hold multiple locks that cross one another, which can lead to a closed loop of lock dependencies. For example, a thread that has obtained lock A applies for lock B without releasing A, while another thread has obtained lock B and needs lock A before it will release B; the closed loop forms and the threads fall into deadlock. 2. By default, a lock request blocks while it waits.
Therefore, to avoid deadlock, when multiple object locks cross one another, carefully review all the methods in those objects' classes to see whether there is any possibility of a loop creating a lock dependency. In short, try to avoid calling deferred methods and synchronized methods of other objects inside a synchronized method.
35. How to wake up a blocked thread
If a thread is blocked because it called wait(), sleep() or join(), you can interrupt it, which wakes it by throwing InterruptedException. If the thread is blocked on IO, there is nothing to be done, because IO is implemented by the operating system and Java code has no way to reach the operating system directly.
36. What is the help of immutable objects to multithreading
As mentioned earlier, immutable objects guarantee memory visibility: reading an immutable object requires no additional synchronization means, which improves the efficiency of code execution.
37. What is multithreaded context switching
Context switching of multithreading refers to the process of switching CPU control from a running thread to another thread ready and waiting for CPU execution.
38. What happens if the thread pool queue is full when you submit a task
A distinction must be made here: 1. If an unbounded queue such as LinkedBlockingQueue is used, it does not matter: tasks simply keep being added to the blocking queue for later execution, because LinkedBlockingQueue can be regarded as an almost infinite queue that can store tasks without limit.
2. If a bounded queue such as ArrayBlockingQueue is used, tasks are first added to the ArrayBlockingQueue. When it is full, the number of threads is increased up to the value of maximumPoolSize. If increasing the number of threads still cannot keep up and the ArrayBlockingQueue remains full, the rejection policy RejectedExecutionHandler handles the overflow tasks; the default is AbortPolicy.
39. What is the thread scheduling algorithm used in Java
Preemptive. After a thread uses up its CPU time, the operating system computes a total priority from data such as thread priority and thread starvation, and assigns the next time slice to a particular thread for execution.
40. What are thread scheduler and time slicing?
The thread scheduler is an operating-system service responsible for allocating CPU time to threads in the runnable state. Once we create and start a thread, its execution depends on the implementation of the thread scheduler. Time slicing is the process of allocating the available CPU time among the available runnable threads; the allocation can be based on thread priority or on how long a thread has been waiting. Thread scheduling is not controlled by the Java virtual machine, so it is better for the application to stay in control (that is, do not let your program depend on thread priorities).
41. What is spin?
Many synchronized blocks contain only simple code that executes very quickly. In that case, blocking the waiting threads may not be worthwhile, because thread blocking involves a switch between user mode and kernel mode. Since the code inside synchronized executes so quickly, it can be better to let the thread waiting for the lock do a busy loop at the boundary of the synchronized block instead of being blocked; this is called spinning. If the busy loop repeats many times without obtaining the lock, blocking may be the better strategy.
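A minimal user-level spin lock can be sketched with an AtomicBoolean and compare-and-set (this illustrates the idea only, not the JVM's internal adaptive spinning; Thread.onSpinWait requires Java 9+):

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class SpinLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);

    void lock() {
        // Busy loop (spin): repeatedly try the CAS instead of blocking,
        // avoiding a user-mode/kernel-mode switch for short waits.
        while (!locked.compareAndSet(false, true)) {
            Thread.onSpinWait();   // hint to the processor, Java 9+
        }
    }

    void unlock() {
        locked.set(false);
    }

    // Two threads incrementing a shared counter under the spin lock.
    static int raceTest() throws InterruptedException {
        SpinLock lock = new SpinLock();
        int[] counter = {0};
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                lock.lock();
                counter[0]++;      // very short critical section
                lock.unlock();
            }
        };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        return counter[0];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(raceTest()); // 200000 with the lock in place
    }
}
```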
42. What is the Lock interface in the Java concurrency API? What are its advantages over synchronization?
The Lock interface provides more extensible locking operations than synchronized methods and synchronized blocks. It allows more flexible structures, can have completely different properties, and can support multiple associated Condition objects. Its advantages are: 1. It can make locking fairer. 2. It lets a thread respond to an interrupt while waiting for a lock. 3. It lets a thread try to acquire the lock and return immediately, or wait for a bounded time, when the lock cannot be obtained. 4. Locks can be acquired and released in different scopes and in different orders.
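A short sketch of advantages 1 and 3 using ReentrantLock (the timeout value and the returned messages are arbitrary illustration choices):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {
    private static final ReentrantLock lock = new ReentrantLock(true); // fair lock (advantage 1)

    // Advantage 3: try to acquire with a timeout instead of blocking forever.
    static String doWork() throws InterruptedException {
        if (lock.tryLock(100, TimeUnit.MILLISECONDS)) {
            try {
                return "got the lock";
            } finally {
                lock.unlock();     // always release in finally
            }
        }
        return "gave up after 100 ms";
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(doWork()); // uncontended, so: got the lock
    }
}
```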
43. Thread safety of the singleton pattern
First of all, thread safety in the singleton pattern means that an instance of the class is created only once, even in a multithreaded environment. There are many ways to write a singleton; to summarize: 1. Eager initialization ("hungry" style): thread safe. 2. Lazy initialization without locking: not thread safe. 3. Double-checked locking: thread safe.
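A sketch of the double-checked locking variant mentioned above (note that the volatile keyword is essential; without it, instruction reordering can expose a half-constructed instance):

```java
public class Singleton {
    // volatile forbids the reordering that could make a
    // half-constructed instance visible to another thread
    private static volatile Singleton instance;

    private Singleton() { }

    public static Singleton getInstance() {
        if (instance == null) {                   // first check, lock-free fast path
            synchronized (Singleton.class) {
                if (instance == null) {           // second check, under the lock
                    instance = new Singleton();
                }
            }
        }
        return instance;
    }
}
```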
44. What is the function of Semaphore?
Semaphore is a counting semaphore that limits the number of threads concurrently executing a block of code. Its constructor takes an int n, meaning that at most n threads can access the code block at the same time; any thread beyond n must wait until one of the executing threads leaves the block before it can enter. Notably, if n = 1 is passed to the Semaphore constructor, it behaves like synchronized.
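A minimal Semaphore sketch (the thread count, permit count and sleep time are arbitrary illustration values):

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.atomic.AtomicInteger;

public class SemaphoreDemo {

    // Returns the maximum number of threads ever observed inside the block.
    static int run(int permits, int threads) throws InterruptedException {
        Semaphore sem = new Semaphore(permits);
        AtomicInteger inside = new AtomicInteger();
        AtomicInteger maxInside = new AtomicInteger();
        Runnable worker = () -> {
            try {
                sem.acquire();                        // blocks when no permit is free
                int now = inside.incrementAndGet();
                maxInside.accumulateAndGet(now, Math::max);
                Thread.sleep(50);                     // simulated work
                inside.decrementAndGet();
            } catch (InterruptedException ignored) {
            } finally {
                sem.release();                        // let the next waiter in
            }
        };
        Thread[] ts = new Thread[threads];
        for (int i = 0; i < threads; i++) (ts[i] = new Thread(worker)).start();
        for (Thread t : ts) t.join();
        return maxInside.get();
    }

    public static void main(String[] args) throws InterruptedException {
        // With 2 permits, no more than 2 of the 6 threads run the block at once.
        System.out.println(run(2, 6));
    }
}
```

With a single permit, run(1, n) degenerates to mutual exclusion, matching the synchronized comparison above.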
45. What is the Executors class?
The Executors class provides factory and utility methods for the Executor, ExecutorService, ScheduledExecutorService, ThreadFactory and Callable types. Executors can be used to create thread pools conveniently.
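A minimal sketch using the Executors factory methods (the pool size and the Callable are arbitrary illustration choices):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ExecutorsDemo {

    static int answer() throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2); // factory method
        Future<Integer> result = pool.submit(() -> 21 * 2);     // a Callable<Integer>
        int value = result.get();                               // waits for completion
        pool.shutdown();
        return value;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(answer()); // 42
    }
}
```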
46. Which thread calls the constructor and static block of the Thread class?
This is a tricky and cunning question. Remember: the constructor and static block of a Thread class are executed by the thread that news it, while the code in the run() method is executed by the thread itself. If that statement puzzles you, here is an example. Suppose Thread1 is created (new) inside Thread2, and Thread2 is created inside the main function; then: 1. Thread2's constructor and static block are executed by the main thread, while Thread2's run() method is executed by Thread2 itself. 2. Thread1's constructor and static block are executed by Thread2, while Thread1's run() method is executed by Thread1 itself.
47. Which is the better choice: a synchronized method or a synchronized block?
A synchronized block, because the code outside the block still executes concurrently, which improves efficiency more than synchronizing the whole method. A good principle to know: the smaller the scope of synchronization, the better.
48. What are the exceptions caused by too many Java threads?
1. The thread life-cycle overhead is very high. 2. Too many threads consume excessive CPU resources. If there are more runnable threads than available processors, some threads sit idle; a large number of idle threads occupies a lot of memory and puts pressure on the garbage collector, and a large number of threads competing for CPU resources also incurs other performance overhead. 3. Stability decreases: the JVM limits the number of threads that can be created. The limit varies by platform and depends on many factors, including the JVM startup parameters, the stack size requested in the Thread constructor, and the underlying operating system's thread limits. Violating these limits may cause an OutOfMemoryError to be thrown.
Java interview questions (I)
1. What are the characteristics of object-oriented?
Answer:
Object oriented features mainly include the following aspects:
 Abstraction: abstraction is the process of summarizing the common characteristics of a class of objects to construct a class, including data abstraction and behavior abstraction. Abstraction focuses only on which attributes and behaviors objects have, not on the details of those behaviors.
 Inheritance: inheritance is the process of creating a new class from an existing class, obtaining its inheritance information. The class providing the inheritance information is called the parent class (superclass, base class); the class receiving it is called the subclass (derived class). Inheritance gives an evolving software system a degree of continuity, and it is also an important means of isolating the variable factors in a program (if this is hard to grasp, read the sections on the bridge pattern in "Java and Patterns" or "Design Pattern Interpretation" by Dr. Yan Hong).
 Encapsulation: encapsulation is usually understood as binding data together with the methods that operate on it, so that the data can be accessed only through a defined interface. The essence of object orientation is to model the real world as a series of fully autonomous, closed objects. The methods we write in a class encapsulate implementation details; a class as a whole encapsulates data and the operations on that data. In short, encapsulation hides everything that can be hidden and exposes only the simplest programming interface to the outside world (think of the difference between an ordinary washing machine and a fully automatic one: the fully automatic machine is better encapsulated and therefore easier to operate; today's smartphones are also well encapsulated, since a few buttons handle everything).
 Polymorphism: polymorphism means that objects of different subtypes are allowed to respond differently to the same message. Simply put, the same method is invoked through the same type of reference, but different things happen. Polymorphism is divided into compile-time polymorphism and run-time polymorphism. If an object's method is regarded as a service the object provides to the outside world, run-time polymorphism can be explained as: when system A uses a service provided by system B, B has many ways to provide the service, but all of them are transparent to A (just as an electric razor is system A and its power supply is system B: B can use a battery, mains power, or even solar power; A simply calls B's power-supply method without knowing how the power is actually obtained). Method overloading implements compile-time polymorphism (also called early binding), while method overriding implements run-time polymorphism (also called late binding).
Run-time polymorphism is the essence of object orientation. Realizing it requires two things: 1) method overriding (the subclass inherits the parent class and overrides its existing or abstract methods); 2) upcasting (a parent-type reference refers to a subtype object, so that the same reference invoking the same method behaves differently depending on the actual subclass object).
2. What is the difference between the access modifiers public, private, protected, and no modifier (default)?
Answer:
Modifier     Current class    Same package    Subclass    Other packages
public            √                √              √              √
protected         √                √              √              ×
default           √                √              ×              ×
private           √                ×              ×              ×
Class members without an access modifier get default access. Default is equivalent to public for other classes in the same package and equivalent to private for classes outside the package. protected is equivalent to public for subclasses and equivalent to private for classes that are not in the same package and have no parent-child relationship. In Java, the modifier of a top-level class can only be public or default, while class members (including nested classes) may use any of the four modifiers above.
3. Is String a basic data type?
Answer:
No. There are only 8 basic (primitive) data types in Java: byte, short, int, long, float, double, char, boolean. Apart from the primitive types, everything else is a reference type. The enumeration type introduced in Java 5 is also a special reference type.
4. float f = 3.4; Is it correct?
Answer: incorrect. 3.4 is a double-precision literal, and assigning a double to a float is a narrowing conversion that may lose precision, so an explicit cast is required: float f = (float) 3.4; or write float f = 3.4f;.
5. short s1 = 1; s1 = s1 + 1; Is there anything wrong? short s1 = 1; s1 += 1; Is there anything wrong?
Answer:
For short s1 = 1; s1 = s1 + 1;: since 1 is of type int, the result of s1 + 1 is also of type int, and it must be explicitly cast before being assigned to short, so this does not compile. short s1 = 1; s1 += 1; compiles correctly, because s1 += 1; is equivalent to s1 = (short) (s1 + 1); there is an implicit cast.
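The rules above can be checked with a small example (variable names mirror the question):

```java
public class ShortDemo {

    static short increment() {
        short s1 = 1;
        // s1 = s1 + 1;           // would NOT compile: s1 + 1 is an int
        s1 = (short) (s1 + 1);    // explicit cast compiles
        s1 += 1;                  // compound assignment inserts the cast implicitly
        return s1;
    }

    public static void main(String[] args) {
        System.out.println(increment()); // 3
    }
}
```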
6. Does Java have goto?
Answer:
goto is a reserved word in Java and is not used in any current version of the language. (The appendix of "The Java Programming Language", written by James Gosling, the father of Java, gives a list of Java keywords that includes goto and const, two keywords that currently cannot be used, which is why some sources call them reserved words. The term "reserved word" actually has a broader meaning, since programmers familiar with C know that words with special meaning used in the system libraries are also treated as reserved words.)
7. What is the difference between int and Integer?
Answer:
Java is an almost purely object-oriented programming language, but basic data types are included for programming convenience. In order to treat these basic data types as objects, Java introduces a corresponding wrapper class for each of them; the wrapper class for int is Integer. Since Java 5, the autoboxing/unboxing mechanism allows the two to be converted into each other. Java provides a wrapper type for every primitive type:
 Primitive types: boolean, char, byte, short, int, long, float, double
 Wrapper types: Boolean, Character, Byte, Short, Integer, Long, Float, Double
class AutoUnboxingTest {

    public static void main(String[] args) {
        Integer a = new Integer(3);
        Integer b = 3;                  // autoboxes 3 into an Integer
        int c = 3;
        System.out.println(a == b);     // false: the two references do not point to the same object
        System.out.println(a == c);     // true: a is unboxed to int and compared with c
    }
}
I also recently encountered an interview question related to autoboxing and unboxing. The code is as follows:
public class Test03 {

    public static void main(String[] args) {
        Integer f1 = 100, f2 = 100, f3 = 150, f4 = 150;
        System.out.println(f1 == f2);
        System.out.println(f3 == f4);
    }
}
If this is not clear, it is easy to assume that both outputs are either true or false. First, note that f1, f2, f3 and f4 are Integer object references, so the == operations below compare references, not values. What is the essence of boxing? When an int value is assigned to an Integer reference, the static method Integer.valueOf is called. Looking at the source code of valueOf shows what happens.
public static Integer valueOf(int i) {
    if (i >= IntegerCache.low && i <= IntegerCache.high)
        return IntegerCache.cache[i + (-IntegerCache.low)];
    return new Integer(i);
}
IntegerCache is a nested class of Integer. Its code is as follows:
/**
 * Cache to support the object identity semantics of autoboxing for values
 * between -128 and 127 (inclusive) as required by JLS. The cache is
 * initialized on first usage. The size of the cache may be controlled by
 * the -XX:AutoBoxCacheMax=<size> option. During VM initialization,
 * java.lang.Integer.IntegerCache.high property may be set and saved in the
 * private system properties in the sun.misc.VM class.
 */
private static class IntegerCache {
    static final int low = -128;
    static final int high;
    static final Integer cache[];

    static {
        // high value may be configured by property
        int h = 127;
        String integerCacheHighPropValue =
            sun.misc.VM.getSavedProperty("java.lang.Integer.IntegerCache.high");
        if (integerCacheHighPropValue != null) {
            try {
                int i = parseInt(integerCacheHighPropValue);
                i = Math.max(i, 127);
                // Maximum array size is Integer.MAX_VALUE
                h = Math.min(i, Integer.MAX_VALUE - (-low) - 1);
            } catch (NumberFormatException nfe) {
                // If the property cannot be parsed into an int, ignore it.
            }
        }
        high = h;

        cache = new Integer[(high - low) + 1];
        int j = low;
        for (int k = 0; k < cache.length; k++)
            cache[k] = new Integer(j++);

        // range [-128, 127] must be interned (JLS7 5.1.7)
        assert IntegerCache.high >= 127;
    }

    private IntegerCache() {}
}
Simply put, if the value of an Integer literal is between -128 and 127, no new Integer object is created; the Integer object already in the cache is referenced directly. Therefore, in the interview question above, f1 == f2 prints true while f3 == f4 prints false.
Reminder: the more seemingly simple interview questions, the more mysterious they are, and the interviewer needs to have quite deep skills.
8. What is the difference between & and &&?
Answer:
The & operator has two uses: (1) bitwise AND; (2) logical AND. The && operator is the short-circuit AND. The difference between logical AND and short-circuit AND is significant, although both require the boolean values on both sides of the operator to be true for the whole expression to be true. && is called short-circuit because if the expression on its left is false, the expression on the right is short-circuited and never evaluated. In many cases we need && rather than &. For example, when verifying a user login, we check that the user name is neither null nor an empty string; this should be written as: username != null && !username.equals(""). The order of the two conditions cannot be swapped, and the & operator cannot be used, because if the first condition does not hold, the equals comparison of strings must never be performed; otherwise a NullPointerException would be thrown. Note: the same distinction holds between the logical OR operator (|) and the short-circuit OR operator (||).
Add: if you are familiar with JavaScript, you may feel the power of short-circuit operation more. If you want to be a master of JavaScript, start playing with short-circuit operation first.
9. Explain the usage of the stack, heap, and method area in memory.
Answer:
Usually, the variables of primitive types that we define, references to objects, and the saved state of function calls all use the stack space in the JVM. Objects created through the new keyword and constructors are placed in the heap. The heap is the main area managed by the garbage collector; since current garbage collectors all use generational collection algorithms, the heap is further divided into the young generation and the old generation, or more specifically into Eden, Survivor (further split into From Survivor and To Survivor) and Tenured. The method area, like the heap, is a memory area shared by all threads; it stores class information loaded by the JVM, constants, static variables, JIT-compiled code and other data. Literals in the program, such as a directly written 100 or "hello", and constants are placed in the constant pool, which is part of the method area. Stack operations are the fastest, but the stack is small; large numbers of objects are usually placed in the heap. The sizes of the stack and heap can be adjusted through JVM startup parameters. Exhausting the stack causes a StackOverflowError, while insufficient heap or constant-pool space causes an OutOfMemoryError.
String str = new String("hello");
In the statement above, the variable str is placed on the stack, the String object created with new is placed on the heap, and the literal "hello" is placed in the method area.
Supplement 1: in newer versions of Java (starting with an update of Java 6), thanks to the development of the JIT compiler and the gradual maturity of escape-analysis technology, optimizations such as on-stack allocation and scalar replacement mean it is no longer absolutely true that objects must be allocated on the heap. Supplement 2: the run-time constant pool is the run-time counterpart of the class-file constant pool and is dynamic. The Java language does not require constants to be generated only at compile time; new constants can also enter the pool at run time, for example via the intern() method of the String class.
Take a look at the execution results of the following code and compare whether the previous and future running results of Java 7 are consistent.
String s1 = new StringBuilder("go").append("od").toString();
System.out.println(s1.intern() == s1);
String s2 = new StringBuilder("ja").append("va").toString();
System.out.println(s2.intern() == s2);
10. What is Math.round(11.5) equal to? What is Math.round(-11.5) equal to?
Answer:
Math.round(11.5) returns 12, and Math.round(-11.5) returns -11. The rounding principle is to add 0.5 to the argument and then take the floor.
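A quick check of the floor(x + 0.5) rule:

```java
public class RoundDemo {
    public static void main(String[] args) {
        // Math.round(x) behaves like floor(x + 0.5)
        System.out.println(Math.round(11.5));   // 12, since floor(12.0) = 12
        System.out.println(Math.round(-11.5));  // -11, since floor(-11.0) = -11
    }
}
```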
11. Can switch act on byte and long? Can it act on String?
Answer:
Before Java 5, the expr in switch (expr) could only be byte, short, char or int. From Java 5, enumeration types were introduced, and expr can also be an enum. From Java 7, expr can also be a String. long is not allowed in any current version.
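A minimal sketch of a String switch (the fruit/color cases are arbitrary examples):

```java
public class SwitchDemo {

    static String color(String fruit) {
        switch (fruit) {          // a String selector is allowed since Java 7
            case "apple":
                return "red";
            case "banana":
                return "yellow";
            default:
                return "unknown";
        }
    }

    public static void main(String[] args) {
        System.out.println(color("apple")); // red
    }
}
```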
12. Use the most efficient method to calculate 2 times 8?
Answer:
2 << 3 (shifting left by 3 bits is equivalent to multiplying by 2 to the 3rd power, and shifting right by 3 bits is equivalent to dividing by 2 to the 3rd power).
Addition: when we override the hashCode method for a class we have written, we may see code like the one shown below. Many people do not quite understand why such multiplication is used to generate the hash code, why the number is a prime, and why 31 is usually chosen (you can look up the answers to the first two questions yourself). 31 is chosen because multiplication by it can be replaced by a shift and a subtraction for better performance: 31 * num is equivalent to (num << 5) - num, since shifting left by 5 bits is equivalent to multiplying by 2 to the 5th power and then subtracting the number itself yields multiplication by 31. Modern VMs can perform this optimization automatically.
public class PhoneNumber {
    private int areaCode;
    private String prefix;
    private String lineNumber;

    @Override
    public int hashCode() {
        final int prime = 31;
        int result = 1;
        result = prime * result + areaCode;
        result = prime * result + ((lineNumber == null) ? 0 : lineNumber.hashCode());
        result = prime * result + ((prefix == null) ? 0 : prefix.hashCode());
        return result;
    }

    @Override
    public boolean equals(Object obj) {
        if (this == obj)
            return true;
        if (obj == null)
            return false;
        if (getClass() != obj.getClass())
            return false;
        PhoneNumber other = (PhoneNumber) obj;
        if (areaCode != other.areaCode)
            return false;
        if (lineNumber == null) {
            if (other.lineNumber != null)
                return false;
        } else if (!lineNumber.equals(other.lineNumber))
            return false;
        if (prefix == null) {
            if (other.prefix != null)
                return false;
        } else if (!prefix.equals(other.prefix))
            return false;
        return true;
    }
}
13. Does an array have a length() method? Does String have a length() method?
Answer:
An array has no length() method but has a length attribute. String has a length() method. In JavaScript, the length of a string is obtained through a length property, which is easy to confuse with Java.
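A quick illustration of the field-versus-method distinction:

```java
public class LengthDemo {
    public static void main(String[] args) {
        int[] numbers = {1, 2, 3};
        String text = "hello";
        System.out.println(numbers.length);  // length is a field on arrays: 3
        System.out.println(text.length());   // length() is a method on String: 5
    }
}
```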
14. In Java, how do you break out of multiple nested loops?
Answer:
Add a label such as a: before the outermost loop, and then use break a; to jump out of all the nested loops. (Java supports labeled break and continue statements, somewhat like the goto statement in C and C++; but just like goto, labeled break and continue should usually be avoided, because they rarely make the program more elegant and often have the opposite effect, so this syntax is in fact better left unused.)
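A sketch of a labeled break (the matrix-search scenario is an invented example for illustration):

```java
public class LabeledBreakDemo {

    // Walks a matrix; a labeled break aborts BOTH loops as soon as
    // a negative sentinel value is encountered.
    static String find(int[][] matrix, int target) {
        search:                                  // label on the outermost loop
        for (int i = 0; i < matrix.length; i++) {
            for (int j = 0; j < matrix[i].length; j++) {
                if (matrix[i][j] == target) {
                    return i + "," + j;
                }
                if (matrix[i][j] < 0) {
                    break search;                // jumps out of both loops at once
                }
            }
        }
        return "not found";
    }

    public static void main(String[] args) {
        int[][] m = {{1, 2}, {3, 4}};
        System.out.println(find(m, 4)); // 1,1
    }
}
```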
15. Can a constructor be overridden?
Answer:
Constructors cannot be inherited, so they cannot be overridden, but they can be overloaded.
16. Can two objects be equal (x.equals(y) == true) but have different hash codes?
Answer:
No. If two objects x and y satisfy x.equals(y) == true, their hash codes should be the same. Java specifies the equals and hashCode methods as follows: (1) if two objects are equal (the equals method returns true), their hashCode values must be the same; (2) if two objects have the same hashCode, they are not necessarily equal. Of course, you are not forced to follow these requirements, but if you violate them you will find that, when using containers, identical objects can both appear in a Set, and the efficiency of adding new elements will drop sharply (for systems that use hash-based storage, frequent hash-code collisions cause access performance to deteriorate dramatically).
Addition: many Java programmers know about the equals and hashCode methods, but a lot of them only know the surface. In Joshua Bloch's classic Effective Java (in many software companies, Effective Java, Thinking in Java, and Refactoring: Improving the Design of Existing Code are must-read books for Java programmers; if you have not read them, go buy a copy), the equals method is introduced as follows: the equals method must satisfy reflexivity (x.equals(x) must return true), symmetry (when x.equals(y) returns true, y.equals(x) must also return true), transitivity (when x.equals(y) and y.equals(z) both return true, x.equals(z) must also return true) and consistency (when the information in the objects referenced by x and y is not modified, repeated calls to x.equals(y) must return the same value); and for any non-null reference x, x.equals(null) must return false. The tips for implementing a high-quality equals method include: 1. use the == operator to check whether the argument is a reference to this object; 2. use the instanceof operator to check whether the argument is of the correct type; 3. for the key attributes of the class, check whether the attributes of the passed-in object match them; 4. after writing the equals method, ask yourself whether it satisfies symmetry, transitivity and consistency; 5. always override hashCode when overriding equals; 6. do not replace the Object parameter of equals with another type, and do not forget the @Override annotation when overriding.
17. Can the String class be inherited?
Answer:
The String class is final, so it cannot be inherited.
Addition: inheriting from String would itself be a mistake. The best ways to reuse the String type are association (HAS-A) and dependency (USE-A), not inheritance (IS-A).
18. When an object is passed as a parameter to a method, the method can change the object's properties and return the changed result. Is this pass by value or pass by reference?
Answer:
It is pass by value. Method calls in the Java language only support passing parameters by value. When an object instance is passed to a method as a parameter, the value of the parameter is a reference to the object. The object's properties can be changed during the call, but changing the object reference itself does not affect the caller. In C++ and C#, you can change the value of a passed-in parameter through reference passing or output parameters. The following code can be written in C# but not in Java.
using System;

namespace CS01 {
    class Program {
        public static void swap(ref int x, ref int y) {
            int temp = x;
            x = y;
            y = temp;
        }

        public static void Main(string[] args) {
            int a = 5, b = 10;
            swap(ref a, ref b); // a = 10, b = 5
            Console.WriteLine("a = {0}, b = {1}", a, b);
        }
    }
}
Note: the lack of pass by reference in Java is quite inconvenient, and this was still not improved in Java 8. That is why a large number of wrapper classes appear in code written in Java (references that need to be modified through a method call are put into a wrapper class, and the wrapper object is passed into the method), which only makes the code more bloated; it is especially hard to accept for developers who moved to Java from C and C++.
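A Java sketch of the same point (Box and mutate are hypothetical names used only for illustration): mutating the object through the reference is visible to the caller, while reassigning the parameter is not.

```java
public class PassByValueDemo {

    static class Box { int value; }

    static void mutate(Box b) {
        b.value = 42;     // modifies the object the caller also sees
        b = new Box();    // reassigns only the LOCAL copy of the reference
        b.value = -1;     // invisible to the caller
    }

    static int demo() {
        Box box = new Box();
        mutate(box);
        return box.value;
    }

    public static void main(String[] args) {
        System.out.println(demo()); // 42, not -1
    }
}
```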
19. What is the difference between String, StringBuilder and StringBuffer?
Answer:
The Java platform provides two kinds of string types that can store and manipulate strings: String and StringBuffer/StringBuilder. String is a read-only string, meaning the character content referenced by a String cannot be changed. The string objects represented by the StringBuffer/StringBuilder classes can be modified directly. StringBuilder, introduced in Java 5, has exactly the same methods as StringBuffer; the difference is that it is intended for single-threaded environments, and since none of its methods are synchronized, it is also more efficient than StringBuffer.
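A short illustration of the read-only String versus the mutable StringBuilder:

```java
public class MutabilityDemo {
    public static void main(String[] args) {
        String s = "abc";
        String upper = s.toUpperCase();  // returns a NEW String
        System.out.println(s);           // abc, the original never changes
        System.out.println(upper);       // ABC

        StringBuilder sb = new StringBuilder("abc");
        sb.append("def");                // modifies the same object in place
        System.out.println(sb);          // abcdef
    }
}
```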
Interview question 1 – under what circumstances is the performance of string connection with + operator better than calling the append method of StringBuffer / StringBuilder object? Interview question 2 – please say the output of the following program.
class StringEqualTest {

    public static void main(String[] args) {
        String s1 = "Programming";
        String s2 = new String("Programming");
        String s3 = "Program";
        String s4 = "ming";
        String s5 = "Program" + "ming";
        String s6 = s3 + s4;
        System.out.println(s1 == s2);
        System.out.println(s1 == s5);
        System.out.println(s1 == s6);
        System.out.println(s1 == s6.intern());
        System.out.println(s2 == s2.intern());
    }
}
Supplement: to answer the interview question above, two points must be clear: 1. the intern method of a String object returns the reference of the corresponding string in the constant pool (if the pool already contains a string for which equals with this object returns true); if there is no corresponding string in the pool, the string is added to the pool and the reference of the pooled string is returned. 2. the essence of the String + operation is to create a StringBuilder object, perform append operations, and then convert the concatenated StringBuilder into a String object with toString. This can be verified by inspecting the JVM bytecode of the class file with the command javap -c StringEqualTest.class.
20. What is the difference between overloading and overriding? Can overloaded methods be distinguished by return type?
Answer:
Method overloading and overriding are both ways of realizing polymorphism; the difference is that the former realizes compile-time polymorphism while the latter realizes run-time polymorphism. Overloading occurs within one class: methods with the same name are considered overloaded if they have different parameter lists (different parameter types, a different number of parameters, or both). Overriding occurs between a subclass and its parent class: the overriding method must have the same return type as the overridden method, must be at least as accessible as the overridden method, and must not declare more exceptions than the overridden method (Liskov substitution principle). Overloading has no special requirements on return types.
Interview question: Huawei once asked in an interview, "Why can't overloads be distinguished by return type?" Give your answer quickly!
21. Describe the principle and mechanism of loading class files by JVM?
Answer:
Class loading in the JVM is implemented by ClassLoader and its subclasses. ClassLoader is an important runtime component of Java, responsible for finding and loading the classes in class files at run time. Because of Java's cross-platform nature, a compiled Java source program is not an executable program but one or more class files. When a Java program needs to use a class, the JVM ensures that the class has been loaded, linked (verified, prepared and resolved) and initialized. Loading a class means reading the data of the class's .class file into memory, usually by creating a byte array into which the .class file is read, and then generating the Class object corresponding to the loaded class. After loading, the Class object is not yet complete, so the class is not yet usable. The class then enters the linking phase, which includes three steps: verification, preparation (allocating memory for static variables and setting default initial values) and resolution (replacing symbolic references with direct references). Finally, the JVM initializes the class, which includes: 1) if the class has a direct parent class and that parent class has not been initialized, initialize the parent class first; 2) if the class contains initialization statements, execute them in order. Class loading is performed by class loaders, which include the root loader (bootstrap), the extension loader (extension), the system loader (system) and user-defined class loaders (subclasses of java.lang.ClassLoader). Since Java 2 (JDK 1.2), the class loading process has adopted the parent delegation mechanism (PDM), which better guarantees the security of the Java platform. In this mechanism, the bootstrap loader provided by the JVM is the root loader, and every other loader has exactly one parent loader. A class load request is first delegated to the parent loader; only when the parent loader cannot load the class does the child loader load it itself. The JVM does not provide references to the bootstrap loader to Java programs. The following describes the class loaders:
 bootstrap: generally implemented in native code, responsible for loading the JVM's basic core class library (rt.jar);
 extension: loads class libraries from the directory specified by the java.ext.dirs system property; its parent loader is bootstrap;
 system: also called the application class loader; its parent is extension. It is the most widely used class loader and loads classes from the directories specified by the CLASSPATH environment variable or the java.class.path system property. It is the default parent loader for user-defined class loaders.
22. Can a Chinese character be stored in a char variable? Why?
Answer:
A char variable can store one Chinese character, because Java uses Unicode internally (rather than selecting any particular encoding, a char directly holds the character's number in the character set), and a char occupies 2 bytes (16 bits), so there is no problem storing one Chinese character.
Supplement: using Unicode means that characters have different representations inside and outside the JVM. Inside the JVM they are Unicode; when a character is transferred from inside the JVM to the outside (for example, stored in the file system), an encoding conversion is required. Therefore, Java has both byte streams and character streams, as well as conversion streams that convert between the two, such as InputStreamReader and OutputStreamReader. These two classes are adapters between byte streams and character streams and take on the task of encoding conversion. For C programmers, such an encoding conversion would probably rely on the memory-sharing property of a union.
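A small sketch of both points above (the class name is illustrative): one char holds one Chinese character, while outside the JVM the same character occupies a different number of bytes depending on the chosen encoding:

```java
import java.nio.charset.StandardCharsets;

public class CharEncodingDemo {
    public static void main(String[] args) {
        char c = '中';                   // one Chinese character fits in one char (16 bits)
        String s = String.valueOf(c);
        System.out.println(s.length()); // 1: one UTF-16 code unit inside the JVM

        // Outside the JVM, byte counts depend on the target encoding
        System.out.println(s.getBytes(StandardCharsets.UTF_8).length);    // 3 bytes in UTF-8
        System.out.println(s.getBytes(StandardCharsets.UTF_16BE).length); // 2 bytes in UTF-16BE
    }
}
```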
23. What are the similarities and differences between an abstract class and an interface?
Answer:
Neither abstract classes nor interfaces can be instantiated, but references of abstract class and interface types can be defined. A class that inherits an abstract class or implements an interface must implement all of the abstract methods in it; otherwise it must itself be declared abstract. Interfaces are more abstract than abstract classes, because an abstract class may define constructors, abstract methods, and concrete methods, while an interface cannot define constructors and all of its methods are abstract (before Java 8; since Java 8 an interface may also contain default and static methods). Members of an abstract class can be private, default, protected, or public, while all members of an interface are public. An abstract class can define member variables, while variables defined in an interface are in fact constants. A class with abstract methods must be declared abstract, but an abstract class does not necessarily have abstract methods.
24. What is the difference between a static nested class and an inner class?
Answer:
A static nested class is an inner class declared static; it can be instantiated without an instance of the outer class. A non-static inner class can normally be instantiated only after the outer class has been instantiated, and its syntax looks a little strange, as shown below.
/**
 * Poker (a deck of cards)
 * @author Luo Hao
 */
public class Poker {
    private static String[] suites = {"Spade", "Heart", "Club", "Diamond"};
    private static int[] faces = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13};

    private Card[] cards;

    /** Constructor */
    public Poker() {
        cards = new Card[52];
        for (int i = 0; i < suites.length; i++) {
            for (int j = 0; j < faces.length; j++) {
                cards[i * 13 + j] = new Card(suites[i], faces[j]);
            }
        }
    }

    /** Shuffle (random disorder) */
    public void shuffle() {
        for (int i = 0, len = cards.length; i < len; i++) {
            int index = (int) (Math.random() * len);
            Card temp = cards[index];
            cards[index] = cards[i];
            cards[i] = temp;
        }
    }

    /**
     * Deal
     * @param index position to deal from
     */
    public Card deal(int index) {
        return cards[index];
    }

    /**
     * Card (a single playing card) [inner class]
     * @author Luo Hao
     */
    public class Card {
        private String suite;  // suit
        private int face;      // face value

        public Card(String suite, int face) {
            this.suite = suite;
            this.face = face;
        }

        @Override
        public String toString() {
            String faceStr = "";
            switch (face) {
                case 1: faceStr = "A"; break;
                case 11: faceStr = "J"; break;
                case 12: faceStr = "Q"; break;
                case 13: faceStr = "K"; break;
                default: faceStr = String.valueOf(face);
            }
            return suite + faceStr;
        }
    }
}
Test code:
class PokerTest {
    public static void main(String[] args) {
        Poker poker = new Poker();
        poker.shuffle();                // shuffle
        Poker.Card c1 = poker.deal(0);  // deal the first card
        // For the non-static inner class Card, a Card object can only be
        // created through an instance of the outer class Poker
        Poker.Card c2 = poker.new Card("Heart", 1); // create a card yourself
        System.out.println(c1);  // the first card after shuffling
        System.out.println(c2);  // prints: HeartA
    }
}
Interview question – where does the following code produce compilation errors?
class Outer {
class Inner {}
public static void foo() { new Inner(); }
public void bar() { new Inner(); }
public static void main(String[] args) { new Inner(); }
}
Note: in Java, creating a non-static inner class object depends on an object of its outer class. In the interview question above, foo and main are static methods; there is no this in a static method, i.e. no outer class object, so it is impossible to create an inner class object there. If you want to create an inner class object in a static method, you can do this:
new Outer().new Inner();
25. Is there a memory leak in Java? Please briefly describe it.
Answer:
Theoretically, Java should have no memory leaks because of its garbage collection mechanism (GC); this is also an important reason why Java is widely used in server-side programming. However, in actual development there may be objects that are useless but still reachable; these cannot be reclaimed by the GC, so they do cause memory leaks. For example, the objects in a Hibernate Session (first-level cache) are in the persistent state, and the garbage collector does not reclaim them; yet there may be useless garbage objects among them, and if the first-level cache is not closed or flushed in time, a memory leak may result. The code in the following example also causes a memory leak.
import java.util.Arrays;
import java.util.EmptyStackException;

public class MyStack<T> {
    private T[] elements;
    private int size = 0;

    private static final int INIT_CAPACITY = 16;

    @SuppressWarnings("unchecked")
    public MyStack() {
        elements = (T[]) new Object[INIT_CAPACITY];
    }

    public void push(T elem) {
        ensureCapacity();
        elements[size++] = elem;
    }

    public T pop() {
        if (size == 0)
            throw new EmptyStackException();
        return elements[--size];
    }

    private void ensureCapacity() {
        if (elements.length == size) {
            elements = Arrays.copyOf(elements, 2 * size + 1);
        }
    }
}
The above code implements a stack (first-in, last-out, FILO) structure. At first glance there seems to be no obvious problem, and it may even pass all the unit tests you write. However, the pop method has a memory-leak problem: when we pop an object off the stack, that object will not be garbage collected even if the program using the stack no longer references it, because the stack keeps an obsolete reference to it. In garbage-collected languages, memory leaks are very well hidden; such a leak is really unintentional object retention. If an object reference is unintentionally retained, the garbage collector will reclaim neither that object nor any objects it references. Even a few such references can exclude many objects from garbage collection, with a significant impact on performance. In extreme cases this triggers disk paging (physical memory exchanging data with the virtual memory on disk) and can even cause an OutOfMemoryError.
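A sketch of the fix, following the advice in Effective Java (the class name FixedStack is made up for illustration): null out the popped slot so the stack no longer holds an obsolete reference to the object:

```java
import java.util.Arrays;
import java.util.EmptyStackException;

public class FixedStack<T> {
    private T[] elements;
    private int size = 0;

    @SuppressWarnings("unchecked")
    public FixedStack() {
        elements = (T[]) new Object[16];
    }

    public void push(T elem) {
        if (elements.length == size) {
            elements = Arrays.copyOf(elements, 2 * size + 1);
        }
        elements[size++] = elem;
    }

    public T pop() {
        if (size == 0) throw new EmptyStackException();
        T result = elements[--size];
        elements[size] = null; // eliminate the obsolete reference so GC can reclaim the object
        return result;
    }
}
```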
26. Can an abstract method also be modified as static, native, or synchronized?
Answer:
No to all three. An abstract method must be overridden by subclasses, while a static method cannot be overridden, so the two are contradictory. A native method is implemented by native code (such as C code), while an abstract method has no implementation, which is also contradictory. synchronized is related to the implementation details of a method, and an abstract method involves no implementation details, so they are contradictory as well.
27. Explain the difference between static variables and instance variables.
Answer:
A static variable is a variable modified by the static modifier, also known as a class variable. It belongs to the class, not to any particular object of the class; no matter how many objects a class creates, a static variable has one and only one copy in memory. An instance variable must depend on an instance: you need to create an object first and then access the variable through that object. Static variables allow multiple objects to share memory.
Supplement: in Java development, context classes and utility classes usually contain a large number of static members.
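A minimal sketch of the distinction (the Counter class is made up for illustration): one static variable is shared by all instances, while each instance variable belongs to a single object:

```java
public class Counter {
    static int created = 0; // class variable: one copy shared by all Counter objects
    int id;                 // instance variable: one copy per object

    public Counter() {
        created++;
        id = created;
    }

    public static void main(String[] args) {
        Counter a = new Counter();
        Counter b = new Counter();
        System.out.println(Counter.created);   // 2: shared by all instances
        System.out.println(a.id + " " + b.id); // 1 2: each object has its own copy
    }
}
```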
28. Can a non-static method be called from within a static method?
Answer:
No. Static methods can only access static members, because calling a non-static method requires an object to be created first, while when a static method is called the object may not have been initialized.
29. How to implement object cloning?
Answer:
There are two ways: 1) implement the Cloneable interface and override the clone() method of the Object class; 2) implement the Serializable interface and clone through serialization and deserialization, which achieves a true deep clone. The code is as follows.
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
public class MyUtil {

    private MyUtil() {
        throw new AssertionError();
    }

    @SuppressWarnings("unchecked")
    public static <T extends Serializable> T clone(T obj) throws Exception {
        ByteArrayOutputStream bout = new ByteArrayOutputStream();
        ObjectOutputStream oos = new ObjectOutputStream(bout);
        oos.writeObject(obj);

        ByteArrayInputStream bin = new ByteArrayInputStream(bout.toByteArray());
        ObjectInputStream ois = new ObjectInputStream(bin);
        // Note: calling close() on a ByteArrayInputStream or ByteArrayOutputStream is pointless;
        // these two memory-based streams release their resources as soon as the garbage collector
        // cleans up the objects, unlike streams tied to external resources (such as file streams)
        return (T) ois.readObject();
    }
}
Here is the test code:
import java.io.Serializable;
/**
 * Person
 * @author Luo Hao
 */
class Person implements Serializable {
    private static final long serialVersionUID = -9102017020286042305L;

    private String name; // name
    private int age;     // age
    private Car car;     // car

    public Person(String name, int age, Car car) {
        this.name = name;
        this.age = age;
        this.car = car;
    }

    public String getName() { return name; }

    public void setName(String name) { this.name = name; }

    public int getAge() { return age; }

    public void setAge(int age) { this.age = age; }

    public Car getCar() { return car; }

    public void setCar(Car car) { this.car = car; }

    @Override
    public String toString() {
        return "Person [name=" + name + ", age=" + age + ", car=" + car + "]";
    }
}
/**
 * Car
 * @author Luo Hao
 */
class Car implements Serializable {
    private static final long serialVersionUID = -5713945027627603702L;

    private String brand; // brand
    private int maxSpeed; // top speed

    public Car(String brand, int maxSpeed) {
        this.brand = brand;
        this.maxSpeed = maxSpeed;
    }

    public String getBrand() { return brand; }

    public void setBrand(String brand) { this.brand = brand; }

    public int getMaxSpeed() { return maxSpeed; }

    public void setMaxSpeed(int maxSpeed) { this.maxSpeed = maxSpeed; }

    @Override
    public String toString() {
        return "Car [brand=" + brand + ", maxSpeed=" + maxSpeed + "]";
    }
}
class CloneTest {

    public static void main(String[] args) {
        try {
            Person p1 = new Person("Hao LUO", 33, new Car("Benz", 300));
            Person p2 = MyUtil.clone(p1); // deep clone
            p2.getCar().setBrand("BYD");
            // Modifying the brand of the Car associated with the cloned Person object
            // does not affect the Car associated with the original Person object p1,
            // because when the Person object was cloned, its associated Car was cloned too
            System.out.println(p1);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
Note: cloning based on serialization and deserialization is not only a deep clone; more importantly, through the generic bound it checks whether the object to be cloned supports serialization, and this check is done by the compiler rather than by an exception thrown at runtime. This scheme is clearly better than cloning objects with the clone() method of the Object class: it is always better to expose problems at compile time than to leave them to run time.
30. What is GC? Why GC?
Answer:
GC means garbage collection. Memory management is an area where programmers are prone to problems; forgotten or incorrect memory reclamation leads to instability or even crashes of programs and systems. The GC function provided by Java automatically detects whether an object has gone out of scope, thereby achieving automatic memory reclamation; the Java language provides no explicit operation for freeing allocated memory. Java programmers do not have to worry about memory management because the garbage collector manages it automatically. To request garbage collection, you can call System.gc() or Runtime.getRuntime().gc(), but the JVM is free to ignore such explicit garbage collection requests.
Garbage collection effectively prevents memory leaks and makes effective use of available memory. The garbage collector usually runs as a separate low-priority thread and, at unpredictable times, clears and reclaims objects in the heap that are dead or have long been unused; a programmer cannot force the garbage collector to collect a particular object, or all objects, in real time. In Java's early days garbage collection was one of its biggest highlights, because server-side programming needs to effectively prevent memory leaks. Over time, however, Java's garbage collection mechanism has also drawn criticism. Mobile users generally feel that iOS offers a better user experience than Android, and one of the deeper reasons is the unpredictability of garbage collection on Android.
Add: there are many garbage collection mechanisms, including generational copying garbage collection, mark-based garbage collection, and incremental garbage collection. A standard Java process has both a stack and a heap: the stack holds primitive local variables, and the heap holds objects to be created. The basic algorithm for reclaiming and reusing heap memory on the Java platform is called mark-and-sweep, but Java has improved on it by adopting generational garbage collection. This approach divides heap memory into different regions according to the life cycle of Java objects, and during garbage collection objects may be moved between regions:
 Eden: the region where objects are initially born; for most objects, it is the only region in which they ever live.
 Survivor: objects that survive collection in Eden are moved here.
 Tenured: the home of surviving objects that are old enough. The minor GC process does not touch this region. When a young-generation collection cannot fit objects into the tenured region, a full collection (major GC) is triggered, which may also involve compaction in order to make enough room for large objects.
JVM parameters related to garbage collection:
 -Xms / -Xmx: initial size / maximum size of the heap
 -Xmn: size of the young generation within the heap
 -XX:+DisableExplicitGC: make System.gc() have no effect
 -XX:+PrintGCDetails: print GC details
 -XX:+PrintGCDateStamps: print a timestamp for each GC operation
 -XX:NewSize / -XX:MaxNewSize: set the size / maximum size of the young generation
 -XX:NewRatio: set the ratio of the old generation to the young generation
 -XX:+PrintTenuringDistribution: print the age distribution of objects in the survivor spaces after each young-generation GC
 -XX:InitialTenuringThreshold / -XX:MaxTenuringThreshold: set the initial and maximum values of the tenuring threshold
 -XX:TargetSurvivorRatio: set the target utilization of the survivor spaces
31. String s = new String("xyz"); how many String objects are created?
Answer:
Two objects: one is "xyz" in the string constant pool (static area), and the other is the object created on the heap by new.
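This can be verified with == and String.intern() (a minimal sketch; the class name is made up):

```java
public class StringPoolDemo {
    public static void main(String[] args) {
        String s = new String("xyz");
        // s is a new object on the heap, distinct from the pooled literal "xyz"
        System.out.println(s == "xyz");          // false
        System.out.println(s.equals("xyz"));     // true
        // intern() returns the canonical copy from the string constant pool
        System.out.println(s.intern() == "xyz"); // true
    }
}
```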
32. Can an interface extend (extends) an interface? Can an abstract class implement (implements) an interface? Can an abstract class inherit a concrete class?
Answer:
Interfaces can inherit interfaces and support multiple inheritance. Abstract classes can implement interfaces, and abstract classes can inherit concrete classes or abstract classes.
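A compilable sketch of all three points (all names here are made up for illustration):

```java
interface Flyer { void fly(); }
interface Swimmer { void swim(); }

// An interface can extend several interfaces at once
interface FlyingFish extends Flyer, Swimmer {}

class Animal {
    String breathe() { return "breathing"; }
}

// An abstract class can extend a concrete class and implement an interface,
// and it may leave the inherited abstract methods unimplemented
abstract class AbstractFish extends Animal implements FlyingFish {}

public class InheritanceDemo extends AbstractFish {
    public void fly() { System.out.println("fly"); }
    public void swim() { System.out.println("swim"); }

    public static void main(String[] args) {
        InheritanceDemo fish = new InheritanceDemo();
        System.out.println(fish.breathe()); // inherited from the concrete class Animal
        fish.fly();
        fish.swim();
    }
}
```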
33. Can a .java source file contain multiple classes (not inner classes)? What are the restrictions?
Answer:
Yes, but there can be at most one public class in a source file, and the file name must be completely consistent with the class name of the public class.
34. Can an anonymous inner class extend another class? Can it implement an interface?
Answer:
An anonymous inner class can extend a class or implement an interface (one or the other, not both). This approach is commonly used in Swing programming and Android development to implement event listeners and callbacks.
35. Can an inner class reference the members of its containing (outer) class? Are there any restrictions?
Answer:
A non-static inner class object can access the members of the outer class object that created it, including private members; a static nested class can access only the static members of the outer class.
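A small sketch of this (the class is named Outer2 to avoid clashing with the Outer class in the earlier interview question):

```java
public class Outer2 {
    private String secret = "outer-private";

    class Inner {
        String read() {
            // an inner class can read the private members of its enclosing object
            return secret;
        }
    }

    public static void main(String[] args) {
        Outer2 outer = new Outer2();
        Outer2.Inner inner = outer.new Inner();
        System.out.println(inner.read()); // prints: outer-private
    }
}
```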
36. What are the usages of the final keyword in Java?
Answer:
(1) Modifying a class: the class cannot be inherited; (2) modifying a method: the method cannot be overridden; (3) modifying a variable: the variable can be assigned only once and its value cannot be modified afterwards (a constant).
37. Indicate the running results of the following programs
class A {
static { System.out.print(“1”); }
public A() { System.out.print(“2”); }
}
class B extends A{
static { System.out.print(“a”); }
public B() { System.out.print(“b”); }
}
public class Hello {
public static void main(String[] args) { A ab = new B(); ab = new B(); }
}
Answer:
Execution result: 1a2b2b. The static initializers of the parent class and then the subclass run once each, when the classes are loaded (printing 1 and a). After that, every time an object is created, the parent class constructor is called first and the subclass constructor afterwards (printing 2b for each new B()); non-static members are initialized before the corresponding constructor body runs.
Tip: if you can’t give the correct answer to this question, it means that you haven’t fully understood the Java class loading mechanism in question 21. Take a look again quickly.
38. Conversion between data types:
 How to convert a string to a basic data type?
 How to convert a basic data type to a string?
Answer:
 Call the parseXxx(String) or valueOf(String) method of the wrapper class corresponding to the basic data type to obtain the corresponding basic type;
 One way is to concatenate (+) the basic data type with the empty string (""); another is to call the valueOf() method of the String class to obtain the corresponding string.
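Both directions can be sketched briefly (the class name is illustrative):

```java
public class ConvertDemo {
    public static void main(String[] args) {
        // String -> primitive: parseXxx(String) or valueOf(String) on the wrapper class
        int i = Integer.parseInt("123");
        double d = Double.parseDouble("3.14");
        Integer boxed = Integer.valueOf("456"); // returns a wrapper; auto-unboxes if needed

        // primitive -> String: concatenate with "" or call String.valueOf()
        String s1 = i + "";
        String s2 = String.valueOf(d);

        System.out.println(i + " " + d + " " + boxed + " " + s1 + " " + s2);
    }
}
```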
39. How to reverse and replace strings?
Answer:
There are many approaches: you can implement it yourself, or use the methods of String or StringBuffer / StringBuilder. A very common interview question is to reverse a string with recursion; the code is as follows:
public static String reverse(String originStr) { if(originStr == null || originStr.length() <= 1) return originStr; return reverse(originStr.substring(1)) + originStr.charAt(0); }
40. How to convert a GB2312-encoded string into an ISO-8859-1-encoded string?
Answer:
The code is as follows:
String s1 = "hello";
String s2 = new String(s1.getBytes("GB2312"), "ISO-8859-1");
41. Date and time:
 How to obtain the year, month, day, hour, minute, and second?
 How to obtain the number of milliseconds since 0:00:00 on January 1, 1970?
 How to obtain the last day of a given month?
 How to format a date?
Answer:
Question 1: create a java.util.Calendar instance and call its get() method with different field parameters to obtain the corresponding values. In Java 8, you can use java.time.LocalDateTime instead. The code is as follows.
import java.time.LocalDateTime;
import java.util.Calendar;

public class DateTimeTest {
    public static void main(String[] args) {
        Calendar cal = Calendar.getInstance();
        System.out.println(cal.get(Calendar.YEAR));
        System.out.println(cal.get(Calendar.MONTH)); // 0 - 11
        System.out.println(cal.get(Calendar.DATE));
        System.out.println(cal.get(Calendar.HOUR_OF_DAY));
        System.out.println(cal.get(Calendar.MINUTE));
        System.out.println(cal.get(Calendar.SECOND));

        // Java 8
        LocalDateTime dt = LocalDateTime.now();
        System.out.println(dt.getYear());
        System.out.println(dt.getMonthValue()); // 1 - 12
        System.out.println(dt.getDayOfMonth());
        System.out.println(dt.getHour());
        System.out.println(dt.getMinute());
        System.out.println(dt.getSecond());
    }
}
Question 2: the number of milliseconds can be obtained by the following methods.
Calendar.getInstance().getTimeInMillis(); System.currentTimeMillis(); Clock.systemDefaultZone().millis(); // Java 8
Question 3: the code is shown below.
Calendar time = Calendar.getInstance(); time.getActualMaximum(Calendar.DAY_OF_MONTH);
Question 4: use the format(Date) method of a java.text.DateFormat subclass (such as SimpleDateFormat) to format the date. In Java 8, you can use java.time.format.DateTimeFormatter to format times and dates. The code is as follows.
import java.text.SimpleDateFormat;
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.util.Date;

class DateFormatTest {

    public static void main(String[] args) {
        SimpleDateFormat oldFormatter = new SimpleDateFormat("yyyy/MM/dd");
        Date date1 = new Date();
        System.out.println(oldFormatter.format(date1));

        // Java 8
        DateTimeFormatter newFormatter = DateTimeFormatter.ofPattern("yyyy/MM/dd");
        LocalDate date2 = LocalDate.now();
        System.out.println(date2.format(newFormatter));
    }
}
Add: Java's time and date APIs have long been criticized. To solve this problem, Java 8 introduced a new time and date API that includes classes such as LocalDate, LocalTime, LocalDateTime, Clock, and Instant. These classes are designed with the immutable pattern and are therefore thread-safe. If you do not understand these contents, please refer to my other article, "Summary and Thinking on Java Concurrent Programming".
42. Print the current time of yesterday.
Answer:
import java.util.Calendar;
class YesterdayCurrent {
    public static void main(String[] args) {
        Calendar cal = Calendar.getInstance();
        cal.add(Calendar.DATE, -1);
        System.out.println(cal.getTime());
    }
}
In Java 8, you can use the following code to achieve the same function.
import java.time.LocalDateTime;
class YesterdayCurrent {

    public static void main(String[] args) {
        LocalDateTime today = LocalDateTime.now();
        LocalDateTime yesterday = today.minusDays(1);
        System.out.println(yesterday);
    }
}
43. Compare Java and JavaScript.
Answer:
JavaScript and Java are two different products developed by two different companies. Java is an object-oriented programming language originally launched by Sun Microsystems and is especially suitable for Internet application development; JavaScript is a product of Netscape, created to extend the functionality of the Netscape browser. It is an object-based, event-driven interpreted language that can be embedded in web pages. The predecessor of JavaScript was LiveScript, while the predecessor of Java was the Oak language. The similarities and differences of the two languages are compared as follows:
 Object-based vs. object-oriented: Java is a true object-oriented language; even for a simple program, objects must be designed. JavaScript is a scripting language that can be used to build complex software that has nothing to do with the network and interacts with users; it is an object-based, event-driven programming language, and it therefore provides very rich built-in objects for designers to use.
 Interpretation vs. compilation: Java source code must be compiled before execution. JavaScript is an interpreted programming language; its source code needs no compilation and is interpreted and executed by the browser. (Currently almost all browsers use JIT (just-in-time compilation) technology to improve the execution efficiency of JavaScript.)
 Strongly typed vs. weakly typed variables: Java uses strong type checking, i.e. all variables must be declared before compilation; variables in JavaScript are weakly typed and may even be used without declaration, with the JavaScript interpreter checking and inferring their data types at run time.
 Different code formats.
Add: the four points listed above are the so-called standard answers circulating on the Internet. In fact, the most important difference between Java and JavaScript is that one is a static language and the other is a dynamic language. The current trend in programming languages is toward functional and dynamic languages. In Java, classes are first-class citizens, while in JavaScript functions are first-class citizens, so JavaScript supports functional programming and can use lambda functions and closures. Of course, Java 8 also supports functional programming, providing lambda expressions and functional interfaces. For questions like this, it is better to answer in your own words during the interview; that will come across as more credible. Do not recite the so-called standard answers from the Internet.
44. When to use assertion?
Answer:
Assertion is a common debugging method in software development, which is supported by many development languages. Generally speaking, assertions are used to ensure the most basic and critical correctness of programs. Assertion checking is usually turned on during development and testing. In order to ensure the efficiency of program execution, assertion checking is usually turned off after software release. An assertion is a statement containing a Boolean expression, which is assumed to be true when executed; If the value of the expression is false, an assertion error is reported. The use of assertions is shown in the following code:
assert(a > 0); // throws an AssertionError if a <= 0
An assertion can take two forms: assert Expression1; or assert Expression1 : Expression2;. Expression1 must produce a boolean value. Expression2 can be any expression that yields a value; this value is used to generate a string message that displays more debugging information.
To enable assertions at run time, use the -enableassertions or -ea flag when starting the JVM. To disable assertions at run time, use the -disableassertions or -da flag. To enable or disable assertions in system classes, use the -enablesystemassertions (-esa) or -disablesystemassertions (-dsa) flag. Assertions can also be enabled or disabled on a per-package basis.
Note: assertions should not change the state of the program in any way. In short, if you want to prevent code execution when certain conditions are not met, you can consider using assertions to prevent it.
45. What is the difference between Error and Exception?
Answer:
Error denotes system-level errors and exceptional conditions that the program need not handle; it represents a serious problem from which recovery is, if not impossible, at least very difficult. For example, with a memory overflow it is impossible to expect the program to handle the situation. Exception denotes exceptions that the program should catch or handle; it indicates a design or implementation problem, i.e. a condition that would never occur if the program were running normally.
Interview question: in a 2005 Motorola interview, the question "if a process reports a stack overflow run-time error, what's the most possible cause?" was once asked, with four options: a. lack of memory; b. write on an invalid memory space; c. recursive function calling; d. array index out of boundary. A running Java program may likewise encounter a StackOverflowError, an unrecoverable error that can only be fixed by modifying the code. The answer to this interview question is c: a recursion that does not converge quickly is likely to cause a stack overflow error, as shown below:
class StackOverflowErrorTest {
public static void main(String[] args) { main(null); }
}
Tip: when writing a recursive program, always keep two things in mind: 1) the recursion formula; 2) the convergence condition (when to stop recursing).
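Both points can be sketched with a recursion that does converge (factorial; the class name is made up):

```java
public class FactorialDemo {
    // recursion formula: n! = n * (n - 1)!
    // convergence condition: n <= 1 stops the recursion
    static long factorial(long n) {
        if (n <= 1) return 1;        // base case: recursion stops here
        return n * factorial(n - 1); // recursive step
    }

    public static void main(String[] args) {
        System.out.println(factorial(5)); // prints: 120
    }
}
```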
46. If there is a return statement in a try {} block, will the code in the finally {} block immediately after the try be executed? If so, when: before or after the return?
Answer:
Yes, it will be executed; it executes before the method returns to its caller.
Note: it is not good to change the return value in finally. If there is a finally block, the return statement in try does not immediately return to the caller; instead it records the return value, and after the finally block has executed, that value is returned to the caller. If the return value is then overridden in finally, the modified value is what gets returned. Obviously, returning from or modifying the return value in finally causes great trouble for the program. C# simply uses a compilation error to prevent programmers from doing such dirty things, and in Java the compiler's syntax-check level can be raised to produce warnings or errors; it can be configured in Eclipse as shown in the figure, and it is strongly recommended to set this item to a compilation error.
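A minimal sketch of the behavior described in the answer above (the class name is made up): the return value is recorded before finally runs, so a later assignment in finally does not change what is returned:

```java
public class FinallyDemo {
    static StringBuilder log = new StringBuilder();

    static int test() {
        int x = 1;
        try {
            return x;              // the return value (1) is recorded here
        } finally {
            log.append("finally"); // runs after the return value is recorded
            x = 99;                // this change does not affect the returned value
        }
    }

    public static void main(String[] args) {
        System.out.println(test()); // prints: 1
        System.out.println(log);    // prints: finally
    }
}
```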
47. How does the Java language handle exceptions? How are the keywords throws, throw, try, catch, and finally used?
Answer:
Java handles exceptions in an object-oriented way, classifying the various exceptions and providing a good interface. In Java, every exception is an object, an instance of the Throwable class or one of its subclasses. When a method raises an exception, it throws an exception object containing the exception information; the methods of this object can be used to catch and handle the exception. Java exception handling is implemented through five keywords: try, catch, throw, throws, and finally. In general, try is used to execute a block of code; if the system throws an exception object, it can be caught by type (catch) or handled by a block that always executes (finally). The catch clause immediately follows the try block and specifies the type of exception to be caught. The throw statement explicitly throws an exception. throws is used on a method declaration to list the exceptions the method may throw (a method is, of course, allowed to declare exceptions that it never actually throws). finally ensures that a block of code executes no matter what exception occurs. try statements can be nested: whenever a try statement is entered, its exception context is pushed onto the exception stack until all try statements complete. If an inner try statement does not handle a given exception, the stack is popped until a try statement that handles the exception is found, or the exception is ultimately thrown to the JVM.
48. What are the similarities and differences between runtime exceptions and checked exceptions?
Answer:
An exception represents an abnormal state that may occur while a program is running. A runtime exception represents an exception that may be encountered during the normal operation of the virtual machine; it is a common operating error that will not occur as long as the program is designed correctly. A checked exception is related to the context in which the program runs; even if the program design is correct, it may still arise from conditions of use. The Java compiler requires methods to declare the checked exceptions they may throw, but it does not require them to declare uncaught runtime exceptions. Like inheritance, exceptions are often abused in object-oriented programming. Effective Java gives the following guidelines for using exceptions:
 do not use exception handling for normal control flow (a well-designed API should not force its callers to use exceptions for normal control flow)  use detected exceptions for recoverable situations and run-time exceptions for programming errors  avoid unnecessary use of detected exceptions (exceptions can be avoided by some state detection means)
 give priority to standard exceptions
 document the exceptions thrown by each method
 strive for failure atomicity
 don't ignore exceptions caught in catch blocks
49. List some common runtime exceptions?
Answer:
 ArithmeticException (arithmetic exception)
 ClassCastException (class cast exception)
 IllegalArgumentException (illegal argument exception)
 IndexOutOfBoundsException (index out of bounds exception)
 NullPointerException (null pointer exception)
 SecurityException (security exception)
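As a quick illustration, the sketch below (class and method names are mine, not from the original) triggers a few of these runtime exceptions and reports which one was thrown. Note that none of them need to be declared with throws, which is exactly what makes them unchecked:

```java
public class RuntimeExceptionDemo {

    // runs the action and returns the simple name of the runtime exception it raises
    static String nameOf(Runnable action) {
        try {
            action.run();
            return "none";
        } catch (RuntimeException e) {
            return e.getClass().getSimpleName();
        }
    }

    public static void main(String[] args) {
        System.out.println(nameOf(() -> { int x = 1 / 0; }));                           // ArithmeticException
        System.out.println(nameOf(() -> { Object o = "text"; Integer n = (Integer) o; })); // ClassCastException
        System.out.println(nameOf(() -> { String s = null; s.length(); }));             // NullPointerException
        System.out.println(nameOf(() -> { int[] a = new int[1]; int v = a[2]; }));      // ArrayIndexOutOfBoundsException
    }
}
```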
50. Explain the difference between final, finally and finalize.
Answer:
 final: this modifier (keyword) has three uses. If a class is declared final, it cannot derive new subclasses, that is, it cannot be inherited; in this sense it is the opposite of abstract. A variable declared final must be given an initial value at declaration, and in later references it can only be read, not modified. A method declared final can be called but cannot be overridden in subclasses.
 finally: the code block following a try…catch structure always executes; as long as the JVM is not shut down, the code here will run, so code that releases external resources is usually written in the finally block.
 finalize: a method defined in the Object class. Java allows the finalize() method to do necessary cleanup before the garbage collector removes the object from memory. It is called by the garbage collector when destroying an object; by overriding finalize(), you can clean up system resources or perform other cleanup work.
51. Class ExampleA inherits Exception, and class ExampleB inherits ExampleA.
There is the following code snippet:
try {
    throw new ExampleB("b");
} catch (ExampleA e) {
    System.out.println("ExampleA");
} catch (Exception e) {
    System.out.println("Exception");
}
What is the output of executing this code?
Answer:
Output: ExampleA. (According to the Liskov substitution principle [wherever the parent type can be used, the child type can be used], the catch block that catches exceptions of type ExampleA can also catch the ExampleB exception thrown in the try block.)
Interview question: state the running result of the following code. (This question comes from the book Thinking in Java.)
class Annoyance extends Exception {}
class Sneeze extends Annoyance {}

class Human {

    public static void main(String[] args) throws Exception {
        try {
            try {
                throw new Sneeze();
            } catch (Annoyance a) {
                System.out.println("Caught Annoyance");
                throw a;
            }
        } catch (Sneeze s) {
            System.out.println("Caught Sneeze");
            return;
        } finally {
            System.out.println("Hello World!");
        }
    }
}
52. Do list, set and map inherit from the collection interface?
Answer:
List and Set do; Map does not. Map is a key-value pair mapping container, which is clearly different from List and Set. Set stores unordered elements and does not allow duplicates (the same is true for sets in mathematics); List is a container with a linear structure, suitable for accessing elements by numeric index.
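A minimal sketch of these points (the class name is illustrative): the first two assignments compile because List and Set extend Collection, while a Map cannot be assigned to a Collection reference; the size comparison shows List keeping a duplicate that Set drops.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class CollectionHierarchyDemo {
    public static void main(String[] args) {
        // List and Set are sub-interfaces of Collection, so both assignments compile:
        Collection<Integer> asList = new ArrayList<>();
        Collection<Integer> asSet = new HashSet<>();
        // Map is not a Collection: "Collection<?> c = new HashMap<>();" would not compile.

        List<Integer> list = new ArrayList<>(Arrays.asList(1, 1, 2));
        Set<Integer> set = new HashSet<>(list);
        System.out.println(list.size());  // 3 -- List keeps duplicates
        System.out.println(set.size());   // 2 -- Set drops the duplicate
    }
}
```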
53. Describe the storage performance and characteristics of ArrayList, Vector and LinkedList.
Answer:
ArrayList and Vector both use an array to store data; the array is allocated larger than the number of elements actually stored, to leave room for adding and inserting. Both allow direct access to elements by index; however, inserting an element may involve moving array elements, so indexed access is fast while insertion is slow. The methods of Vector are marked synchronized, so Vector is a thread-safe container, but its performance is worse than ArrayList, and it is already a legacy container in Java. LinkedList uses a doubly linked list to store data (scattered memory units are associated through extra references to form a linear structure that can be traversed by sequence number; this linked storage has higher memory utilization than the contiguous storage of an array). Accessing data by index requires traversing forward or backward, but inserting data only requires updating the references of the neighboring items, so insertion is faster. Vector is a legacy container (provided in early versions of Java; Hashtable, Dictionary, BitSet, Stack and Properties are also legacy containers) and is not recommended. Since ArrayList and LinkedList are not thread-safe, if multiple threads operate on the same container, you can convert it into a thread-safe container through the synchronizedList method of the Collections utility class before using it (this is an application of the decorator pattern: an existing object is passed into the constructor of another class to create an enhanced new object).
Supplement: the Properties and Stack classes among the legacy containers have serious design problems. Properties is a special key-value mapping whose keys and values are both strings; by design it should be associated with a Hashtable whose two generic parameters are set to String, but Properties in the Java API directly inherits Hashtable, which is clearly an abuse of inheritance. Reusing code this way should be a has-a relationship rather than an is-a relationship. On the other hand, containers are utility classes, and inheriting a utility class is wrong practice; the best ways to use a utility class are a has-a relationship (association) or a use-a relationship (dependency). Similarly, it is not correct for the Stack class to inherit Vector. That even Sun's engineers made such basic mistakes is regrettable.
54. What is the difference between Collection and Collections?
Answer:
Collection is an interface; it is the parent interface of containers such as Set and List. Collections is a utility class that provides a series of static methods to assist container operations, including searching, sorting, thread-safe wrapping, and so on.
55. What are the characteristics of List, Map and Set when accessing elements?
Answer:
List accesses elements by a specific index and may contain duplicate elements. Set cannot store duplicate elements (the equals() method of the objects is used to decide whether elements are duplicates). Map stores key-value pair mappings; the mapping can be one-to-one or many-to-one. Both Set and Map have hash-based and sorted-tree-based implementations. The theoretical access time complexity of the hash-based versions is O(1), while the sorted-tree-based versions organize the elements (or keys, for Map) into a sorted tree on insertion and deletion, which yields sorting and deduplication.
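The two Set implementation families can be contrasted in a few lines (the class name below is mine): both deduplicate, but only the tree-based TreeSet keeps its elements sorted.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.TreeSet;

public class SetVariantsDemo {
    public static void main(String[] args) {
        List<Integer> data = Arrays.asList(5, 1, 3, 1, 5);
        Set<Integer> hashSet = new HashSet<>(data);  // hash-based: O(1) access, no order guarantee
        Set<Integer> treeSet = new TreeSet<>(data);  // tree-based: sorted on insertion
        System.out.println(hashSet.size());  // 3 -- duplicates removed
        System.out.println(treeSet);         // [1, 3, 5] -- deduplicated and sorted
    }
}
```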
56. How do TreeMap and TreeSet compare elements when sorting? How does the sort() method of the Collections utility class compare elements?
Answer:
TreeSet requires that the class of the stored objects implement the Comparable interface, which provides the compareTo() method for comparing elements; when an element is inserted, this method is called back to compare it with the existing elements. TreeMap requires that the keys of the stored key-value pairs implement the Comparable interface so that the elements can be sorted by key. The sort() method of the Collections utility class has two overloaded forms: the first requires that the objects stored in the container to be sorted implement the Comparable interface; the second does not require the container's elements to be comparable, but requires a second parameter that is a subtype of the Comparator interface (its compare() method must be implemented to compare elements), which amounts to a temporarily defined sorting rule — an algorithm for comparing element sizes injected through an interface, an application of the callback pattern (Java's support for functional programming). Example 1:
public class Student implements Comparable<Student> {
    private String name;   // name
    private int age;       // age

    public Student(String name, int age) {
        this.name = name;
        this.age = age;
    }

    @Override
    public String toString() {
        return "Student [name=" + name + ", age=" + age + "]";
    }

    @Override
    public int compareTo(Student o) {
        return this.age - o.age;   // compare by age (ascending order of age)
    }
}

import java.util.Set;
import java.util.TreeSet;

class Test01 {

    public static void main(String[] args) {
        Set<Student> set = new TreeSet<>();  // Java 7 diamond syntax (no type needed in the angle brackets after the constructor)
        set.add(new Student("Hao LUO", 33));
        set.add(new Student("XJ WANG", 32));
        set.add(new Student("Bruce LEE", 60));
        set.add(new Student("Bob YANG", 22));

        for (Student stu : set) {
            System.out.println(stu);
        }
        // Output:
        // Student [name=Bob YANG, age=22]
        // Student [name=XJ WANG, age=32]
        // Student [name=Hao LUO, age=33]
        // Student [name=Bruce LEE, age=60]
    }
}
Example 2:
public class Student {
    private String name;   // name
    private int age;       // age

    public Student(String name, int age) {
        this.name = name;
        this.age = age;
    }

    /** Get student name */
    public String getName() {
        return name;
    }

    /** Get student age */
    public int getAge() {
        return age;
    }

    @Override
    public String toString() {
        return "Student [name=" + name + ", age=" + age + "]";
    }
}

import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

class Test02 {

    public static void main(String[] args) {
        List<Student> list = new ArrayList<>();  // Java 7 diamond syntax (no type needed in the angle brackets after the constructor)
        list.add(new Student("Hao LUO", 33));
        list.add(new Student("XJ WANG", 32));
        list.add(new Student("Bruce LEE", 60));
        list.add(new Student("Bob YANG", 22));

        // Pass a Comparator object through the second parameter of the sort method.
        // This is equivalent to passing the sort method an algorithm for comparing
        // the sizes of objects. Java has no function pointers, functors or delegates,
        // so the only way to pass an algorithm into a method is a callback through
        // an interface.
        Collections.sort(list, new Comparator<Student>() {

            @Override
            public int compare(Student o1, Student o2) {
                return o1.getName().compareTo(o2.getName()); // compare student names
            }
        });

        for (Student stu : list) {
            System.out.println(stu);
        }
        // Output:
        // Student [name=Bob YANG, age=22]
        // Student [name=Bruce LEE, age=60]
        // Student [name=Hao LUO, age=33]
        // Student [name=XJ WANG, age=32]
    }
}
57. Both the sleep() method of the Thread class and the wait() method of Object can suspend a thread. What's the difference between them?
Answer:
sleep() is a static method of the Thread class. Calling it makes the current thread pause execution for the specified time and yields the CPU to other threads, but any object locks held are retained, so the thread resumes automatically when the sleep time ends (it returns to the ready state; see the thread state transition diagram in question 66). wait() is a method of the Object class. Calling an object's wait() method causes the current thread to give up that object's lock (the thread pauses execution) and enter the object's wait pool; only a call to the object's notify() (or notifyAll()) method wakes a thread in the wait pool, and the woken thread enters the lock pool. If the thread then re-acquires the object's lock, it can enter the ready state.
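The contrast can be sketched as follows (class name and the 100 ms delay are my choices, not from the original): the waiter calls wait() inside a synchronized block and thereby releases the lock, which is exactly what lets the main thread later enter its own synchronized block to call notify(); sleep(), by contrast, would keep any held locks.

```java
public class WaitNotifyDemo {
    private static final Object lock = new Object();
    private static volatile boolean woken = false;

    // returns true when the waiting thread was woken by notify() and finished
    static boolean demo() {
        Thread waiter = new Thread(() -> {
            synchronized (lock) {
                while (!woken) {           // guard against spurious wakeups
                    try {
                        lock.wait();       // releases the lock while waiting
                    } catch (InterruptedException e) {
                        return;
                    }
                }
            }
        });
        waiter.start();
        try {
            Thread.sleep(100);             // sleep() keeps any held locks (none here)
            synchronized (lock) {
                woken = true;
                lock.notify();             // moves the waiter out of the wait pool
            }
            waiter.join(1000);
        } catch (InterruptedException e) {
            return false;
        }
        return !waiter.isAlive();
    }

    public static void main(String[] args) {
        System.out.println("waiter finished: " + demo());
    }
}
```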
Addition: many people are vague about what a process is and what a thread is, and do not fully understand why multithreaded programming is needed. In short, a process is a running activity of a program with certain independent functions over a data set; it is the independent unit for resource allocation and scheduling by the operating system. A thread is an entity of a process and the basic unit of CPU scheduling and dispatch; it is a basic unit smaller than a process that can run independently. Because threads are divided at a smaller granularity than processes, multithreaded programs have high concurrency; processes usually have independent memory units when executing, while threads can share memory. Multithreaded programming can usually bring better performance and user experience, but a multithreaded program is not friendly to other programs because it may occupy more CPU resources. Of course, more threads do not always mean better performance, because scheduling and switching between threads also waste CPU time. The currently fashionable Node.js adopts a single-threaded asynchronous I/O working model.
58. What is the difference between the sleep() method and the yield() method of a thread?
Answer:
① When sleep() gives other threads a chance to run, it does not consider thread priority, so it also gives lower-priority threads a chance to run; yield() only gives threads of the same or higher priority a chance to run. ② A thread enters the blocked state after calling sleep(), but enters the ready state after calling yield(). ③ sleep() declares that it throws InterruptedException, while yield() does not declare any exception. ④ sleep() is more portable than yield() (whose behavior depends on operating system CPU scheduling).
59. After a thread enters synchronized method A of an object, can other threads enter synchronized method B of the same object?
Answer:
No. Other threads can only access the object's non-synchronized methods; its synchronized methods cannot be entered. The synchronized modifier on a non-static method requires the object's lock to be obtained when executing the method. Since the object's lock was already taken on entering method A, the thread trying to enter method B can only wait in the lock pool for the object's lock (note: the lock pool, not the wait pool).
60. Please describe the methods related to thread synchronization and thread scheduling.
Answer:
 wait(): puts a thread into the waiting (blocked) state and releases the lock on the held object;
 sleep(): a static method that puts a running thread to sleep; calling it requires handling InterruptedException;
 notify(): wakes up one thread in the waiting state; of course, calling this method cannot wake up a particular waiting thread exactly — the JVM decides which thread to wake, regardless of priority;
 notifyAll(): wakes up all threads in the waiting state; it does not give the object's lock to all of them, but lets them compete — only the thread that obtains the lock can enter the ready state.
Tip: for Java multithreading and concurrent programming, I suggest reading my other article, Summary and Thoughts on Java Concurrent Programming. Addition: Java 5 provides an explicit lock mechanism through the Lock interface to enhance flexibility and coordination between threads. The Lock interface defines methods for locking (lock()) and unlocking (unlock()), and also provides the newCondition() method to generate a Condition object for communication between threads. In addition, Java 5 provides a semaphore mechanism (Semaphore), which can be used to limit the number of threads accessing a shared resource: before accessing the resource, a thread must obtain a permit from the semaphore (by calling the semaphore object's acquire() method); after accessing the resource, the thread must return the permit to the semaphore (by calling the semaphore object's release() method).
The following example demonstrates the execution of 100 threads depositing 1 yuan into a bank account at the same time without using the synchronization mechanism and using the synchronization mechanism.
 bank accounts:
/**
 * Bank account
 * @author Luo Hao
 */
public class Account {
    private double balance;   // account balance

    /**
     * Deposit
     * @param money deposit amount
     */
    public void deposit(double money) {
        double newBalance = balance + money;
        try {
            Thread.sleep(10);   // simulate that this business takes some processing time
        } catch (InterruptedException ex) {
            ex.printStackTrace();
        }
        balance = newBalance;
    }

    /**
     * Get account balance
     */
    public double getBalance() {
        return balance;
    }
}
 deposit thread class:
/**
 * Deposit thread
 * @author Luo Hao
 */
public class AddMoneyThread implements Runnable {
    private Account account;   // account to deposit into
    private double money;      // deposit amount

    public AddMoneyThread(Account account, double money) {
        this.account = account;
        this.money = money;
    }

    @Override
    public void run() {
        account.deposit(money);
    }
}
 test class:
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class Test01 {

    public static void main(String[] args) {
        Account account = new Account();
        ExecutorService service = Executors.newFixedThreadPool(100);
        for (int i = 1; i <= 100; i++) {
            service.execute(new AddMoneyThread(account, 1));
        }

        service.shutdown();

        while (!service.isTerminated()) {}

        System.out.println("Account balance: " + account.getBalance());
    }
}
Without synchronization, the execution result usually shows an account balance of less than 10 yuan. The reason is that while thread A attempts to deposit 1 yuan, another thread B can also enter the deposit method; the balance read by thread B is still the balance before thread A's deposit, so B also adds 1 yuan to the original balance of 0. Thread C does something similar, and so on. Therefore, after all 100 threads finish, the expected balance is 100 yuan, but the actual result is usually less than 10 yuan (perhaps just 1 yuan). The solution to this problem is synchronization: while one thread deposits money into the bank account, the account is locked, and other threads may operate on it only after the operation completes. The code can be adjusted in any of the following ways:
 synchronize the deposit method of the bank account with the synchronized keyword
/**
 * Bank account
 * @author Luo Hao
 */
public class Account {
    private double balance;   // account balance

    /**
     * Deposit
     * @param money deposit amount
     */
    public synchronized void deposit(double money) {
        double newBalance = balance + money;
        try {
            Thread.sleep(10);   // simulate that this business takes some processing time
        } catch (InterruptedException ex) {
            ex.printStackTrace();
        }
        balance = newBalance;
    }

    /**
     * Get account balance
     */
    public double getBalance() {
        return balance;
    }
}
 synchronize the bank account when the thread calls the deposit method
/**
 * Deposit thread
 * @author Luo Hao
 */
public class AddMoneyThread implements Runnable {
    private Account account;   // account to deposit into
    private double money;      // deposit amount

    public AddMoneyThread(Account account, double money) {
        this.account = account;
        this.money = money;
    }

    @Override
    public void run() {
        synchronized (account) {
            account.deposit(money);
        }
    }
}
 create a lock object for each bank account through the explicit lock mechanism of Java 5, and lock and unlock around the deposit operation
import java.util.concurrent.locks.Lock; import java.util.concurrent.locks.ReentrantLock;
/**
 * Bank account
 * @author Luo Hao
 */
public class Account {
    private Lock accountLock = new ReentrantLock();
    private double balance;   // account balance

    /**
     * Deposit
     * @param money deposit amount
     */
    public void deposit(double money) {
        accountLock.lock();
        try {
            double newBalance = balance + money;
            try {
                Thread.sleep(10);   // simulate that this business takes some processing time
            } catch (InterruptedException ex) {
                ex.printStackTrace();
            }
            balance = newBalance;
        } finally {
            accountLock.unlock();
        }
    }

    /**
     * Get account balance
     */
    public double getBalance() {
        return balance;
    }
}
After modifying the code in any of the three ways above, recompile and run the test class Test01 again, and you will see that the final account balance is 100 yuan. Of course, you can also use a Semaphore or CountDownLatch to achieve synchronization.
61. How many ways to write multithreaded programs?
Answer:
Before Java 5, there were two ways to implement multithreading: one is to inherit the Thread class; the other is to implement the Runnable interface. Both define the thread's behavior by overriding the run() method. The latter is recommended, because inheritance in Java is single inheritance — a class has only one parent class, and if it inherits Thread it can no longer inherit any other class — so implementing the Runnable interface is obviously more flexible.
Addition: since Java 5 there is a third way to create threads: implement the Callable interface, whose call() method can produce a return value at the end of thread execution. The code is as follows:
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

class MyTask implements Callable<Integer> {
    private int upperBounds;

    public MyTask(int upperBounds) {
        this.upperBounds = upperBounds;
    }

    @Override
    public Integer call() throws Exception {
        int sum = 0;
        for (int i = 1; i <= upperBounds; i++) {
            sum += i;
        }
        return sum;
    }
}

class Test {

    public static void main(String[] args) throws Exception {
        List<Future<Integer>> list = new ArrayList<>();
        ExecutorService service = Executors.newFixedThreadPool(10);
        for (int i = 0; i < 10; i++) {
            list.add(service.submit(new MyTask((int) (Math.random() * 100))));
        }

        int sum = 0;
        for (Future<Integer> future : list) {
            // while(!future.isDone()) ;
            sum += future.get();
        }

        System.out.println(sum);
    }
}
62. Usage of synchronized keyword?
Answer:
The synchronized keyword can mark an object or method as synchronized to achieve mutually exclusive access to objects and methods. You can define a synchronized code block with synchronized(object) {…}, or use synchronized as a modifier when declaring a method. The usage of the synchronized keyword was shown in the examples of question 60.
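Both forms can be shown side by side in one small sketch (class and method names are illustrative): the method form locks on `this` for instance methods, and the block form names its lock object explicitly; with either, two threads incrementing a shared counter lose no updates.

```java
public class SynchronizedDemo {
    private int count = 0;

    // modifier form: for non-static methods the lock is the instance ('this')
    public synchronized void increment() {
        count++;
    }

    // block form: the lock object is chosen explicitly
    public void incrementViaBlock() {
        synchronized (this) {
            count++;
        }
    }

    public int getCount() {
        return count;
    }

    // two threads each add 1000; with synchronization no updates are lost
    static int run() {
        SynchronizedDemo demo = new SynchronizedDemo();
        Runnable task = () -> {
            for (int i = 0; i < 1000; i++) {
                demo.increment();
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        try {
            t1.join();
            t2.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return demo.getCount();
    }

    public static void main(String[] args) {
        System.out.println(run()); // 2000
    }
}
```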
63. Give examples of synchronous and asynchronous.
Answer:
If critical resources exist in the system (resources whose number is less than the number of threads competing for them) — for example, data being written that may be read by another thread, or data being read that may already have been written by another thread — such data must be accessed with synchronization (exclusive locks in database operations are a good example). When an application calls a method that takes a long time to execute on an object and does not want the program to wait for the method to return, asynchronous programming should be used; in many cases the asynchronous approach is often more efficient. In fact, so-called synchronization means blocking operations, while asynchrony means non-blocking operations.
64. Should a thread be started by calling the run() or the start() method?
Answer:
Starting a thread means calling its start() method, which puts the virtual processor represented by the thread into the runnable state. This means it can be scheduled and executed by the JVM, not that it will run immediately. The run() method is the callback invoked after the thread is started.
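A short sketch makes the difference visible (class and helper names are mine): calling run() directly is just an ordinary method call executed in the current thread, while start() causes run() to be called back in a newly created thread.

```java
public class StartVsRunDemo {

    // returns the name of the thread that actually executed the task body
    static String runnerName(boolean useStart) {
        final String[] name = new String[1];
        Thread t = new Thread(() -> name[0] = Thread.currentThread().getName());
        if (useStart) {
            t.start();                 // run() is called back in the new thread
            try {
                t.join();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        } else {
            t.run();                   // plain method call: executes in the current thread
        }
        return name[0];
    }

    public static void main(String[] args) {
        System.out.println(runnerName(false)); // the current thread's name -- no new thread started
        System.out.println(runnerName(true));  // a new thread's name, e.g. Thread-1
    }
}
```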
65. What is a thread pool?
Answer:
In object-oriented programming, creating and destroying objects takes time: creating an object consumes memory and possibly other resources. This is especially true in Java, where the virtual machine tracks every object so that it can be garbage-collected after it is destroyed. Therefore, one way to improve the efficiency of a service program is to reduce the number of objects created and destroyed as much as possible, particularly resource-heavy objects — this is the reason for the emergence of "pooled resource" techniques. As the name suggests, a thread pool creates a number of executable threads in advance and puts them into a pool (container); when needed, threads are obtained from the pool instead of being created, and after use they are returned to the pool rather than destroyed, thereby reducing the cost of creating and destroying thread objects. The Executor interface in Java 5+ defines a tool for executing threads, and its subtype ExecutorService is the thread pool interface. Configuring a thread pool directly is complex, especially when its principles are not well understood, so the utility class Executors provides static factory methods that produce some common thread pools, as listed below:
 newSingleThreadExecutor: creates a single-threaded pool. This pool has only one working thread, which is equivalent to executing all tasks serially in a single thread. If the only thread ends abnormally, a new thread replaces it. This pool guarantees that all tasks are executed in the order they were submitted.
 newFixedThreadPool: creates a fixed-size thread pool. Each time a task is submitted a thread is created, until the pool reaches its maximum size; once it does, the size remains constant. If a thread ends because of an execution exception, the pool supplements a new thread.
 newCachedThreadPool: creates a cacheable thread pool. If the pool size exceeds what is needed to process the tasks, idle threads (not executing tasks for 60 seconds) are reclaimed; when the number of tasks increases, the pool intelligently adds new threads to process them. This pool does not limit its own size; the maximum depends entirely on the largest number of threads the operating system (or JVM) can create.
 newScheduledThreadPool: creates a thread pool of unlimited size. This pool supports the need to execute tasks on a schedule and periodically.
 newSingleThreadScheduledExecutor: creates a single-threaded pool that supports the need to execute tasks on a schedule and periodically.
The example in question 60 demonstrated creating a thread pool through the Executors utility class and using it to execute threads. If you want to use a thread pool on a server, it is strongly recommended to create it with the newFixedThreadPool method for better performance.
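A small sketch of two of these factory methods (class and helper names are mine): the same helper submits tasks to a fixed-size pool and to a single-thread pool; the pool type only changes how many workers run concurrently, not the number of tasks completed.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class PoolFactoryDemo {

    // submits 'tasks' small jobs to the pool and returns how many completed
    static int runTasks(ExecutorService pool, int tasks) {
        AtomicInteger done = new AtomicInteger();
        for (int i = 0; i < tasks; i++) {
            pool.execute(done::incrementAndGet);
        }
        pool.shutdown();
        try {
            pool.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return done.get();
    }

    public static void main(String[] args) {
        // fixed-size pool: at most 4 worker threads
        System.out.println(runTasks(Executors.newFixedThreadPool(4), 100));    // 100
        // single-thread pool: tasks run serially in submission order
        System.out.println(runTasks(Executors.newSingleThreadExecutor(), 10)); // 10
    }
}
```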
66. What are the basic states of a thread, and what are the relationships between them?
Answer:
Note: Running means the running state; Runnable means the ready state (everything is ready, lacking only the CPU); Blocked means the blocked state, which covers several situations: the thread may have called wait() and entered the wait pool, may have hit a synchronized method or block and entered the lock pool, or may be sleeping or waiting for another thread to finish because sleep() or join() was called, or an I/O interrupt may have occurred.
67. Briefly describe the similarities and differences between synchronized and java.util.concurrent.locks.Lock.
Answer:
Lock is a new API introduced in Java 5. Compared with the synchronized keyword, the main similarity is that Lock can accomplish all the functionality that synchronized provides. The main differences are that Lock has more precise thread semantics and better performance than synchronized, and that acquiring the lock need not block (tryLock()). synchronized releases its lock automatically, while a Lock must be released manually by the programmer, preferably in a finally block (the best place to release external resources).
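These differences can be sketched in a few lines (class name is mine): lock() must be paired with a manual unlock() in finally, and tryLock() offers the non-blocking acquisition that synchronized cannot express.

```java
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class LockDemo {
    private final Lock lock = new ReentrantLock();
    private int balance = 0;

    public void deposit(int amount) {
        lock.lock();              // must be released manually, unlike synchronized
        try {
            balance += amount;
        } finally {
            lock.unlock();        // always release in finally
        }
    }

    public boolean tryDeposit(int amount) {
        // tryLock(): non-blocking acquisition, not possible with synchronized
        if (lock.tryLock()) {
            try {
                balance += amount;
                return true;
            } finally {
                lock.unlock();
            }
        }
        return false;
    }

    public int getBalance() {
        return balance;
    }

    public static void main(String[] args) {
        LockDemo account = new LockDemo();
        account.deposit(100);
        account.tryDeposit(50);
        System.out.println(account.getBalance()); // 150
    }
}
```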
68. How is serialization implemented in Java, and what is its significance?
Answer:
Serialization is a mechanism for handling object streams — turning the content of an object into a stream of bytes. A streamed object can be read, written, and transferred across the network. Serialization solves the problems that may arise when reading and writing object data (without serialization, the data might be read back inconsistently). To support serialization, a class needs to implement the Serializable interface, a marker interface indicating that objects of the class can be serialized; then construct an ObjectOutputStream over an output stream and write the object out (i.e. save its state) with the writeObject(Object) method. To deserialize, create an ObjectInputStream over an input stream and read the object back with the readObject() method. Besides persisting objects, serialization can also be used for deep cloning of objects (see question 29).
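The round trip just described can be sketched as follows (class and method names are mine; in-memory byte-array streams stand in for a file or socket): the nested Student implements the Serializable marker interface, writeObject() saves its state, and readObject() restores an equivalent object.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class SerializationDemo {

    // the class must implement the Serializable marker interface
    static class Student implements Serializable {
        private static final long serialVersionUID = 1L;
        String name;
        int age;

        Student(String name, int age) {
            this.name = name;
            this.age = age;
        }
    }

    // serialize, then deserialize, returning the reconstructed copy
    static Student roundTrip(Student s) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
                out.writeObject(s);                 // save the object's state
            }
            try (ObjectInputStream in = new ObjectInputStream(
                    new ByteArrayInputStream(bytes.toByteArray()))) {
                return (Student) in.readObject();   // restore the state
            }
        } catch (IOException | ClassNotFoundException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        Student copy = roundTrip(new Student("Hao LUO", 33));
        System.out.println(copy.name + " " + copy.age); // Hao LUO 33
    }
}
```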
69. How many types of streams are there in Java?
Answer:
Byte streams and character streams. Byte streams inherit from InputStream and OutputStream; character streams inherit from Reader and Writer. There are many other streams in the java.io package, mainly to improve performance and ease of use. Two points are worth noting about Java I/O: first, two kinds of symmetry (input/output symmetry and byte/character symmetry); second, two design patterns (the adapter pattern and the decorator pattern). In addition, streams in Java differ from those in C# in that they have only one dimension and one direction.
Interview question: write code to copy a file. (This question often appears in written exams; the following code gives two implementations.)
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

public final class MyUtil {

    private MyUtil() {
        throw new AssertionError();
    }

    public static void fileCopy(String source, String target) throws IOException {
        try (InputStream in = new FileInputStream(source)) {
            try (OutputStream out = new FileOutputStream(target)) {
                byte[] buffer = new byte[4096];
                int bytesToRead;
                while ((bytesToRead = in.read(buffer)) != -1) {
                    out.write(buffer, 0, bytesToRead);
                }
            }
        }
    }

    public static void fileCopyNIO(String source, String target) throws IOException {
        try (FileInputStream in = new FileInputStream(source)) {
            try (FileOutputStream out = new FileOutputStream(target)) {
                FileChannel inChannel = in.getChannel();
                FileChannel outChannel = out.getChannel();
                ByteBuffer buffer = ByteBuffer.allocate(4096);
                while (inChannel.read(buffer) != -1) {
                    buffer.flip();
                    outChannel.write(buffer);
                    buffer.clear();
                }
            }
        }
    }
}
Note: the code above uses Java 7's TWR (try-with-resources). With TWR there is no need to release external resources in finally, which makes the code more elegant.
70. Write a method that takes a file name and a string, and counts the number of occurrences of the string in that file.
Answer:
The code is as follows:
import java.io.BufferedReader;
import java.io.FileReader;

public final class MyUtil {

    // The methods in this utility class are all static, so the constructor is
    // private and object creation is not allowed (a good habit for tool classes)
    private MyUtil() {
        throw new AssertionError();
    }

    /**
     * Counts the number of occurrences of a given string in a given file.
     * @param filename the file name
     * @param word the string to count
     * @return the number of times the string appears in the file
     */
    public static int countWordInFile(String filename, String word) {
        int counter = 0;
        try (FileReader fr = new FileReader(filename)) {
            try (BufferedReader br = new BufferedReader(fr)) {
                String line = null;
                while ((line = br.readLine()) != null) {
                    int index = -1;
                    while (line.length() >= word.length()
                            && (index = line.indexOf(word)) >= 0) {
                        counter++;
                        line = line.substring(index + word.length());
                    }
                }
            }
        } catch (Exception ex) {
            ex.printStackTrace();
        }
        return counter;
    }
}
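The inner scanning loop is the heart of the method and can be exercised on its own. Below is a minimal sketch of the same idea applied to a single line (the class and method names are made up for the demo):

```java
public class CountDemo {

    // Same scanning idea as countWordInFile, applied to one line:
    // repeatedly locate the word and cut the line after each match.
    static int countInLine(String line, String word) {
        int counter = 0;
        int index;
        while (line.length() >= word.length() && (index = line.indexOf(word)) >= 0) {
            counter++;
            line = line.substring(index + word.length());
        }
        return counter;
    }

    public static void main(String[] args) {
        System.out.println(countInLine("abcababc", "ab")); // 3
    }
}
```

Note that overlapping occurrences are not counted ("aaa" contains "aa" once by this method), which is a consequence of the substring-cutting strategy.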
71. How to list all files in a directory with Java code?
Answer:
If only the files under the current folder are required to be listed, the code is as follows:
import java.io.File;
class Test12 {
public static void main(String[] args) {
    File f = new File("/Users/Hao/Downloads");
    for (File temp : f.listFiles()) {
        if (temp.isFile()) {
            System.out.println(temp.getName());
        }
    }
}
}
If subfolders need to be expanded recursively as well, the code is as follows:
import java.io.File;
class Test12 {
public static void main(String[] args) {
    showDirectory(new File("/Users/Hao/Downloads"));
}

public static void showDirectory(File f) {
    _walkDirectory(f, 0);
}

private static void _walkDirectory(File f, int level) {
    if (f.isDirectory()) {
        for (File temp : f.listFiles()) {
            _walkDirectory(temp, level + 1);
        }
    } else {
        for (int i = 0; i < level - 1; i++) {
            System.out.print("\t");
        }
        System.out.println(f.getName());
    }
}
}
In Java 7, the NIO.2 API can be used to do the same thing. The code is as follows:
import java.io.IOException;
import java.nio.file.FileVisitResult;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.SimpleFileVisitor;
import java.nio.file.attribute.BasicFileAttributes;

class ShowFileTest {

    public static void main(String[] args) throws IOException {
        Path initPath = Paths.get("/Users/Hao/Downloads");
        Files.walkFileTree(initPath, new SimpleFileVisitor<Path>() {
            @Override
            public FileVisitResult visitFile(Path file, BasicFileAttributes attrs)
                    throws IOException {
                System.out.println(file.getFileName().toString());
                return FileVisitResult.CONTINUE;
            }
        });
    }
}
72. Implement a multithreaded echo server with Java socket programming.
Answer:
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class EchoServer {

    private static final int ECHO_SERVER_PORT = 6789;

    public static void main(String[] args) {
        try (ServerSocket server = new ServerSocket(ECHO_SERVER_PORT)) {
            System.out.println("The server has started...");
            while (true) {
                Socket client = server.accept();
                new Thread(new ClientHandler(client)).start();
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    private static class ClientHandler implements Runnable {
        private Socket client;

        public ClientHandler(Socket client) {
            this.client = client;
        }

        @Override
        public void run() {
            try (BufferedReader br = new BufferedReader(
                         new InputStreamReader(client.getInputStream()));
                 PrintWriter pw = new PrintWriter(client.getOutputStream())) {
                String msg = br.readLine();
                System.out.println("Received from " + client.getInetAddress() + ": " + msg);
                pw.println(msg);
                pw.flush();
            } catch (Exception ex) {
                ex.printStackTrace();
            } finally {
                try {
                    client.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }
    }
}
Note: the code above uses the TWR (try-with-resources) syntax of Java 7. Since many external resource classes directly or indirectly implement the AutoCloseable interface (a single-method callback interface), the TWR syntax can automatically call back the close() method of the external resource class at the end of the try, avoiding lengthy finally blocks. In addition, the code above uses a static inner class to implement the thread logic. Using multiple threads prevents one user's I/O operations from blocking other users' access to the server; in short, one user's input will not hold up the others. Of course, the code above would get better performance by using a thread pool, because the overhead of frequently creating and destroying threads cannot be ignored.
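The thread-pool variant mentioned above can be sketched without any socket code: the point is simply that handlers are submitted to a reusable fixed pool instead of `new Thread(...).start()` per client. The names below are made up for the demo, and a counter stands in for the real ClientHandler:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class PoolSketch {

    // Runs the given number of fake "client handlers" on a fixed pool
    // and returns how many were handled; threads are reused, not created per task.
    static int handleClients(int clients) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        AtomicInteger handled = new AtomicInteger();
        for (int i = 0; i < clients; i++) {
            pool.submit(handled::incrementAndGet); // stands in for ClientHandler
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return handled.get();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(handleClients(10)); // 10
    }
}
```

In the real server, `pool.submit(new ClientHandler(client))` would replace the per-client thread creation.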
The following is an echo client test code:
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;
import java.util.Scanner;

public class EchoClient {

    public static void main(String[] args) throws Exception {
        Socket client = new Socket("localhost", 6789);
        Scanner sc = new Scanner(System.in);
        System.out.print("Please enter: ");
        String msg = sc.nextLine();
        sc.close();
        PrintWriter pw = new PrintWriter(client.getOutputStream());
        pw.println(msg);
        pw.flush();
        BufferedReader br = new BufferedReader(
                new InputStreamReader(client.getInputStream()));
        System.out.println(br.readLine());
        client.close();
    }
}
If you want to implement the server with NIO multiplexed sockets, the code is as follows. Although NIO operations bring better performance, some of the operations are relatively low-level and difficult for beginners to understand.
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class EchoServerNIO {

    private static final int ECHO_SERVER_PORT = 6789;
    private static final int ECHO_SERVER_TIMEOUT = 5000;
    private static final int BUFFER_SIZE = 1024;

    private static ServerSocketChannel serverChannel = null;
    private static Selector selector = null;   // multiplexer
    private static ByteBuffer buffer = null;   // buffer

    public static void main(String[] args) {
        init();
        listen();
    }

    private static void init() {
        try {
            serverChannel = ServerSocketChannel.open();
            buffer = ByteBuffer.allocate(BUFFER_SIZE);
            serverChannel.socket().bind(new InetSocketAddress(ECHO_SERVER_PORT));
            serverChannel.configureBlocking(false);
            selector = Selector.open();
            serverChannel.register(selector, SelectionKey.OP_ACCEPT);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    private static void listen() {
        while (true) {
            try {
                if (selector.select(ECHO_SERVER_TIMEOUT) != 0) {
                    Iterator<SelectionKey> it = selector.selectedKeys().iterator();
                    while (it.hasNext()) {
                        SelectionKey key = it.next();
                        it.remove();
                        handleKey(key);
                    }
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }

    private static void handleKey(SelectionKey key) throws IOException {
        SocketChannel channel = null;
        try {
            if (key.isAcceptable()) {
                ServerSocketChannel serverChannel = (ServerSocketChannel) key.channel();
                channel = serverChannel.accept();
                channel.configureBlocking(false);
                channel.register(selector, SelectionKey.OP_READ);
            } else if (key.isReadable()) {
                channel = (SocketChannel) key.channel();
                buffer.clear();
                if (channel.read(buffer) > 0) {
                    buffer.flip();
                    CharBuffer charBuffer = CharsetHelper.decode(buffer);
                    String msg = charBuffer.toString();
                    System.out.println("Message received from "
                            + channel.getRemoteAddress() + ": " + msg);
                    channel.write(CharsetHelper.encode(CharBuffer.wrap(msg)));
                } else {
                    channel.close();
                }
            }
        } catch (Exception e) {
            e.printStackTrace();
            if (channel != null) {
                channel.close();
            }
        }
    }
}
import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.CharacterCodingException;
import java.nio.charset.Charset;
import java.nio.charset.CharsetDecoder;
import java.nio.charset.CharsetEncoder;

public final class CharsetHelper {

    private static final String UTF_8 = "UTF-8";

    private static CharsetEncoder encoder = Charset.forName(UTF_8).newEncoder();
    private static CharsetDecoder decoder = Charset.forName(UTF_8).newDecoder();

    private CharsetHelper() {
    }

    public static ByteBuffer encode(CharBuffer in) throws CharacterCodingException {
        return encoder.encode(in);
    }

    public static CharBuffer decode(ByteBuffer in) throws CharacterCodingException {
        return decoder.decode(in);
    }
}
73. How many forms of XML document definition are there? What is the essential difference between them? What are the ways to parse XML documents?
Answer:
XML document definition comes in two forms: DTD and Schema. Both constrain XML syntax. The essential difference is that a Schema is itself an XML file, can be parsed by an XML parser, and can define types for the data carried by the XML, so its constraint capability is more powerful than DTD's. XML parsing mainly includes DOM (Document Object Model), SAX (Simple API for XML), and StAX (Streaming API for XML, a new way of parsing XML introduced in Java 6).

DOM performance degrades severely when processing large files. This is caused by the large memory footprint of the DOM tree structure, and DOM must load the whole document into memory before parsing, so it is suitable for random access to XML (a typical space-for-time strategy). SAX is an event-driven way of parsing XML: it reads an XML file sequentially without loading the whole file at once, and when it encounters the start of the document, the end of the document, or the start and end of a tag, it triggers an event; the user processes the XML through event-callback code, so SAX is suitable for sequential access to XML. As the name suggests, StAX focuses on streams. The essential difference between StAX and the other parsing methods is that the application can process XML as an event stream. Treating XML as a set of events is not novel (SAX does that too); the difference is that StAX lets application code pull these events out one by one, instead of providing a handler that receives events from the parser at the parser's convenience.
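A minimal DOM sketch (using the JDK's built-in `javax.xml.parsers`; the XML content and class name are invented for the demo) shows the "whole document in memory" model — the tree must be fully built before any element is accessed:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class DomDemo {

    // Parses the XML into an in-memory tree and counts <user> elements.
    static int countUsers(String xml) throws Exception {
        DocumentBuilder builder =
                DocumentBuilderFactory.newInstance().newDocumentBuilder();
        // DOM loads the whole document into memory before any access (space for time)
        Document doc = builder.parse(
                new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        return doc.getElementsByTagName("user").getLength();
    }

    public static void main(String[] args) throws Exception {
        String xml = "<users><user>Tom</user><user>Jerry</user></users>";
        System.out.println(countUsers(xml)); // 2
    }
}
```

With SAX or StAX, by contrast, the same count could be maintained while streaming through the document, without ever holding the whole tree in memory.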
74. Where did you use XML in the project?
Answer:
XML has two main uses: data exchange and information configuration. For data exchange, XML wraps the data in tags; the data is then compressed, packaged, encrypted, and transmitted to the receiver over the network, and after receiving, decrypting, and decompressing, the receiver restores the relevant information from the XML for processing. XML used to be the de facto standard for data exchange between heterogeneous systems, but this role has largely been taken over by JSON (JavaScript Object Notation). Many software systems still use XML to store configuration information; in many projects we avoid hard coding by putting configuration information in XML files. Many Java frameworks do the same, and these frameworks often choose dom4j as the tool for processing XML, because Sun's official API is not very easy to use.

Add: many popular tools (such as Sublime) have begun to write configuration files in JSON format, so there is a strong sense that this other use of XML will also gradually be abandoned by the industry.
75. Describe the steps of JDBC operating the database.
Answer:
The following code takes connecting to a local Oracle database as an example to demonstrate the steps of operating a database with JDBC.

1. Load the driver.

Class.forName("oracle.jdbc.driver.OracleDriver");

2. Create the connection.

Connection con = DriverManager.getConnection(
        "jdbc:oracle:thin:@localhost:1521:orcl", "scott", "tiger");

3. Create the statement.

PreparedStatement ps = con.prepareStatement(
        "select * from emp where sal between ? and ?");
ps.setInt(1, 1000);
ps.setInt(2, 3000);

4. Execute the statement.

ResultSet rs = ps.executeQuery();

5. Process the results.

while (rs.next()) {
    System.out.println(rs.getInt("empno") + " - " + rs.getString("ename"));
}

6. Close resources.

finally {
    if (con != null) {
        try {
            con.close();
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }
}
Tip: the order of closing external resources should be the opposite of the order of opening: close the ResultSet first, then the Statement, then the Connection. The code above closes only the Connection; although the Statement created on the Connection and any open cursors will normally also be closed when the Connection is closed, this cannot be guaranteed in every case, so each should be closed separately in the order just mentioned. In addition, the first step of loading the driver can be omitted from JDBC 4.0 on (the driver is loaded automatically from the classpath), but we recommend keeping it.
76. What is the difference between Statement and PreparedStatement? Which has better performance?
Answer:
Compared with Statement: ① the PreparedStatement interface represents a precompiled statement; its main advantages are fewer SQL compilation errors and better SQL security (a reduced chance of SQL injection attacks); ② SQL statements in a PreparedStatement can take parameters, which avoids the trouble and insecurity of splicing SQL statements together with string concatenation; ③ when batch-processing SQL or executing the same query frequently, PreparedStatement has an obvious performance advantage, because the database can cache the compiled and optimized SQL statement, so the next execution of a statement with the same structure is very fast (there is no need to compile it and generate the execution plan again).

Supplement: to support calls to stored procedures, the JDBC API also provides the CallableStatement interface. A stored procedure is a set of SQL statements in the database that completes a specific function; it is compiled and stored in the database, and users execute it by specifying its name and supplying parameters (if it has any). Although calling stored procedures brings benefits in network overhead, security, and performance, it causes a lot of trouble if the underlying database has to be migrated, because stored procedure syntax differs considerably between databases.
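The SQL-injection point can be illustrated without a database: with string concatenation the user input becomes part of the SQL text itself, while a PreparedStatement keeps a ? placeholder and sends the value separately. The helper name and table below are invented for the demo:

```java
public class InjectionDemo {

    // Statement-style query building: user input is spliced into the SQL text.
    static String unsafeQuery(String name) {
        return "select * from tb_user where name = '" + name + "'";
    }

    public static void main(String[] args) {
        // A malicious "name" rewrites the structure of the statement itself
        System.out.println(unsafeQuery("x' or '1'='1"));
        // prints: select * from tb_user where name = 'x' or '1'='1'

        // With PreparedStatement the structure is fixed at prepare time:
        // PreparedStatement ps = con.prepareStatement(
        //         "select * from tb_user where name = ?");
        // ps.setString(1, name); // the value can never change the SQL structure
    }
}
```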
77. When using JDBC to operate the database, how can you improve the performance of reading data? How can you improve the performance of updating data?
Answer:
To improve the performance of reading data, specify the number of records fetched at a time through the setFetchSize() method of the result set object (a typical space-for-time strategy); to improve the performance of updating data, use PreparedStatement's addBatch() and executeBatch() to send several SQL statements to the database in a single batch.
78. What is the role of connection pool in database programming?
Answer:
Since creating and releasing connections has great overhead (especially when the database server is not local: every connection establishment requires a three-way TCP handshake and every release a four-way handshake, overhead that cannot be ignored), several connections can be created in advance and placed in a connection pool to improve the performance of database access. When a connection is needed it is taken directly from the pool, and when the work is done it is returned to the pool instead of being closed, avoiding the overhead of frequently creating and releasing connections. This is a typical space-for-time strategy (space is spent storing connections, but the time to create and release them is saved). Pooling is very common in Java development; the same applies to creating thread pools when using threads. Java-based open source database connection pools include C3P0, Proxool, DBCP, BoneCP, Druid, and others.

Supplement: in computer systems, time and space are an irreconcilable contradiction, and understanding this is very important for designing algorithms that meet performance requirements. A key to the performance optimization of large websites is caching, which is very similar in principle to the connection pool above: it is also a space-for-time strategy. Hotspot data is placed in the cache, and when users query this data they get it directly from the cache, which is faster than querying the database. Of course, the cache's replacement policy also has an important impact on system performance; a discussion of that is beyond the scope of this answer.
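The borrow/return idea behind a connection pool can be sketched with a plain blocking queue. This is a toy illustration, not a production pool (no validation, timeouts, or growth), and all names are invented for the demo; StringBuilder stands in for a Connection:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.function.Supplier;

public class TinyPool<T> {

    private final BlockingQueue<T> idle;

    // Objects are created up front, then borrowed and returned,
    // instead of being created and destroyed for every use.
    public TinyPool(int size, Supplier<T> factory) {
        idle = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) {
            idle.offer(factory.get());
        }
    }

    public T borrow() throws InterruptedException {
        return idle.take(); // blocks when the pool is exhausted
    }

    public void giveBack(T obj) {
        idle.offer(obj);
    }

    public static void main(String[] args) throws Exception {
        TinyPool<StringBuilder> pool = new TinyPool<>(1, StringBuilder::new);
        StringBuilder sb = pool.borrow();
        sb.append("used");
        pool.giveBack(sb);                 // returned to the pool, not destroyed
        System.out.println(pool.borrow()); // used  (the same pooled object comes back)
    }
}
```

Real pools such as Druid or DBCP add connection validation, idle-time eviction, and sizing on top of this basic borrow/return cycle.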
79. What is the DAO pattern?
Answer:
As its name implies, a DAO (Data Access Object) is an object that provides an abstract interface to a database or other persistence mechanism. It provides various data access operations without exposing the implementation details of the underlying persistence scheme. In actual development, all access operations to the data source should be abstracted and encapsulated behind a public API. In programming-language terms, this means establishing an interface that defines all the transactional methods the application will use; when the application needs to interact with the data source, it uses this interface, and a separate class is written to implement it, logically corresponding to a specific data store. The DAO pattern actually comprises two patterns: data accessor and data object. The former solves the problem of how to access data, while the latter solves the problem of how to encapsulate data with objects.
80. What is the ACID of a transaction?
Answer:
- Atomicity: all operations in a transaction either all happen or none happen; the failure of any operation causes the whole transaction to fail.
- Consistency: the system state is consistent after the transaction ends.
- Isolation: concurrent transactions cannot see each other's intermediate states.
- Durability: all changes made by a completed transaction are persisted, even in case of catastrophic failure; logs and synchronous backups can rebuild the data after a failure.
Add: transactions are asked about very frequently in interviews, and there are many possible questions. The first thing to know is that transactions are only needed when there is concurrent data access. When multiple transactions access the same data, five types of problems may occur: three types of data-reading problems (dirty read, non-repeatable read, and phantom read) and two types of data-update problems (type 1 lost update and type 2 lost update).
Dirty read: transaction A reads the uncommitted data of transaction B and operates on it. If transaction B rolls back, the data read by transaction A is dirty data.

Time | Transfer transaction A                    | Withdrawal transaction B
T1   | Start transaction                         |
T2   |                                           | Start transaction
T3   |                                           | Query balance: 1000 yuan
T4   |                                           | Withdraw 500 yuan; balance becomes 500 yuan
T5   | Query balance: 500 yuan (dirty read)      |
T6   |                                           | Roll back; balance restored to 1000 yuan
T7   | Remit 100 yuan; balance becomes 600 yuan  |
T8   | Commit transaction                        |
Non-repeatable read: transaction A re-reads previously read data and finds that the data has been modified by another, committed transaction B.

Time | Transfer transaction A                         | Withdrawal transaction B
T1   | Start transaction                              |
T2   |                                                | Start transaction
T3   | Query balance: 1000 yuan                       |
T4   |                                                | Query balance: 1000 yuan
T5   |                                                | Withdraw 100 yuan; balance becomes 900 yuan
T6   |                                                | Commit transaction
T7   | Query balance: 900 yuan (non-repeatable read)  |
Phantom read: transaction A re-executes a query that returns a set of rows satisfying the query condition and finds rows inserted and committed by transaction B.

Time | Statistics transaction A                                | Transfer transaction B
T1   | Start transaction                                       |
T2   |                                                         | Start transaction
T3   | Total deposits counted: 10000 yuan                      |
T4   |                                                         | Add a deposit account with 100 yuan
T5   |                                                         | Commit transaction
T6   | Total deposits re-counted: 10100 yuan (phantom read)    |
Type 1 lost update: when transaction A rolls back, it overwrites the updated data of the committed transaction B.

Time | Withdrawal transaction A                   | Transfer transaction B
T1   | Start transaction                          |
T2   |                                            | Start transaction
T3   | Query balance: 1000 yuan                   |
T4   |                                            | Query balance: 1000 yuan
T5   |                                            | Remit 100 yuan; balance becomes 1100 yuan
T6   |                                            | Commit transaction
T7   | Withdraw 100 yuan; balance becomes 900 yuan |
T8   | Roll back transaction                      |
T9   | Balance restored to 1000 yuan (lost update) |
Type 2 lost update: transaction A overwrites the data committed by transaction B, so the work done by transaction B is lost.

Time | Transfer transaction A                      | Withdrawal transaction B
T1   | Start transaction                           |
T2   |                                             | Start transaction
T3   | Query balance: 1000 yuan                    |
T4   |                                             | Query balance: 1000 yuan
T5   |                                             | Withdraw 100 yuan; balance becomes 900 yuan
T6   |                                             | Commit transaction
T7   | Remit 100 yuan; balance becomes 1100 yuan   |
T8   | Commit transaction                          |
T9   | Query balance: 1100 yuan (lost update)      |
Problems caused by concurrent data access may be tolerable in some scenarios and fatal in others. Databases usually solve concurrent-access problems through locking. By locked object, locks can be divided into table-level locks and row-level locks; by the relationship between concurrent transactions, into shared locks and exclusive locks (refer to the literature for details). Using locks directly is very troublesome, so the database provides an automatic locking mechanism: as long as the user specifies the session's transaction isolation level, the database analyzes the SQL statements and applies appropriate locks to the resources the transactions access. The database also maintains these locks and improves system performance through various means, all transparent to the user. The ANSI/ISO SQL 92 standard defines four levels of transaction isolation, as shown in the following table:
Isolation level  | Dirty read  | Non-repeatable read | Phantom read | Type 1 lost update | Type 2 lost update
READ UNCOMMITTED | allowed     | allowed             | allowed      | not allowed        | allowed
READ COMMITTED   | not allowed | allowed             | allowed      | not allowed        | allowed
REPEATABLE READ  | not allowed | not allowed         | allowed      | not allowed        | not allowed
SERIALIZABLE     | not allowed | not allowed         | not allowed  | not allowed        | not allowed
Note that the transaction isolation level and the concurrency of data access are at odds: the higher the isolation level, the worse the concurrency. There is no universal rule; the appropriate isolation level must be determined for each specific application.
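In JDBC the four ANSI isolation levels correspond to constants on java.sql.Connection and are selected with setTransactionIsolation() before the transactional work begins; the constants are standard bit-flag values:

```java
import java.sql.Connection;

public class IsolationLevels {
    public static void main(String[] args) {
        // In real code: con.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);
        System.out.println(Connection.TRANSACTION_READ_UNCOMMITTED); // 1
        System.out.println(Connection.TRANSACTION_READ_COMMITTED);   // 2
        System.out.println(Connection.TRANSACTION_REPEATABLE_READ);  // 4
        System.out.println(Connection.TRANSACTION_SERIALIZABLE);     // 8
    }
}
```

Note that a driver is not obliged to support every level; the JDBC API allows a driver to substitute a higher (stricter) level than the one requested.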
81. How to process transactions in JDBC?
Answer:
Connection provides methods for transaction handling: calling setAutoCommit(false) enables manual transaction commit; when the transaction completes, commit it explicitly with commit(); if an exception occurs during the transaction, roll it back with rollback(). In addition, JDBC 3.0 introduced the concept of savepoints, allowing code to set a Savepoint and roll the transaction back to that specified savepoint.
82. Can JDBC handle BLOBs and CLOBs?
Answer:
BLOB stands for binary large object and CLOB for character large object: BLOB is designed to store large binary data and CLOB to store large text data. JDBC's PreparedStatement and ResultSet both provide corresponding methods to support BLOB and CLOB operations. The following code shows how to operate on LOBs with JDBC. Taking a MySQL database as an example, create a user table with three fields — number (id), name, and photo — with the following table-creation statement:
create table tb_user ( id int primary key auto_increment, name varchar(20) unique not null, photo longblob );
The following Java code inserts a record into the database:
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

class JdbcLobTest {

    public static void main(String[] args) {
        Connection con = null;
        try {
            // 1. Load the driver (can be omitted in JDBC 4.0 and above)
            Class.forName("com.mysql.jdbc.Driver");
            // 2. Establish the connection
            con = DriverManager.getConnection(
                    "jdbc:mysql://localhost:3306/test", "root", "123456");
            // 3. Create the statement object
            PreparedStatement ps = con.prepareStatement(
                    "insert into tb_user values (default, ?, ?)");
            ps.setString(1, "Luo Hao"); // bind the first placeholder to a string
            try (InputStream in = new FileInputStream("test.jpg")) { // Java 7 TWR
                // bind the second placeholder to a binary stream
                ps.setBinaryStream(2, in);
                // 4. Send the SQL statement and get the number of affected rows
                System.out.println(ps.executeUpdate() == 1
                        ? "Insert succeeded" : "Insert failed");
            } catch (IOException e) {
                System.out.println("Failed to read the photo!");
            }
        } catch (ClassNotFoundException | SQLException e) { // multi-catch
            e.printStackTrace();
        } finally {
            // code releasing external resources belongs in finally so it always runs
            try {
                if (con != null && !con.isClosed()) {
                    con.close(); // 5. Release the database connection
                    con = null;  // hint that the garbage collector may reclaim the object
                }
            } catch (SQLException e) {
                e.printStackTrace();
            }
        }
    }
}
83. Briefly describe regular expressions and their uses.
Answer:
When writing programs to process strings, there is often a need to find strings that meet some complex rules. Regular expressions are tools for describing these rules. In other words, regular expressions are code that records text rules.
Note: in the early days of computing, almost all the information computers processed was numeric, but times have changed: today computers more often process strings than numbers. Regular expressions are the most powerful tool for string matching and processing, and most languages provide support for them.
84. How does Java support regular expression operations?
Answer:
The String class in Java provides methods that support regular expression operations, including matches(), replaceAll(), replaceFirst(), and split(). In addition, the Pattern class represents regular expression objects in Java and provides a rich API for various regular expression operations; see the code for the interview question below.
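A quick check of those String methods (the sample strings are invented for the demo):

```java
public class RegexDemo {
    public static void main(String[] args) {
        String s = "tel: 010-12345678";
        // matches() requires the whole string to match the pattern
        System.out.println(s.matches(".*\\d{3}-\\d{8}")); // true
        // replaceAll() replaces every match of the pattern
        System.out.println(s.replaceAll("\\d", "*"));     // tel: ***-********
        // split() cuts the string on every match of the pattern
        System.out.println("a,b;c".split("[,;]").length); // 3
    }
}
```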
Interview question: if you want to extract, from a string such as "Beijing (Chaoyang District)(Xicheng District)(Haidian District)", the part before the first left parenthesis — so that the result is "Beijing" — how do you write the regular expression?
import java.util.regex.Matcher;
import java.util.regex.Pattern;

class RegExpTest {

    public static void main(String[] args) {
        String str = "Beijing (Chaoyang District)(Xicheng District)(Haidian District)";
        Pattern p = Pattern.compile(".*?(?=\\()");
        Matcher m = p.matcher(str);
        if (m.find()) {
            System.out.println(m.group());
        }
    }
}
Note: the regular expression above uses lazy matching and lookahead. If you are not familiar with these, it is recommended to read the well-known "30-minute introductory tutorial on regular expressions" online.
85. What are the ways to obtain the Class object of a class?
Answer:
- Method 1: the type literal, e.g. String.class
- Method 2: object.getClass(), e.g. "hello".getClass()
- Method 3: Class.forName(), e.g. Class.forName("java.lang.String")
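All three ways yield the same Class object, since there is exactly one Class instance per loaded class (per class loader):

```java
public class ClassObjectDemo {

    public static void main(String[] args) throws Exception {
        Class<?> c1 = String.class;                      // method 1: type literal
        Class<?> c2 = "hello".getClass();                // method 2: from an instance
        Class<?> c3 = Class.forName("java.lang.String"); // method 3: by fully qualified name
        // All three references point to the identical Class object
        System.out.println(c1 == c2 && c2 == c3); // true
    }
}
```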
86. How to create objects through reflection?
Answer:
- Method 1: call the newInstance() method through the Class object, e.g. String.class.newInstance()
- Method 2: obtain a Constructor object through the getConstructor() or getDeclaredConstructor() method of the Class object and call its newInstance() method to create the object, e.g. String.class.getConstructor(String.class).newInstance("hello")
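Both paths can be verified directly; String is used here just because it has a convenient public constructor. (Note that Class.newInstance() has been deprecated since Java 9 in favor of getDeclaredConstructor().newInstance().)

```java
import java.lang.reflect.Constructor;

public class NewInstanceDemo {

    // Method 2: pick a specific constructor, then instantiate through it.
    static String createViaConstructor(String arg) throws Exception {
        Constructor<String> c = String.class.getConstructor(String.class);
        return c.newInstance(arg);
    }

    public static void main(String[] args) throws Exception {
        // Method 1: no-arg creation straight from the Class object
        String s1 = String.class.newInstance();
        System.out.println(s1.isEmpty());                  // true
        System.out.println(createViaConstructor("hello")); // hello
    }
}
```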
87. How to get and set the value of a private field of an object through reflection?
Answer:
You can obtain the Field object through the getDeclaredField() method of the Class object, make it accessible via the Field object's setAccessible(true), and then get/set the field's value through its get/set methods. The following code implements a reflection tool class in which two static methods get and set the values of private fields respectively. The fields can be of basic or object types, and multi-level object operations are supported: for example, ReflectionUtil.getValue(dog, "owner.car.engine.id") gets the ID number of the engine of the car of the owner of the dog object.
import java.lang.reflect.Constructor;
import java.lang.reflect.Field;

/**
 * Reflection tool class
 * @author Luo Hao
 */
public class ReflectionUtil {

    private ReflectionUtil() {
        throw new AssertionError();
    }

    /**
     * Gets the value of the specified field (property) of an object through reflection.
     * @param target the target object
     * @param fieldName the field name (supports multi-level names such as "owner.car.engine.id")
     * @return the field value
     * @throws RuntimeException if the value of the specified field cannot be obtained
     */
    public static Object getValue(Object target, String fieldName) {
        Class<?> clazz = target.getClass();
        String[] fs = fieldName.split("\\.");
        try {
            for (int i = 0; i < fs.length - 1; i++) {
                Field f = clazz.getDeclaredField(fs[i]);
                f.setAccessible(true);
                target = f.get(target);
                clazz = target.getClass();
            }
            Field f = clazz.getDeclaredField(fs[fs.length - 1]);
            f.setAccessible(true);
            return f.get(target);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    /**
     * Assigns a value to the specified field of an object through reflection.
     * @param target the target object
     * @param fieldName the field name
     * @param value the value to set
     */
    public static void setValue(Object target, String fieldName, Object value) {
        Class<?> clazz = target.getClass();
        String[] fs = fieldName.split("\\.");
        try {
            for (int i = 0; i < fs.length - 1; i++) {
                Field f = clazz.getDeclaredField(fs[i]);
                f.setAccessible(true);
                Object val = f.get(target);
                if (val == null) {
                    Constructor<?> c = f.getType().getDeclaredConstructor();
                    c.setAccessible(true);
                    val = c.newInstance();
                    f.set(target, val);
                }
                target = val;
                clazz = target.getClass();
            }
            Field f = clazz.getDeclaredField(fs[fs.length - 1]);
            f.setAccessible(true);
            f.set(target, value);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
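A self-contained illustration of the getDeclaredField()/setAccessible() mechanics that such a tool class relies on; the Dog class and its field are invented for the demo:

```java
import java.lang.reflect.Field;

public class FieldAccessDemo {

    static class Dog {
        private String name = "Wangcai"; // private: normally unreachable from outside
    }

    // Reads a private field of any object by name.
    static Object readPrivate(Object target, String fieldName) throws Exception {
        Field f = target.getClass().getDeclaredField(fieldName);
        f.setAccessible(true); // bypass the access check
        return f.get(target);
    }

    public static void main(String[] args) throws Exception {
        Dog dog = new Dog();
        System.out.println(readPrivate(dog, "name")); // Wangcai
        Field f = Dog.class.getDeclaredField("name");
        f.setAccessible(true);
        f.set(dog, "Laifu");                          // the private field can be written too
        System.out.println(readPrivate(dog, "name")); // Laifu
    }
}
```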
88. How to call object methods through reflection?
Answer:
See the following code:
import java.lang.reflect.Method;

class MethodInvokeTest {

    public static void main(String[] args) throws Exception {
        String str = "hello";
        Method m = str.getClass().getMethod("toUpperCase");
        System.out.println(m.invoke(str)); // HELLO
    }
}
89. Briefly describe the "six principles and one law" of object-oriented design.
Answer:
 single responsibility principle: a class should do only the things it is responsible for. (What the single responsibility principle expresses is "high cohesion". The ultimate guideline for writing code is six words: "high cohesion and low coupling". High cohesion means that a code module completes only one function; in object-oriented terms, a class should do only what it is responsible for, without getting involved in unrelated areas, and then it has a single responsibility. We all know that "concentration makes you professional": if an object takes on too many responsibilities, it is doomed to do none of them well. Good things in the world share two characteristics. One is a single function: a good camera is certainly not the kind sold on TV shopping channels that claims more than a hundred functions yet can barely take pictures. The other is modularity: a good bicycle is an assembled vehicle whose parts, from the suspension fork and brakes to the transmission, can be removed and reassembled; a good table tennis racket is not a finished racket but a blade and rubbers that the player assembles. Likewise, in a good software system each functional module should be easy to reuse in other systems, achieving the goal of software reuse.)
 open-closed principle: software entities should be open for extension and closed for modification. (In the ideal state, when we need to add new functions to a software system, we only need to derive new classes from the original system, without modifying a single line of existing code. There are two keys to the open-closed principle: ① abstraction is essential: if a system has no abstract classes or interfaces, it has no extension points; ② encapsulate variability: wrap each variable factor of the system in its own inheritance structure, because if multiple variable factors are mixed together the system becomes complex and disordered. If you don't know how to encapsulate variability, refer to the chapter on the bridge pattern in the book "Design Pattern Interpretation".)
 dependency inversion principle: program to interfaces. (This principle is straightforward and concrete: when declaring a method's parameter types, return type, or a reference type, use the abstract type instead of the concrete type whenever possible, because the abstract type can be replaced by any of its subtypes; see the Liskov substitution principle below.)
 Liskov substitution principle: a subtype can replace its parent type anywhere. (Barbara Liskov's original formulation of this principle is much more complicated, but simply put: wherever the parent type can be used, a subtype can be used instead. The Liskov substitution principle can be used to check whether an inheritance relationship is reasonable: if an inheritance relationship violates it, the relationship must be wrong and the code needs refactoring. For example, it is wrong to let a cat inherit from a dog, a dog inherit from a cat, or a square inherit from a rectangle, because you can easily find a scenario that violates the principle. Note that subclasses must add to the parent's capabilities rather than reduce them: since a subclass has more capabilities than its parent, treating an object with more capabilities as one with fewer is of course no problem.)
 interface segregation principle: interfaces should be small and specialized, never large and all-encompassing. (A bloated interface pollutes the abstraction. Since an interface represents a capability, an interface should describe only one capability, and interfaces should also be highly cohesive. For example, piano, chess, calligraphy and painting should be designed as four interfaces rather than four methods in one interface, because an interface with all four methods is hard to implement: after all, few people are proficient in all four. With four separate interfaces, a class implements however many it actually supports, so each interface is far more likely to be reused. Interfaces in Java represent capabilities, conventions and roles; whether you can use interfaces correctly is an important indicator of programming skill.)
 composite/aggregate reuse principle: prefer reusing code through aggregation or composition. (Reusing code through inheritance is the most abused practice in object-oriented programming, because all textbooks advocate inheritance without exception, which misleads beginners. There are three kinds of relationships between classes: is-a, has-a and use-a, representing inheritance, association and dependency respectively. Association can be further divided by strength into association, aggregation and composition, but they are all has-a relationships. The composite/aggregate reuse principle says to prefer the has-a relationship over the is-a relationship when reusing code; you can find plenty of reasons for this online. Note that even the Java API contains examples of abused inheritance: the Properties class inherits from Hashtable, and the Stack class inherits from Vector. These inheritances are clearly wrong. A better design would place a Hashtable member inside Properties and store data through it with String keys and values; similarly, Stack should hold a Vector object to store its data. Remember: never inherit from a tool class. Tools can be owned and used, not inherited.)
 Law of Demeter: also called the principle of least knowledge; an object should know as little as possible about other objects. (The Law of Demeter is simply about how to achieve "low coupling"; the facade pattern and
the mediator pattern are applications of the Law of Demeter. For the facade pattern, take a simple example: when you go to a company to negotiate business, you don't need to know how the company operates internally; you can even know nothing about the company at all. You just find the receptionist at the entrance, tell her what you want to do, and she will find the right people to contact you. The receptionist is the facade of the company's system: no matter how complex the system is, it can present users with a simple facade. Isn't the servlet or filter that acts as the front controller in Java web development a facade? Browsers know nothing about how the server works, but they can get the corresponding service through the front controller according to your request. The mediator pattern can also be illustrated with a simple example: in a computer, the CPU, memory, hard disk, graphics card and sound card need to cooperate to work well, but if these devices were wired directly to each other, the computer's wiring would be extremely complex. So the motherboard appears as a mediator: it connects the devices together without any device exchanging data directly with another, which reduces the coupling and complexity of the system. In plain words, the Law of Demeter says: don't talk to strangers; if you really need to, find a friend of your own and let him deal with the stranger for you.)
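The composite/aggregate reuse principle above (a stack should have a list rather than be a list) can be sketched as follows; the class and method names are illustrative, not from the text:

```java
import java.util.ArrayList;
import java.util.List;

// has-a instead of is-a: unlike java.util.Stack, which inherits Vector and
// thereby exposes non-stack operations, this class holds a list internally
// and exposes only stack operations.
public class ArrayStack<E> {
    private final List<E> elements = new ArrayList<>(); // composition, not inheritance

    public void push(E e) {
        elements.add(e);
    }

    public E pop() {
        if (elements.isEmpty()) {
            throw new IllegalStateException("stack is empty");
        }
        return elements.remove(elements.size() - 1);
    }

    public boolean isEmpty() {
        return elements.isEmpty();
    }

    public static void main(String[] args) {
        ArrayStack<String> stack = new ArrayStack<>();
        stack.push("a");
        stack.push("b");
        System.out.println(stack.pop()); // b
        System.out.println(stack.pop()); // a
    }
}
```

Because the list is private, callers cannot insert into the middle of the stack, which the inheritance-based java.util.Stack cannot prevent.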
90. Briefly describe the design patterns you know.
Answer:
A design pattern is a summary of repeatedly used code design experience (a proven solution to a problem in a given context). The purpose of using design patterns is to reuse code, make code easier for others to understand, and ensure code reliability. Design patterns make it easier to reuse successful designs and architectures, and expressing proven techniques as design patterns also makes it easier for developers of new systems to understand the design ideas. GoF's "Design Patterns: Elements of Reusable Object-Oriented Software" catalogs 23 design patterns in three categories: creational (abstracting the class instantiation process), structural (describing how to combine classes or objects into larger structures) and behavioral (abstracting the division of responsibilities and algorithms between objects). They include: abstract factory, builder, factory method, prototype and singleton; facade, adapter, bridge, composite, decorator, flyweight and proxy; command,
interpreter, visitor, iterator, mediator, memento, observer, state, strategy, template method and chain of responsibility. When you are asked about design patterns in an interview, pick the most commonly used ones to answer, for example:
 factory pattern: a factory class can produce different subclass instances according to conditions. These subclasses share a common abstract parent and implement the same methods, but the methods perform different operations on different data (polymorphism). After obtaining a subclass instance, the caller can invoke methods through the base class without caring which subclass instance was returned.
 proxy pattern: provide a proxy object for an object; the proxy controls references to the original object. In actual development, proxies can be divided by purpose into: remote proxy, virtual proxy, protection proxy, cache proxy, firewall proxy, synchronization proxy and smart reference proxy.
 adapter pattern: convert the interface of a class into another interface the client expects, so that classes that could not work together because of mismatched interfaces can cooperate.
 template method pattern: provide an abstract class that implements part of the logic in concrete methods or constructors, then declare some abstract methods to force subclasses to implement the remaining logic. Different subclasses can implement these abstract methods in different ways (polymorphism) to realize different business logic.
In addition, you can also mention the facade, bridge, singleton and decorator patterns discussed above (the decorator pattern is used in the Collections utility class and the I/O system). In any case, the basic principle is to choose the answers you are most familiar with and use most, so as not to talk yourself into trouble.
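The factory pattern described above can be sketched like this; all names here are illustrative, not from the text:

```java
// The caller works against the abstract Shape type and never needs to know
// which concrete subclass the factory returned (polymorphism).
interface Shape {
    double area();
}

class Circle implements Shape {
    private final double r;
    Circle(double r) { this.r = r; }
    public double area() { return Math.PI * r * r; }
}

class Square implements Shape {
    private final double side;
    Square(double side) { this.side = side; }
    public double area() { return side * side; }
}

public class ShapeFactory {
    // The factory produces different subclass instances according to conditions.
    public static Shape create(String kind, double size) {
        switch (kind) {
            case "circle": return new Circle(size);
            case "square": return new Square(size);
            default: throw new IllegalArgumentException("unknown kind: " + kind);
        }
    }

    public static void main(String[] args) {
        Shape s = ShapeFactory.create("square", 3);
        System.out.println(s.area()); // 9.0
    }
}
```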
91. Write a singleton class in Java.
Answer:
 eager ("hungry-style") singleton
public class Singleton {

    private Singleton() {}

    private static Singleton instance = new Singleton();

    public static Singleton getInstance() {
        return instance;
    }
}
 lazy-style singleton
public class Singleton {

    private static Singleton instance = null;

    private Singleton() {}

    public static synchronized Singleton getInstance() {
        if (instance == null) {
            instance = new Singleton();
        }
        return instance;
    }
}
Note: there are two points to observe when implementing a singleton: ① keep the constructor private, so the outside world cannot create objects through it; ② return the unique instance of the class to the outside world through an exposed static method. Here's a question to consider: Spring's IoC container can create singletons out of ordinary classes. How does it do that?
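Besides the two versions above, a commonly used thread-safe lazy variant (not shown in the original text) is double-checked locking, which avoids synchronizing on every call; note that it only works correctly if the field is volatile:

```java
public class Singleton {
    // volatile is required: it prevents another thread from observing a
    // half-constructed instance due to instruction reordering
    private static volatile Singleton instance = null;

    private Singleton() {}

    public static Singleton getInstance() {
        if (instance == null) {                  // first check, no lock taken
            synchronized (Singleton.class) {
                if (instance == null) {          // second check, under the lock
                    instance = new Singleton();
                }
            }
        }
        return instance;
    }
}
```

After the instance is created, callers take the fast path and never acquire the lock.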
92. What is UML?
Answer:
UML is the abbreviation of Unified Modeling Language, published in 1997. It unifies the object-oriented modeling languages, methods and processes that existed at the time. It is a graphical language that supports modeling and software system development, providing modeling and visualization support for all stages of software development. Using UML can aid communication, assist application design and document generation, and explain the structure and behavior of a system.
93. What are the commonly used diagrams in UML?
Answer:
UML defines a variety of graphical notations to describe part or all of the static and dynamic structure of a software system, including the use case diagram, class diagram, sequence diagram, collaboration diagram, state diagram, activity diagram, component diagram and deployment diagram. Among these, three kinds of diagram are the most important: the use case diagram (used to capture requirements and describe system functions; through it you can quickly understand the system's functional modules and their relationships), the class diagram (describes classes and the relationships between classes; through it you can quickly understand the system's structure) and the sequence diagram (describes the interactions and execution order between objects when performing a specific task; through it you can understand the messages an object can receive, that is, the services the object provides to the outside world). Use case diagram:
Class diagram:
Sequence diagram:
94. Write a bubble sort in Java.
Answer:
Almost every programmer can write a bubble sort, but not everyone can do it well in an interview. Here is a reference implementation:
import java.util.Comparator;
/**
 * Sorter interface (strategy pattern: encapsulate algorithms into separate
 * classes with a common interface so that they can replace each other)
 * @author Luo Hao
 */
public interface Sorter {

    /**
     * Sort
     * @param list the array to be sorted
     */
    <T extends Comparable<T>> void sort(T[] list);

    /**
     * Sort
     * @param list the array to be sorted
     * @param comp the comparator used to compare two objects
     */
    <T> void sort(T[] list, Comparator<T> comp);
}
import java.util.Comparator;
/**
 * Bubble sort
 * @author Luo Hao
 */
public class BubbleSorter implements Sorter {

    @Override
    public <T extends Comparable<T>> void sort(T[] list) {
        boolean swapped = true;
        for (int i = 1, len = list.length; i < len && swapped; ++i) {
            swapped = false;
            for (int j = 0; j < len - i; ++j) {
                if (list[j].compareTo(list[j + 1]) > 0) {
                    T temp = list[j];
                    list[j] = list[j + 1];
                    list[j + 1] = temp;
                    swapped = true;
                }
            }
        }
    }

    @Override
    public <T> void sort(T[] list, Comparator<T> comp) {
        boolean swapped = true;
        for (int i = 1, len = list.length; i < len && swapped; ++i) {
            swapped = false;
            for (int j = 0; j < len - i; ++j) {
                if (comp.compare(list[j], list[j + 1]) > 0) {
                    T temp = list[j];
                    list[j] = list[j + 1];
                    list[j + 1] = temp;
                    swapped = true;
                }
            }
        }
    }
}
95. Write a half search in Java.
Answer:
Half-interval search, also known as binary search, is an algorithm for finding a specific element in a sorted array. The search starts from the middle element of the array: if the middle element happens to be the element being searched for, the search ends; if the target element is greater or less than the middle element, the search continues in the half of the array that is greater or less than the middle element, again starting the comparison from that half's middle element. If at some step the remaining range is empty, the specified element cannot be found. Each comparison halves the search range, so the time complexity is O(log n).
import java.util.Comparator;
public class MyUtil {
    public static <T extends Comparable<T>> int binarySearch(T[] x, T key) {
        return binarySearch(x, 0, x.length - 1, key);
    }

    // Binary search implemented with a loop
    public static <T> int binarySearch(T[] x, T key, Comparator<T> comp) {
        int low = 0;
        int high = x.length - 1;
        while (low <= high) {
            int mid = (low + high) >>> 1;
            int cmp = comp.compare(x[mid], key);
            if (cmp < 0) {
                low = mid + 1;
            } else if (cmp > 0) {
                high = mid - 1;
            } else {
                return mid;
            }
        }
        return -1;
    }

    // Binary search implemented with recursion
    private static <T extends Comparable<T>> int binarySearch(T[] x, int low, int high, T key) {
        if (low <= high) {
            int mid = low + ((high - low) >> 1);
            if (key.compareTo(x[mid]) == 0) {
                return mid;
            } else if (key.compareTo(x[mid]) < 0) {
                return binarySearch(x, low, mid - 1, key);
            } else {
                return binarySearch(x, mid + 1, high, key);
            }
        }
        return -1;
    }
}
Note: the above code gives two versions of binary search, one recursive and one iterative. Be aware that (high + low) / 2 should not be used to compute the middle position, because the addition may overflow the int range. Use one of the following three forms instead: low + (high - low) / 2, or low + ((high - low) >> 1), or (low + high) >>> 1 (>>> is a logical right shift, i.e. a right shift that ignores the sign bit).
Java interview questions (2)
The topics included in this list of Java interview questions are listed below
 multithreading, concurrency and thread basics
 data type conversion fundamentals
 garbage collection (GC)
 Java collections framework
 arrays
 strings
 GoF design patterns
 SOLID principles
 abstract classes and interfaces
 Java basics, such as equals and hashCode
 generics and enumerations
 Java IO and NIO
 common network protocols
 data structures and algorithms in Java
 regular expressions
 JVM internals
 Java best practices
 JDBC
 date, time and calendar
 XML processing in Java
 JUnit
 programming
Now it's time to show you the 133 questions I have collected from various interviews over the past five years. I'm sure you've seen many of these questions in your own interviews, and that you can answer many of them correctly.
Basic problems of multithreading, concurrency and threading
1. Can you create volatile arrays in Java?
Yes, a volatile array can be created in Java, but volatile applies only to the reference to the array, not to the whole array. That is, if one thread changes which array the reference points to, that change is protected by volatile, but if multiple threads change the elements of the array at the same time, the volatile keyword provides no such protection.
2. Can volatile make a non atomic operation atomic?
A typical example is a member variable of type long. If you know that a long member variable will be accessed by multiple threads, such as a counter or a price, you had better declare it volatile. Why? Because reading a long variable in Java is not atomic: it takes two steps, so if one thread is modifying the value, another thread may see only half of it (the first 32 bits). But reads and writes of a volatile long or double variable are atomic.
3. What is the practice of the volatile modifier?
One practice is to modify long and double variables with volatile so that reads and writes of them become atomic. Both double and long are 64 bits wide, so a read of these types may be split into two parts: first the upper 32 bits, then the remaining 32 bits. This process is not atomic, but reads and writes of volatile long or double variables are. Another function of the volatile modifier is to provide a memory barrier, which is used, for example, in distributed frameworks. Simply put, the Java memory model inserts a write barrier after you write a volatile variable and a read barrier before you read one. That is, when you write a volatile field, it is guaranteed that any thread can see the value you wrote; at the same time, any value updates made before the write are also visible to all threads, because the memory barrier flushes all other written values to the cache.
4. What guarantees do volatile type variables provide?
Volatile variables provide ordering and visibility guarantees. For example, the JVM or the JIT compiler may reorder statements for better performance, but a volatile variable will not be reordered with other statements even when it is assigned outside a synchronized block. Volatile provides a happens-before guarantee, ensuring that modifications made by one thread are visible to other threads. In some cases volatile also provides atomicity, for example when reading 64-bit data types: long and double are not atomic by default, but volatile double and long are.
5. Which is easier to write synchronous code with 10 threads and 2 threads?
From the perspective of writing code, the complexity of the two is the same, because synchronous code and the number of threads are independent of each other. However, the choice of synchronization strategy depends on the number of threads, because more threads mean greater competition, so you need to use synchronization technologies, such as lock separation, which requires more complex code and expertise.
6. How should you call the wait() method: using an if block or a loop? Why?
The wait() method should be called in a loop, because the condition may still not hold by the time the thread gets the CPU and resumes execution, so it is better to re-check the condition before proceeding. The following is the standard idiom for using the wait and notify methods:
// The standard idiom for using the wait method
synchronized (obj) {
    while (condition does not hold)
        obj.wait(); // (Releases lock, and reacquires on wakeup)
    ... // Perform action appropriate to condition
}
See Item 69 of Effective Java for more information on why the wait method should be called in a loop.
7. What is false sharing in a multithreaded environment?
False sharing is a well-known performance problem in multithreaded systems where each processor has its own local cache. False sharing occurs when threads on different processors modify variables that reside on the same cache line.
The false sharing problem is difficult to find because the threads may be accessing completely different global variables that just happen to sit very close together in memory. Like many other concurrency problems, the most basic way to avoid false sharing is to review the code carefully and lay out your data structures according to the cache line.
8. What is busy spin? Why should we use it?
Busy spin is a technique that waits for an event without releasing the CPU. It is often used to avoid losing data in the CPU cache (which would happen if the thread paused and then resumed on another CPU). So if your work requires very low latency, you can loop to detect new messages in a queue instead of calling the sleep() or wait() methods. It only pays off when the expected wait is very short, such as a few microseconds or nanoseconds. The LMAX Disruptor, a library for high-performance inter-thread communication, has a BusySpinWaitStrategy class based on this concept: it uses busy spin to make the event processors wait on the barrier.
9. How to get a thread dump file in Java?
Under Linux, you can use the command kill -3 <pid> (where pid is the process id of the Java process) to obtain the thread dump of a Java application. Under Windows, you can press Ctrl + Break. The JVM will print the thread dump to standard output or standard error; it may appear in the console or in a log file, depending on the application configuration, for example when using Tomcat.
10. Is swing thread safe?
No, Swing is not thread safe. You cannot update Swing components such as JTable, JList or JPanel from arbitrary threads; they can only be updated from the GUI (AWT event dispatch) thread. That is why Swing provides the invokeAndWait() and invokeLater() methods to accept GUI update requests from other threads: these methods put the update request into the AWT thread's queue and either wait for the result or return immediately for an asynchronous update.
11. What are thread local variables?
Thread-local variables are variables confined to a single thread; they are not shared among threads. Java provides the ThreadLocal class to support thread-local variables, which is one way to achieve thread safety. However, be careful when using thread-local variables in a managed environment (such as a web server), where the life cycle of a worker thread is longer than that of any application variable. If a thread-local variable is not released after the work is done, there is a risk of memory leaks in the Java application.
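A minimal sketch of the idea: each thread sees its own copy of the value, so no synchronization is needed, and remove() is called when a thread is done to avoid the leak mentioned above. The class name is illustrative.

```java
public class ThreadLocalDemo {
    // Each thread gets its own counter, initialized to 0
    private static final ThreadLocal<Integer> COUNTER =
            ThreadLocal.withInitial(() -> 0);

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 1000; i++) {
                COUNTER.set(COUNTER.get() + 1); // touches only this thread's copy
            }
            System.out.println(Thread.currentThread().getName()
                    + ": " + COUNTER.get()); // each thread prints 1000
            COUNTER.remove(); // important in pooled/managed threads
        };
        Thread t1 = new Thread(task, "t1");
        Thread t2 = new Thread(task, "t2");
        t1.start();
        t2.start();
        t1.join();
        t2.join();
    }
}
```

Both threads print 1000 even though they increment "the same" static field, because each thread operates on its own copy.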
12. Write a piece of code with wait notify to solve the producer consumer problem?
answer
http://java67.blogspot.sg/201… t-and-notify-example.html
Please refer to the sample code in the answer. Just remember to call the wait() and notify() methods from a synchronized block, and to test the wait condition in a loop.
13. Write a thread safe singleton in Java?
answer
http://javarevisited.blogspot… eton-in-java-example.html
Please refer to the sample code in the answer, which teaches you step by step how to create a thread-safe Java singleton class. When we say thread safe, we mean that a single instance is guaranteed even when initialization happens in a multithreaded environment. In Java, using an enum as the singleton class is the easiest way to create a thread-safe singleton.
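The enum-based singleton mentioned above can be sketched in a few lines; the JVM guarantees that the enum constant is instantiated exactly once, even under concurrent initialization, and it is also safe against serialization and reflection attacks. The member and method here are illustrative.

```java
public enum EnumSingleton {
    INSTANCE; // the one and only instance, created by the JVM

    private int counter;

    // Example of state/behavior carried by the singleton
    public int next() {
        return ++counter;
    }
}
```

Usage is simply `EnumSingleton.INSTANCE.next()`; there is no getInstance() method to write and no locking to get wrong.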
14. What is the difference between sleep method and wait method in Java?
Although both are used to pause the currently running thread, sleep() is really only a short pause because it does not release the lock, while wait() means conditional waiting, which is why wait() releases the lock: only then can other waiting threads acquire it when the condition is met.
15. What is an immutable object? How do you create an immutable object in Java?
An immutable object is an object whose state cannot be changed once it is created; any modification creates a new object. Examples include String, Integer and the other wrapper classes. For details, see the answer, which guides you step by step in creating an immutable class in Java.
16. Can we create an immutable object that contains a mutable object?
Yes, we can create an immutable object that contains a mutable object. You just have to be careful not to share the reference to the mutable object; if a change is needed, return a copy of the original object. The most common example is an object containing a reference to a Date object.
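The Date example above can be sketched with defensive copies on the way in and the way out, so the internal state can never be modified from outside; the class and field names are illustrative.

```java
import java.util.Date;

// Immutable class holding a mutable Date: final class, final fields,
// no setters, and defensive copies of the mutable member.
public final class Event {
    private final String name;
    private final Date when;

    public Event(String name, Date when) {
        this.name = name;
        this.when = new Date(when.getTime()); // defensive copy in
    }

    public String getName() {
        return name;
    }

    public Date getWhen() {
        return new Date(when.getTime());      // defensive copy out
    }
}
```

Without the copies, a caller could keep the Date it passed in (or the one returned by the getter) and mutate the event's state through it.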
Data types and Java foundation interview questions
17. What data types should be used in Java to represent prices?
If you are not particularly concerned about memory and performance, use BigDecimal; otherwise use double with a predefined precision.
18. How to convert byte to string?
You can use the String constructor that accepts a byte[] parameter to do the conversion. Note that you should specify the correct character encoding; otherwise the platform default encoding is used, which may or may not match the original encoding.
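A small sketch of the round trip, naming the encoding explicitly on both sides; the class name is illustrative:

```java
import java.nio.charset.StandardCharsets;

public class BytesToString {
    public static void main(String[] args) {
        byte[] bytes = "héllo".getBytes(StandardCharsets.UTF_8);
        // Always name the encoding explicitly; relying on the platform
        // default can corrupt non-ASCII characters when the default differs.
        String s = new String(bytes, StandardCharsets.UTF_8);
        System.out.println(s); // héllo
    }
}
```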
19. How to convert bytes to long type in Java?
You answer this question: -)
20. Can we cast an int to a byte variable? What happens if the value is larger than the range of byte?
Yes, we can cast, but in Java int is 32 bits and byte is only 8 bits, so when casting, the top 24 bits of the int value are discarded. The range of byte is -128 to 127.
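A small sketch of what discarding the top 24 bits looks like in practice; the class name is illustrative:

```java
public class NarrowingDemo {
    public static void main(String[] args) {
        int big = 300;             // binary 1_0010_1100, does not fit in 8 bits
        byte b = (byte) big;       // top 24 bits are discarded
        System.out.println(b);     // 44, i.e. the low 8 bits (300 & 0xFF) read as signed
        System.out.println((byte) 128); // -128: one past Byte.MAX_VALUE wraps around
    }
}
```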
21. There are two classes: B extends A, and C extends B. Can we cast a B to a C, as in C c = (C) b;?
answer
http://javarevisited.blogspot… ss-interface-example.html
22. Which class contains clone method? Cloneable or object?
java.lang.Cloneable is a marker interface and does not contain any methods. The clone method is defined in the Object class. You should also know that clone() is a native method, which means it is implemented in C, C++ or another native language.
23. Is the + + operator thread safe in Java?
Answer: it is not a thread-safe operation. It involves multiple instructions: reading the variable's value, incrementing it, and then storing it back to memory. Multiple threads may interleave during this process.
24. What is the difference between a = a + b and a += b?
+= implicitly casts the result of the addition back to the type of the variable holding the result. When two integral values such as byte, short or int are added, they are first promoted to int and the addition is performed on ints. Assigning that int result back to a byte therefore requires an explicit cast, so b = a + b is a compilation error regardless of the values involved, whereas b += a compiles fine:

byte a = 127;
byte b = 127;
b = a + b; // error : cannot convert from int to byte
b += a;    // ok
25. Can I assign a double value to a variable of type long?
No, you cannot assign a double value to a variable of type long without a cast, because the range of double is wider than that of long, so an explicit cast must be performed.
26. What will 3 * 0.1 == 0.3 return: true or false?
False, because some floating-point numbers cannot be represented exactly in binary.
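This can be verified directly; the snippet also shows the BigDecimal approach useful for exact decimal arithmetic such as prices (the class name is illustrative):

```java
import java.math.BigDecimal;

public class FloatCompare {
    public static void main(String[] args) {
        System.out.println(3 * 0.1 == 0.3); // false
        System.out.println(3 * 0.1);        // 0.30000000000000004

        // For exact decimal arithmetic, build BigDecimal from strings
        // (new BigDecimal(0.1) would inherit the binary inaccuracy).
        BigDecimal result = new BigDecimal("0.1").multiply(new BigDecimal("3"));
        System.out.println(result.compareTo(new BigDecimal("0.3")) == 0); // true
    }
}
```

Note the use of compareTo rather than equals: BigDecimal.equals also compares scale, so 0.3 and 0.30 are not equals() but do compare as numerically equal.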
27. Which of int and integer will occupy more memory?
Integer objects take up more memory. Integer is an object and needs to store the metadata of the object. But int is a primitive type of data, so it takes less space.
28. Why is string immutable in Java?
String in Java is immutable because Java designers think that strings are used very frequently. Setting strings to immutable allows multiple clients to share the same string.
29. Can we use String in a switch?
Starting with Java 7, we can use strings in switch cases, but this is just syntactic sugar: internally, the switch uses the hash code of the string.
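A small sketch of switching on a String (the class, method and case values are illustrative):

```java
public class SwitchString {
    static int statusCode(String status) {
        // Allowed since Java 7; the compiled code first switches on the
        // string's hash code, then confirms with equals().
        switch (status) {
            case "ok":    return 200;
            case "error": return 500;
            default:      return 404;
        }
    }

    public static void main(String[] args) {
        System.out.println(statusCode("ok")); // 200
    }
}
```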
30. What is the constructor chain in Java?
When you call another constructor from a constructor, it is the constructor chain in Java. This happens only when the constructor of the class is overloaded.
Interview questions between JVM bottom layer and GC (garbage collection)
31. In a 64-bit JVM, what is the length of an int?
In Java, the length of int type variables is a fixed value, independent of the platform, and is 32 bits. That is, in 32-bit and 64 bit Java virtual machines, the length of int type is the same.
32. What are the differences between serial and parallel GC?
Both the serial and parallel collectors cause a stop-the-world pause while GC executes. The main difference between them is that the serial collector is the default copying collector and uses only one thread to execute GC, while the parallel collector uses multiple GC threads.
33. In 32-bit and 64-bit JVMs, what is the length of an int variable?
In 32-bit and 64 bit JVMs, the length of int type variables is the same, both 32 bits or 4 bytes.
34. What is the difference between WeakReference and SoftReference in Java?
Although both WeakReference and SoftReference help improve GC and memory efficiency, a WeakReference is reclaimed by GC as soon as the last strong reference is lost, whereas a SoftReference, although it cannot prevent reclamation, can delay it until the JVM runs short of memory.
35. How does weakhashmap work?
A WeakHashMap works like an ordinary HashMap, but it uses weak references for its keys, which means a key/value entry is reclaimed once the key object has no other references.
36. What does the JVM option -XX:+UseCompressedOops do? Why use it?
When you migrate an application from a 32-bit JVM to a 64-bit JVM, the heap memory requirement suddenly increases, almost doubling, because object pointers grow from 32 bits to 64 bits. This also reduces how much data fits in the CPU caches (which are much smaller than main memory). Since the main motivation for migrating to a 64-bit JVM is to be able to specify a larger maximum heap size, compressed OOPs can save some of that memory: with the -XX:+UseCompressedOops option, the JVM uses 32-bit OOPs instead of 64-bit OOPs.
37. How do you determine from a Java program whether the JVM is 32-bit or 64-bit?
You can check system properties such as sun.arch.data.model or os.arch to get this information.
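A minimal sketch of reading those properties; note that sun.arch.data.model is HotSpot-specific and may be absent on other JVMs, which is why a default is supplied (the class name is illustrative):

```java
public class JvmBits {
    public static void main(String[] args) {
        // "32" or "64" on HotSpot; may be missing on other JVMs
        String model = System.getProperty("sun.arch.data.model", "unknown");
        // e.g. "amd64", "x86", "aarch64" - more portable but coarser
        String arch = System.getProperty("os.arch");
        System.out.println("data model: " + model + ", os.arch: " + arch);
    }
}
```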
38. What are the maximum heap sizes of 32-bit and 64-bit JVMs?
Theoretically, the heap of a 32-bit JVM can reach 2^32 bytes, i.e. 4 GB, but in practice it is much smaller and varies by operating system: about 1.5 GB on Windows and about 3 GB on Solaris. A 64-bit JVM lets you specify a maximum heap size that can theoretically reach 2^64 bytes, an enormous number; in practice you can specify heap sizes of 100 GB, and some JVMs, such as Azul, support heaps up to 1000 GB.
39. What are the differences between JRE, JDK, JVM and JIT?
JRE stands for Java Runtime Environment, which is required to run Java applications. JDK stands for Java Development Kit; it is the development toolkit for Java programs, including tools such as the Java compiler, and it also contains the JRE. JVM stands for Java Virtual Machine, whose responsibility is to run Java applications. JIT stands for Just In Time compilation: when the execution count of some code exceeds a certain threshold, the Java bytecode is converted into native code; for example, hot code paths are compiled to native code, which greatly improves the performance of Java applications.
Java interview questions with 3 years of working experience
40. Explain Java heap space and GC?
When a Java process is started with the java command, memory is allocated to it. Part of that memory is used to create the heap space; when objects are created in the program, memory for them is allocated from this heap. GC (garbage collection) is a process inside the JVM that reclaims the memory of dead objects so it can be allocated again.
JVM bottom interview questions and answers
41. Can you guarantee the execution of GC?
No. Although you can call System.gc() or Runtime.gc(), there is no way to guarantee that a GC cycle will actually run.
42. How can you get the memory used by a Java program, and the percentage of heap used?
You can obtain the free, total, and maximum heap memory through the memory-related methods of the java.lang.Runtime class, and from these compute the percentage of heap used and the remaining heap space. Runtime.freeMemory() returns the number of bytes of free memory, Runtime.totalMemory() returns the total number of bytes currently allocated to the JVM, and Runtime.maxMemory() returns the maximum number of bytes the JVM will attempt to use.
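A minimal sketch of this using the standard java.lang.Runtime API (the class name HeapStats is illustrative):

```java
// Sketch: computing heap usage from java.lang.Runtime (standard API).
public class HeapStats {
    public static double usedHeapPercent() {
        Runtime rt = Runtime.getRuntime();
        long total = rt.totalMemory();   // bytes currently allocated to the JVM
        long free  = rt.freeMemory();    // unused bytes within that allocation
        long used  = total - free;       // bytes actually occupied by objects
        return 100.0 * used / total;
    }

    public static void main(String[] args) {
        System.out.printf("Heap used: %.1f%%%n", usedHeapPercent());
        System.out.println("Max heap bytes: " + Runtime.getRuntime().maxMemory());
    }
}
```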
43. What is the difference between heap and stack in Java?
The heap and the stack are different memory areas in the JVM used for different purposes. The stack holds method frames and local variables, while objects are always allocated on the heap. The stack is usually much smaller than the heap and is private to each thread, whereas the heap is shared by all threads in the JVM.
Interview questions and answers about memory
Java basic concept interview questions
44. What is the difference between "a == b" and "a.equals(b)"?
If a and b are both objects, a == b compares the two references: it returns true only when a and b point to the same object on the heap. a.equals(b) performs a logical comparison, so it usually needs to be overridden to provide logically consistent equality. For example, the String class overrides equals(), so it can compare two distinct objects that contain the same characters.
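A small sketch of the difference, using two distinct String objects with the same contents:

```java
public class EqualsDemo {
    public static void main(String[] args) {
        String a = new String("java");
        String b = new String("java");

        System.out.println(a == b);      // false: two distinct objects on the heap
        System.out.println(a.equals(b)); // true: String.equals compares contents
    }
}
```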
45. What is the use of a.hashCode()? What does it have to do with a.equals(b)?
The hashCode() method returns an int hash value for the object. It is used by hash-based collection classes such as Hashtable, HashMap, and LinkedHashMap. It is tightly coupled to the equals() method: according to the Java specification, two objects that are equal according to equals() must have the same hash code.
46. What are the differences between final, finalize and finally?
final is a modifier that can be applied to variables, methods, and classes; a final variable cannot be reassigned after initialization. finalize() is a method called on an object before it is garbage collected, giving the object a last chance to resurrect itself; however, when (or whether) finalize() is called is not guaranteed. finally is a keyword used in exception handling together with try and catch: the finally block executes whether or not an exception occurs in the try block.
47. What are compile time constants in Java? What are the risks of using it?
public static final variables are what we call compile-time constants (public is optional). Such variables are inlined at compile time, because the compiler knows their values and knows they cannot change at run time. The risk is that if you use a public compile-time constant from an internal or third-party library and its value is later changed, your client code keeps using the old inlined value even after you deploy the new jar. To avoid this, make sure to recompile your program whenever you update dependent jar files.
Interview questions for Java collection framework
This section also contains interview questions about data structures, algorithms and arrays
48. Difference between List, Set, Map, and Queue? (answer)
List is an ordered collection that allows duplicate elements; some implementations provide constant-time access by index, but the List interface does not guarantee it. Set is a collection that does not allow duplicates and, in general, does not guarantee order. Map stores key-value pairs and does not allow duplicate keys. Queue is a collection designed for holding elements prior to processing, typically in FIFO order.
49. What is the difference between the poll () method and the remove () method?
Both poll() and remove() take an element from the head of the queue, but when the queue is empty, poll() returns null while remove() throws an exception.
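A minimal sketch of the two behaviors on an empty queue (the class name PollVsRemove is illustrative):

```java
import java.util.LinkedList;
import java.util.NoSuchElementException;
import java.util.Queue;

public class PollVsRemove {
    public static void main(String[] args) {
        Queue<String> q = new LinkedList<>();
        q.add("a");

        System.out.println(q.remove()); // "a" - the queue is now empty
        System.out.println(q.poll());   // null - poll() fails quietly

        try {
            q.remove();                 // remove() fails loudly on an empty queue
        } catch (NoSuchElementException e) {
            System.out.println("remove() threw NoSuchElementException");
        }
    }
}
```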
50. What is the difference between LinkedHashMap and PriorityQueue in Java?
PriorityQueue guarantees that the element with the highest (or lowest) priority is always at the head of the queue, while LinkedHashMap maintains insertion order. Iterating a PriorityQueue gives no ordering guarantee, whereas LinkedHashMap guarantees that its iteration order is the order in which elements were inserted.
51. Is there any difference between ArrayList and LinkedList?
The most obvious difference is that ArrayList is backed by an array and supports random access, while LinkedList is backed by a doubly linked list and does not. Accessing an element by index is O(1) for ArrayList and O(n) for LinkedList. For a more detailed discussion, see the answers.
52. What are the two ways to sort sets?
You can use a sorted collection, such as TreeSet or TreeMap, or you can use an ordered collection such as a List and sort it with Collections.sort().
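The two approaches can be sketched side by side (the class name TwoWaysToSort is illustrative):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.TreeSet;

public class TwoWaysToSort {
    public static void main(String[] args) {
        // Way 1: a sorted collection keeps elements ordered as they are added
        TreeSet<Integer> sortedSet = new TreeSet<>(Arrays.asList(3, 1, 2));
        System.out.println(sortedSet); // [1, 2, 3]

        // Way 2: sort an ordinary List on demand
        List<Integer> list = new ArrayList<>(Arrays.asList(3, 1, 2));
        Collections.sort(list);
        System.out.println(list);      // [1, 2, 3]
    }
}
```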
53. How to print arrays in Java?
You can print an array with the Arrays.toString() and Arrays.deepToString() methods. Arrays do not override toString(), so passing an array directly to System.out.println() will not print its contents, but Arrays.toString() prints each element.
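A quick sketch of both methods (deepToString handles nested arrays):

```java
import java.util.Arrays;

public class PrintArray {
    public static void main(String[] args) {
        int[] nums = {1, 2, 3};
        String[][] grid = {{"a", "b"}, {"c"}};

        System.out.println(Arrays.toString(nums));     // [1, 2, 3]
        System.out.println(Arrays.deepToString(grid)); // [[a, b], [c]]
    }
}
```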
54. Is the LinkedList in java a one-way linked list or a two-way linked list?
It is a doubly linked list; you can check the JDK source code. In Eclipse you can open a class directly in the editor with the Open Type shortcut (Ctrl+Shift+T).
55. What tree is used to implement treemap in Java? (answer)
TreeMap in Java is implemented using a red-black tree.
56. What are the differences between Hashtable and HashMap?
There are many differences between the two classes; some are listed below: a) Hashtable is a legacy class from JDK 1.0, while HashMap was added later. b) Hashtable is synchronized and slower; HashMap is unsynchronized and therefore faster. c) Hashtable allows neither null keys nor null values, while HashMap allows one null key and any number of null values. See the answer for more differences.
57. How does HashSet work internally in Java?
HashSet is implemented internally on top of a HashMap. Since a map needs both a key and a value, every element added to the set is stored as a key with a shared dummy value. Like HashMap keys, HashSet does not allow duplicates, and it permits at most one null element.
58. Write a piece of code to remove an element when traversing the ArrayList?
The key to this question is whether the candidate uses the remove() method of ArrayList or the remove() method of Iterator. Calling ArrayList.remove() while iterating triggers a ConcurrentModificationException; the correct way to remove elements during traversal is through the iterator.
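A minimal sketch of the correct approach, removing even numbers through the iterator (the class name SafeRemoval is illustrative):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class SafeRemoval {
    public static void main(String[] args) {
        List<Integer> list = new ArrayList<>(Arrays.asList(1, 2, 3, 4, 5));

        Iterator<Integer> it = list.iterator();
        while (it.hasNext()) {
            if (it.next() % 2 == 0) {
                it.remove(); // safe: removes via the iterator, not via the list
            }
        }
        System.out.println(list); // [1, 3, 5]
    }
}
```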
59. Can we write a container class ourselves and use the for each loop code?
Yes, you can write your own container class. To traverse it with Java's enhanced for loop, you only need to implement the Iterable interface; if you implement the Collection interface, you get this capability by default.
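A sketch of a hypothetical container (IntBox is an invented name) that supports for-each simply by implementing Iterable:

```java
import java.util.Arrays;
import java.util.Iterator;

// Hypothetical container: a fixed box of ints usable in a for-each loop.
public class IntBox implements Iterable<Integer> {
    private final int[] values;

    public IntBox(int... values) {
        this.values = values.clone();
    }

    @Override
    public Iterator<Integer> iterator() {
        // PrimitiveIterator.OfInt is an Iterator<Integer>, so this satisfies Iterable
        return Arrays.stream(values).iterator();
    }

    public static void main(String[] args) {
        int sum = 0;
        for (int v : new IntBox(1, 2, 3)) { // the enhanced for loop just works
            sum += v;
        }
        System.out.println(sum); // 6
    }
}
```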
60. What is the default size of ArrayList and HashMap?
In Java 7, the default capacity of ArrayList is 10 elements and the default capacity of HashMap is 16 buckets (it must be a power of 2). These are the relevant code fragments from the ArrayList and HashMap classes in Java 7:
// from ArrayList.java JDK 1.7
private static final int DEFAULT_CAPACITY = 10;

// from HashMap.java JDK 1.7
static final int DEFAULT_INITIAL_CAPACITY = 1 << 4; // aka 16
61. Is it possible for two unequal objects to have the same hashCode?
Yes, two unequal objects can have the same hashCode value; this is why collisions occur in HashMap. The hashCode contract only says that two equal objects must have the same hash code; it says nothing about unequal objects.
62. Can two equal objects have different hash codes?
No. According to the hashCode contract, this is impossible: equal objects must produce equal hash codes.
63. Can we use random numbers in hashCode()?
answer
http://javarevisited.blogspot… ple.html
No, because an object's hashCode value must be consistent: repeated calls on the same object must return the same value as long as its state has not changed. See the answer for more about overriding the hashCode() method in Java.
64. What is the difference between Comparator and Comparable in Java?
The Comparable interface defines the natural ordering of objects, while a Comparator typically defines a custom ordering supplied by the caller. A class can implement Comparable only once, but you can create many Comparators to define different orderings for its objects.
65. Why should hashCode() be overridden whenever equals() is overridden? (answer) Because the specification mandates that hashCode and equals obey a joint contract, and many container classes, such as HashMap and HashSet, rely on that contract.
Interview questions for Java IO and NiO
IO is a very important topic in Java interviews. You should master Java IO, NIO, NIO.2, and the basics of operating systems and disk IO. Below are frequently asked questions about Java IO.
66. My Java program has three sockets. How many threads do I need to handle them?
67. How to create ByteBuffer in Java?
byte[] bytes = new byte[10];
ByteBuffer buf = ByteBuffer.wrap(bytes);
68. How to read and write ByteBuffer in Java?
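The source gives no answer here; a minimal sketch of the usual put/flip/get cycle (the class name ByteBufferReadWrite is illustrative):

```java
import java.nio.ByteBuffer;

public class ByteBufferReadWrite {
    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocate(16);

        buf.put((byte) 1); // write mode: position advances as bytes go in
        buf.put((byte) 2);

        buf.flip();        // switch to read mode: limit = position, position = 0

        System.out.println(buf.get());          // 1
        System.out.println(buf.get());          // 2
        System.out.println(buf.hasRemaining()); // false
    }
}
```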
69. Is Java big-endian or little-endian? (Java itself is platform independent, but class files and the default byte order of NIO buffers are big-endian, i.e. network byte order.)
70. What is the byte order in ByteBuffer? (By default a ByteBuffer is big-endian; you can query it with order() and change it with order(ByteOrder.LITTLE_ENDIAN).)
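A small sketch showing how the byte order changes the layout of an int written to a buffer:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class ByteOrderDemo {
    public static void main(String[] args) {
        ByteBuffer big = ByteBuffer.allocate(4);           // default: BIG_ENDIAN
        big.putInt(1);
        System.out.println(big.get(0));    // 0 - most significant byte first

        ByteBuffer little = ByteBuffer.allocate(4).order(ByteOrder.LITTLE_ENDIAN);
        little.putInt(1);
        System.out.println(little.get(0)); // 1 - least significant byte first
    }
}
```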
71. What is the difference between a direct buffer and an indirect buffer in Java? (A direct buffer's memory is allocated outside the JVM heap, which avoids an extra copy during native IO; an indirect, or heap, buffer is backed by an ordinary Java byte array.)
answer http://javarevisited.blogspot…
72. What is a memory-mapped buffer in Java?
answer
http://javarevisited.blogspot… ava.html
73. What does the socket option TCP_NODELAY mean? (Enabling it disables Nagle's algorithm, which batches small packets; small writes are then sent immediately, at the cost of more packets on the wire.)
74. What is the difference between TCP protocol and UDP protocol?
answer
http://javarevisited.blogspot… -udp-protocol.html
75. What is the difference between ByteBuffer and StringBuffer in Java? (answer)
Interview questions for Java best practices
Contains best practices for various parts of Java, such as collections, strings, IO, multithreading, error and exception handling, design patterns, and so on.
76. What best practices do you follow when writing multithreaded programs in Java?
These are some best practices I follow when writing concurrent Java programs: a) Give threads names to help debugging. b) Minimize the scope of synchronization: instead of synchronizing a whole method, synchronize only the critical section. c) Prefer volatile over synchronized where it suffices. d) Use higher-level concurrency utilities instead of wait() and notify() for inter-thread communication, such as BlockingQueue, CountDownLatch, and Semaphore. e) Prefer concurrent collections over synchronized collections; they provide better scalability.
77. Name some best practices for using collections in Java
These are some of my best practices for using the collection classes in Java: a) Use the right collection class; for example, if you do not need synchronization, use ArrayList instead of Vector. b) Prefer concurrent collections over synchronized collections; they provide better scalability. c) Use interfaces to refer to collections, such as List for an ArrayList or Map for a HashMap. d) Use iterators to loop over collections. e) Use generics with collections.
78. Name at least 5 best practices for using threads in Java.
answer
http://java67.blogspot.com/20… gthread-in-java.html
This question is similar to the previous one, and you can reuse the answer above. For threads you should: a) name your threads; b) separate the thread from the task: run Runnable or Callable tasks through a thread pool executor; c) use thread pools.
79. Name 5 IO best practices (answers)
IO is very important for the performance of Java applications. Ideally, you should avoid IO operations on the critical path of your application. Here are some Java IO best practices: a) use buffered IO classes instead of reading bytes or characters one at a time; b) use NIO and NIO.2; c) close streams in a finally block, or use a try-with-resources statement; d) use memory-mapped files for faster IO.
80. List 5 JDBC best practices that should be followed
answer
http://javarevisited.blogspot… ava.html))
There are many best practices you can cite according to your preference. Here are some general ones: a) use batch operations for inserts and updates; b) use PreparedStatement to avoid SQL injection and improve performance; c) use a database connection pool; d) read result set columns by name, not by index.
81. What are some best practices for method overloading in Java?
Here are a few best practices to avoid confusion caused by autoboxing: a) do not create overloads where one method takes an int parameter and another takes an Integer; b) do not create overloads with the same number of parameters that differ only in parameter order; c) if an overloaded method would take more than five parameters, use varargs instead.
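The int-vs-Integer pitfall from point a) exists in the JDK itself: List has both remove(int index) and remove(Object element), and autoboxing decides which one you get. A small sketch:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class OverloadPitfall {
    public static void main(String[] args) {
        List<Integer> list = new ArrayList<>(Arrays.asList(10, 20, 30));

        list.remove(1);                   // calls remove(int): deletes index 1 (the value 20)
        System.out.println(list);         // [10, 30]

        list.remove(Integer.valueOf(10)); // calls remove(Object): deletes the value 10
        System.out.println(list);         // [30]
    }
}
```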
Interview questions for date, time and calendar
82. Is SimpleDateFormat thread safe in a multithreaded environment?
No. Unfortunately, none of the DateFormat implementations, including SimpleDateFormat, are thread safe, so you should not share one across threads unless you confine it externally, for example by keeping a SimpleDateFormat per thread in a ThreadLocal. Otherwise you may get incorrect results when parsing or formatting dates. For all date and time handling, I strongly recommend the joda-time library.
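A sketch of the ThreadLocal confinement mentioned above (the class name SafeDateFormat is illustrative; the UTC time zone is fixed here only to make the output reproducible):

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

public class SafeDateFormat {
    // One SimpleDateFormat per thread: no sharing, so no thread-safety issue.
    private static final ThreadLocal<SimpleDateFormat> FORMAT =
        ThreadLocal.withInitial(() -> {
            SimpleDateFormat f = new SimpleDateFormat("ddMMyyyy");
            f.setTimeZone(TimeZone.getTimeZone("UTC")); // fixed zone for reproducibility
            return f;
        });

    public static String format(Date date) {
        return FORMAT.get().format(date);
    }

    public static void main(String[] args) {
        System.out.println(format(new Date(0))); // 01011970 (the UTC epoch)
    }
}
```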
83. How do you format a date in Java, for example into the form ddMMyyyy?
answer
http://javarevisited.blogspot… Dateformat.html In Java you can use the SimpleDateFormat class or the joda-time library to format dates. The DateFormat class lets you format dates in a variety of popular formats. See the sample code in the answer, which formats dates into different patterns such as dd-MM-yyyy and ddMMyyyy.
84. In Java, how to display the time zone in the formatted date?
answer
http://java67.blogspot.sg/201… eformat-example.html
85. What is the difference between java.util.Date and java.sql.Date in Java?
answer
http://java67.blogspot.sg/201… ldate-example.html
86. How to calculate the difference between two dates in Java?
program
http://javarevisited.blogspot… tween-two-dates-in-java.html
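One common approach, sketched with the standard java.util.concurrent.TimeUnit class (the class name DateDiff is illustrative):

```java
import java.util.concurrent.TimeUnit;

public class DateDiff {
    // Difference in whole days between two instants given as milliseconds since the epoch
    // (e.g. obtained from Date.getTime()).
    public static long daysBetween(long earlierMillis, long laterMillis) {
        return TimeUnit.MILLISECONDS.toDays(laterMillis - earlierMillis);
    }

    public static void main(String[] args) {
        long week = TimeUnit.DAYS.toMillis(7);
        System.out.println(daysBetween(0, week)); // 7
    }
}
```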
87. How to convert the string yyyyMMdd into a Date in Java?
answer
http://java67.blogspot.sg/201… hreading.html
Unit test JUnit interview questions
89. How to test static methods? (answer)
You can use the PowerMock library to test static methods.
90. How to use JUnit to test that a method throws an exception?
answer
http://javarevisited.blogspot… ption-thrown-by-java-method.html
91. Which unit test library have you used to test your Java program?
92. What is the difference between @Before and @BeforeClass?
answer
http://javarevisited.blogspot… ption-thrown-by-java-method.html
Interview questions related to programming and code
93. How to check that a string contains only numbers? Solution
http://java67.blogspot.com/20… mbers-in-String.html
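A minimal sketch with a regular expression (here "only numbers" is taken to mean one or more ASCII digits; the class name DigitsOnly is illustrative):

```java
public class DigitsOnly {
    // True when the string consists of one or more ASCII digits 0-9.
    public static boolean isDigitsOnly(String s) {
        return s != null && s.matches("\\d+");
    }

    public static void main(String[] args) {
        System.out.println(isDigitsOnly("12345")); // true
        System.out.println(isDigitsOnly("12a45")); // false
        System.out.println(isDigitsOnly(""));      // false: requires at least one digit
    }
}
```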
94. How to write an LRU cache using generics in Java?
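One well-known sketch builds on LinkedHashMap's access-order mode and its removeEldestEntry hook (a common interview answer, not the only possible design):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// A simple generic LRU cache: the eldest (least recently used) entry is
// evicted automatically once the capacity is exceeded.
public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LruCache(int capacity) {
        super(16, 0.75f, true); // true = order entries by access, not insertion
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity;
    }

    public static void main(String[] args) {
        LruCache<Integer, String> cache = new LruCache<>(2);
        cache.put(1, "one");
        cache.put(2, "two");
        cache.get(1);          // touch 1, making 2 the least recently used
        cache.put(3, "three"); // evicts 2
        System.out.println(cache.keySet()); // [1, 3]
    }
}
```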
95. Write a Java program to convert a byte to a long?
96. How to reverse a string without using StringBuffer?
Solution
http://java67.blogspot.com/20… buffer-stringbuilder.htm
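One straightforward sketch uses a char array and a two-pointer swap, with no StringBuffer or StringBuilder involved:

```java
public class ReverseString {
    // Reverse in place on a char array; no StringBuffer/StringBuilder needed.
    public static String reverse(String s) {
        char[] chars = s.toCharArray();
        for (int i = 0, j = chars.length - 1; i < j; i++, j--) {
            char tmp = chars[i];
            chars[i] = chars[j];
            chars[j] = tmp;
        }
        return new String(chars);
    }

    public static void main(String[] args) {
        System.out.println(reverse("interview")); // weivretni
    }
}
```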
97. In Java, how do you find the most frequently occurring words in a file?
Solution
http://java67.blogspot.com/20… ds-and-count.html
98. How to check whether two given strings are anagrams of each other?
Solution
http://javarevisited.blogspot… tring-are-anagrams-example-tutorial.html
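A simple sketch: two strings are anagrams exactly when sorting their characters yields the same sequence.

```java
import java.util.Arrays;

public class AnagramCheck {
    // Sort both character arrays and compare them.
    public static boolean isAnagram(String a, String b) {
        char[] ca = a.toCharArray();
        char[] cb = b.toCharArray();
        Arrays.sort(ca);
        Arrays.sort(cb);
        return Arrays.equals(ca, cb);
    }

    public static void main(String[] args) {
        System.out.println(isAnagram("listen", "silent")); // true
        System.out.println(isAnagram("hello", "world"));   // false
    }
}
```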
99. In Java, how to print all permutations of a string?
Solution
http://javarevisited.blogspot…
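A recursive sketch that builds each permutation by fixing one character at a time (collected into a list here so the result is easy to inspect):

```java
import java.util.ArrayList;
import java.util.List;

public class Permutations {
    // Returns every permutation of s by recursively extending a prefix.
    public static List<String> permutations(String s) {
        List<String> out = new ArrayList<>();
        permute("", s, out);
        return out;
    }

    private static void permute(String prefix, String rest, List<String> out) {
        if (rest.isEmpty()) {
            out.add(prefix);
            return;
        }
        for (int i = 0; i < rest.length(); i++) {
            // move character i from rest to the prefix and recurse
            permute(prefix + rest.charAt(i),
                    rest.substring(0, i) + rest.substring(i + 1), out);
        }
    }

    public static void main(String[] args) {
        System.out.println(permutations("abc"));
        // [abc, acb, bac, bca, cab, cba]
    }
}
```

Note the factorial growth: a string of length n has n! permutations, so this is only practical for short strings.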
100. In Java, how can I print out duplicate elements in an array?
Solution
http://javarevisited.blogspot… ents-in-array-java.html
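A common sketch uses a HashSet: Set.add returns false when the element was already seen, which flags a duplicate.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class FindDuplicates {
    // An element is a duplicate if Set.add returns false (already seen).
    public static List<Integer> duplicates(int[] nums) {
        Set<Integer> seen = new HashSet<>();
        List<Integer> dups = new ArrayList<>();
        for (int n : nums) {
            if (!seen.add(n)) {
                dups.add(n);
            }
        }
        return dups;
    }

    public static void main(String[] args) {
        System.out.println(duplicates(new int[]{1, 2, 3, 2, 1})); // [2, 1]
    }
}
```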
101. How to convert a string to an integer in Java?
String s = "123";
int i;
// First method:
i = Integer.parseInt(s);
// Second method:
i = Integer.valueOf(s).intValue();
102. How to exchange the values of two integer variables without using temporary variables?
Solution
https://blog.csdn.net/zidane_…
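The linked solution is not reproduced here; a sketch of the classic arithmetic trick (the sum temporarily carries both values):

```java
public class SwapWithoutTemp {
    public static void main(String[] args) {
        int a = 3, b = 7;

        a = a + b; // a = 10: the sum carries both values
        b = a - b; // b = 3  (the original a)
        a = a - b; // a = 7  (the original b)

        System.out.println(a + " " + b); // 7 3
    }
}
```

Beware that a + b can overflow for large ints; an XOR-based swap (a ^= b; b ^= a; a ^= b;) avoids that.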
Interview questions about OOP and design patterns
This part covers the SOLID design principles asked about in Java interviews, OOP basics such as classes, objects, interfaces, inheritance, polymorphism, encapsulation, and abstraction, higher-level concepts such as composition, aggregation, and association, and questions about the GOF design patterns.
103. What is the interface? Why use interfaces instead of concrete classes?
Interfaces are used to define APIs. An interface defines the contract a class must follow and provides an abstraction: because clients program against the interface, there can be multiple implementations. For example, behind the List interface you can use the randomly accessible ArrayList or the insertion/deletion-friendly LinkedList. Traditionally no implementation code was allowed in an interface, to preserve abstraction, but since Java 8 interfaces may declare static and default methods that contain concrete implementations.
104. What are the differences between abstract classes and interfaces in Java?
There are many differences between abstract classes and interfaces in Java, but the most important one is that a class can extend only one class while it can implement multiple interfaces. Abstract classes are good for defining the default behavior of a family of classes, while interfaces are better at defining types, which helps when implementing polymorphism later.
105. In addition to the singleton pattern, what design patterns have you used in the production environment?
This needs to be answered from your own experience. In general, you can mention dependency injection, the factory pattern, the decorator pattern, or the observer pattern; pick one you have actually used, and be prepared to answer follow-up questions about the pattern you choose.
106. Can you explain the Liskov substitution principle?
answer
https://blog.csdn.net/pu_xubo…
107. Under what circumstances is the Law of Demeter violated? Why is this a problem?
The Law of Demeter advises "only talk to friends, not strangers" in order to reduce coupling between classes. It is violated when a class reaches through its collaborators into their internals, for example via long chains of getter calls, which couples the class to objects it should not know about.
108. What is the adapter pattern? When is it used?
The adapter pattern provides interface conversion: if your client expects one interface but you have another, you can write an adapter to connect the two.
109. What are “dependency injection” and “control inversion”? Why is it used?
Inversion of control (IoC) is the core idea of the Spring framework. Informally: when a class needs a collaborator, it does not construct it itself with new; it declares what it needs and lets the container supply it. Dependency injection (DI) is the mechanism that realizes this: dependencies are handed to a class from the outside, through constructor parameters, setters, or interfaces, rather than created by the class itself.
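A sketch of constructor-based DI without any container (all type names here, OrderService, PaymentGateway, FakeGateway, are hypothetical; a framework like Spring would do the wiring that main() does by hand):

```java
// The service declares what it needs (a PaymentGateway) and never news one up.
interface PaymentGateway {
    boolean charge(int cents);
}

class FakeGateway implements PaymentGateway {
    @Override
    public boolean charge(int cents) {
        return cents > 0; // stand-in for a real payment call; handy in tests
    }
}

public class OrderService {
    private final PaymentGateway gateway;

    // The dependency is injected from outside (by a container or by hand).
    public OrderService(PaymentGateway gateway) {
        this.gateway = gateway;
    }

    public boolean placeOrder(int cents) {
        return gateway.charge(cents);
    }

    public static void main(String[] args) {
        OrderService service = new OrderService(new FakeGateway());
        System.out.println(service.placeOrder(500)); // true
    }
}
```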
110. What is an abstract class? What is the difference between it and an interface? Why would you use an abstract class?
Interfaces are for specification; abstract classes capture commonality. A class that declares methods without implementing them is an abstract class. An interface can be seen as a fully abstract variant: traditionally, all of its methods are abstract.
111. Constructor injection and setter dependency injection, which is better?
Each method has its disadvantages and advantages. Constructor injection ensures that all injections are initialized, but setter injection provides better flexibility to set optional dependencies. If you use XML to describe dependencies, setter injection is more readable and writable. The rule of thumb is to use constructor injection for mandatory dependencies and setter injection for optional dependencies.
112. What is the difference between dependency injection and the factory pattern?
Although both patterns separate object creation from application logic, dependency injection is cleaner than the factory pattern. With dependency injection your class is a POJO: it only declares its dependencies and does not care how they are obtained. With the factory pattern, your class must obtain its dependencies through the factory. DI is therefore easier to test than the factory pattern.
113. What is the difference between the adapter pattern and the decorator pattern?
Although the adapter and decorator patterns have similar structures, their intents differ: the adapter pattern bridges two interfaces, while the decorator pattern adds new functionality to a class without modifying it.
114. What is the difference between the adapter pattern and the proxy pattern?
This is similar to the previous question: the adapter and proxy patterns differ in intent. Both wrap the real object, so their structures are alike, but the adapter pattern converts between interfaces, while the proxy pattern adds an intermediate layer to support lazy creation, access control, or smart access.
115. What is the template method pattern?
The template method pattern provides the skeleton of an algorithm whose individual steps can be configured or overridden. For example, you can view a sorting algorithm as a template: it fixes the sorting steps, but the specific comparison is configured by you, for instance via Comparable or something similar in the language at hand. The method that outlines the algorithm is the template method.
116. When should the visitor pattern be used?
The visitor pattern is used to add operations across a class hierarchy without coupling them to the classes directly. It uses double dispatch to route each call through an intermediate layer.
117. When should the composite pattern be used?
The composite pattern uses a tree structure to represent part-whole hierarchies. It lets clients treat individual objects and compositions of objects uniformly. Use the composite pattern when you want to model such a part-whole relationship between objects.
118. What is the difference between inheritance and composition?
Although both enable code reuse, composition is more flexible than inheritance because it lets you choose a different implementation at run time. Code built with composition is also easier to test than code built with inheritance.
119. Describe overloading and overriding in Java?
Both overloading and overriding let you reuse a method name for different behavior, but overloading is resolved at compile time while overriding is resolved at run time. You can overload methods within the same class, but overriding happens only in subclasses: it requires inheritance.
120. What is the difference between nested public static classes and top-level classes in Java?
There can be multiple nested public static classes inside a class, but a java source file can only have one top-level public class, and the name of the top-level public class must be consistent with the name of the source file.
121. What is the difference between composition, aggregation and association in OOP?
If two objects are related to each other, there is an association between them. Composition and aggregation are the two forms of association in object-oriented design, and composition is the stronger of the two: in composition one object owns the other, while aggregation means one object merely uses another. If object A is composed of object B, then B cannot exist without A; but if A only aggregates B, then B can exist on its own even when A does not.
122. Give an example of a design pattern that follows the open/closed principle?
The open/closed principle requires that your code be open for extension but closed for modification: to add a new feature, you add new code rather than changing tested code. Several design patterns are based on this principle, such as the strategy pattern: to add a new strategy you implement the interface and wire it in through configuration, without touching the core logic. A working example is the Collections.sort() method, which is based on the strategy pattern and follows the open/closed principle: you never modify sort() to handle new objects; you just supply your own Comparator implementation.
123. What is the difference between abstract factory pattern and prototype pattern?
Abstract factory pattern: usually implemented by factory method pattern. However, a factory often contains multiple factory methods to generate a series of products. This model emphasizes that the customer code guarantees to use only one series of products at a time. When you want to switch to another series of products, just change to another factory class.
Prototype pattern: the biggest drawback of the factory method pattern is that every product inheritance hierarchy requires an equally complex parallel hierarchy of factory classes. Can the factory method be moved into the product class itself? If so, the two hierarchies merge into one; this is the idea of the prototype pattern. The factory method of the prototype pattern is clone, which returns a copy (shallow or deep, as the designer decides). So that client code can call clone through a reference and obtain the required concrete class dynamically, the prototype objects must be constructed in advance.
Another benefit of the prototype pattern to the factory method pattern is that the copy efficiency is generally higher than the construction efficiency.
124. When should the flyweight pattern be used?
The flyweight pattern avoids creating too many objects by sharing them. To use the flyweight pattern, make sure your shared objects are immutable so they can be shared safely. The String pool and the Integer and Long caches in the JDK are good examples of the flyweight pattern.
Other kinds of questions in Java interview
This section contains interview questions about XML in Java, regular expressions, Java errors and exceptions, and serialization.
125. What is the difference between nested static classes and top-level classes?
The source file of a public top-level class must have the same name as the class; this is not required for nested static classes. A nested class lives inside a top-level class, and you reference it through the name of the enclosing class: for example, HashMap.Entry is a nested static class, where HashMap is the top-level class and Entry is nested inside it.
126. Can you write a regular expression to judge whether a string is numeric?
A numeric string can contain only the digits 0 to 9, optionally preceded by + or -. With this information, you can use a regular expression like the following to judge whether a given string is numeric.
// First, import java.util.regex.Pattern and java.util.regex.Matcher
public boolean isNumeric(String str) {
    Pattern pattern = Pattern.compile("[-+]?[0-9]+");
    Matcher isNum = pattern.matcher(str);
    return isNum.matches();
}
127. What is the difference between checked exceptions and unchecked exceptions in Java?
Checked exceptions are verified by the compiler at compile time: a method must either handle them or declare them in a throws clause. They are the subclasses of Exception that are not subclasses of RuntimeException. Unchecked exceptions are subclasses of RuntimeException and are not checked by the compiler at compilation.
128. What is the difference between throw and throws in Java?
throw is used to throw an instance of java.lang.Throwable; that is, you can throw either an Error or an Exception with the throw keyword, for example: throw new IllegalArgumentException("size must be multiple of 2")
throws declares, as part of a method's signature, the exceptions the method may throw, so that callers can handle them. In Java, any unhandled checked exception must be declared in the throws clause.
129. What is the difference between Serializable and Externalizable in Java?
The Serializable interface marks a Java class as serializable, so that its instances can be sent over the network or persisted to disk. It uses the JVM's built-in default serialization, which is costly, fragile, and insecure. Externalizable lets you control the whole serialization process, specify a custom binary format, and add security mechanisms.
130. What are the differences between DOM and SAX parsers in Java?
A DOM parser loads the whole XML document into memory to build a DOM model tree, which makes finding nodes and modifying the XML structure faster. A SAX parser is event-based and does not load the whole document into memory. For this reason DOM is faster than SAX but requires more memory, and it is not suitable for parsing large XML files.
131. Name three new features in JDK 1.7?
Although JDK 1.7 is not as big a release as JDK 5 or 8, it still has many new features: the try-with-resources statement, so you no longer need to close streams or resources manually, Java closes them automatically; the fork/join framework, which brings a form of map-reduce parallelism to Java; strings in switch statements; the diamond operator (<>) for type inference, so you no longer need to repeat generic type arguments on the right-hand side of a declaration, making code more readable and concise; and improved exception handling, such as catching multiple exception types in a single catch block.
132. Name five new features introduced by JDK 1.8?
Java 8 is a landmark release in the history of Java. Five main features of JDK 8 are: lambda expressions, which allow anonymous functions to be passed around like objects; the Stream API, which makes full use of modern multi-core CPUs and allows very concise code; a new date and time API, finally a stable and simple date/time library in the JDK; extension methods, so interfaces can now contain static and default methods; and repeating annotations, so the same annotation can be applied multiple times to the same type.
133. What is the difference between Maven and Ant in Java?
Although both are build tools used to create Java applications, Maven does more: based on the concept of "convention over configuration", it provides a standard Java project layout and automatically manages dependencies (the JAR files an application relies on). See the answer for more differences between Maven and Ant.
That's all the interview questions; quite a lot, isn't it? I can guarantee that if you can answer every question in this list, you can easily handle any core Java or advanced Java interview. Although it does not cover Java EE technologies such as servlets, JSP, JSF, JPA, JMS, and EJB, mainstream frameworks such as Spring MVC, Struts 2.0, and Hibernate, or SOAP and RESTful web services, the list is still useful for Java developers and candidates for Java web development positions, because every Java interview begins with questions about Java fundamentals and the JDK API. If you think any popular Java question has been omitted from this list, feel free to give me suggestions. My goal is to create a list of the latest and best Java interview questions from recent interviews.
Spring interview questions (I)
1. General issues
1.1. What are the main functions of different versions of spring framework?
Version – Feature
Spring 2.5 – released in 2007; the first version to support annotations.
Spring 3.0 – released in 2009; takes full advantage of the improvements in Java 5 and provides support for JEE 6.
Spring 4.0 – released in 2013; the first version to fully support Java 8.
1.2. What is the spring framework?
Spring is an open source application framework designed to reduce the complexity of application development. It is lightweight and loosely coupled. It has a layered architecture, allows users to select the components they need, and provides a cohesive framework for J2EE application development. It can integrate other frameworks, such as Struts, Hibernate, EJB, etc., so it is also called a "framework of frameworks".
1.3. List the advantages of spring framework.
Due to the layered architecture of the Spring framework, users can freely choose the components they need. The Spring framework supports POJO (plain old Java object) programming, which enables continuous integration and testability. JDBC is simplified thanks to dependency injection and inversion of control. It is open source and free.
1.4 what are the different functions of the spring framework?
Lightweight – Spring is lightweight in terms of code size and transparency.
IOC – inversion of control.
AOP – aspect-oriented programming separates application business logic from system services to achieve high cohesion.
Container – Spring creates and manages the life cycle and configuration of objects (beans).
MVC – provides high configurability for web applications; integrating other frameworks is also very convenient.
Transaction management – provides a general abstraction layer for transaction management; Spring's transaction support can also be used in environments without containers.
JDBC exceptions – Spring's JDBC abstraction layer provides an exception hierarchy that simplifies error-handling strategies.
1.5 how many modules are there in the spring framework and what are they?
Spring core container – this layer is basically the core of the spring framework. It contains the following modules:
 Spring Core  Spring Bean  SpEL (Spring Expression Language)  Spring Context
Data access / integration – this layer provides support for interaction with the database. It contains the following modules:
 JDBC (Java DataBase Connectivity)  ORM (Object Relational Mapping)  OXM (Object XML Mappers)
 JMS (Java Messaging Service)  Transaction
Web – this layer provides support for creating web applications. It contains the following modules:
 Web  Web – Servlet  Web – Socket  Web – Portlet
AOP
 this layer supports aspect oriented programming
Instrumentation
 this layer supports class detection and class loader implementation.
Test
 this layer supports testing with JUnit and TestNG.
Several miscellaneous modules:
Messaging – this module supports STOMP. It also supports an annotation programming model used to route and process STOMP messages from WebSocket clients.
Aspects – this module supports integration with AspectJ.
1.6. What is a spring configuration file?
The Spring configuration file is an XML file. It mainly contains class information and describes how these classes are configured and introduced to each other. However, XML configuration files are verbose; without proper planning and writing, they become very difficult to manage in large projects.
1.7 what are the different components of spring applications?
Spring applications generally have the following components:
 interfaces – define functions.  bean class – it contains properties, setter and getter methods, functions, etc.  spring aspect oriented programming (AOP) – provides the function of aspect oriented programming.  bean configuration file – contains information about classes and how to configure them.  user program – it uses interfaces.
1.8. What are the ways to use spring?
Spring can be used in the following ways:
 as a mature spring web application.
 as a third-party web framework, using the Spring framework's middle tier.  for remote usage.  as an enterprise Java bean, wrapping existing POJOs (plain old Java objects).
2. Dependency injection (IOC)
2.1. What is a spring IOC container?
The core of the spring framework is the spring container. Containers create objects, assemble them together, configure them, and manage their full lifecycle. The spring container uses dependency injection to manage the components that make up the application. The container receives instructions for instantiation, configuration and assembly of objects by reading the provided configuration metadata. This metadata can be provided through XML, Java annotations, or Java code.
2.2. What is dependency injection?
In dependency injection, you do not have to create objects yourself, but you must describe how they should be created. Instead of connecting components and services directly in code, you describe in the configuration file which components need which services; the IOC container then assembles them together.
2.3. How many ways can dependency injection be completed?
Generally, dependency injection can be completed in three ways, namely:
 constructor injection  setter injection  interface injection
In the spring framework, only constructor and setter injection are used.
2.4. Distinguish between constructor injection and setter injection.
Constructor injection vs. setter injection:
 constructor injection does not allow partial injection; setter injection allows partial injection.
 constructor injection does not override setter properties; setter injection overrides them.
 with constructor injection, any modification creates a new instance; with setter injection it does not.
 constructor injection is suitable when many properties must be set; setter injection suits a few properties.
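The practical difference between the two styles can be sketched in plain Java, without Spring; all class names below are hypothetical and made up for the example:

```java
// Plain-Java sketch of the two injection styles (no Spring involved);
// all class names here are invented for illustration.
public class InjectionDemo {

    static class EmailService {
        String send(String msg) { return "sent: " + msg; }
    }

    // Constructor injection: the dependency is mandatory and the field can be final
    static class OrderServiceCtor {
        private final EmailService email;
        OrderServiceCtor(EmailService email) { this.email = email; }
        String place() { return email.send("order"); }
    }

    // Setter injection: the dependency can be set (or replaced) after construction
    static class OrderServiceSetter {
        private EmailService email;
        void setEmail(EmailService email) { this.email = email; }
        String place() { return email.send("order"); }
    }

    public static void main(String[] args) {
        EmailService email = new EmailService();

        OrderServiceCtor a = new OrderServiceCtor(email); // wired at construction time
        OrderServiceSetter b = new OrderServiceSetter();
        b.setEmail(email);                                // wired after construction

        System.out.println(a.place()); // prints sent: order
        System.out.println(b.place()); // prints sent: order
    }
}
```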
2.5. How many IOC containers are there in spring?
BeanFactory – BeanFactory is like a factory class containing a collection of beans. It instantiates a bean when requested by a client.
ApplicationContext – the ApplicationContext interface extends the BeanFactory interface and provides additional functionality on top of it.
2.6. Distinguish between beanfactory and ApplicationContext.
BeanFactory vs. ApplicationContext:
 BeanFactory uses lazy loading; ApplicationContext uses eager loading.
 BeanFactory explicitly provides a resource object using the syntax; ApplicationContext creates and manages resource objects on its own.
 BeanFactory does not support internationalization; ApplicationContext does.
 BeanFactory does not support annotation-based dependency injection; ApplicationContext does.
2.7 list some benefits of IOC.
Some of the benefits of IOC are:
 it will minimize the amount of code in the application.  it will make your application easy to test because it does not require any singleton or JNDI lookup mechanism in unit test cases.  it promotes loose coupling with minimal impact and minimal intrusion mechanism.  it supports instant instantiation and deferred loading services.
2.8. Implementation mechanism of spring IOC.
The implementation principle of IOC in Spring is the factory pattern plus the reflection mechanism.
Example:
interface Fruit {
    void eat();
}

class Apple implements Fruit {
    public void eat() {
        System.out.println("Apple");
    }
}

class Orange implements Fruit {
    public void eat() {
        System.out.println("Orange");
    }
}

class Factory {
    public static Fruit getInstance(String className) {
        Fruit f = null;
        try {
            f = (Fruit) Class.forName(className).newInstance();
        } catch (Exception e) {
            e.printStackTrace();
        }
        return f;
    }
}

class Client {
    public static void main(String[] a) {
        Fruit f = Factory.getInstance("Apple");
        if (f != null) {
            f.eat();
        }
    }
}
3、Beans
3.1. What is spring bean?
 they are the objects that form the backbone of the user application.  beans are managed by the spring IOC container.  they are instantiated, configured, assembled and managed by the spring IOC container.  beans are created based on the configuration metadata provided to the container by the user.
3.2 what configuration methods does spring provide?
XML based configuration
The dependencies and services required by the bean are specified in a configuration file in XML format. These configuration files typically contain many bean definitions and application specific configuration options. They usually begin with a bean tag. For example:
<bean id="studentbean" class="org.edureka.firstSpring.StudentBean">
    <property name="name" value="Edureka"></property>
</bean>
Annotation based configuration
Instead of using XML to describe bean wiring, you can move the configuration into the component class itself by using annotations on the relevant class, method, or field declarations. By default, annotation wiring is not turned on in the Spring container, so you need to enable it in the Spring configuration file before using it. For example:
<beans>
    <context:annotation-config/>
    <!-- bean definitions go here -->
</beans>
Java API based configuration
Spring's Java configuration is implemented by using @Bean and @Configuration.
1. The @Bean annotation plays the same role as the <bean/> element.
2. A @Configuration class allows you to define inter-bean dependencies by simply calling other @Bean methods in the same class.
For example:
@Configuration
public class StudentConfig {
    @Bean
    public StudentBean myStudent() {
        return new StudentBean();
    }
}
3.3. Which bean scopes does Spring support?
Spring beans support five kinds of scopes:
Singleton – only one instance per Spring IOC container.
Prototype – a new instance for every request.
Request – a new instance for every HTTP request; the bean is valid only within the current HTTP request.
Session – a new bean for every HTTP session; the bean is valid only within the current HTTP session.
Global-session – similar to the standard HTTP session scope, but it only makes sense in portlet-based web applications. The portlet specification defines the concept of a global session shared by all the different portlets that make up a portlet web application. Beans defined in the global-session scope are bound to the life cycle of the global portlet session. In a regular (non-portlet) web application, the global-session scope simply behaves like the session scope.
The last three are available only when a web-aware ApplicationContext is used.
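A minimal XML sketch of how these scopes are declared; the bean ids and class names below are made up for illustration:

```xml
<!-- Hypothetical bean definitions illustrating the scope attribute -->
<bean id="singletonBean" class="com.example.CounterBean" scope="singleton"/>
<bean id="prototypeBean" class="com.example.CounterBean" scope="prototype"/>
<!-- In a web-aware context, request/session scopes are also available -->
<bean id="requestBean" class="com.example.CounterBean" scope="request"/>
```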
3.4 what is the life cycle of spring bean container?
The life cycle process of spring bean container is as follows:
1. The Spring container instantiates the bean according to the bean definition in the configuration.
2. Spring populates all properties using dependency injection, as defined in the bean configuration.
3. If the bean implements the BeanNameAware interface, the factory calls setBeanName(), passing the bean's ID.
4. If the bean implements the BeanFactoryAware interface, the factory calls setBeanFactory(), passing an instance of itself.
5. If there are any BeanPostProcessors associated with the bean, their postProcessBeforeInitialization() methods are called.
6. If an init method is specified for the bean (the <bean> init-method attribute), it is called.
7. Finally, if there are any BeanPostProcessors associated with the bean, their postProcessAfterInitialization() methods are called.
8. If the bean implements the DisposableBean interface, destroy() is called when the Spring container is closed.
9. If a destroy method is specified for the bean (the <bean> destroy-method attribute), it is called.
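The ordering of these callbacks can be simulated in plain Java. The sketch below is not Spring; it is a toy "container" whose interface names mirror Spring's so the sequence is easy to follow:

```java
import java.util.ArrayList;
import java.util.List;

// A toy simulation of the callback order described above. The interface and
// method names mirror Spring's, but this is NOT Spring -- just a plain-Java
// sketch of the sequence a real container follows.
public class LifecycleDemo {

    interface BeanNameAware { void setBeanName(String name); }
    interface InitializingBean { void afterPropertiesSet(); }
    interface DisposableBean { void destroy(); }

    static class DemoBean implements BeanNameAware, InitializingBean, DisposableBean {
        final List<String> calls = new ArrayList<>();
        public void setBeanName(String name) { calls.add("setBeanName:" + name); }
        public void afterPropertiesSet()     { calls.add("init"); }
        public void destroy()                { calls.add("destroy"); }
    }

    // Plays the role of the container: instantiate, run the aware callback,
    // run the init callback, then the destruction callback on "shutdown".
    static List<String> runContainer() {
        DemoBean bean = new DemoBean();   // instantiate the bean
        bean.setBeanName("demoBean");     // aware callback
        bean.afterPropertiesSet();        // init callback
        bean.destroy();                   // destruction callback at shutdown
        return bean.calls;
    }

    public static void main(String[] args) {
        System.out.println(runContainer()); // prints [setBeanName:demoBean, init, destroy]
    }
}
```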
3.5. What is the internal bean of spring?
A bean can be declared as an inner bean only when it is used as a property of another bean. For defining such beans, Spring's XML-based configuration metadata allows a <bean> element inside <property> or <constructor-arg>. Inner beans are always anonymous, and they are always created as prototypes.
For example, suppose we have a student class that references the person class. Here we will only create an instance of the person class and use it in student.
Student.java
public class Student {
    private Person person;
    // Setters and Getters
}

public class Person {
    private String name;
    private String address;
    // Setters and Getters
}
bean.xml
<bean id="StudentBean" class="com.edureka.Student">
    <property name="person">
        <!-- This is an inner bean -->
        <bean class="com.edureka.Person">
            <property name="name" value="Scott"></property>
            <property name="address" value="Bangalore"></property>
        </bean>
    </property>
</bean>
3.6 what is spring assembly?
When beans are grouped together in a spring container, it is called an assembly or bean assembly. The spring container needs to know what beans are needed and how the container should use dependency injection to bind beans together and assemble beans at the same time.
3.7 what are the methods of automatic assembly?
The spring container can assemble beans automatically. In other words, spring can automatically parse bean collaborators by checking the contents of beanfactory.
Different modes of automatic assembly:
no – the default; there is no autowiring, and explicit bean references should be used for wiring.
byName – injects object dependencies according to the bean name: Spring matches and wires properties whose names are the same as a bean id defined in the XML file.
byType – injects object dependencies according to type: if a property's type matches exactly one bean in the XML file, the property is wired to that bean.
constructor – injects dependencies by calling the constructor of the class; analogous to byType, but applied to constructor arguments.
autodetect – the container first tries constructor autowiring, and if that does not work, tries byType autowiring.
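A short XML sketch of the byName and byType modes; the bean ids and classes below are invented for the example:

```xml
<!-- Hypothetical beans showing byName vs. byType autowiring -->
<bean id="dataSource" class="com.example.SimpleDataSource"/>

<!-- byName: Spring looks for a bean whose id matches the property name "dataSource" -->
<bean id="repoA" class="com.example.UserRepository" autowire="byName"/>

<!-- byType: Spring looks for a single bean whose type matches the property type -->
<bean id="repoB" class="com.example.UserRepository" autowire="byType"/>
```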
3.8 what are the limitations of automatic assembly?
Possibility of overriding – you can still specify dependencies using <constructor-arg> and <property> settings, which will override autowiring.
Primitive data types – simple properties (such as primitives, Strings, and Classes) cannot be autowired.
Confusing nature – explicit wiring is always preferable, because autowiring is less exact.
4. Annotation
4.1. What is annotation based container configuration
Instead of using XML to describe bean assembly, developers move the configuration to the component class itself by using annotations on the relevant class, method, or field declarations. It can be used as an alternative to XML settings. For example:
Spring's Java configuration is implemented by using @Bean and @Configuration.
The @Bean annotation plays the same role as the <bean/> element. A @Configuration class allows you to define inter-bean dependencies by simply calling other @Bean methods in the same class.
For example:
@Configuration
public class StudentConfig {
    @Bean
    public StudentBean myStudent() {
        return new StudentBean();
    }
}
4.2. How to start annotation assembly in spring?
By default, annotation wiring is not turned on in the Spring container. So, to use annotation-based wiring, we must enable it in the Spring configuration file by configuring the <context:annotation-config/> element.
4.3. What is the difference between @Component, @Controller, @Repository and @Service?
@Component: marks a Java class as a bean. It is a generic stereotype for any Spring-managed component. Spring's component-scanning mechanism can pick it up and pull it into the application context.
@Controller: marks a class as a Spring Web MVC controller. Beans marked with it are automatically imported into the IOC container.
@Service: this annotation is a specialization of @Component. It does not add any behavior beyond @Component, but you should use @Service instead of @Component in service-layer classes because it expresses the intent better.
@Repository: this annotation is a specialization of @Component with similar use and functionality. It provides additional benefits for DAOs: it imports the DAO into the IOC container and makes unchecked (platform-specific) exceptions eligible for translation into Spring's DataAccessException.
4.4 what is the use of the @Required annotation?
@Required applies to bean property setter methods. This annotation indicates that the affected bean property must be populated at configuration time, either with an explicit property value in the bean definition or through autowiring. If the affected bean property has not been populated, the container throws a BeanInitializationException.
Example:
public class Employee {
    private String name;

    @Required
    public void setName(String name) {
        this.name = name;
    }

    public String getName() {
        return name;
    }
}
4.5 what's the use of the @Autowired annotation?
@Autowired gives more precise control over where and how autowiring should be done. This annotation autowires beans on setter methods, constructors, or properties/methods with arbitrary names and multiple arguments. By default, the injection is type-driven.
public class Employee {
    private String name;

    @Autowired
    public void setName(String name) {
        this.name = name;
    }

    public String getName() {
        return name;
    }
}
4.6 what is the use of the @Qualifier annotation?
When you create multiple beans of the same type and want to wire only one of them into a property, you can use @Qualifier together with @Autowired to disambiguate, specifying exactly which bean should be wired.
For example, here we have two classes, Employee and EmpAccount. In EmpAccount, @Qualifier is used to specify that the bean with id emp1 must be wired in.
Employee.java
public class Employee {
    private String name;

    @Autowired
    public void setName(String name) {
        this.name = name;
    }

    public String getName() {
        return name;
    }
}
EmpAccount.java
public class EmpAccount {
    private Employee emp;

    @Autowired
    @Qualifier("emp1")
    public void setEmployee(Employee emp) {
        this.emp = emp;
    }

    public void showName() {
        System.out.println("Employee name : " + emp.getName());
    }
}
4.7 what is the use of the @RequestMapping annotation?
The @RequestMapping annotation is used to map a specific HTTP request to a specific class/method in the controller that will handle the request. The annotation can be applied at two levels:
Class level: maps the URL of the request. Method level: maps the URL as well as the HTTP request method.
5. Data access
5.1 what is the use of spring Dao?
Spring DAO makes data access technologies such as JDBC, Hibernate, or JDO easier to work with in a consistent way. This makes it easy to switch between persistence technologies, and it lets you write code without worrying about catching exceptions that differ for each technology.
5.2. List the exceptions thrown by spring Dao.
5.3 what classes exist in spring JDBC API?
 JdbcTemplate  SimpleJdbcTemplate  NamedParameterJdbcTemplate  SimpleJdbcInsert  SimpleJdbcCall
5.4. What are the methods to access hibernate using spring?
We can use spring to access hibernate in two ways:
1. Use HibernateTemplate and callbacks for inversion of control. 2. Extend HibernateDaoSupport and apply an AOP interceptor node.
5.5. List the transaction management types supported by spring
Spring supports two types of transaction management:
1. Programmatic transaction management: transactions are managed with the help of program code. This gives you great flexibility but is difficult to maintain. 2. Declarative transaction management: transaction management is separated from the business code; only annotations or XML-based configuration is used to manage transactions.
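A sketch of the declarative style in XML; DataSourceTransactionManager is a real Spring class, but the surrounding bean ids and the dataSource bean are assumed to be defined elsewhere in the configuration:

```xml
<!-- Sketch of declarative transaction management (the dataSource bean is assumed) -->
<bean id="txManager"
      class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
    <property name="dataSource" ref="dataSource"/>
</bean>

<!-- Enables @Transactional on beans; Spring then wraps their methods in transactions -->
<tx:annotation-driven transaction-manager="txManager"/>
```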
5.6 what ORM frameworks does spring support
 Hibernate  iBatis  JPA  JDO  OJB
6、AOP
6.1 what is AOP?
AOP (aspect-oriented programming) complements OOP (object-oriented programming) and provides a view of software structure different from OOP's abstraction. In OOP, the basic unit is the class; in AOP, the basic unit is the aspect.
6.2. What is aspect?
An aspect consists of pointcuts and advice. It contains both the definition of the cross-cutting logic and the definition of the join points. Spring AOP is the framework responsible for implementing aspects: it weaves the cross-cutting logic defined by an aspect into the join points the aspect specifies. The focus of AOP is how to enhance the join points of the woven target objects, which involves two tasks:
1. How to locate a specific join point through pointcut and advice 2. How to write aspect code in advice
You can simply think of a class annotated with @Aspect as an aspect.
6.3. What is joinpoint
A join point is some point during the execution of a program, such as the execution of a method or the handling of an exception.
In spring AOP, the join point is always the execution point of the method.
6.4 what is advice?
The action taken by an aspect at a particular join point is called advice. Spring AOP uses advice as interceptors, maintaining a chain of interceptors "around" the join point.
6.5 what types of advice are there?
 Before – executed before the join-point method; configured with the @Before annotation.
 After returning – executed after the join-point method completes normally; configured with the @AfterReturning annotation.
 After throwing – executed only when the join-point method exits by throwing an exception; configured with the @AfterThrowing annotation.
 After (finally) – executed after the join-point method, whether it exits normally or with an exception; configured with the @After annotation.
 Around – executed both before and after the join point; configured with the @Around annotation.
6.6. Explain the difference between concern and cross-cutting concern in Spring AOP.
Concern is the behavior we want to define in a specific module of the application. It can be defined as the function we want to achieve.
A cross-cutting concern is behavior that applies across the whole application and affects the entire application. For example, logging, security, and data transfer are concerns needed by almost every module of an application; hence they are cross-cutting concerns.
6.7 what are the implementation methods of AOP?
The technologies for realizing AOP are mainly divided into two categories:
Static proxy
It refers to compiling with the commands provided by the AOP framework, so that AOP proxy classes can be generated at the compilation stage. Therefore, it is also called compile time enhancement;
 compile time weaving (special Compiler Implementation)  class load time weaving (special class loader Implementation).
Dynamic proxy
AOP dynamic proxy classes are generated “temporarily” in memory at runtime, so it is also called runtime enhancement.
 JDK dynamic proxy  CGLIB
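The JDK dynamic proxy approach can be shown in plain Java; the Greeting interface and the log list below are invented for the example, with the InvocationHandler standing in for "around" advice:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.ArrayList;
import java.util.List;

public class JdkProxyDemo {

    interface Greeting {
        String sayHello(String name);
    }

    static class GreetingImpl implements Greeting {
        public String sayHello(String name) { return "hello " + name; }
    }

    // Wraps the target in a proxy class generated in memory at runtime; the
    // InvocationHandler plays the role of "around" advice.
    static Greeting wrap(final Greeting target, final List<String> log) {
        return (Greeting) Proxy.newProxyInstance(
                Greeting.class.getClassLoader(),
                new Class<?>[]{Greeting.class},
                new InvocationHandler() {
                    public Object invoke(Object proxy, Method m, Object[] args) throws Throwable {
                        log.add("before " + m.getName());       // advice before the call
                        Object result = m.invoke(target, args); // the real method
                        log.add("after " + m.getName());        // advice after the call
                        return result;
                    }
                });
    }

    public static void main(String[] args) {
        List<String> log = new ArrayList<>();
        Greeting proxy = wrap(new GreetingImpl(), log);
        System.out.println(proxy.sayHello("spring")); // prints hello spring
        System.out.println(log);                      // prints [before sayHello, after sayHello]
    }
}
```

Because the proxy is built against an interface at runtime, this technique works only for method-level interception, which is why Spring AOP supports only method-level pointcuts.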
6.8 what is the difference between spring AOP and AspectJ AOP?
Spring AOP is implemented with dynamic proxies, while AspectJ uses static weaving (compile-time enhancement). Spring AOP supports pointcuts only at the method level, while AspectJ provides full AOP support, including field-level pointcuts.
6.9. How to understand proxy in spring?
The object created after applying advice to a target object is called a proxy. From the client's point of view, the target object and the proxy object are the same.
Advice + Target Object = Proxy
6.10 what is weaving?
Linking an aspect with other application types or objects in order to create an advised object is called weaving. In Spring AOP, weaving is performed at runtime.
7、MVC
7.1 what is the use of spring MVC framework?
Spring web MVC framework provides model view controller architecture and ready to use components for developing flexible and loosely coupled web applications. The MVC pattern helps to separate different aspects of the application, such as input logic, business logic, and UI logic, while providing loose coupling between all these elements.
7.2. Describe the workflow of dispatcher servlet
The workflow of DispatcherServlet is as follows:
1. The client sends an HTTP request to the server, which is captured by the front controller, DispatcherServlet.

2. DispatcherServlet parses the requested URL against the configuration in -servlet.xml to obtain the request's resource identifier (URI). It then calls HandlerMapping to obtain all objects configured for that handler (the handler object plus its associated interceptors), which are returned as a HandlerExecutionChain.

3. DispatcherServlet selects an appropriate HandlerAdapter for the obtained handler. (Note: once the HandlerAdapter is obtained, the interceptors' preHandle(...) methods are executed.)

4. The model data in the request is extracted and used to fill the handler's input parameters, and the handler (controller) starts to execute. While filling the input parameters, Spring does some extra work for you according to your configuration:
 HttpMessageConverter – converts the request message (JSON, XML, and other data) into an object, and converts objects into the specified response messages.
 Data conversion – converts data in the request message, for example String to Integer or Double.
 Data formatting – formats data in the request message, for example converting a string into a formatted number or a formatted date.
 Data validation – validates data (length, format, etc.) and stores the results in BindingResult or Errors.

5. After the handler (controller) finishes, it returns a ModelAndView object to DispatcherServlet.

6. Based on the returned ModelAndView, DispatcherServlet selects a suitable ViewResolver (one registered in the Spring container).

7. The ViewResolver combines the model and the view to render the view.

8. The view returns the rendered result to the client.
7.3. Introduce webapplicationcontext
WebApplicationContext is an extension of ApplicationContext with some extra features that web applications require. It differs from a plain ApplicationContext in its ability to resolve themes and to determine which servlet it is associated with.
English original link: https://www.edureka.co/blog/i… ons/
Spring interview questions (2)
1. What is spring?
Spring is an open source development framework for Java enterprise applications. Spring is mainly used to develop java applications, but some extensions are aimed at building web applications on J2EE platform. The goal of spring framework is to simplify Java enterprise application development and promote good programming habits through POJO based programming model.
2. What are the benefits of using the spring framework?
 lightweight: Spring is lightweight; the basic version is about 2MB.
 inversion of control: Spring achieves loose coupling through inversion of control; objects are given their dependencies instead of creating or looking up dependent objects themselves.
 aspect-oriented programming (AOP): Spring supports aspect-oriented programming and separates application business logic from system services.
 container: Spring contains and manages the life cycle and configuration of application objects.
 MVC framework: Spring's web framework is a well-designed framework and a good alternative to other web frameworks.
 transaction management: Spring provides a consistent transaction management interface that can scale from local transactions to global transactions (JTA).
 exception handling: Spring provides a convenient API to translate technology-specific exceptions (such as those thrown by JDBC, Hibernate, or JDO) into consistent unchecked exceptions.
3. What modules does spring consist of?
The following are the basic modules of the spring framework:
 Core module  Bean module  Context module  Expression Language module  JDBC module  ORM module  OXM module  Java Messaging Service(JMS) module  Transaction module  Web module  Web-Servlet module  Web-Struts module  Web-Portlet module
4. Core container (application context) module.
This is the basic Spring module that provides the fundamental functionality of the Spring framework. BeanFactory is the heart of any Spring-based application. The Spring framework is built on top of this module, which makes Spring a container.
5. BeanFactory – give an example of a BeanFactory implementation.
A bean factory is an implementation of the factory pattern that applies inversion of control to separate the application's configuration and dependencies from the actual application code.
The most commonly used BeanFactory implementation is the XmlBeanFactory class.
6. XmlBeanFactory
The most commonly used is org.springframework.beans.factory.xml.XmlBeanFactory, which loads its beans from the definitions in an XML file. The container reads the configuration metadata from the XML file and uses it to create a fully configured system or application.
7. Explain AOP module
The AOP module is used to develop aspects for our Spring-enabled applications. Much of the support is provided by the AOP Alliance, which ensures interoperability between Spring and other AOP frameworks. This module also introduces metadata programming to Spring.
8. Explain JDBC abstraction and Dao modules.
By using the JDBC abstraction and DAO module, we can keep database code clean and simple and avoid problems caused by failing to close database resources. It provides a layer on top of the error messages of various databases that unifies exception handling. It also uses Spring's AOP module to provide transaction management services for objects in a Spring application.
9. Explain the object / relational mapping integration module.
Spring lets us use an object/relational mapping (ORM) tool on top of plain JDBC by providing the ORM module. Spring supports integration with the mainstream ORM frameworks, such as Hibernate, JDO, and iBATIS SQL Maps. Spring's transaction management supports all of these ORM frameworks as well as JDBC.
10. Explain the web module.
Spring’s web module is built on the application context module to provide a context suitable for web applications. This module also supports a variety of Web-oriented tasks, such as transparently processing multiple file upload requests and binding program level request parameters to your business objects. It also has support for Jakarta struts.
12. Spring configuration file
The spring configuration file is an XML file that contains class information describing how to configure them and how to call each other.
13. What is a spring IOC container?
Spring IOC is responsible for creating objects, managing them (through dependency injection (DI)), assembling them, configuring them, and managing their complete life cycle.
14. What are the advantages of IOC?
IOC, or dependency injection, minimizes the amount of code in an application. It makes applications easy to test, since unit testing no longer needs singletons or JNDI lookup mechanisms. Loose coupling is achieved with minimal effort and minimal intrusiveness. The IOC container supports both eager initialization and lazy loading of services.
15. What is the usual implementation of ApplicationContext?
- FileSystemXmlApplicationContext: this container loads bean definitions from an XML file. The full path of the XML bean configuration file must be provided to its constructor.
- ClassPathXmlApplicationContext: this container also loads bean definitions from an XML file. Here, you need to set the classpath correctly, because this container looks for the bean configuration on the classpath.
- XmlWebApplicationContext: this container loads an XML file that defines all the beans of a web application.
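As a sketch, the kind of XML bean definition file these containers load looks like this (the bean class and file name are hypothetical):

```xml
<!-- beans.xml: a minimal, hypothetical bean definition file -->
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans.xsd">

    <!-- the container instantiates and configures this bean from the definition -->
    <bean id="userService" class="com.example.UserService"/>
</beans>
```

A ClassPathXmlApplicationContext would pick this file up from the classpath via new ClassPathXmlApplicationContext("beans.xml"), while FileSystemXmlApplicationContext needs the full file path.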
16. What is the difference between bean factories and application contexts?
ApplicationContext provides a way to resolve text messages and a generic way to load file resources (such as images), and it can publish events to beans registered as listeners. In addition, operations on the container, or on objects inside the container, that would have to be handled programmatically with a bean factory can be handled declaratively in an application context. ApplicationContext implements the MessageSource interface, which provides a pluggable way to obtain localized messages.
17. What does a spring application look like?
- An interface that defines some functionality.
- An implementation that includes properties, their setter and getter methods, the functionality itself, and so on.
- Spring AOP.
- Spring's XML configuration file.
- Client programs that use the above functionality.
Dependency injection
18. What is spring dependency injection?
Dependency injection, an aspect of IOC, is a common concept. It has many explanations. The concept is that you don’t have to create an object, you just need to describe how it is created. You do not directly assemble your components and services in the code, but you should describe which components need which services in the configuration file, and then a container (IOC container) is responsible for assembling them.
19. What are the different types of IOC (dependency injection)?
- Constructor dependency injection: implemented by the container invoking a constructor of the class that has a series of parameters, each parameter representing a dependency on another class.
- Setter method injection: the container calls the bean's setter methods after instantiating it through a no-argument constructor or a no-argument static factory method; that is, setter-based dependency injection.
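To make the two styles concrete, here is a container-free Java sketch that performs by hand the wiring an IOC container derives from configuration; all class names are hypothetical:

```java
// A container-free sketch of the two injection styles; class names are hypothetical.
class MessageService {
    private final String text;
    MessageService(String text) { this.text = text; }
    String send() { return text; }
}

// Constructor injection: the dependency is a constructor parameter (mandatory).
class ConstructorClient {
    private final MessageService service;
    ConstructorClient(MessageService service) { this.service = service; }
    String run() { return service.send(); }
}

// Setter injection: the object is created first, then the dependency is set (optional).
class SetterClient {
    private MessageService service;
    void setService(MessageService service) { this.service = service; }
    String run() { return service.send(); }
}

public class DiSketch {
    // Performs by hand the wiring an IOC container does from configuration.
    public static String wireBoth() {
        MessageService svc = new MessageService("hello");
        ConstructorClient c = new ConstructorClient(svc); // constructor injection
        SetterClient s = new SetterClient();
        s.setService(svc);                                // setter injection
        return c.run() + "/" + s.run();
    }

    public static void main(String[] args) {
        System.out.println(wireBoth());
    }
}
```

The constructor-injected dependency cannot be left unset, which is exactly why it suits mandatory dependencies.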
20. Which dependency injection method do you recommend, constructor injection or setter method injection?
You can use both dependency methods, constructor injection and setter method injection. The best solution is to implement mandatory dependencies with constructor parameters and optional dependencies with setter methods.
Spring Beans
21. What is spring beans?
Spring beans are Java objects that form the backbone of spring applications. They are initialized, assembled, and managed by the spring IOC container. These beans are created from the metadata configured in the container. For example, it is defined in the form of an XML file.
There is a "singleton" attribute in the bean tag: if it is set to true, the bean is a singleton, otherwise it is a prototype bean. The default is true, so all beans in the Spring framework are singletons by default.
22. What does a spring bean definition contain?
The definition of a spring bean contains all the configuration metadata that the container must know, including how to create a bean, its life cycle details and its dependencies.
23. How to provide configuration metadata for spring container?
There are three important ways to provide configuration metadata to the spring container.
XML configuration file.
Annotation based configuration.
Java based configuration.
24. How do you define the scope of a bean?
When defining a bean in Spring, we can also declare a scope for it. It is set through the scope attribute in the bean definition. For example, when Spring should produce a new bean instance each time one is needed, the bean's scope attribute is specified as prototype. On the other hand, when the same instance of a bean must be returned every time it is needed, the bean's scope attribute must be set to singleton.
25. Explain the scope of several beans supported by spring.
The spring framework supports the following five bean scopes:
- singleton: the bean has only one instance per Spring IOC container.
- prototype: a bean definition can have multiple instances.
- request: a bean is created for each HTTP request. This scope is only valid in a web-aware Spring ApplicationContext.
- session: a bean definition corresponds to one instance per HTTP session. This scope is only valid in a web-aware Spring ApplicationContext.
- global session: a bean definition corresponds to one instance per global HTTP session. This scope is only valid in a web-aware Spring ApplicationContext.
The default scope of a Spring bean is singleton.
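A minimal configuration sketch of the scope attribute (bean classes are hypothetical):

```xml
<!-- hypothetical bean definitions showing the scope attribute -->
<!-- one shared instance per container (the default) -->
<bean id="configHolder" class="com.example.ConfigHolder" scope="singleton"/>

<!-- a new instance is created for every lookup/injection -->
<bean id="shoppingCart" class="com.example.ShoppingCart" scope="prototype"/>
```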
26. Is the singleton bean in the spring framework thread safe?
No, singleton beans in the spring framework are not thread safe.
27. Explain the life cycle of beans in the spring framework.
- The Spring container reads the bean definitions from the XML file and instantiates the beans.
- Spring populates all properties according to the bean definition.
- If the bean implements the BeanNameAware interface, Spring passes the bean's ID to the setBeanName method.
- If the bean implements the BeanFactoryAware interface, Spring passes the BeanFactory to the setBeanFactory method.
- If there are any BeanPostProcessors associated with the bean, Spring calls their postProcessBeforeInitialization() methods.
- If the bean implements InitializingBean, its afterPropertiesSet method is called. If the bean declares an initialization method, that method is called.
- If BeanPostProcessors are associated with the bean, their postProcessAfterInitialization() methods are called.
- If the bean implements DisposableBean, its destroy() method is called.
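The callback order above can be sketched with plain Java. The interfaces below are hand-rolled stand-ins for Spring's BeanNameAware, InitializingBean, and DisposableBean, not the real types; the point is only the order in which the container drives the callbacks:

```java
import java.util.ArrayList;
import java.util.List;

// Hand-rolled stand-ins for Spring's callback interfaces; NOT the real Spring types.
interface NameAware { void setBeanName(String name); }
interface Initializing { void afterPropertiesSet(); }
interface Disposable { void destroy(); }

class TracedBean implements NameAware, Initializing, Disposable {
    final List<String> trace = new ArrayList<>();
    public void setBeanName(String name) { trace.add("setBeanName:" + name); }
    public void afterPropertiesSet() { trace.add("afterPropertiesSet"); }
    public void destroy() { trace.add("destroy"); }
}

public class LifecycleSketch {
    // Replays the callback order described above on one bean instance.
    public static List<String> run() {
        TracedBean bean = new TracedBean();      // 1. instantiate
        bean.trace.add("populateProperties");    // 2. populate properties
        bean.setBeanName("tracedBean");          // 3. *Aware callbacks
        bean.afterPropertiesSet();               // 4. initialization callback
        bean.destroy();                          // 5. destruction callback on shutdown
        return bean.trace;
    }
    public static void main(String[] args) { System.out.println(run()); }
}
```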
28. What are the important bean life cycle methods? Can you override them?
There are two important bean life cycle methods. The first is setup, which is called when the container loads beans. The second method is teardown, which is called when the container unloads the class.
The bean tag has two important attributes (init-method and destroy-method) with which you can define your own initialization and destruction methods. They also have corresponding annotations (@PostConstruct and @PreDestroy).
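A minimal configuration sketch (the bean class and method names are hypothetical):

```xml
<!-- custom lifecycle callbacks declared in XML -->
<bean id="connectionPool" class="com.example.ConnectionPool"
      init-method="setup" destroy-method="teardown"/>
```

The annotation equivalents would be @PostConstruct on setup() and @PreDestroy on teardown().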
29. What is an inner bean in Spring?
When a bean is only used as a property of another bean, it can be declared as an inner bean. To define an inner bean, a <bean/> element can be placed inside a <property/> or <constructor-arg/> element in Spring's XML-based configuration metadata. Inner beans are usually anonymous, and their scope is generally prototype.
30. How to inject a Java collection in spring?
Spring provides configuration elements for the following Collections:
- The <list> type is used to inject a list of values; duplicate values are allowed.
- The <set> type is used to inject a set of values; duplicate values are not allowed.
- The <map> type is used to inject a set of key-value pairs; keys and values can be of any type.
- The <props> type is used to inject a set of key-value pairs; both keys and values can only be of type String.
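A configuration sketch showing all four elements (the bean class and property names are hypothetical):

```xml
<!-- hypothetical bean showing the four collection elements -->
<bean id="collectionDemo" class="com.example.CollectionDemo">
    <property name="servers">
        <list><value>s1</value><value>s1</value></list>   <!-- duplicates allowed -->
    </property>
    <property name="codes">
        <set><value>a</value><value>b</value></set>       <!-- duplicates dropped -->
    </property>
    <property name="limits">
        <map><entry key="max" value="10"/></map>          <!-- any key/value types -->
    </property>
    <property name="settings">
        <props><prop key="mode">fast</prop></props>       <!-- String keys/values -->
    </property>
</bean>
```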
31. What is bean wiring?
Wiring, or bean wiring, refers to assembling beans together within the Spring container. The prerequisite is that the container knows the beans' dependencies and how to put them together through dependency injection.
32. What is autowiring of beans?
The Spring container can automatically wire beans that collaborate with each other, which means the container does not need <constructor-arg> and <property> configuration and can automatically handle the collaboration between beans through the bean factory.
33. Explain the different autowiring modes.
There are five autowiring modes that can be used to instruct the Spring container to perform dependency injection through autowiring.
- no: the default mode; no autowiring is performed, and wiring is done by explicitly setting the ref attribute.
- byName: autowiring by property name. The Spring container sees that the bean's autowire attribute is set to byName in the configuration file, and then tries to match and wire beans whose names are the same as the bean's property names.
- byType: autowiring by property type. The Spring container sees that the bean's autowire attribute is set to byType in the configuration file, and then tries to match and wire a bean whose type is the same as the bean's property type. If more than one bean matches, an error is thrown.
- constructor: similar to byType, but applied to constructor arguments. If no bean exactly matches the constructor parameter types, an exception is thrown.
- autodetect: first try constructor autowiring; if that does not work, fall back to byType.
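A configuration sketch of the byName and byType modes (bean classes are hypothetical):

```xml
<!-- hypothetical definitions: the container matches collaborators automatically -->
<bean id="accountRepository" class="com.example.AccountRepository"/>

<!-- byName: a property named "accountRepository" is matched to the bean id above -->
<bean id="accountService" class="com.example.AccountService" autowire="byName"/>

<!-- byType: any property of type AccountRepository is matched by its type -->
<bean id="auditService" class="com.example.AuditService" autowire="byType"/>
```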
34. What are the limitations of autowiring?
The limitations of autowiring are:
- Overriding: you can still specify dependencies with <constructor-arg> and <property> settings, and they always override autowiring.
- Primitive types: you cannot autowire simple properties, such as primitives, Strings, and Classes.
- Imprecision: autowiring is not as precise as explicit wiring. If possible, explicit wiring is recommended.
35. Can you inject a null and an empty string into spring?
Yes, you can.
Spring annotation
36. What is java based spring annotation configuration? Give some annotated examples
Java-based configuration allows you to perform most of your Spring configuration with the help of a small number of Java annotations rather than through XML files.
Take the @Configuration annotation as an example: it marks a class that the Spring IOC container can use as a source of bean definitions. Another example is the @Bean annotation, which indicates that the method returns an object that should be registered as a bean in the Spring application context.
37. What is annotation based container configuration?
Compared with XML files, annotation-based configuration relies on bytecode metadata to wire components instead of angle-bracket declarations.
Developers can directly configure in component classes by using annotations on corresponding classes, methods or attributes, rather than using XML to express the assembly relationship of beans.
38. How do you turn on annotation wiring?
Annotation wiring is not enabled by default. To use annotation-based wiring, you must configure the <context:annotation-config/> element in the Spring configuration file.
39. @Required annotation
This annotation indicates that the bean property must be set at configuration time, either through an explicit property value in the bean definition or through autowiring. If a @Required-annotated bean property has not been set, the container throws a BeanInitializationException.
40. @Autowired annotation
The @Autowired annotation provides finer-grained control over where and how autowiring is accomplished. Like @Required, it can be applied to setter methods, constructors, and properties, as well as to methods with arbitrary names and/or multiple parameters.
41. @Qualifier annotation
When there are multiple beans of the same type but only one needs to be wired, the @Qualifier annotation is used together with the @Autowired annotation to remove the ambiguity and specify the exact bean to be wired.
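A wiring sketch, assuming two hypothetical Repository implementations are registered; it is not runnable without the Spring container on the classpath:

```java
// Hypothetical types; requires the Spring container to actually run.
interface Repository { }

@Component("mysqlRepository")
class MysqlRepository implements Repository { }

@Component("mongoRepository")
class MongoRepository implements Repository { }

@Component
class ReportService {
    private final Repository repository;

    // Two Repository beans exist; @Qualifier picks the exact one to inject.
    @Autowired
    public ReportService(@Qualifier("mysqlRepository") Repository repository) {
        this.repository = repository;
    }
}
```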
Spring data access
42. How to use JDBC more effectively in the spring framework?
Using the Spring JDBC framework, the cost of resource management and error handling is reduced, so developers only need to write the statements and queries that fetch data from the database. JDBC can be used even more effectively with the help of the template class provided by the Spring framework, called JdbcTemplate.
43. JdbcTemplate
The JdbcTemplate class provides many convenience methods, such as converting database data into primitive types or objects, executing prepared and callable statements, and providing custom error handling for database operations.
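A minimal DAO sketch, assuming a DataSource is supplied from configuration; class and table names are hypothetical, and it is not runnable without the Spring JDBC module on the classpath:

```java
// Hypothetical sketch; a real DataSource must be supplied by configuration.
public class UserDao {
    private final JdbcTemplate jdbcTemplate;

    public UserDao(DataSource dataSource) {
        this.jdbcTemplate = new JdbcTemplate(dataSource);
    }

    // queryForObject maps a single-row result straight to a basic type
    public int countUsers() {
        return jdbcTemplate.queryForObject("SELECT COUNT(*) FROM users", Integer.class);
    }

    // update executes INSERT/UPDATE/DELETE and returns the affected row count
    public int rename(long id, String name) {
        return jdbcTemplate.update("UPDATE users SET name = ? WHERE id = ?", name, id);
    }
}
```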
44. Spring support for DAO
Spring's support for data access objects (DAO) aims to simplify working with data access technologies such as JDBC, Hibernate, or JDO in a consistent way. This lets us switch between persistence technologies easily, without worrying about catching exceptions specific to each technology.
45. How to access hibernate using spring?
There are two ways to access hibernate in spring:
- Inversion of control: HibernateTemplate and Callback.
- Extending HibernateDaoSupport and providing an AOP interceptor.
46. ORM supported by spring
Spring supports the following ORM:
 Hibernate  iBatis  JPA (Java Persistence API)  TopLink  JDO (Java Data Objects)  OJB
47. How are Spring and Hibernate combined through HibernateDaoSupport?
Configure Hibernate's SessionFactory through Spring's LocalSessionFactoryBean. The integration process has three steps:
- Configure the Hibernate SessionFactory.
- Extend HibernateDaoSupport to implement a DAO.
- Wire in transaction support with AOP.
48. Transaction management types supported by spring
Spring supports two types of transaction management:
- Programmatic transaction management: you manage transactions in code. This gives you great flexibility but is difficult to maintain.
- Declarative transaction management: you separate business code from transaction management, and only need annotations or XML configuration to manage transactions.
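A declarative sketch, assuming a hypothetical TransferService; it requires Spring's transaction infrastructure to actually run:

```java
// Hypothetical service; the declarative style keeps transaction handling
// out of the business code path.
@Service
public class TransferService {

    // The container opens a transaction before this method and commits it
    // afterwards; an unchecked exception triggers a rollback.
    @Transactional
    public void transfer(long from, long to, long amount) {
        // debit(from, amount); credit(to, amount);
    }
}
```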
49. What are the advantages of transaction management in spring framework?
- It provides a consistent programming model across different transaction APIs such as JTA, JDBC, Hibernate, JPA, and JDO.
- It provides a simple API for programmatic transaction management instead of a number of complex transaction APIs such as JTA.
- It supports declarative transaction management.
- It integrates well with Spring's various data access abstraction layers.
50. Which type of transaction management do you prefer?
Most users of the Spring framework choose declarative transaction management, because it has the least impact on application code and is therefore most consistent with the idea of a non-intrusive lightweight container. Declarative transaction management is preferable to programmatic transaction management, although it is slightly less flexible than the programmatic approach, which lets you control transactions through code.
Spring aspect oriented programming (AOP)
51. Explain AOP
Aspect-oriented programming, or AOP, is a programming technique that allows programs to modularize crosscutting concerns, or behavior that cuts across typical divisions of responsibility, such as logging and transaction management.
52. Aspect
The core of AOP is the aspect. It encapsulates behavior common to multiple classes into reusable modules that contain a set of APIs providing crosscutting functionality. For example, a logging module can be called the AOP aspect for logging. Depending on requirements, an application can have any number of aspects. In Spring AOP, aspects are implemented as classes annotated with @Aspect.
53. What is the difference between concerns and crosscutting concerns in Spring AOP?
A concern is the behavior of a module in an application. A concern may be defined as a function we want to implement.
Crosscutting concerns are concerns that are used throughout the application and affect the entire application, such as logging, security, and data transfer. Almost every module of the application needs these functions; therefore, they are crosscutting concerns.
54. Join point
A join point represents a location in an application where we can plug in an AOP aspect; it is the place in the application where Spring AOP actually takes action.
55. Advice
Advice is an action taken before or after method execution. It is actually a piece of code triggered by the Spring AOP framework while the program is running.
A Spring aspect can apply five types of advice:

- before: advice called before a method executes.
- after: advice called after the method executes, regardless of whether it succeeded.
- after-returning: advice executed only after the method completes successfully.
- after-throwing: advice executed when the method exits by throwing an exception.
- around: advice that runs both before and after the method executes.
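A sketch of the five advice types on one hypothetical aspect; the pointcut expression and package names are assumptions, and it requires spring-aop/AspectJ to run:

```java
// Hypothetical aspect showing the five advice types.
@Aspect
@Component
public class LoggingAspect {

    @Before("execution(* com.example.service.*.*(..))")
    public void before(JoinPoint jp) { /* runs before the matched method */ }

    @After("execution(* com.example.service.*.*(..))")
    public void after(JoinPoint jp) { /* runs whether or not the method succeeded */ }

    @AfterReturning("execution(* com.example.service.*.*(..))")
    public void afterReturning(JoinPoint jp) { /* only on normal completion */ }

    @AfterThrowing("execution(* com.example.service.*.*(..))")
    public void afterThrowing(JoinPoint jp) { /* only when an exception escapes */ }

    @Around("execution(* com.example.service.*.*(..))")
    public Object around(ProceedingJoinPoint pjp) throws Throwable {
        // code before, then proceed to the target, then code after
        return pjp.proceed();
    }
}
```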
56. Pointcut
A pointcut is a join point, or set of join points, at which advice will be executed. Pointcuts can be specified using expressions or patterns.
57. What is introduction?
The introduction allows us to add new methods and properties to existing classes.
58. What is the target object?
The target object is an object being advised by one or more aspects. At runtime it is always behind a proxy object; it is also referred to as the advised object.
59. What is a proxy?
A proxy is an object created after applying advice to the target object. From the client's point of view, the proxy object and the target object are the same.
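The idea can be sketched with the JDK's own dynamic proxies, which is also one of the mechanisms Spring AOP uses for interface-based targets; the Greeter types here are hypothetical:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

// A plain-JDK sketch of the proxy idea: the client calls what looks like the
// target, but every call is routed through an interceptor first.
interface Greeter { String greet(String name); }

class GreeterImpl implements Greeter {          // the target object
    public String greet(String name) { return "hi " + name; }
}

public class ProxySketch {
    public static Greeter proxyFor(Greeter target) {
        InvocationHandler handler = (proxy, method, args) -> {
            // "advice" wrapped around the target invocation
            Object result = method.invoke(target, args);
            return "[logged]" + result;
        };
        return (Greeter) Proxy.newProxyInstance(
                Greeter.class.getClassLoader(),
                new Class<?>[]{Greeter.class},
                handler);
    }

    public static void main(String[] args) {
        Greeter proxy = proxyFor(new GreeterImpl());
        System.out.println(proxy.greet("bob")); // the proxy is used like the target
    }
}
```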
60. How many different types of auto-proxying are there?
- BeanNameAutoProxyCreator
- DefaultAdvisorAutoProxyCreator
- Metadata autoproxying
61. What is weaving? When can weaving be applied?
Weaving is the process of linking aspects with other application types or objects to create an advised object.
Weaving can be done at compile time, load time, or run time.
62. Explain the aspect implementation based on XML schema.
In this case, aspects are implemented with regular classes together with XML-based configuration.
63. Explain the implementation of section based on annotation
In this case (the @AspectJ-based implementation), the aspect declaration style is an ordinary Java class carrying Java 5 annotations.
Spring MVC
64. What is the MVC framework of spring?
Spring is equipped with a full-featured MVC framework for building web applications. Spring can easily integrate with other MVC frameworks, such as Struts. Spring's MVC framework uses inversion of control to cleanly separate business objects from control logic. It also allows you to declaratively bind request parameters to business objects.
65. DispatcherServlet
Spring's MVC framework is designed around the DispatcherServlet, which handles all HTTP requests and responses.
66. WebApplicationContext
WebApplicationContext extends ApplicationContext and adds some features necessary for web applications. It differs from a generic ApplicationContext in that it is capable of resolving themes and knows which servlet it is associated with.
67. What is the controller of spring MVC framework?
The controller provides a behavior to access the application, which is usually implemented through the service interface. The controller parses the user input and converts it into a model presented to the user by the view. Spring implements a control layer in a very abstract way, allowing users to create multi-purpose controllers.
68. @Controller annotation
This annotation indicates that the class plays the role of a controller. Spring does not require you to extend any controller base class or reference the Servlet API.
69. @RequestMapping annotation
This annotation is used to map a URL to a class or a specific method.
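A controller sketch combining the two annotations; the paths, class, and view names are hypothetical, and it requires Spring MVC to serve requests:

```java
// Hypothetical controller; requires spring-webmvc to actually serve requests.
@Controller
@RequestMapping("/users")          // class level: base path for all handlers
public class UserController {

    @RequestMapping("/{id}")       // method level: maps /users/{id}
    public String show(@PathVariable long id, Model model) {
        model.addAttribute("userId", id);
        return "userView";         // logical view name resolved by the framework
    }
}
```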
Micro service interview questions
1. What do you know about microservices?
Microservices, also known as the microservice architecture, is an architectural style that structures an application as a collection of small autonomous services built around business domains.
Generally speaking, consider how bees build their honeycomb by aligning hexagonal wax cells. They start with small sections of various materials and keep building outward until they have a large hive. These cells form a pattern that produces a strong structure holding a specific part of the honeycomb together. Each cell is independent of the others but also connected to them, which means that damage to one cell does not damage the rest, so the bees can rebuild those cells without affecting the complete hive.
Figure 1: cellular representation of microservices – microservice interview questions
Please refer to the figure above. Here, each hexagonal shape represents a separate service component. Similar to the work of bees, each agile team builds a separate service component using the available framework and the selected technology stack. Just like in a hive, each service component forms a powerful microservice architecture to provide better scalability. In addition, agile teams can deal with each service component individually, with no or minimal impact on the entire application.
2. What are the advantages of microservice architecture?
Figure 2: advantages of microservices – microservice interview questions
- Independent development: all microservices can be easily developed according to their individual functionality.
- Independent deployment: based on their services, they can be deployed individually in any application.
- Fault isolation: even if one service of the application fails, the system can keep running.
- Mixed technology stack: different services of the same application can be built with different languages and technologies.
- Granular scaling: individual components can be scaled as needed, without scaling all components together.
3. What are the characteristics of microservices?
Figure 3: characteristics of microservices – microservice interview questions
- Decoupling: services within the system are largely decoupled, so the application as a whole can be easily built, changed, and scaled.
- Componentization: microservices are treated as independent components that can be easily replaced and upgraded.
- Business capabilities: microservices are very simple and focus on a single capability.
- Autonomy: developers and teams can work independently of each other, increasing speed.
- Continuous delivery: automation of software creation, testing, and approval allows frequent releases.
- Responsibility: microservices do not focus on applications as projects; instead, they treat the application as a product they are responsible for.
- Decentralized governance: the focus is on using the right tool for the right job. There is no standardized pattern or technology lock-in; developers are free to choose the most useful tools for their problems.
- Agility: microservices support agile development. Any new feature can be developed quickly and discarded again.
4. What are the best practices for designing microservices?
The following are best practices for designing microservices:
Figure 4: Best Practices for designing microservices – microservice interview questions
5. How does the microservice architecture work?
The microservice architecture has the following components:
Figure 5: microservice Architecture – microservice interview questions
- Clients: different users send requests from different devices.
- Identity provider: verifies the identity of a user or client and issues security tokens.
- API gateway: handles client requests.
- Static content: holds all the content of the system.
- Management: balances services across nodes and identifies failures.
- Service discovery: a guide for finding the communication routes between microservices.
- Content delivery network: a distributed network of proxy servers and their data centers.
- Remote services: enable remote access to information that resides on a network of IT devices.
6. What are the advantages and disadvantages of microservice architecture?
7. What is the difference between monolithic, SOA and microservice architecture?
Figure 6: comparison between monolithic SOA and microservices – microservice interview questions
 a monolithic architecture is similar to a large container in which all software components of an application are assembled together and tightly encapsulated.
- A service-oriented architecture is a collection of services that communicate with each other. The communication can involve simple data passing, or two or more services coordinating some activity.
- A microservice architecture is an architectural style that structures an application as a collection of small autonomous services built around business domains.
8. What challenges do you face when using the microservice architecture?
Developing smaller microservices sounds easy, but the common challenges in developing them are as follows.
- Automating the components: automation is difficult because there are many smaller components, so for each component we must follow the build, deploy, and monitor stages.
- Perceptibility: maintaining a large number of components together makes them hard to deploy, maintain, monitor, and troubleshoot. It requires good visibility into all the components.
- Configuration management: it can become difficult to maintain the configurations of the components across different environments.
- Debugging: it is hard to locate each and every service for an error. Maintaining centralized logging and dashboards to debug problems is essential.
9. What are the main differences between SOA and microservice architecture?
The main differences between SOA and microservices are as follows:
10. What are the characteristics of microservices?
You can list the characteristics of microservices as follows:
Figure 7: characteristics of microservices – microservice interview questions
11. What is Domain Driven Design?
Figure 8: DDD principle – microservice interview questions
12. Why do you need Domain Driven Design (DDD)?
Figure 9: factors we need DDD – microservice interview questions
13. What is ubiquitous language?
If you must define a ubiquitous language (UL), it is the common language used by developers and users of a specific domain, through which the domain can be easily interpreted.
The ubiquitous language must be very clear so that it puts all team members on the same page and translates in a machine understandable way.
14. What is cohesion?
The degree to which the elements within the module belong is considered cohesion.
15. What is coupling?
A measure of the strength of dependencies between components is considered coupling. A good design is always considered to have high cohesion and low coupling.
16. What is rest / restful and what is its purpose?
Representational State Transfer (REST) / RESTful web services is an architectural style that helps computer systems communicate over the Internet. It makes microservices easier to understand and implement.
Microservices can be implemented with or without restful APIs, but it is always easier to build loosely coupled microservices with restful APIs.
17. What do you know about spring boot?
In fact, with the addition of new features, Spring has become more and more complex. To start a new Spring project, you must add the build path or the Maven dependencies, configure the application server, and add the Spring configuration, so everything must be started from scratch.
Spring Boot is the solution to this problem. Using Spring Boot avoids all of that boilerplate code and configuration. Think of it as baking a cake: Spring is like the ingredients needed to make the cake, while Spring Boot is the complete, ready-made cake in your hand.
Figure 10: factors of spring boot – microservice interview questions
18. What is the Spring Boot Actuator?
The Spring Boot Actuator provides RESTful web services for accessing the current state of an application running in production. With the help of the Actuator, you can check various metrics and monitor your application.
19. What is spring cloud?
According to the official website of Spring Cloud, Spring Cloud provides developers with tools to quickly build some of the common patterns in distributed systems (such as configuration management, service discovery, circuit breakers, intelligent routing, leader election, distributed sessions, and cluster state).
20. What problems does spring cloud solve?
When developing distributed microservices with Spring Boot, we face the following problems, which Spring Cloud helps solve:
- The complexity associated with distributed systems, including network issues, latency overhead, bandwidth issues, and security issues.
- The ability to handle service discovery: service discovery allows processes and services in a cluster to find and talk to each other.
- Solving redundancy problems, which frequently occur in distributed systems.
- Load balancing: improving the distribution of workloads across multiple computing resources, such as computer clusters, network links, and central processing units.
- Reducing performance problems caused by various operational overheads.
21. What is the use of the @WebMvcTest annotation in a Spring MVC application?
The @WebMvcTest annotation is used for unit tests of a Spring MVC application when the test only targets Spring MVC components. For example, if we only want to start a single controller (say, a ToTestController), then when the unit test runs, none of the other controllers and mappings are started.
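A slice-test sketch, assuming a hypothetical ToTestController mapped at /to-test; the get and status static imports come from MockMvcRequestBuilders and MockMvcResultMatchers, and it requires the Spring Boot test modules to run:

```java
// Hypothetical slice test; ToTestController is an assumed controller name.
@WebMvcTest(ToTestController.class)   // only this controller is started
class ToTestControllerTest {

    @Autowired
    private MockMvc mockMvc;          // auto-configured for the MVC slice

    @Test
    void returnsOk() throws Exception {
        mockMvc.perform(get("/to-test")).andExpect(status().isOk());
    }
}
```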
22. Can you give some key points about REST and microservices?
Although you can implement microservices in many ways, rest over HTTP is one way to implement microservices. Rest can also be used for other applications, such as web applications, API design and MVC applications, to provide business data.
Microservice is an architecture in which all components of the system are put into separate components, which can be built, deployed and extended separately. Some of the principles and best practices of microservices help build resilient applications.
In short, you can say that rest is the medium for building microservices.
23. What are the different types of microservice testing?
When using microservices, testing becomes very complex because multiple microservices work together. Therefore, the test is divided into different levels.
- At the bottom level, we have technology-facing tests such as unit tests and performance tests. These are fully automated.
- At the middle level, we have exploratory tests such as stress tests and usability tests.
- At the top level, we have a small number of acceptance tests. These help stakeholders understand and verify the software features.
24. What do you know about distributed transaction?
A distributed transaction is any situation where a single event causes a mutation in two or more separate data sources that cannot be submitted atomically. In the world of microservices, it becomes more complex because each service is a unit of work, and most of the time, multiple services must work together to make the business successful.
25. What is idempotence and where is it used?
Idempotence is the property of being able to perform an operation twice in such a way that the end result remains the same as if it had been done only once.
Usage: idempotence is used in a remote service or data source so that when it receives an instruction more than once, it processes that instruction only once.
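As a small illustration (a hedged shell sketch, not from the source): mkdir -p is idempotent, while plain mkdir is not — running mkdir -p a second time succeeds and changes nothing.

```shell
# Idempotency: repeating the operation leaves the same end state.
dir="$(mktemp -d)/demo"

mkdir -p "$dir"    # first call creates the directory
mkdir -p "$dir"    # second call succeeds and changes nothing: idempotent

# Plain mkdir fails the second time, so it is not idempotent:
mkdir "$dir" 2>/dev/null || echo "second plain mkdir failed"
```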
26. What is bounded context?
Bounded context is a central pattern of domain-driven design. DDD's strategic design section deals with large models and teams; it copes with large models by dividing them into different bounded contexts and being explicit about their interrelationships.
27. What is two factor authentication?
Two factor authentication enables second level authentication for the account login process.
Figure 11: presentation of two factor authentication – microservice interview questions
By contrast, when users only need to enter a username and a password, that is considered single-factor authentication.
28. What are the credential types of two factor authentication?
The three credential types are: something you know (e.g. a password), something you have (e.g. a phone or hardware token), and something you are (e.g. a fingerprint).
Figure 12: Credential types for two-factor authentication – microservice interview questions
29. What is a client certificate?
A digital certificate used by a client system to make authenticated requests to a remote server is called a client certificate. Client certificates play a very important role in many mutual-authentication designs and provide strong guarantees about the identity of the requester.
30. What is the purpose of pact in microservice architecture?
Pact is an open-source tool that allows the interactions between service providers and consumers to be tested against a contract in isolation, thereby increasing the reliability of microservice integration.
Usage in microservices:
- Implementing consumer-driven contracts for microservices.
- Testing consumer-driven contracts between consumers and providers of microservices.
31. What is OAuth?
OAuth stands for Open Authorization. It allows a resource owner's resources on an HTTP service to be accessed by client applications, for example third-party providers such as Facebook or GitHub. This lets you share resources stored on one site with another site without handing over your credentials.
32. What is Conway’s law?
“Any organization that designs a system (broadly defined) will produce a design whose structure is a copy of the organization’s communication structure.” – Mel Conway
Figure 13: representation of Conway’s law – microservice interview questions
The law essentially conveys that, for software modules to work together, the whole team must communicate well. Therefore the structure of a system reflects the social boundaries of the organization that produced it.
33. What do you know about contract testing?
According to Martin Fowler, contract testing is performed at the boundary of an external service to verify that it meets the contract expected by a consuming service.
Moreover, contract testing does not deeply test the behavior of the service. It tests that the inputs and outputs of service calls contain the required attributes, and that response latency and throughput stay within allowed limits.
34. What is end-to-end microservice testing?
End-to-end testing verifies that every process in the workflow functions correctly. It ensures that the system works as an integrated whole and meets all requirements.
Broadly speaking, you can say that end-to-end testing tests everything together, covering a complete user flow from start to finish.
Figure 14: Test hierarchy – microservice interview questions
35. What is the purpose of container in microservices?
Containers are a good way to manage microservice-based applications so that they can be developed and deployed individually. You can encapsulate a microservice and its dependencies in a container image, and then use it to roll out on-demand instances of the microservice without any additional work.
Figure 15: presentation of containers and how they are used in microservices – microservice interview questions
36. What is dry in microservice architecture?
DRY stands for Don't Repeat Yourself. It promotes the concept of reusing code. This leads to the development and sharing of libraries, which in turn can lead to tight coupling.
37. What is consumer driven contract (CDC)?
This is basically a pattern for developing microservices so that external systems can use them. When we work with microservices, there is a particular provider that builds a microservice and one or more consumers that use it.
Generally, the provider specifies the interfaces in an XML document. But in consumer-driven contracts, each consumer of the service conveys the interface it expects from the provider.
38. What is the role of web and restful APIs in microservices?
Microservice architecture is based on the idea that all services should be able to interact with each other to build business functionality, so each microservice must have an interface. This makes the web API a very important enabler of microservices. Based on the open networking principles of the web, RESTful APIs provide the most sensible model for building interfaces between the various components of a microservice architecture.
39. What do you know about semantic monitoring in microservice architecture?
Semantic monitoring, also known as synthetic monitoring, combines automated testing with monitoring of the application in order to detect factors causing business failures.
40. How do we conduct cross functional testing?
Cross-functional testing is the verification of non-functional requirements, that is, requirements that cannot be implemented like an ordinary feature.
41. How can we eliminate non determinism in testing?
Non-deterministic tests (NDT) are basically unreliable tests: sometimes they pass and sometimes they fail, and when they fail they are simply re-run.
Some ways to remove non-determinism from tests are:
1. Quarantine
2. Asynchrony
3. Remote services
4. Isolation
5. Time
6. Resource leaks
42. What’s the difference between mock and stub?
Stub:
- A dummy object that helps run the test.
- It provides fixed behavior under certain conditions, which can be hard-coded.
- Any other behavior of the stub is never tested.
For example, for an empty stack, you can create a stub that just returns true for the empty() method; it does not care whether there actually are elements in the stack.
Mock:
- A dummy object in which certain properties are set initially.
- The behavior of the object depends on the properties that were set.
- The object's behavior can itself be tested.
For example, for a Customer object you can mock it by setting a name and an age. You can set the age to 12 and then test the isAdult() method, which returns true only for an age greater than 18. In this way your mock Customer object works for the specified condition.
43. What do you know about Mike Cohn’s test pyramid?
Mike Cohn provided a model called the Test Pyramid, which describes the kinds of automated tests required for software development.
Figure 16: Mike Cohn’s test pyramid – microservice interview questions
According to the pyramid, the number of tests should be highest at the first (unit-test) layer. At the service layer the count should be lower than at the unit-test level but higher than at the end-to-end level.
44. What is the purpose of docker?
Docker provides a container environment that can be used to host any application. Here, the software application is tightly packaged with the dependencies that support it.
Therefore, this packaged product is called a container. Because it is completed by docker, it is called a docker container!
45. What is Canary release?
Canary releasing is a technique for reducing the risk of introducing a new software version into production. It is done by slowly rolling the change out to a small subset of users before releasing it to the entire infrastructure and making it available to everybody.
46. What is continuous integration (CI)?
Continuous integration (CI) is the process of automatically building and testing code each time team members submit version control changes. This encourages developers to share code and unit tests by merging changes into a shared version control repository after each small task is completed.
47. What is continuous monitoring?
Continuous monitoring provides in-depth monitoring coverage, from front-end performance metrics in the browser, through application performance, down to host and virtualized-infrastructure metrics.
48. What is the role of the architect in the microservice architecture?
Architects in the microservice architecture play the following roles:
- Deciding the layout of the overall software system.
- Helping decide the partitioning of components, making sure components stick together cohesively without being tightly coupled.
- Writing code with the developers to understand their day-to-day challenges.
- Making recommendations on tools and technologies to the teams developing microservices.
- Providing technical governance so that the development teams follow microservice principles.
49. Can we create a state machine with microservices?
We know that each microservice with its own database is an independently deployable program unit, which in turn allows us to create a state machine. Therefore, we can specify different states and events for specific microservices.
For example, we can define the order microservice. Orders can have different statuses. The transition of the order state can be an independent event in the order microservice.
50. What is reactive extension in microservices?
Reactive Extensions, also known as Rx, is a design approach in which we collect results by calling multiple services and then compile a combined response. The calls can be synchronous or asynchronous, blocking or non-blocking. Rx is a very popular tool in distributed systems and works contrary to traditional flows.
I hope these microservice interview questions can help you interview the microservice architect.
Translation source: https://www.edureka.co/blog/i… w-questions/
Linux interview questions
1. What symbol represents an absolute path? How are the current directory and the parent directory represented? What is the home directory? What command switches directories?

answer:

Absolute path: begins with /, for example /etc/init.d
Current directory: . ; parent directory: ..
Home directory: ~/
Switch directory: cd
2. How do I view the current process? How to exit? How do I view the current path?
answer:
View current processes: ps
Exit: exit
View current path: pwd
3. How do you clear the screen? How do you exit the current command? How do you suspend execution? How do you check the current user ID? What command shows help for a given command?
answer:
Clear screen: clear
Exit the current command: Ctrl+C terminates it completely
Suspend execution: Ctrl+Z suspends the current process; fg restores it from the background
View the current user ID: id — shows the uid, gid, groups, and user name of the currently logged-in account.
View help for a command: for example, man adduser is complete and includes examples; adduser --help lists the common parameters; info adduser.
4. What does the ls command do? What parameters can it take, and what is the difference?
answer:
ls lists the directories and files in the specified directory. Parameters: -a lists all files (including hidden ones); -l shows details such as size, permissions (readable/writable/executable), and so on.
5. Commands for establishing soft links (shortcuts) and hard links.
answer:
Soft link: ln -s source slink
Hard link: ln source link
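A quick sketch of both link types (file names are made up; stat -c and readlink assume GNU/Linux):

```shell
# Create a file, then a hard link and a soft link to it.
cd "$(mktemp -d)"
echo "hello" > source.txt

ln source.txt hard.txt       # hard link: shares the inode of source.txt
ln -s source.txt soft.txt    # soft link: a new file that points to the name

stat -c '%i' source.txt hard.txt   # same inode number printed twice
readlink soft.txt                  # prints: source.txt
```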
6. What command creates a directory? What command creates a file? What command copies a file?
answer:
Create directory: mkdir
Create files: touch is typical; vi can also create a file, and in fact any redirection of output to a nonexistent file creates it.
Copy files: cp

What command modifies file permissions? What is the format?

answer:

File permission modification: chmod
The format is as follows:
chmod u+x file — add execute permission for the owner of file
chmod 751 file — give the owner read, write, execute (7); the group read and execute (5); others execute only (1)
chmod u=rwx,g=rx,o=x file — another form of the example above
chmod =r file — give read-only permission to all users (same as chmod 444 file and chmod a-wx,a+r file)
chmod -R u+r directory — recursively give the owner read permission on all files and subdirectories under directory
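A minimal demonstration of the numeric and symbolic forms (stat -c assumes GNU coreutils):

```shell
cd "$(mktemp -d)"
touch file

chmod 751 file            # owner rwx (7), group r-x (5), others --x (1)
stat -c '%a' file         # prints: 751

chmod u-x file            # symbolic form: remove execute from the owner
stat -c '%a' file         # prints: 651
```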
7. What commands are available to view file contents?
answer:
vi filename — edit/view; contents can be modified
cat filename — display the whole file
more filename — display the file page by page
less filename — like more, but can also page backwards
tail filename — view only the end; a line count can be specified
head filename — view only the beginning; a line count can be specified
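A sketch showing head and tail against a generated file (the file name is made up):

```shell
cd "$(mktemp -d)"
seq 1 100 > nums.txt     # 100 lines containing the numbers 1..100

head -n 3 nums.txt       # first three lines: 1 2 3
tail -n 2 nums.txt       # last two lines: 99 100
```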
8. What command writes to a file? How do you output a string containing spaces, such as "hello world", to the screen?
answer:
Write file command: vi
Output a string with spaces to the screen: echo "hello world"
9. Which folder contains the terminal device file? Which folder contains the black-hole file, and what is it called?
answer:
Terminal: /dev/tty
Black-hole file: /dev/null
10. Which command moves files? Which command renames them?
answer:
mv (the mv command both moves and renames files)
11. Which command copies files? What if you need to copy a directory along with its contents? What if you want a confirmation prompt?
answer:
cp
cp -r (copies directories recursively)
cp -i (prompts before overwriting)
12. Which command deletes files? How do you delete a directory together with the files under it? Which command deletes an empty directory?
answer:
rm
rm -r
rmdir
13. What kinds of wildcards can be used in Linux commands? What do they mean?
answer:
“?” replaces a single character.
“*” can replace any number of characters.
Square brackets "[charset]" match any single character in charset, for example [A-Z] or [abc].
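A small sketch of the three wildcard forms (the file names are made up for the example):

```shell
cd "$(mktemp -d)"
touch a1 a2 b1 abc

echo a?       # one-character match: a1 a2
echo *1       # any-length match:   a1 b1
echo [ab]1    # set match:          a1 b1
```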
14. What command counts the contents of a file (number of lines, words, and bytes)?
answer:
The wc command: -c counts bytes, -l counts lines, -w counts words.
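For example (the file contents are made up):

```shell
printf 'hello world\nsecond line\n' > /tmp/wc-demo.txt

wc -l /tmp/wc-demo.txt   # 2  lines
wc -w /tmp/wc-demo.txt   # 4  words
wc -c /tmp/wc-demo.txt   # 24 bytes
```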
15. What’s the use of the grep command? How to ignore case? How to find a file without
The row of the string?
answer:
grep is a powerful text-search tool that searches text using regular expressions and prints the matching lines. Usage: grep [options] STRING filename. Use grep -i to ignore case, and grep -v to print the lines that do not contain the string.
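The two behaviors the question asks about map to grep's -i and -v options; a sketch against made-up sample lines:

```shell
printf 'root line\nRoot line\nother line\n' > /tmp/grep-demo.txt

grep -i root /tmp/grep-demo.txt   # ignore case: matches "root line" and "Root line"
grep -v root /tmp/grep-demo.txt   # invert: lines without lowercase "root"
```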
16. What states can a process be in under Linux? What symbols does ps use to display them?
answer:
1. Uninterruptible sleep: the process is sleeping and cannot be interrupted at that moment, meaning it does not respond to asynchronous signals.
2. Stopped/traced: sending the process a SIGSTOP signal puts it into the TASK_STOPPED state in response; a process that is being traced is in the special TASK_TRACED state. "Being traced" means the process is paused, waiting for the process tracing it to operate on it.
3. Ready: on the run queue, waiting to be scheduled.
4. Running: on the run queue, currently executing.
5. Interruptible sleep: the process is suspended waiting for an event (e.g. waiting for a socket connection, or waiting for a semaphore).
6. Zombie: the child process has exited, but the parent has not yet released its task_struct via the wait family of system calls.
7. Exit status.

In ps output the state symbols are:
D — uninterruptible sleep (usually IO)
R — running, or runnable on the run queue
S — interruptible sleep
T — stopped or being traced
Z — zombie process
W — swapped out / paging (not valid since kernel 2.6)
X — dead process
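You can see the state symbol for a live process in ps's STAT column; a sketch inspecting the current shell process:

```shell
# -o selects output columns; stat shows the state symbol (R, S, D, T, Z, ...).
ps -o pid,stat,comm -p $$
```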
17. How to make a command run in the background?
answer:
Usually you append & at the end of the command, which makes the program run in the background.
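A sketch: launch a job in the background, capture its PID from $!, and wait for it:

```shell
sleep 1 &            # '&' puts the command in the background
bgpid=$!             # $! is the PID of the most recent background job

echo "background job pid: $bgpid"
wait "$bgpid"        # block until the background job exits
echo "done"
```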
18. How do you display all processes with ps? How do you use ps to view information about a specified process?
answer:
ps -ef (System V style output)
ps aux (BSD style output)
ps -ef | grep pid
19. Which command is dedicated to viewing background tasks?
answer:
jobs -l
20. What command brings a background task to the foreground? What command resumes a stopped task in the background?
answer:
Bring a background task to the foreground: fg
Resume a stopped background task in the background: bg
21. What command is used to terminate the process? With what parameters?
answer:
kill -s <signal name or number> <pid>; kill -l [<signal number>] lists or translates signal names
kill -9 pid
22. How to view all signals supported by the system?
answer:
kill -l
23. What commands are used to search for files? What is the format?
answer:
find <directory> <conditions> <actions>
whereis is used with parameters plus a file name
locate is used with only the file name
find searches the disk directly, which is slow.
find / -name "string*"
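A sketch of find -name over a throwaway directory tree (the names are made up):

```shell
d=$(mktemp -d)
mkdir "$d/sub"
touch "$d/app.log" "$d/err.log" "$d/sub/deep.log" "$d/notes.txt"

find "$d" -name "*.log"               # all three .log files, recursively
find "$d" -maxdepth 1 -name "*.log"   # only the two directly under $d
```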
24. What command shows who is currently using the host? What command shows your own terminal information?
answer:
Find your own terminal information: who am I
View who is currently using the host: who
25. What commands are used to view the list of used commands?
answer:
history
26. What commands are used to view disk space? What about free space?
answer:
df -hl

Filesystem            Size  Used Avail Use% Mounted on
/dev/hda2              45G   19G   24G  44% /
/dev/hda1             494M   19M  450M   4% /boot
27. What command is used to check whether the network is connected?
answer:
netstat
28. What commands are used to view IP address and interface information?
answer:
ifconfig
29. What commands are used to view various environment variables?
answer:
View all environment variables: env. View a single one, e.g. HOME: echo $HOME
30. What command is used to specify the command prompt?
answer:
\u — display the current user name
\h — display the current host name
\W — display only the last directory of the current path
\w — display the current absolute path (the current user's home directory is replaced by ~)
$PWD — display the current full path
\$ — display '$' for normal users or '#' for root
\# — the number of commands issued
\d — the date, in "weekday month day" format, for example "Mon Aug 1"
\t — the time in 24-hour format: HH:MM:SS
\T — the time in 12-hour format
\A — the time in 24-hour format: HH:MM
\v — bash version information
Example: export PS1='[\u@\h \W]\$ '
31. Where does the system search for a command's executable file? How can this be set and added to?
answer:
whereis [-bmsu] [-BMS <directory> ...] file ...

Note: whereis looks in a set of standard directories for files matching the given name; the files must be binaries, source code, or manual pages.

-b  look only for binaries
-B <directory>  look for binaries only in the given directory
-f  do not display the path name before the file name
-m  look only for documentation (manual) files
-M <directory>  look for manual files only in the given directory
-s  look only for source-code files
-S <directory>  look for source files only in the given directory
-u  look for files that do not have the specified type

The which command searches the directories listed in the PATH variable for the location of a system command and returns the first match. Options as listed here:
-n  specify the file-name length, which must be at least the longest file name among the files
-p  same as -n, but the file's path is included
-w  specify the field width of the output
-V  display version information
32. What commands are used to find and execute commands?
answer:
which can only locate executable files.
whereis can locate binaries, documentation files, source files, and so on.
33. How to alias a command?
answer:
alias la='ls -a'
34. What do du and df do, and what is the difference between them?
answer:
du displays the size of a directory or file.
df displays information about the file system containing each given file; by default it shows all mounted file systems. (A file system reserves some disk blocks to record its own data, such as i-nodes, disk allocation maps, indirect blocks, and superblocks. This data is invisible to most user-level programs and is usually called metadata.) du is a user-level program that does not consider metadata, while df reads the file system's disk allocation map and does include metadata. So df reports the file system's true usage, whereas du only sees part of it.
35. Awk detailed explanation.
answer:
awk '{pattern + action}' {filenames}

# Print fields 1 and 7 of each /etc/passwd entry, separated by a tab, using ':' as the separator:
cat /etc/passwd | awk -F ':' '{print $1"\t"$7}'
root    /bin/bash
daemon  /bin/sh

# Search /etc/passwd for all lines containing the keyword root:
awk -F: '/root/' /etc/passwd
root:x:0:0:root:/root:/bin/bash

36. What should you do when you need to bind a macro or a key to a command?
answer:
You can use the bind command, which can easily bind macros or keys in the shell.
When Binding keys, we need to get the character sequence corresponding to the bound keys first.
For example, the method to obtain the character sequence of F12 is as follows: first press Ctrl + V, and then press F12. We can get the character sequence of F12 ^[[24~.
Then bind with bind.
# bind '"\e[24~":"date"'
Note: the same key may produce different character sequences under different terminals or terminal simulators.
Note: you can also use the showkey -a command to view the character sequence corresponding to a key.
37. If a Linux novice wants to see the list of all commands supported by the current system, what should he do?
answer:
Using the command compgen -c, you can print out a list of all supported commands.

$ compgen -c
l. ll ls which if then else elif fi case esac for select while until do done ...
38. If your assistant wants to print out the current directory stack, what would you advise him to do?
answer:
The current directory stack can be printed using the Linux command dirs.
# dirs
/usr/share/X11
Note: the directory stack is manipulated with pushd and popd.
39. Your system currently has many running tasks. Without restarting the machine, how can you remove all running processes?
answer:
Use the Linux command 'disown -r' to remove all running processes (jobs) from the shell's job table.
40. What is the function of hash command in Bash shell?
answer:
The Linux command ‘hash’ manages a built-in hash table, which records the complete path of the executed commands. With this command, you can print out the commands you have used and the number of executions.
# hash
hits    command
   2    /bin/ls
   2    /bin/su
41. Which bash built-in command can perform mathematical operations?
answer:
The built-in command let of bash shell can perform mathematical operations on integer numbers.

#!/bin/bash
...
let c=a+b
...

42. How to view the contents of a large file page by page?
answer:
Piping the command "cat file_name.txt" into "more" meets this need:
# cat file_name.txt | more
43. Which user does the data dictionary belong to?
answer:
The data dictionary belongs to the 'sys' user. The users 'sys' and 'system' are created automatically by default.
44. How do you view the summary and usage of a Linux command? Suppose you see a command in /bin that you have never seen before — how can you find out its function and usage?
answer:
Use the command whatis to display the brief usage of the command first. For example, you can use whatis zcat to view the introduction and brief usage of ‘zcat’.
# whatis zcat
zcat [gzip] (1) – compress or expand files
45. Which command can you use to view your file system's disk-space quota?
answer:
Using the command repquota can display the quota information of a file system
Note: only the root user can view the quotas of other users.
Spring boot interview questions
1. What is spring boot?
Over the years, with the addition of new features, Spring has become more and more complex. A visit to the https://spring.io/projects page shows the different Spring projects whose features can be used in our applications. To start a new Spring project, we must add the build path or Maven dependencies, configure the application server, and add the Spring configuration. So starting a new Spring project requires a lot of effort, because everything currently has to be done from scratch.
Spring Boot is the solution to this problem. It is built on top of the existing Spring framework. With Spring Boot we avoid all the boilerplate code and configuration we used to need, so Spring Boot helps us use existing Spring features more robustly with minimal effort.
2. What are the advantages of spring boot?
Spring boot has the following advantages:
1. Reduce development and testing time and effort.
2. Using javaconfig helps avoid using XML.
3. Avoid a large number of Maven imports and various version conflicts.
4. Provide advice and development methods.
5. Start development quickly by providing default values.
6. No separate web server is required. This means that you no longer need to start tomcat, GlassFish or anything else.
7. Less configuration is required, because there is no web.xml file. Just add a class annotated with @Configuration and then add methods annotated with @Bean; Spring will automatically load the objects and manage them as before. You can even add @Autowired to bean methods to have Spring automatically load the required dependencies.
8. Environment-based configuration. Using these properties, you can pass the environment you are using to the application with -Dspring.profiles.active={environment}. After loading the main application properties file, Spring then loads the properties file application-{environment}.properties.
3. What is javaconfig?
Spring JavaConfig is a product of the Spring community. It provides a pure-Java way of configuring the Spring IoC container, which helps avoid XML configuration. The advantages of using JavaConfig are:
1. Object-oriented configuration. Since configurations are defined as classes in JavaConfig, users can take full advantage of Java's object-oriented features: one configuration class can subclass another, override its @Bean methods, and so on.
2. Reduced or eliminated XML configuration. The benefits of externalized configuration based on the dependency-injection principle are proven, but many developers do not want to switch back and forth between XML and Java. JavaConfig provides developers with a pure-Java way to configure a Spring container using concepts similar to XML configuration. Technically it is feasible to configure the container with JavaConfig classes alone, although in practice many people consider it ideal to mix JavaConfig with XML.
3. Type-safe and refactoring-friendly. JavaConfig provides a type-safe way to configure a Spring container. Thanks to Java 5's support for generics, beans can now be retrieved by type rather than by name, without any casting or string-based lookups.
4. How do you reload changes on Spring Boot without restarting the server?
This can be achieved using DevTools. With this dependency added, any changes you save cause the embedded Tomcat to restart. Spring Boot's devtools module helps improve developer productivity: a major challenge for Java developers is automatically deploying file changes to the server and restarting it, and with devtools developers can reload changes without restarting the server manually, eliminating the need to deploy changes by hand every time. Spring Boot did not have this feature in its first release, and it was the one developers asked for most. The devtools module is disabled in production environments. It also provides an H2 database console for easier application testing.
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-devtools</artifactId>
    <optional>true</optional>
</dependency>
5. What is the monitor in spring boot?
Spring Boot Actuator is one of the important features of the Spring Boot framework. The Actuator helps you access the current state of an application running in production, where several metrics must be checked and monitored. External applications may even use these services to trigger alert messages to the relevant people. The Actuator module exposes a set of REST endpoints that can be accessed directly as HTTP URLs to check status.
6. How do i disable actuator endpoint security in spring boot?
By default, all sensitive HTTP endpoints are secured, and only users with the ACTUATOR role can access them. Security is enforced using the standard HttpServletRequest.isUserInRole method. We can use the management.security.enabled=false property to disable security. Disabling security is recommended only when the Actuator endpoints are accessed behind a firewall.
7. How do I run a spring boot application on a custom port?
In order to run the spring boot application on a custom port, you can specify the port in application.properties.
server.port = 8090
8. What is yaml?
YAML is a human-readable data-serialization language. It is commonly used for configuration files.
Compared with a properties file, a YAML file is more structured and less confusing when we want to add complex properties to the configuration, since YAML contains hierarchical configuration data.
9. How to implement the security of spring boot applications?
To secure a Spring Boot application, we use the spring-boot-starter-security dependency and must add a security configuration. This requires very little code: the configuration class has to extend WebSecurityConfigurerAdapter and override its methods.
10. How do I integrate spring boot and ActiveMQ?
To integrate Spring Boot and ActiveMQ, we use the spring-boot-starter-activemq dependency. It requires very little configuration and no boilerplate code.
11. How to use spring boot to implement paging and sorting?
Paging with Spring Boot is very simple. Using Spring Data JPA, you implement paging by passing a Pageable object to the repository method.
12. What is swagger? Did you implement it with spring boot?
Swagger is widely used in the visualization API and uses swagger UI to provide online sandbox for front-end developers. Swagger is a tool, specification and complete framework implementation for generating visual representation of restful web services. It enables documents to be updated at the same speed as the server. When properly defined by swagger, consumers can use a minimal amount of implementation logic to understand and interact with remote services. Therefore, swagger eliminates guesswork when calling services.
13. What is spring profiles?
Spring profiles allows users to register beans according to configuration files (DEV, test, prod, etc.). Therefore, when the application is running in development, only some beans can be loaded, while in production, some other beans can be loaded. Suppose our requirement is that swagger documents are only applicable to QA environment, and all other documents are disabled. This can be done using a configuration file. Spring boot makes it easy to use configuration files.
14. What is spring batch?
Spring Batch provides reusable functions that are essential when processing large volumes of records, including logging/tracing, transaction management, job processing statistics, job restart, skip, and resource management. It also provides more advanced technical services and features that, through optimization and partitioning techniques, enable very high-volume, high-performance batch jobs. Both simple and complex high-volume batch jobs can use the framework to process significant amounts of information in a highly scalable way.
15. What is a FreeMarker template?
FreeMarker is a Java-based template engine that initially focused on dynamic web page generation with the MVC software architecture. The main advantage of using FreeMarker is the complete separation of the presentation layer and the business layer: programmers can work on the application code while designers work on the HTML page design, and FreeMarker then combines the two to produce the final output page.
16. How to use spring boot to implement exception handling?
Spring provides a very useful way to handle exceptions using @ControllerAdvice. We implement a @ControllerAdvice class to handle all exceptions thrown by the controller classes.
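A minimal sketch of such a handler (assuming spring-boot-starter-web is on the classpath; the exception type and response body are illustrative, not from the original text):

```java
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.ControllerAdvice;
import org.springframework.web.bind.annotation.ExceptionHandler;

@ControllerAdvice
public class GlobalExceptionHandler {

    // Invoked whenever any controller throws IllegalArgumentException
    @ExceptionHandler(IllegalArgumentException.class)
    public ResponseEntity<String> handleBadRequest(IllegalArgumentException ex) {
        return ResponseEntity.status(HttpStatus.BAD_REQUEST).body(ex.getMessage());
    }
}
```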
17. What starter Maven dependencies do you use?
Some of the dependencies used:
spring-boot-starter-activemq
spring-boot-starter-security
This helps add fewer dependencies and reduces version conflicts.
18. What is a CSRF attack?
CSRF stands for Cross-Site Request Forgery. It is an attack that forces an end user to execute unwanted actions on a web application in which they are currently authenticated. CSRF attacks specifically target state-changing requests, not data theft, because the attacker cannot see the response to the forged request.
19. What is WebSockets?
Websocket is a computer communication protocol that provides a full duplex communication channel through a single TCP connection.
1. WebSocket is bidirectional: either the client or the server can initiate sending a message.
2. WebSocket is full duplex: client and server communication are independent of each other.
3. Single TCP connection: the initial handshake uses HTTP and then upgrades the connection to a socket-based connection. This single connection is then used for all future communication.
4. Lightweight: WebSocket message data exchange is much lighter weight than HTTP.
20. What is AOP?
In software development, functionality that spans multiple points of an application is called a cross-cutting concern. These cross-cutting concerns are separate from the application's main business logic. Separating such concerns from the business logic is what aspect-oriented programming (AOP) is for.
21. What is Apache Kafka?
Apache Kafka is a distributed publish subscribe messaging system. It is a scalable, fault-tolerant publish subscribe messaging system that enables us to build distributed applications. This is a top-level Apache project. Kafka is suitable for offline and online message consumption.
22. How do we monitor all spring boot microservices?
Spring Boot provides actuator endpoints to monitor the metrics of individual microservices. These endpoints are useful for obtaining information about applications (such as whether they are up) and whether their components (such as databases) are functioning properly. However, a major drawback of using actuators is that we must hit each application's endpoints individually to learn its status or health. Imagine a landscape of 50 microservice applications: the administrator would have to hit the actuator endpoints of all 50.
To help us deal with this situation, we use the open-source Spring Boot Admin project. It is built on top of the Spring Boot Actuator and provides a web UI that lets us visualize the metrics of multiple applications.
Spring cloud interview questions
1. What is spring cloud?
Spring Cloud Stream App Starters are Spring Boot based, Spring Integration applications that provide integration with external systems. Spring Cloud Task is a short-lived microservice framework for quickly building applications that perform finite amounts of data processing.
2. What are the advantages of using spring cloud?
When using spring boot to develop distributed microservices, we face the following problems
1. Complexity associated with distributed systems – this overhead includes network problems, delay overhead, bandwidth problems, and security problems.
2. Service discovery – service discovery tools manage how processes and services in a cluster find and talk to each other. It involves a service directory in which services are registered, and then services in the directory can be found and connected.
3. Redundancy – the problem of redundancy in distributed systems.
4. Load balancing — load balancing improves the distribution of workloads across multiple computing resources, such as computers, computer clusters, network links, central processing units, or disk drives.
5. Performance issues – performance issues due to various operational overhead.
6. Deployment complexity: DevOps skills are required.
3. What does service registration and discovery mean? How is spring cloud implemented?
When we start a project, we usually put all the configuration in properties files. As more and more services are developed and deployed, adding and modifying these properties becomes more complex. Some services may go down, and some locations may change; changing properties manually is error-prone. Eureka service registration and discovery helps here: because all services register with the Eureka server and lookups go through it, changes in service locations need no manual handling.
4. What is the meaning of load balancing?
In computing, load balancing can improve the workload distribution across computers, computer clusters, network links, central processing units or disk drives. Load balancing aims to optimize resource usage, maximize throughput, minimize response time, and avoid overload of any single resource. Using multiple components for load balancing instead of a single component may improve reliability and availability through redundancy. Load balancing usually involves dedicated software or hardware, such as multi-layer switches or domain name system server processes.
5. What is hystrix? How does it achieve fault tolerance?
Hystrix is a delay and fault-tolerant library designed to isolate the access points of remote systems, services and third-party libraries, stop cascading failures when failures are inevitable, and realize resilience in complex distributed systems.
Generally, many microservices are involved in the system developed using microservice architecture. These microservices collaborate with each other.
Think about the following micro services
Suppose that if the microservice 9 in the figure above fails, we will propagate an exception using the traditional method. But this will still cause the whole system to crash.
As the number of microservices increases, this problem becomes more complex; the number of microservices can run into the thousands. This is where Hystrix comes in. Here we use Hystrix's fallback method feature. We have two services: employee-consumer, which uses the service exposed by employee-producer.
The simplified diagram is shown below
Now suppose that for some reason, the service exposed by employee producer will throw an exception. In this case, we define a fallback method using hystrix. This fallback method should have the same return type as the exposed service. If an exception occurs in the exposed service, the fallback method returns some values.
6. What is a hystrix circuit breaker? Do we need it?
Suppose that for some reason the service exposed by employee-producer throws an exception. In this case we define a fallback method using Hystrix. If an exception occurs in the exposed service, the fallback method returns some default value.
If the exception in the firstPage() method keeps occurring, the Hystrix circuit breaks, and employee-consumer skips calling firstPage() altogether and calls the fallback method directly. The purpose of the circuit breaker is to give firstPage(), or the other methods it calls that are causing the exception, time to recover. Under the resulting lower load, the problem causing the exceptions has a better chance of recovering.
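The behavior described above can be sketched with a toy circuit breaker (an illustration of the idea only, not Hystrix's actual implementation; the threshold and string return types are made up for the example):

```java
import java.util.function.Supplier;

public class ToyCircuitBreaker {
    private final int threshold;          // consecutive failures before the circuit opens
    private int consecutiveFailures = 0;

    public ToyCircuitBreaker(int threshold) {
        this.threshold = threshold;
    }

    public String call(Supplier<String> service, Supplier<String> fallback) {
        if (consecutiveFailures >= threshold) {
            return fallback.get();        // circuit open: skip the service entirely
        }
        try {
            String result = service.get();
            consecutiveFailures = 0;      // a success closes the circuit again
            return result;
        } catch (RuntimeException e) {
            consecutiveFailures++;        // count the failure
            return fallback.get();        // fall back for this call
        }
    }

    public static void main(String[] args) {
        ToyCircuitBreaker breaker = new ToyCircuitBreaker(2);
        Supplier<String> failing = () -> { throw new RuntimeException("service down"); };
        Supplier<String> fallback = () -> "default";
        System.out.println(breaker.call(failing, fallback)); // default (1st failure)
        System.out.println(breaker.call(failing, fallback)); // default (2nd failure)
        // Circuit is now open: the service is no longer invoked at all.
        System.out.println(breaker.call(failing, fallback)); // default
    }
}
```

Real Hystrix additionally half-opens the circuit after a sleep window to probe for recovery; this toy version stays open once tripped.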
7. What is Netflix feign? What are its advantages?
Feign is a Java HTTP client binder inspired by Retrofit, JAX-RS 2.0 and WebSocket. Feign's first goal was to reduce the complexity of binding uniformly to HTTP APIs, regardless of how RESTful they are. In the employee-consumer example, we consumed the REST service exposed by employee-producer using RestTemplate.
But we had to write a lot of code to perform the following steps:
1. Use the Ribbon for load balancing.
2. Get the service instance, and then the base URL.
3. Consume the service using RestTemplate. The code was as follows:
```java
@Controller
public class ConsumerControllerClient {

    @Autowired
    private LoadBalancerClient loadBalancer;

    public void getEmployee() throws RestClientException, IOException {
        // Pick an instance of employee-producer via the load balancer
        ServiceInstance serviceInstance = loadBalancer.choose("employee-producer");
        System.out.println(serviceInstance.getUri());

        String baseUrl = serviceInstance.getUri().toString() + "/employee";

        RestTemplate restTemplate = new RestTemplate();
        ResponseEntity<String> response = null;
        try {
            response = restTemplate.exchange(baseUrl, HttpMethod.GET,
                    getHeaders(), String.class);
        } catch (Exception ex) {
            System.out.println(ex);
        }
        System.out.println(response.getBody());
    }
}
```
This code risks exceptions such as NullPointerException and is not optimal. We will see how Netflix Feign makes the call easier and cleaner. If the Netflix Ribbon dependency is also on the classpath, Feign also takes care of load balancing by default.
8. What is spring cloud bus? Do we need it?
Consider the following situation: multiple applications read their properties via Spring Cloud Config, and Spring Cloud Config reads those properties from Git.
In the example below, multiple employee-producer modules obtain the Eureka registration property from the employee-config module.
Now suppose the Eureka registration property in Git is changed to point to another Eureka server. We would then have to restart every service to pick up the updated property.
There is another way: the actuator endpoint /refresh. But we would have to call this URL separately for each module. For example, if employee-producer1 is deployed on port 8080, we call http://localhost:8080/refresh; similarly for employee-producer2, http://localhost:8081/refresh, and so on. That is again troublesome. This is where Spring Cloud Bus comes in.
Spring Cloud Bus provides the ability to refresh configuration across multiple instances. So in the example above, refreshing employee-producer1 automatically refreshes all the other modules that need it, which is especially useful when many microservices are up and running. This works by connecting all microservices to a single message broker: whenever one instance is refreshed, the event is propagated to every microservice listening on that broker, and they refresh themselves too. Refreshing any single instance via the endpoint /bus/refresh refreshes all of them.
Rabbitmq interview questions
1. What is rabbitmq
RabbitMQ is a message queuing technology that uses AMQP (Advanced Message Queuing Protocol). Its biggest feature is that the consumer does not need to be sure that the provider exists, which achieves a high degree of decoupling between services.
2. Why use rabbitmq
1. In a distributed system, it provides advanced features such as asynchronous processing, peak shaving, and load balancing.
2. It has a persistence mechanism, so messages still in the queue are preserved.
3. It decouples consumers from producers.
4. In high-concurrency scenarios, a message queue turns synchronous access into serial access, giving a degree of rate limiting that protects database operations.
5. A message queue can make order placement asynchronous: while the user waits in the queue, the backend creates the logical order.
3. Scenarios using rabbitmq
1. Asynchronous communication between services
2. Sequential consumption
3. Scheduled tasks
4. Peak shaving of requests
4. How do I ensure that messages are sent to RabbitMQ correctly? And how do I ensure that the receiver consumed the message?
Sender confirmation mode
If the channel is set to confirm mode (sender confirmation mode), all messages published on the channel will be assigned a unique ID. Once the message is delivered to the destination queue or written to disk (persistent message), the channel will send an acknowledgement to the producer (including the unique ID of the message). If rabbitmq has an internal error that causes the message to be lost, a NACK (not acknowledged) message will be sent.
The sender confirmation mode is asynchronous, and the producer application can continue to send messages while waiting for confirmation. When the confirmation message reaches the producer application, the callback method of the producer application is triggered to process the confirmation message.
Receiver confirmation mechanism
Receiver message confirmation mechanism
After receiving each message, the consumer must confirm it (message reception and message confirmation are two different operations). Rabbitmq can safely delete a message from the queue only if the consumer confirms the message. The timeout mechanism is not used here. Rabbitmq only confirms whether to resend the message through the disconnection of the consumer. That is, rabbitmq gives the consumer enough time to process messages as long as the connection is not interrupted. Ensure the final consistency of data;
Several special cases are listed below
If the consumer receives a message and disconnects or cancels the subscription before acknowledging it, RabbitMQ considers the message undelivered and redistributes it to the next subscribed consumer. (This creates a risk of duplicate consumption, which requires deduplication.) If a consumer receives a message but does not acknowledge it while the connection stays open, RabbitMQ considers that consumer busy and will not distribute more messages to it.
5. How to avoid repeated delivery or consumption of messages?
When producing, MQ generates an inner-msg-id for every message sent by a producer, used as the deduplication basis (for retransmission after a failed delivery) so that duplicate messages do not enter the queue.
When consuming, the message body must carry a bizId (globally unique within the business, e.g. payment ID, order ID, post ID) as the deduplication basis to avoid consuming the same message twice.
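A minimal consumer-side deduplication sketch along these lines (in production the set of seen bizIds would live in Redis or a database rather than in memory; the names here are illustrative):

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class DedupConsumer {
    // Business IDs that have already been processed
    private final Set<String> processedBizIds = ConcurrentHashMap.newKeySet();

    /** Returns true if the message was processed, false if it was a duplicate. */
    public boolean handle(String bizId, Runnable businessLogic) {
        // add() returns false when the ID was already present
        if (!processedBizIds.add(bizId)) {
            return false;           // duplicate delivery: skip
        }
        businessLogic.run();        // first delivery: process normally
        return true;
    }

    public static void main(String[] args) {
        DedupConsumer c = new DedupConsumer();
        System.out.println(c.handle("order-42", () -> {}));  // true
        System.out.println(c.handle("order-42", () -> {}));  // false (duplicate)
    }
}
```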
6. What transmission is the message based on?
Because the creation and destruction of TCP connections are expensive, and the number of concurrent connections is limited by system resources, it will cause a performance bottleneck. Rabbitmq uses channels to transmit data. A channel is a virtual connection established in a real TCP connection, and there is no limit to the number of channels on each TCP connection.
7. How are messages distributed?
If at least one consumer is subscribed to the queue, messages are sent to the consumers in a round-robin manner; each message is delivered to exactly one subscribed consumer (provided the consumer processes and acknowledges it normally). Multiple consumption can be achieved through routing.
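The round-robin distribution described above can be sketched as follows (a toy illustration of the idea, not RabbitMQ's implementation):

```java
import java.util.List;

public class RoundRobinDispatcher {
    private final List<String> consumers;
    private int next = 0;

    public RoundRobinDispatcher(List<String> consumers) {
        this.consumers = consumers;
    }

    /** Each message goes to exactly one consumer, taken in turn. */
    public String dispatch(String message) {
        String consumer = consumers.get(next);
        next = (next + 1) % consumers.size();
        return consumer;
    }

    public static void main(String[] args) {
        RoundRobinDispatcher d = new RoundRobinDispatcher(List.of("c1", "c2"));
        System.out.println(d.dispatch("m1")); // c1
        System.out.println(d.dispatch("m2")); // c2
        System.out.println(d.dispatch("m3")); // c1
    }
}
```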
8. How do messages route?
Message provider -> routing -> one or more queues. When a message is published to an exchange it carries a routing key, set when the message is created. A queue is bound to the exchange via a binding key. When the message reaches the exchange, RabbitMQ matches the message's routing key against the queues' binding keys (different exchange types have different routing rules).
The commonly used exchanges mainly come in the following three types:
Fanout: the exchange broadcasts every message it receives to all bound queues.
Direct: the message is delivered to the queue whose binding key exactly matches the routing key.
Topic: messages from different sources can reach the same queue; with a topic exchange, wildcards can be used in the binding key.
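As an illustration of the topic exchange's wildcard rules ('*' matches exactly one word, '#' matches zero or more words), here is a toy routing-key matcher (a sketch of the rules only, not RabbitMQ's actual matcher):

```java
public class TopicMatcher {

    public static boolean matches(String pattern, String routingKey) {
        return match(pattern.split("\\."), 0, routingKey.split("\\."), 0);
    }

    private static boolean match(String[] p, int pi, String[] k, int ki) {
        if (pi == p.length) return ki == k.length;   // pattern exhausted
        if (p[pi].equals("#")) {
            // '#' may consume zero or more words
            for (int skip = ki; skip <= k.length; skip++) {
                if (match(p, pi + 1, k, skip)) return true;
            }
            return false;
        }
        if (ki == k.length) return false;            // key exhausted, pattern not
        if (p[pi].equals("*") || p[pi].equals(k[ki])) {
            return match(p, pi + 1, k, ki + 1);      // '*' or literal word match
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(matches("order.*", "order.created"));    // true
        System.out.println(matches("order.*", "order.created.eu")); // false
        System.out.println(matches("order.#", "order.created.eu")); // true
    }
}
```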
9. How to ensure that messages are not lost?
Message persistence, provided the queue itself is also persistent. RabbitMQ guarantees that persistent messages survive a server restart by writing them to a persistent log file on disk. When a persistent message is published to a durable exchange, RabbitMQ sends the response only after the message is committed to the log file. Once a consumer consumes a persistent message from a durable queue, RabbitMQ marks the message in the persistence log as awaiting garbage collection. If RabbitMQ restarts before a persistent message is consumed, it automatically rebuilds the exchanges and queues (and bindings) and republishes the messages from the persistent log file to the appropriate queues.
10. What are the benefits of using rabbitmq?
1. High decoupling between services
2. High-performance asynchronous communication
3. Traffic peak shaving
11. Rabbitmq cluster
Mirror cluster mode
The queue you create, both its metadata and its messages, exists on multiple instances. Every message written to the queue is automatically synchronized to the queue's replicas on the other instances.
The advantage is that if any machine goes down, the others can still serve. The disadvantages: first, the performance overhead is heavy, because synchronizing every message to all machines puts great pressure on network bandwidth; second, there is no scalability. If a queue is heavily loaded and you add machines, each new machine also holds all of the queue's data, so there is no way to scale the queue linearly.
12. Disadvantages of MQ
Reduced system availability
The more external dependencies a system has, the easier it is to break. Originally system A just called the interfaces of systems B, C, and D, and all four systems A, B, C, D were fine. Then you add an MQ. What if the MQ goes down? When the MQ goes down, the whole system goes down with it.
Increased system complexity
Once MQ is added, system complexity rises: how do you ensure messages are not consumed twice? How do you handle message loss? How do you guarantee the order of message delivery? A whole pile of new problems.
Consistency problem
After system A has processed a request it returns success, and the caller believes the request succeeded. But if, of systems B, C, and D, the writes to B and D succeed while the write to C fails, the data is now inconsistent.
Therefore message queuing is actually a very complex piece of architecture. Introducing it brings many advantages, but it also forces you to adopt various additional technical solutions and architecture to mitigate its drawbacks. After all that, you find the system complexity has increased by an order of magnitude, perhaps tenfold. But at the critical moments, it is still needed.
Kafka interview questions
1. How to get a list of topic topics
bin/kafka-topics.sh --list --zookeeper localhost:2181
2. What is the command line for producers and consumers?
The producer publishes messages to a topic:
bin/kafka-console-producer.sh --broker-list 192.168.43.49:9092 --topic Hello-Kafka
Note that the IP here is the listeners configuration from server.properties. Each new line entered becomes a new message.
The consumer receives messages:
bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic Hello-Kafka --from-beginning
3. Is the consumer pushing or pulling?
Kafka initially considered whether consumers should pull messages from brokers or brokers should push messages to consumers, i.e. pull vs. push. Here Kafka follows the traditional design shared by most messaging systems: producers push messages to the broker, and consumers pull messages from the broker.
Some messaging systems, such as Scribe and Apache Flume, adopt the push model, pushing messages downstream to consumers. This has both advantages and disadvantages: the broker determines the push rate, which makes it hard to serve consumers with different consumption rates. The goal of a messaging system is to let consumers consume as fast as they can; unfortunately, in the push model, when the broker pushes much faster than the consumer can consume, the consumer may crash. In the end, Kafka chose the traditional pull model.
Another advantage of pull mode is that consumers can decide whether to pull data from brokers in batches. The push mode must decide whether to push each message immediately or batch after caching without knowing the consumption capacity and consumption strategy of downstream consumers. If a lower push rate is adopted to avoid consumer crash, it may cause waste by pushing fewer messages at a time. In the pull mode, consumers can determine these strategies according to their consumption ability.
A disadvantage of pull is that if the broker has no messages available for consumption, the consumer keeps polling in a loop until new messages arrive. To avoid this, Kafka has a parameter that lets the consumer request block until new messages arrive (optionally until a given number of messages has accumulated, so that they can be sent as a batch).
4. Talk about Kafka maintaining consumption status tracking
Most message systems keep the record of message consumption on the broker side: after a message is handed to a consumer, the broker either marks it as consumed immediately or waits for the consumer's notification. Messages can then be deleted right after consumption to reduce space usage.
But is there any problem? If a message is marked as consumed immediately after it is sent, then once the consumer fails to process it (for example, the program crashes), the message is lost. To solve this, many message systems provide another option: when the message is sent it is only marked as sent, and it is marked as consumed only after the consumer confirms successful consumption. Although this solves the message-loss problem, it creates new ones. First, if the consumer processes the message successfully but fails to send the response to the broker, the message is consumed twice. Second, the broker must maintain the state of every message: lock the message, change its state, and release the lock, every single time, not to mention storing a large amount of state data. For example, if a message is sent but no confirmation of successful consumption is ever received, it stays locked forever.

Kafka adopts a different strategy. A topic is divided into several partitions, and each partition is consumed by only one consumer at a time. This means the consumption position within each partition's log is just a single integer: the offset. Marking the consumption state of a partition therefore costs one integer per partition, which makes tracking consumption state very simple.
This brings another advantage: consumers can adjust offset to an older value to re consume old messages. This seems incredible to the traditional message system, but it is really very useful. Who stipulates that a message can only be consumed once?
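The per-partition offset idea above, including rewinding to re-consume old messages, can be sketched as a toy model (not Kafka's implementation; the names are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

public class OffsetTracker {
    // One integer (long) per partition is the whole consumption state
    private final Map<Integer, Long> committed = new HashMap<>();

    /** Record that everything before `offset` in this partition is consumed. */
    public void commit(int partition, long offset) {
        committed.put(partition, offset);
    }

    /** Where the next poll for this partition should start. */
    public long position(int partition) {
        return committed.getOrDefault(partition, 0L);
    }

    /** Rewinding is just setting the integer back: old messages can be re-read. */
    public void rewind(int partition, long offset) {
        committed.put(partition, offset);
    }

    public static void main(String[] args) {
        OffsetTracker t = new OffsetTracker();
        t.commit(0, 100L);
        System.out.println(t.position(0)); // 100
        t.rewind(0, 50L);
        System.out.println(t.position(0)); // 50
    }
}
```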
5. Let's talk about master-slave synchronization
https://blog.csdn.net/honglei…
6. Why do you need a message system? Can’t MySQL meet the requirements?
1. Decoupling:
It allows you to extend or modify the processes on both sides independently, as long as you ensure that they comply with the same interface constraints.
2. Redundancy:
Message queuing avoids the risk of data loss by persisting data until they have been completely processed. In the “insert get delete” paradigm adopted by many message queues, before deleting a message from the queue, your processing system needs to clearly indicate that the message has been processed, so as to ensure that your data is safely saved until you use it.
3. Scalability:
Because message queuing decouples your processing process, it is easy to increase the frequency of message queuing and processing, as long as you add another processing process.
4. Flexibility & peak processing capacity:
In the case of a sharp increase in traffic, applications still need to continue to play a role, but such burst traffic is not common. It would be a huge waste to put resources on standby to handle such peak visits. Using message queuing can make key components withstand the sudden access pressure without completely crashing due to sudden overloaded requests.
5. Recoverability:
Failure of some components of the system will not affect the whole system. Message queuing reduces the coupling between processes, so even if a process processing messages hangs, the messages added to the queue can still be processed after the system recovers.
6. Sequence assurance:
In most usage scenarios, the order of data processing is very important. Most message queues are sorted and can ensure that the data will be processed in a specific order. (Kafka guarantees the order of messages in a partition)
7. Buffer:
It helps to control and optimize the speed of data flow through the system and solve the inconsistency between the processing speed of production messages and consumption messages.
8. Asynchronous communication:
Many times, users do not want or need to process messages immediately. Message queuing provides an asynchronous processing mechanism that allows users to put a message on the queue without processing it immediately. Put as many messages into the queue as you want, and then process them when needed.
7. What is the role of zookeeper for Kafka?
ZooKeeper is an open-source, high-performance coordination service used for Kafka's distributed application.
ZooKeeper is mainly used for communication between the different nodes in the cluster.
In Kafka it is used to commit offsets, so if a node fails for any reason the consumer can resume from the previously committed offset.
It also performs other activities, such as leader detection, distributed synchronization, configuration management, identifying when new nodes join or leave, cluster membership, real-time node status, and so on.
8. What are the three transaction (delivery) semantics for data transmission?
Like MQTT's transaction semantics, there are three levels.
(1) At most once: a message is never redelivered; it is transmitted at most once, but possibly not at all.
(2) At least once: a message is never lost; it is transmitted at least once, but possibly more than once.
(3) Exactly once: nothing is lost and nothing is duplicated; each message is transmitted once and only once, which is what we want.
9. What are the two conditions for Kafka to judge whether a node is still alive?
(1) The node must be able to maintain its connection to ZooKeeper, which checks each node's connection through a heartbeat mechanism.
(2) If the node is a follower, it must replicate the leader's writes in time; the replication lag must not be too long.
10. There are three key differences between Kafka and traditional MQ messaging systems
(1) Kafka persists its logs, which can be read repeatedly and retained indefinitely.
(2) Kafka is a distributed system: it runs as a cluster, scales flexibly, and replicates data internally for fault tolerance and high availability.
(3) Kafka supports real-time stream processing.
11. Let’s talk about the three mechanisms of Kafka’s ack
request.required.acks has three values: 0, 1, and -1 (all).
0: the producer does not wait for the broker's ack. This gives the lowest latency but the weakest durability guarantee: if the broker goes down, data is lost.
1: the leader replica confirms receipt of the message and sends the ack. But if the leader goes down before the followers have finished replicating, the new leader will be missing that data, so loss is still possible.
-1 (all): the ack is sent only after all follower replicas have also received the data, so the data is not lost.
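As a configuration sketch, the corresponding producer setting (request.required.acks is the legacy name from the old Scala client; in the modern Java client the property is simply acks):

```properties
# acks=0   -> fire and forget: lowest latency, weakest durability
# acks=1   -> leader only: data lost if the leader dies before replication
# acks=all -> wait for all in-sync replicas: strongest durability guarantee
acks=all
```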
12. How can a consumer commit offsets itself instead of auto-committing?
Set enable.auto.commit to false, and after processing a batch of messages call commitSync() or commitAsync():

```java
ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
for (ConsumerRecord<String, String> record : records) {
    // process the record ...
}
try {
    consumer.commitSync();   // commit the offsets of the processed batch
} catch (CommitFailedException e) {
    // handle the failed commit
}
```
13. Consumer failure, how to solve the livelock problem?
A "livelock" occurs when the consumer keeps sending heartbeats but makes no progress, i.e. does not call poll. To prevent a consumer from holding its partitions forever in this situation, Kafka uses the max.poll.interval.ms liveness-detection mechanism: if you do not call poll at least once per that interval, the client proactively leaves the group so that another consumer can take over the partitions. When this happens you will see an offset commit failure (a CommitFailedException thrown by commitSync()). This is a safety mechanism ensuring that only active members of the group can commit offsets. So to stay in the group, you must keep calling poll.
The consumer provides two configuration settings to control the poll loop:
max.poll.interval.ms: increasing the poll interval gives the consumer more time to process the messages returned (a poll(long) call usually returns a batch). The downside is that a larger value delays group rebalancing.
max.poll.records: limits the number of messages returned per poll call, making it easier to bound the maximum work done per poll interval. With a smaller value you can shorten the poll interval and reduce the impact of rebalancing.
These options are not sufficient when message processing time is unpredictable. The recommended way to handle that situation is to move message processing to another thread and let the consumer keep calling poll. Care must be taken that the committed offset does not get ahead of the actual processing position. You must also disable auto-commit and manually commit the offset for a record only after its thread has finished processing it. Note too that you need to pause the partition so that poll returns no new messages while earlier ones are still being processed (if your processing is slower than the pull rate, spawning ever more threads would eventually exhaust the machine's memory).
14. How do you control the consumption position?
Kafka uses seek(TopicPartition, long) to move the consumer to a new position. Convenience methods for jumping to the earliest and latest offsets retained by the server are also available: seekToBeginning(Collection) and seekToEnd(Collection).
15. How does Kafka guarantee ordered (sequential) consumption when distributed (not stand-alone)?
Kafka's unit of distribution is the partition. Each partition is backed by a single write-ahead log, so FIFO order is guaranteed within a partition; order is not guaranteed across partitions. In practice, most users get the ordering they need via the message key, because messages with the same key are always sent to the same partition.
When sending a message, Kafka lets you specify three parameters (topic, partition, key); partition and key are optional. If you specify a partition, all messages sent to that partition are ordered, and on the consumer side Kafka guarantees that a partition is consumed by only one consumer in the group. Alternatively, if you specify a key (such as an order ID), all messages with the same key are routed to the same partition.
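A simplified illustration of key-based routing. Kafka's real default partitioner applies murmur2 to the serialized key bytes; here String.hashCode is used as a stand-in, but the guarantee being demonstrated is the same: equal keys always map to the same partition, so their messages stay in send order:

```java
public class KeyPartitioning {
    // Simplified stand-in for Kafka's default partitioner: hash the key
    // and take it modulo the partition count. (Kafka actually uses
    // murmur2 on the key bytes, not String.hashCode.)
    public static int partitionFor(String key, int numPartitions) {
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        // All events for one order id land in one partition,
        // so they are consumed in the order they were sent.
        int p1 = partitionFor("order-1001", 6);
        int p2 = partitionFor("order-1001", 6);
        System.out.println(p1 == p2); // always true for equal keys
    }
}
```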
16. What is Kafka’s high availability mechanism?
This question is fairly broad. You can cover Kafka's system characteristics, the leader–follower replication relationship, and how message reads and writes are ordered.
https://www.cnblogs.com/qingy…
https://www.tuicool.com/artic…
https://yq.aliyun.com/article…
17. How Kafka reduces data loss
https://www.cnblogs.com/huxi2…
18. How does Kafka avoid consuming duplicate data? For example, a payment deduction must not be applied twice.
This must be thought through in combination with the business. Some common approaches:
1. If a message writes a row to a database, first look it up by primary key: if the row already exists, update it instead of inserting.
2. If a message writes to Redis, there is no problem: every write is a SET, which is naturally idempotent.
3. If neither of the above applies, it is a little more complicated. Have the producer attach a globally unique ID (such as an order ID) to each message. Before processing, the consumer checks Redis for that ID: if it has not been consumed, process it and then record the ID in Redis; if it has, skip it. This ensures the same message is never processed twice.
4. Rely on a database unique-key constraint so duplicate data cannot be inserted as multiple rows. Because of the unique constraint, a duplicate insert simply fails with an error and never leaves dirty data in the database.
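A sketch of the "check a unique message ID before processing" idea. An in-memory HashSet stands in for the shared store (in production this would be Redis, e.g. a SETNX on the ID), and the class and method names are hypothetical:

```java
import java.util.HashSet;
import java.util.Set;

public class IdempotentConsumer {
    // Stand-in for Redis: in production this would be a shared store
    // keyed by a globally unique message ID supplied by the producer.
    private final Set<String> processedIds = new HashSet<>();
    private int deductions = 0;

    // Apply a deduction only if this message ID has never been seen.
    public boolean handle(String messageId) {
        if (!processedIds.add(messageId)) {
            return false;      // duplicate delivery: skip it
        }
        deductions++;          // ... perform the deduction exactly once ...
        return true;
    }

    public int deductions() { return deductions; }
}
```

Even if the broker redelivers the same message, the deduction runs only once, which is the idempotency guarantee the business needs.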

