Question 1: what do you think of when you see this picture?
(PS: screenshot from programming ideas)
A:
The arrow in this picture that points from Map to Collection does not mean that Map is a subclass (sub-interface) of Collection; it means that the keySet view obtained from a Map is an instance of a sub-interface of Collection.
We can see that the collection framework has two basic interfaces: Map and Collection. Personally, I think Map is not really a collection; calling it a mapping would be more accurate. But because its keySet view is a Set, we can still treat it as part of the collection framework.
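For instance, here is a minimal sketch (map contents are made up for illustration) showing that the keySet() view of a Map is a Set, which is itself a Collection:
import java.util.Collection;
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

public class KeySetViewDemo {
    public static void main(String[] args) {
        Map<String, Integer> map = new HashMap<>();
        map.put("a", 1);
        map.put("b", 2);
        Set<String> keys = map.keySet();        // a Set view of the keys
        Collection<String> asCollection = keys; // Set is a sub-interface of Collection
        System.out.println(asCollection);       // e.g. [a, b]
    }
}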
Collection extends the Iterable interface, which provides an iterator that can only traverse the elements of a collection in one direction; in other words, every class that implements Collection can be traversed with an Iterator.
Each interface has an AbstractXxx skeleton class that contains some default implementations. When we define a custom collection class, we can extend the corresponding abstract class and override its methods according to our needs.
From the container perspective there are really only four kinds of containers: Map, Queue, Set, and List.
Question 2: list common collections and give a brief introduction
A:
- ArrayList: an indexed sequence that can grow and shrink dynamically
- LinkedList: an ordered sequence that allows efficient insertion and deletion at any position
- ArrayDeque: a double-ended queue implemented with a circular array
- HashSet: an unordered set with no duplicate elements
- TreeSet: a sorted set
- EnumSet: a set of values of an enumerated type
- LinkedHashSet: a set that remembers the insertion order of its elements
- PriorityQueue: a collection that allows efficient removal of the smallest element
- HashMap: a data structure that stores key/value associations
- TreeMap: a map whose keys are kept sorted
- EnumMap: a map whose keys belong to an enumerated type
- LinkedHashMap: a map that remembers the order in which key/value pairs were added
- WeakHashMap: a map whose values can be reclaimed by the garbage collector once they are no longer used elsewhere
- IdentityHashMap: a map that compares keys with == instead of equals
- Vector: rarely used nowadays; its design is dated and it has performance problems, so it has been replaced by ArrayList
- Hashtable: a legacy class; when synchronization is not needed it can be replaced by HashMap, and when it is, by ConcurrentHashMap
Question 3: what do you think about iterator
From a bird's-eye view we can see that all classes that implement Collection also implement the Iterable interface. That interface provides an iterator() method that constructs an object of the Iterator interface, and we can then use this iterator object to visit the elements in the collection one by one.
The usual way to use an iterator looks like this:
Collection<String> c = ...;
Iterator<String> iter = c.iterator();
while (iter.hasNext()) {
String s = iter.next();
System.out.println(s);
}
Or something like this:
//Available in JDK 1.8 and later
iter.forEachRemaining(element -> System.out.println(element));
The way next() works is this: think of an iterator as sitting in a position between two elements of the collection. When we call next(), the iterator jumps over the next element and returns the element it just passed. If the iterator is already past the last element, calling next() again throws a NoSuchElementException. So before calling next() you need to call hasNext() to check whether the iterator has reached the end of the collection.
By calling next() repeatedly you can visit each element of the collection one by one. The order depends on the container's data structure: for an ArrayList the iteration starts at index 0 and the index increases by 1 on each step, while for a HashSet, which is backed by a hash table, the elements come back in an essentially random order.
Iterator also has a remove() method, which removes the element returned by the last call to next(). Let me show how remove() is used:
Collection<String> c = ...;
Iterator<String> iter = c.iterator();
iter.next();
iter.remove();
This deletes the first element of the collection. Note that if we want to delete two adjacent elements, we must do it like this:
iter.remove();
iter.next();
iter.remove();
You can’t do that:
iter.remove();
iter.remove();
because there is a dependency between the next() and remove() methods: if you call remove() without a preceding call to next(), an IllegalStateException is thrown.
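To tie this together, here is a small sketch of my own (class and variable names are illustrative) that removes every matching element while iterating; note that next() is always called before remove():
import java.util.ArrayList;
import java.util.Collection;
import java.util.Iterator;

public class IteratorRemoveDemo {
    public static void main(String[] args) {
        Collection<String> c = new ArrayList<>();
        c.add("keep");
        c.add("drop");
        c.add("keep");
        Iterator<String> iter = c.iterator();
        while (iter.hasNext()) {
            String s = iter.next();   // advance first ...
            if ("drop".equals(s)) {
                iter.remove();        // ... then remove the element just returned
            }
        }
        System.out.println(c); // [keep, keep]
    }
}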
Question 4: what do you know about collection?
As you can see, as the top-level interface of the framework, Collection only extends Iterable. Next, let's look at the source code of the Iterable interface and see what we can learn from it.
public interface Iterable<T> {
Iterator<T> iterator();
default void forEach(Consumer<? super T> action) {
Objects.requireNonNull(action);
for (T t : this) {
action.accept(t);
}
}
default Spliterator<T> spliterator() {
return Spliterators.spliteratorUnknownSize(iterator(), 0);
}
}
You can see that there are three methods in this interface. The iterator() method provides us with an iterator, which was covered above. The forEach() method takes a functional interface as its parameter, so we can use it together with a lambda expression:
Collection<String> collection = ...;
collection.forEach(s -> System.out.println(s));
In this way each element can be visited. Under the hood this is the enhanced for loop, which is in turn an iterator traversal, because the compiler compiles the enhanced for loop into an iteration with an iterator. spliterator() is new in 1.8; literally it is a "splittable iterator". Unlike iterator(), which has to iterate sequentially, a spliterator can be split into several smaller iterators that operate in parallel. This not only enables multithreaded processing and improves efficiency, it also sidesteps the fail-fast behaviour of ordinary iterators (fail-fast is an error mechanism of the Java collections: when multiple threads modify the contents of the same collection, a fail-fast event may occur). spliterator() works together with the Stream API, also added in 1.8, to implement parallel streams, which greatly improves processing efficiency.
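As a rough sketch of how this plays out in practice (the list contents are made up; nothing here is specific to any particular implementation), spliterator() is what parallelStream() builds on:
import java.util.Arrays;
import java.util.List;
import java.util.Spliterator;

public class SpliteratorDemo {
    public static void main(String[] args) {
        List<String> list = Arrays.asList("a", "b", "c", "d");

        // Split the iterator into two halves that could be processed in parallel
        Spliterator<String> rest = list.spliterator();
        Spliterator<String> firstHalf = rest.trySplit();
        firstHalf.forEachRemaining(s -> System.out.println("first half: " + s));
        rest.forEachRemaining(s -> System.out.println("second half: " + s));

        // parallelStream() uses spliterators under the hood
        long count = list.parallelStream().filter(s -> !s.isEmpty()).count();
        System.out.println(count); // 4
    }
}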
The Collection interface declares 17 methods of its own (not counting those inherited from Object). Next, let's look at what each of these methods does:
- size(): returns the number of elements currently stored in the collection.
- isEmpty(): returns true if the collection contains no elements.
- contains(Object obj): returns true if the collection contains an object equal to obj.
- iterator(): returns an iterator over this collection.
- toArray(): returns an object array containing the elements of this collection.
- toArray(T[] arrayToFill): returns an object array of the collection's elements. If arrayToFill is large enough, the elements are copied into it and the slot immediately following the last element is set to null; otherwise a new array is allocated with the same component type as arrayToFill and a length equal to the size of the collection, and the elements are copied into that.
- add(Object element): adds an element to the collection; returns true if the collection changed as a result of the call.
- remove(Object obj): removes an object equal to obj from the collection; returns true if a matching object was removed.
- containsAll(Collection<?> other): returns true if this collection contains all elements of the other collection.
- addAll(Collection<? extends E> other): adds all elements of the other collection to this collection; returns true if the collection changed as a result of the call.
- removeAll(Collection<?> other): removes all elements of the other collection from this collection; returns true if the collection changed as a result of the call.
- removeIf(Predicate<? super E> filter): removes all elements for which filter returns true; returns true if the collection changed as a result of the call.
- retainAll(Collection<?> other): removes all elements of this collection that are not contained in the other collection; returns true if the collection changed as a result of the call.
- clear(): removes all elements from this collection.
- spliterator(): returns a splittable iterator over this collection.
- stream(): returns a stream over this collection.
- parallelStream(): returns a parallel stream over this collection.
As the top-level collection interface, Collection provides the methods for the basic operations, and through the Iterable interface it can supply an iterator for traversing the elements in the collection.
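A compact walk-through of a few of these methods (a sketch of my own; the values are arbitrary):
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;

public class CollectionMethodsDemo {
    public static void main(String[] args) {
        Collection<Integer> c = new ArrayList<>(Arrays.asList(1, 2, 3, 4, 5));

        System.out.println(c.size());        // 5
        System.out.println(c.contains(3));   // true

        c.removeIf(n -> n % 2 == 0);         // drop even numbers -> [1, 3, 5]
        c.retainAll(Arrays.asList(3, 5, 7)); // keep only the intersection -> [3, 5]

        Object[] asArray = c.toArray();
        System.out.println(Arrays.toString(asArray)); // [3, 5]
    }
}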
Question 5: what about AbstractCollection?
As the skeleton implementation of Collection, AbstractCollection bases its methods on the iterator. Here are just a few points in its source code that deserve special attention.
TAG 1 :
An array is itself an object and needs a certain amount of memory to store its object header information; the header occupies at most 8 bytes, which is why the maximum array size is capped at Integer.MAX_VALUE - 8.
TAG 2 :
The finishToArray(T[] r, Iterator<?> it) method is used to grow the array. When the array index reaches one past the last element, the array is expanded: a new array of size (cap + cap/2 + 1) is created and the contents of the original array are copied into it. Before growing, the method checks whether the new length would overflow. The iterator here comes from the caller (toArray(T[] t)) and has already been partially consumed; it does not start iterating from the beginning.
TAG 3 :
The hugeCapacity(int minCapacity) method is used to decide whether the requested capacity exceeds the default maximum for collection classes (Integer.MAX_VALUE - 8). We rarely call this method ourselves; we will meet it again when studying the ArrayList class, whose dynamic expansion uses the same method.
TAG 4 :
The add(E) method here throws an exception by default: if we try to modify an immutable collection, throwing an UnsupportedOperationException is the expected behaviour. For example, when you wrap a collection with Collections.unmodifiableXXX() and then call one of its mutating methods (add, remove, set, ...), you get exactly this error. So it is perfectly reasonable for AbstractCollection.add(E) to throw it.
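To show why the skeleton class is convenient, here is a minimal sketch of my own (not JDK code) of a read-only collection that only supplies iterator() and size(); everything else, such as contains() and toString(), falls back to AbstractCollection's iterator-based defaults, and add() keeps the default behaviour of throwing UnsupportedOperationException:
import java.util.AbstractCollection;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

// A fixed, read-only collection backed by a list
class FixedCollection<E> extends AbstractCollection<E> {
    private final List<E> elements;

    FixedCollection(List<E> elements) {
        this.elements = elements;
    }

    @Override
    public Iterator<E> iterator() {
        return elements.iterator();
    }

    @Override
    public int size() {
        return elements.size();
    }
}

public class AbstractCollectionDemo {
    public static void main(String[] args) {
        FixedCollection<String> c = new FixedCollection<>(Arrays.asList("a", "b", "c"));
        System.out.println(c.contains("b")); // true, via AbstractCollection's iterator-based default
        System.out.println(c);               // [a, b, c], toString() also comes from AbstractCollection
        // c.add("d");                       // would throw UnsupportedOperationException
    }
}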
Question 6: can you elaborate on the implementation of toArray?
Fair warning: no more chit-chat, let's go straight to the source code.
/**
*An array with equal space is allocated, and then the array elements are assigned values in turn
*/
public Object[] toArray() {
//New equal size array
Object[] r = new Object[size()];
Iterator<E> it = iterator();
for (int i = 0; i < r.length; i++) {
//Determine whether the traversal ends, in case the collection becomes smaller when multithreading
if (! it.hasNext())
return Arrays.copyOf(r, i);
r[i] = it.next();
}
//If the iterator still has elements, the collection grew during the copy (e.g. under concurrent modification), so continue in finishToArray
return it.hasNext() ? finishToArray(r, it) : r;
}
/**
*In the generic toArray(T[] a) method, the size of the parameter array is checked first:
*if it has enough space, the parameter array itself is used to store the elements; if not, a new one is allocated.
*The same happens inside the loop: if a can hold everything, a is returned; otherwise a new array is allocated.
*/
@SuppressWarnings("unchecked")
public <T> T[] toArray(T[] a) {
int size = size();
//If a.length >= size, a is used directly as r; otherwise the reflection API is used to create an array of length size with a's component type
T[] r = a.length >= size ? a :
(T[])java.lang.reflect.Array
.newInstance(a.getClass().getComponentType(), size);
Iterator<E> it = iterator();
for (int i = 0; i < r.length; i++) {
//Determine whether the traversal ends
if (! it.hasNext()) {
//If a == r (the caller's array was used), set the slot after the last element to null and return a
if (a == r) {
r[i] = null;
} else if (a.length < i) {
//If a is too short to hold the i elements, use Arrays.copyOf to return a trimmed copy of r
return Arrays.copyOf(r, i);
} else {
System.arraycopy(r, 0, a, 0, i);
if (a.length > i) {
a[i] = null;
}
}
return a;
}
//Traversal not finished yet: store the next element from the iterator into r
r[i] = (T)it.next();
}
//If the iterator still has elements, the collection grew during the copy, so continue in finishToArray
return it.hasNext() ? finishToArray(r, it) : r;
}
/**
*Set the maximum value of the container
*/
private static final int MAX_ARRAY_SIZE = Integer.MAX_VALUE - 8;
/**
*For dynamic expansion
*/
@SuppressWarnings("unchecked")
private static <T> T[] finishToArray(T[] r, Iterator<?> it) {
int i = r.length;
while (it.hasNext()) {
int cap = r.length;
if (i == cap) {
int newCap = cap + (cap >> 1) + 1;
if (newCap - MAX_ARRAY_SIZE > 0)
newCap = hugeCapacity(cap + 1);
r = Arrays.copyOf(r, newCap);
}
r[i++] = (T)it.next();
}
return (i == r.length) ? r : Arrays.copyOf(r, i);
}
private static int hugeCapacity(int minCapacity) {
if (minCapacity < 0) // overflow
throw new OutOfMemoryError
("Required array size too large");
return (minCapacity > MAX_ARRAY_SIZE) ?
Integer.MAX_VALUE :
MAX_ARRAY_SIZE;
}
To help understanding, here is the source of Arrays.copyOf(r, i) as well:
//The parameter original is the generic array to copy; newLength is the length of the copy
public static <T> T[] copyOf(T[] original, int newLength) {
return (T[]) copyOf(original, newLength, original.getClass());
}
public static <T,U> T[] copyOf(U[] original, int newLength, Class<? extends T[]> newType) {
@SuppressWarnings("unchecked")
T[] copy = ((Object)newType == (Object)Object[].class)
? (T[]) new Object[newLength]
: (T[]) Array.newInstance(newType.getComponentType(), newLength);
System.arraycopy(original, 0, copy, 0,
Math.min(original.length, newLength));
return copy;
}
We can see that it calls the System.arraycopy method. To satisfy our curiosity, let's look at that method's declaration:
//Copies length elements from src, starting at index srcPos, into dest starting at index destPos
public static native void arraycopy(Object src, int srcPos, Object dest, int destPos,int length);
You can see that this method is declared with the native keyword. What does native mean? The native keyword indicates that the method is a native method: its implementation is not in the current file but in code written in another language (such as C or C++). The Java language itself cannot access or operate on the underlying operating system directly, but it can call code written in other languages through the JNI interface to reach the lower layers.
JNI (Java Native Interface) is a native programming interface and part of the Java Software Development Kit (SDK). JNI allows Java code to use code and libraries written in other languages, and the Invocation API (part of JNI) can be used to embed the Java virtual machine (JVM) into native applications, allowing programmers to call Java code from within native code.
Now let's analyse toArray(). According to the English comments in the original source, the array returned by toArray is independent of the original collection: we can modify every slot of the array without affecting the collection. This may seem redundant, but considering that ArrayList is itself backed by an array, this restriction guarantees that converting an ArrayList to an array allocates a new array rather than returning the internal one.
If we are operating in a single thread and the size of the collection does not change, execution should normally end at the statement return it.hasNext() ? finishToArray(r, it) : r;. But the collection may change while it is being copied — it may grow or shrink — so the method also handles those situations. That is why every loop iteration checks whether the traversal has ended early, and why, at the end, it checks whether the collection has grown; if so, more space has to be allocated for the array.
Normally we never reach hugeCapacity, but handling it anyway reflects the rigour you expect from a framework.
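A small sketch of the two branches described above (list contents are my own): if the passed-in array is too small a new one is allocated, and if it is large enough the element right after the last one is set to null:
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ToArrayDemo {
    public static void main(String[] args) {
        List<String> list = new ArrayList<>(Arrays.asList("a", "b", "c"));

        // Too small: a new String[3] is allocated and returned
        String[] small = new String[1];
        String[] result1 = list.toArray(small);
        System.out.println(result1 == small);          // false
        System.out.println(Arrays.toString(result1));  // [a, b, c]

        // Large enough: the same array is filled and returned, with a null terminator
        String[] big = new String[5];
        Arrays.fill(big, "x");
        String[] result2 = list.toArray(big);
        System.out.println(result2 == big);            // true
        System.out.println(Arrays.toString(result2));  // [a, b, c, null, x]
    }
}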
Question 7: List is one of the most used collections — what do you think of it?
List extends Collection and describes an ordered collection. We can use an index to get values from the collection, and we can also use an iterator to access its elements. The first style is called random access, because elements can be visited in any order; with an iterator, elements must be visited sequentially.
Compared with its parent interface Collection it does not add much, but because List is an ordered collection it provides some index-based operations (a short example follows the list below):
- get(int index): gets the element at position index in the list.
- set(int index, E element): replaces the element at position index with element.
- add(int index, E element): inserts element at position index; the indices of that element and all following elements increase by 1.
- remove(int index): removes the element at the given index; the indices of the elements after that position decrease by 1.
- indexOf(Object o): returns the index of the first occurrence of object o in the list.
- lastIndexOf(Object o): returns the index of the last occurrence of object o in the list, or -1 if the list does not contain the object.
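A quick sketch exercising these index-based operations (values are arbitrary):
import java.util.ArrayList;
import java.util.List;

public class ListIndexDemo {
    public static void main(String[] args) {
        List<String> list = new ArrayList<>();
        list.add("a");
        list.add("b");
        list.add("b");

        System.out.println(list.get(1));           // b
        list.set(0, "A");                          // [A, b, b]
        list.add(1, "x");                          // [A, x, b, b]
        list.remove(1);                            // [A, b, b]
        System.out.println(list.indexOf("b"));     // 1
        System.out.println(list.lastIndexOf("b")); // 2
        System.out.println(list.lastIndexOf("z")); // -1
    }
}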
At the same time, List provides a sub-interface of Iterator, ListIterator, and based on this iterator it implements two default methods: replaceAll(UnaryOperator<E> operator) and sort(Comparator<? super E> c).
replaceAll(UnaryOperator<E> operator) is not the same as the replaceAll() method of the String class: the parameter it receives here is a functional interface. Let's look at the source code of that functional interface:
package java.util.function;
@FunctionalInterface
public interface UnaryOperator<T> extends Function<T, T> {
static <T> UnaryOperator<T> identity() {
return t -> t;
}
}
The usage is as follows:
List<String> strList = new ArrayList<>();
strList.add("Hungary");
strList.add("Foolish");
strList.replaceAll(t -> "Stay " + t);
strList.forEach(s -> System.out.println(s));
The printed result is:
Stay Hungary
Stay Foolish
sort(Comparator<? super E> c) also takes a functional interface; we can define our own ordering rule and pass it to this method to sort:
List<Human> humans = Lists.newArrayList(new Human("Sarah", 10), new Human("Jack", 12));
humans.sort((Human h1, Human h2) -> h1.getName().compareTo(h2.getName()));
Below is the source of Arrays.sort, which List.sort delegates to; you can see that it sorts with either the legacy merge sort or the TimSort algorithm.
public static <T> void sort(T[] a, Comparator<? super T> c) {
if (c == null) {
sort(a);
} else {
if (LegacyMergeSort.userRequested)
legacyMergeSort(a, c);
else
TimSort.sort(a, 0, a.length, c, null, 0, 0);
}
}
Question 8: you just mentioned ListIterator — can you elaborate on it?
As we mentioned earlier, ListIterator, as a sub-interface of Iterator, provides the ordered List collection with an iterator suited to list structures. Next, let's look at the ListIterator source code.
The difference from Iterator is that ListIterator adds some operations based on the list structure, together with methods for traversing the list in reverse (a short usage sketch follows the list):
- hasPrevious(): returns true if, when iterating the list backwards, there are still elements to visit.
- previous(): returns the previous object; if the beginning of the list has already been reached, a NoSuchElementException is thrown.
- nextIndex(): returns the index of the element that would be returned by the next call to next().
- previousIndex(): returns the index of the element that would be returned by the next call to previous().
- add(E newElement): adds an element before the current position.
- set(E newElement): replaces the last element returned by next() or previous() with the new element; if the list structure has been modified since the last call to next() or previous(), an IllegalStateException is thrown.
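Here is a small sketch of my own showing backward traversal, set() and add() with a ListIterator (list contents are made up):
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.ListIterator;

public class ListIteratorDemo {
    public static void main(String[] args) {
        List<String> list = new ArrayList<>(Arrays.asList("a", "b", "c"));

        // Start the iterator at the end of the list and walk backwards
        ListIterator<String> it = list.listIterator(list.size());
        while (it.hasPrevious()) {
            System.out.println(it.previousIndex() + " -> " + it.previous());
        }

        // Replace the element returned by the last previous() call
        it.set("A");               // list is now [A, b, c]
        it.add("start");           // inserted before the current position -> [start, A, b, c]
        System.out.println(list);
    }
}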
Question 9: what about AbstractList?
AbstractList is an abstract class that implements the List interface; its role relative to List is similar to that of AbstractCollection relative to Collection. At the same time, AbstractList extends AbstractCollection and supplies some default implementations for the List interface. It is aimed at randomly accessed data storage; if you need sequentially accessed storage instead, there is another abstract class, AbstractSequentialList, a subclass of AbstractList, which should be preferred for sequential access.
Next, let's look at the AbstractList source code and see which implementations of the List interface it adds compared with AbstractCollection.
Structurally, the AbstractList source consists of two internal iterators, two inner classes, and AbstractList's own method implementations, some of which are built on those inner classes and the internal iterators Itr and ListItr. Below is an analysis of part of the source (for space reasons I cannot include everything; this is just a starting point):
//By default the collection is treated as unmodifiable, so any operation that would change its elements throws an UnsupportedOperationException
public E set(int index, E element) {
throw new UnsupportedOperationException();
}
//Gets the index of an element in the collection
public int indexOf(Object o) {
//AbstractList provides implementation classes for Iterator and ListIterator, namely Itr and ListItr; here we call a method that instantiates a ListItr
ListIterator<E> it = listIterator();
if (o == null) {
while (it.hasNext())
if (it.next()==null)
return it.previousIndex();
} else {
while (it.hasNext())
if (o.equals(it.next()))
return it.previousIndex();
}
//Returns - 1 if the element does not exist in the collection
return -1;
}
/**
*Internal implementation of the iterator interface implementation class ITR
*/
private class Itr implements Iterator<E> {
//Cursor position
int cursor = 0;
//Index of the element most recently returned by the iterator; reset to -1 after that element is removed
int lastRet = -1;
//Concurrency flag. If the two values are inconsistent, it indicates that a concurrency operation has occurred and an error will be reported
int expectedModCount = modCount;
//Remove the element most recently returned by the iterator
public void remove() {
if (lastRet < 0)
throw new IllegalStateException();
checkForComodification();
try {
//Call the remove method that needs subclasses to implement
AbstractList.this.remove(lastRet);
if (lastRet < cursor)
cursor--;
//After each deletion, set lastRet to -1 to prevent calling remove() twice in a row
lastRet = -1;
//Re-synchronise the iterator's expected modification count with the collection's; this is explained in detail below
expectedModCount = modCount;
} catch (IndexOutOfBoundsException e) {
//If the index is out of bounds, it indicates that a concurrent operation has occurred, so a concurrent operation exception is thrown.
throw new ConcurrentModificationException();
}
}
//Determine whether concurrent operations have occurred
final void checkForComodification() {
if (modCount != expectedModCount)
throw new ConcurrentModificationException();
}
}
//ListItr is the ListIterator implementation class, inheriting from Itr
private class ListItr extends Itr implements ListIterator<E> {
//Get the previous element (a diagram later in the article helps to understand this)
public E previous() {
checkForComodification();
try {
//This is slightly different from the writing method of the parent class. First, subtract one from the position of the cursor
int i = cursor - 1;
E previous = get(i);
//Because you need to return the previous element, the cursor value here is actually the same as the cursor position in the last iteration
lastRet = cursor = i;
return previous;
} catch (IndexOutOfBoundsException e) {
checkForComodification();
throw new NoSuchElementException();
}
}
//Set element
public void set(E e) {
if (lastRet < 0)
throw new IllegalStateException();
checkForComodification();
try {
//The default location is the element that the iterator passed the last time
AbstractList.this.set(lastRet, e);
expectedModCount = modCount;
} catch (IndexOutOfBoundsException ex) {
throw new ConcurrentModificationException();
}
}
//Add element
public void add(E e) {
checkForComodification();
try {
//Set the added position to the current cursor position
int i = cursor;
AbstractList.this.add(i, e);
//lastRet is set to -1, i.e. a freshly added element cannot be removed immediately
lastRet = -1;
//After adding, move the cursor forward by one
cursor = i + 1;
//Unification of iterator concurrency flag and collection concurrency flag
expectedModCount = modCount;
} catch (IndexOutOfBoundsException ex) {
//If there is an index out of bounds, it indicates that a concurrent operation has occurred
throw new ConcurrentModificationException();
}
}
}
//Extract a sub-list view
public List<E> subList(int fromIndex, int toIndex) {
//Is random access supported
return (this instanceof RandomAccess ?
new RandomAccessSubList<>(this, fromIndex, toIndex) :
new SubList<>(this, fromIndex, toIndex));
}
//Remove a range of elements from the collection using an iterator
protected void removeRange(int fromIndex, int toIndex) {
ListIterator<E> it = listIterator(fromIndex);
for (int i=0, n=toIndex-fromIndex; i<n; i++) {
it.next();
it.remove();
}
}
}
//SubList, an inner class extending AbstractList, represents a portion of its backing list
class SubList<E> extends AbstractList<E> {
private final AbstractList<E> l;
private final int offset;
private int size;
//Construct a sublist based on the parent class
SubList(AbstractList<E> list, int fromIndex, int toIndex) {
if (fromIndex < 0)
throw new IndexOutOfBoundsException("fromIndex = " + fromIndex);
if (toIndex > list.size())
throw new IndexOutOfBoundsException("toIndex = " + toIndex);
if (fromIndex > toIndex)
throw new IllegalArgumentException("fromIndex(" + fromIndex +
") > toIndex(" + toIndex + ")");
l = list;
offset = fromIndex;
size = toIndex - fromIndex;
//The number of modifications (concurrency flag) is consistent with the parent class
this.modCount = l.modCount;
}
//Essentially delegates to the backing list's set and get methods
public E set(int index, E element) {
rangeCheck(index);
checkForComodification();
return l.set(index+offset, element);
}
public void add(int index, E element) {
rangeCheckForAdd(index);
checkForComodification();
//The add is still performed on the backing list
l.add(index+offset, element);
this.modCount = l.modCount;
//Then put size + 1
size++;
}
}
//Compared with SubList, the only difference is the RandomAccess marker indicating that random access is supported
class RandomAccessSubList<E> extends SubList<E> implements RandomAccess {
RandomAccessSubList(AbstractList<E> list, int fromIndex, int toIndex) {
super(list, fromIndex, toIndex);
}
public List<E> subList(int fromIndex, int toIndex) {
return new RandomAccessSubList<>(this, fromIndex, toIndex);
}
}
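To see what the skeleton buys us, here is a minimal sketch of my own (not JDK code) of an immutable random-access list that only implements get() and size(); iteration, indexOf(), subList() and so on all come from AbstractList:
import java.util.AbstractList;
import java.util.RandomAccess;

// An immutable list of the integers [0, n)
class RangeList extends AbstractList<Integer> implements RandomAccess {
    private final int n;

    RangeList(int n) {
        this.n = n;
    }

    @Override
    public Integer get(int index) {
        if (index < 0 || index >= n) {
            throw new IndexOutOfBoundsException("Index: " + index);
        }
        return index;
    }

    @Override
    public int size() {
        return n;
    }
}

public class AbstractListDemo {
    public static void main(String[] args) {
        RangeList range = new RangeList(5);
        System.out.println(range);               // [0, 1, 2, 3, 4], toString() from AbstractCollection
        System.out.println(range.indexOf(3));    // 3, implemented by AbstractList via its ListItr
        System.out.println(range.subList(1, 4)); // [1, 2, 3], a RandomAccessSubList view
        // range.set(0, 42);                     // would throw UnsupportedOperationException
    }
}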
Question 10: the relationship between index and cursor
Here I have drawn a diagram; with it in mind, let's look again at some of the code in ListItr:
//The index value of the next bit is equal to the cursor value
public int nextIndex() {
return cursor;
}
//The index value of the previous bit is equal to the cursor value minus one
public int previousIndex() {
//Honestly, I don't understand why there is no out-of-bounds check here..
return cursor-1;
}
Suppose the iterator is now at position 1. With the iterator at this position, calling nextIndex() returns 1 and calling previousIndex() returns 0, which matches our intuition. Next, let's look at the source code of the previous() method:
//Get the previous element (the diagram above helps to understand this)
public E previous() {
checkForComodification();
try {
//This is slightly different from the writing method of the parent class. First, subtract one from the position of the cursor
int i = cursor - 1;
E previous = get(i);
//Because you need to return the previous element, the cursor value here is actually the same as the cursor position in the last iteration
lastRet = cursor = i;
return previous;
} catch (IndexOutOfBoundsException e) {
checkForComodification();
throw new NoSuchElementException();
}
}
Actually, I was puzzled when I analysed this: why is lastRet equal to cursor here, when in Itr's next() method lastRet ends up equal to cursor - 1? After drawing the diagram and analysing the relationship between index and cursor, it finally clicked.
lastRet represents the position of the element returned by the last iteration step. Take an example: when the iterator is at position 4 and we call previous(), the iterator moves to position 3, and the position of the element just returned is also 3. If instead we use next(), afterwards the iterator is at position 5 and the element just returned is at position 4. This also confirms the logic of nextIndex() and previousIndex().
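A tiny sketch that makes the cursor/index relationship concrete (the list and the starting position are my own choices):
import java.util.Arrays;
import java.util.List;
import java.util.ListIterator;

public class CursorIndexDemo {
    public static void main(String[] args) {
        List<String> list = Arrays.asList("a", "b", "c");
        ListIterator<String> it = list.listIterator(1); // cursor sits between index 0 and index 1

        System.out.println(it.nextIndex());     // 1  (equal to the cursor)
        System.out.println(it.previousIndex()); // 0  (cursor - 1)

        it.next();                              // returns "b", cursor moves to 2
        System.out.println(it.nextIndex());     // 2
        it.previous();                          // returns "b", cursor moves back to 1
        System.out.println(it.previousIndex()); // 0
    }
}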
Question 11: expectedModCount and modCount
A:
We can see from the source code
//This variable is transient, that is to say, it does not need to be stored during serialization
protected transient int modCount = 0;
This variable records the number of structural modifications made to the current collection object; every structural modification increments it by 1. expectedModCount records the number of structural modifications the iterator expects the object to have had. Each time the iterator makes a structural change it compares expectedModCount with modCount: if they are equal, no other iterator has modified the object; if they differ, a concurrent modification has occurred and an exception is thrown. And sometimes the check is not done quite that way:
//Remove the element most recently returned by the iterator
public void remove() {
if (lastRet < 0)
throw new IllegalStateException();
checkForComodification();
try {
//Call the remove method that needs subclasses to implement
AbstractList.this.remove(lastRet);
if (lastRet < cursor)
cursor--;
//After each deletion, set lastRet to -1 to prevent calling remove() twice in a row
lastRet = -1;
//Re-synchronise the iterator's expected modification count with the collection's (explained below)
expectedModCount = modCount;
} catch (IndexOutOfBoundsException e) {
//If the index is out of bounds, it indicates that a concurrent operation has occurred, so a concurrent operation exception is thrown.
throw new ConcurrentModificationException();
}
}
The design here synchronises the modification count with the iterator object after the deletion. Although checkForComodification() is called at the start of the method, the worry is that a concurrent modification could still happen while the deletion itself is being performed, so a try...catch... is used: if an index-out-of-bounds exception occurs, a concurrent modification must have happened, and a ConcurrentModificationException is thrown.
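A sketch of the fail-fast behaviour this machinery produces (list contents are my own): structurally modifying the list directly while an iterator is active makes modCount and expectedModCount diverge, so the next iterator operation throws ConcurrentModificationException; removing through the iterator keeps the two counters in sync:
import java.util.ArrayList;
import java.util.ConcurrentModificationException;
import java.util.Iterator;
import java.util.List;

public class FailFastDemo {
    public static void main(String[] args) {
        List<String> list = new ArrayList<>();
        list.add("a");
        list.add("b");
        list.add("c");

        Iterator<String> it = list.iterator();
        it.next();
        list.remove(0);      // modCount changes behind the iterator's back
        try {
            it.next();       // expectedModCount != modCount
        } catch (ConcurrentModificationException e) {
            System.out.println("fail-fast triggered");
        }

        // Removing through the iterator re-synchronises expectedModCount
        Iterator<String> it2 = list.iterator();
        while (it2.hasNext()) {
            if ("b".equals(it2.next())) {
                it2.remove();
            }
        }
        System.out.println(list); // [c]
    }
}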
Question 12: about SubList and RandomAccessSubList
A:
Reading the source code, we can see that this class basically lives off its parent: most of its methods simply add an offset and then call the corresponding method of the backing list. RandomAccessSubList, on top of that, merely implements RandomAccess, which is only a marker interface indicating that random access is supported.
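A small sketch showing that a sub-list really is just a view with an offset onto its backing list (values are my own):
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class SubListDemo {
    public static void main(String[] args) {
        List<String> list = new ArrayList<>(Arrays.asList("a", "b", "c", "d"));
        List<String> sub = list.subList(1, 3); // view of [b, c]

        sub.set(0, "B");                       // writes through to the backing list
        System.out.println(list);              // [a, B, c, d]

        sub.add("X");                          // also goes through the backing list
        System.out.println(list);              // [a, B, c, X, d]
    }
}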
Question 13: talk about Vector, the ancient predecessor of ArrayList
A:
Vector is a collection based on a dynamic array, i.e. the length of the underlying array grows automatically. It is thread-synchronized (thread-safe): only one thread can write to a Vector at a time, which avoids the inconsistencies caused by simultaneous writes from multiple threads, but it also costs more resources.
Because of that overhead it has gradually faded into history, replaced by ArrayList, which is likewise implemented on top of a dynamic array.
Question 14: briefly, what about Stack?
A:
A stack (Stack) is a subclass of Vector, and it implements a standard LIFO (last-in, first-out) stack.
public class Stack<E> extends Vector<E> {
/**
*Nonparametric construction of stack
*/
public Stack() {
}
/**
*Push the item to the top of the stack
*/
public E push(E item) {
addElement(item);
return item;
}
/**
*Removes the object at the top of the stack and returns it as the value of this function.
*@ return the removed object
*/
public synchronized E pop() {
E obj;
int len = size();
obj = peek();
removeElementAt(len - 1);
return obj;
}
/**
*Look at the objects at the top of the stack
* @return
*/
public synchronized E peek() {
int len = size();
if (len == 0) {
throw new EmptyStackException();
}
return elementAt(len - 1);
}
/**
*Test whether the stack is empty
* @return
*/
public boolean empty() {
return size() == 0;
}
/**
*Returns the position of the object in the stack, based on 1
*@ param O object to find location
* @return
*/
public synchronized int search(Object o) {
int i = lastIndexOf(o);
if (i >= 0) {
return size() - i;
}
return -1;
}
/**
*Version ID
*/
private static final long serialVersionUID = 1224463164541339165L;
}
Stack extends Vector, which means that it too is implemented on top of an array, not a linked list, and that it has all the characteristics of the Vector class.
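A short sketch of the standard push/pop/peek/search operations (values are my own):
import java.util.Stack;

public class StackDemo {
    public static void main(String[] args) {
        Stack<String> stack = new Stack<>();
        stack.push("a");
        stack.push("b");
        stack.push("c");

        System.out.println(stack.peek());      // c, top of the stack, not removed
        System.out.println(stack.pop());       // c, removed
        System.out.println(stack.search("a")); // 2, 1-based distance from the top
        System.out.println(stack.empty());     // false
    }
}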
Question 15: what’s your understanding of ArrayList source code
A:
ArrayList is very similar to Vector: both are array-based collections that can grow dynamically. The difference is that Vector is synchronized, needs more resources, and is an older design with several drawbacks, so nowadays we generally use ArrayList instead of Vector. Next, let's work through some of the questions we ran into while reading the ArrayList source code.
Let's start with the constructors:
/**
*Shared empty array instance, used for instances constructed with an initial capacity of 0
*/
private static final Object[] EMPTY_ELEMENTDATA = {};
/**
*Differs from EMPTY_ELEMENTDATA: when the first element is added, it tells us whether elementData
*came from the no-arg constructor or from a constructor with arguments.
*/
private static final Object[] DEFAULTCAPACITY_EMPTY_ELEMENTDATA = {};
/**
*Array object used to hold collection elements
*/
transient Object[] elementData;
/**
*If the parameter is 0, EMPTY_ELEMENTDATA is used
*
*@ param initialCapacity the initialization length of the collection
*@ throws illegalargumentexception throws the error if the parameter is less than 0
*/
public ArrayList(int initialCapacity) {
if (initialCapacity > 0) {
this.elementData = new Object[initialCapacity];
} else if (initialCapacity == 0) {
this.elementData = EMPTY_ELEMENTDATA;
} else {
throw new IllegalArgumentException("Illegal Capacity: "+
initialCapacity);
}
}
/**
*No-arg constructor, using DEFAULTCAPACITY_EMPTY_ELEMENTDATA
*/
public ArrayList() {
this.elementData = DEFAULTCAPACITY_EMPTY_ELEMENTDATA;
}
Two empty constant arrays are created here. They are used, respectively, by the constructor called with an initial capacity of 0 and by the no-arg constructor. The effective default capacity of the no-arg constructor is 10, while the initial capacity of the other constructor depends on its argument. These two constant empty arrays mainly act as markers, used later to distinguish the different cases during dynamic expansion.
/**
*Enhance the capacity of the ArrayList object container to ensure that it can provide the minimum capacity required by the container to store data
*
*@ param mincapacity minimum required capacity
*/
public void ensureCapacity(int minCapacity) {
//As can be seen here, if it is the default parameterless construction, the minimum capacity is 10; if not, the minimum capacity is 0. Here reflects the rigor of code design!
int minExpand = (elementData != DEFAULTCAPACITY_EMPTY_ELEMENTDATA) ? 0: DEFAULT_CAPACITY;
//If the minimum required capacity is greater than the initial minimum capacity of the container, call the expansion method, which shows the effect of two different constants
if (minCapacity > minExpand) {
ensureExplicitCapacity(minCapacity);
}
}
/**
*Calculate minimum required capacity
*@ param elementdata the array to be calculated
*Minimum required capacity expected by @ param mincapacity
*@ return minimum required capacity
*/
private static int calculateCapacity(Object[] elementData, int minCapacity) {
//Determine whether the container is a default parameterless construction. If not, return the mincapacity directly
if (elementData == DEFAULTCAPACITY_EMPTY_ELEMENTDATA) {
//If so, returns the maximum of the default capacity and the expected minimum required capacity
return Math.max(DEFAULT_CAPACITY, minCapacity);
}
return minCapacity;
}
/**
*Method of expanding capacity to minimum required capacity
*@ param mincapacity minimum capacity
*/
private void ensureCapacityInternal(int minCapacity) {
ensureExplicitCapacity(calculateCapacity(elementData, minCapacity));
}
/**
*The parameter here is the minimum capacity after calculation
*Minimum capacity of @ param mincapacity after calculation
*/
private void ensureExplicitCapacity(int minCapacity) {
//For questions about modcount, see the implementation in the abstract list
modCount++;
//Here is a comparison between the minimum capacity and the length of the array. If the minimum capacity is greater than the length of the array, the capacity will be expanded
if (minCapacity - elementData.length > 0) {
grow(minCapacity);
}
}
/**
*The maximum size of the array
*As an object, an array needs a certain amount of memory to store the object header information. The maximum memory occupied by the object header information can not exceed 8 bytes
*/
private static final int MAX_ARRAY_SIZE = Integer.MAX_VALUE - 8;
/**
*Dynamic expansion, to ensure that the capacity of the array can hold all the elements
*
*@ param minCapacity the minimum required capacity
*/
private void grow(int minCapacity) {
//First, get the capacity of the current array
int oldCapacity = elementData.length;
//Expand the array by 50%. For example, the original capacity is 4, and the expanded capacity is 4 + 4 / 2 = 6
int newCapacity = oldCapacity + (oldCapacity >> 1);
//If the capacity after expansion is less than the minimum required capacity
if (newCapacity - minCapacity < 0) {
//Take the minimum capacity as the capacity of the container directly
newCapacity = minCapacity;
}
//If the capacity after expansion is greater than the maximum capacity of the array
if (newCapacity - MAX_ARRAY_SIZE > 0) {
//The minimum required capacity after processing is regarded as the new capacity, and the maximum capacity does not exceed the maximum value of integer
newCapacity = hugeCapacity(minCapacity);
}
//Use Arrays.copyOf The original array is copied to a new capacity array, and the copy result is returned to the original array to complete the dynamic expansion.
elementData = Arrays.copyOf(elementData, newCapacity);
}
/**
*The method to prevent the memory overflow caused by the excessive capacity of the array, the program will not go here basically, just in case
* @param minCapacity
* @return
*/
private static int hugeCapacity(int minCapacity) {
if (minCapacity < 0) {
throw new OutOfMemoryError();
}
//If the minimum required capacity is greater than the maximum capacity of the array, the maximum value of integer is returned; otherwise, the maximum capacity of the array is returned
return (minCapacity > MAX_ARRAY_SIZE) ?
Integer.MAX_VALUE :
MAX_ARRAY_SIZE;
}
From the source code analysis of the dynamic expansion above, we can see the role the two empty constant arrays play. Here we also run into another issue: during expansion, ArrayList grows the array by 50%, so the capacity of the expanded array is usually larger than the number of elements actually stored, which wastes space and resources. In that case the following method can be used:
/**
*Trims away the unused slack space in the backing array; such slack typically appears after dynamic expansion
*/
public void trimToSize() {
modCount++;
//If the number of data in the array is less than the space occupied by the array, it means that extra space is generated
if (size < elementData.length) {
//If there is no data, use EMPTY_ELEMENTDATA; otherwise copy the data into a new array of exactly size elements and assign it back
elementData = (size == 0)
? EMPTY_ELEMENTDATA
: Arrays.copyOf(elementData, size);
}
}
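A usage sketch of these capacity-related methods (my own example; capacity itself is not observable from outside, so the comments describe what happens internally according to the source above):
import java.util.ArrayList;

public class CapacityDemo {
    public static void main(String[] args) {
        ArrayList<Integer> list = new ArrayList<>();

        // Pre-size the internal array to avoid repeated grow() calls when adding many elements
        list.ensureCapacity(1_000);
        for (int i = 0; i < 1_000; i++) {
            list.add(i);
        }

        list.clear();      // size is now 0, but the internal array keeps its large capacity
        list.trimToSize(); // shrink elementData back to the current size (here, EMPTY_ELEMENTDATA)

        list.add(42);      // triggers a fresh grow when needed
        System.out.println(list); // [42]
    }
}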
Next, let's look at how ArrayList retrieves its elements:
/**
*Returns the element on an index of the array used to store elements
*@ param index needs to return the index of the element
*@ return returns the element in the index position
*/
@SuppressWarnings("unchecked")
E elementData(int index) {
return (E) elementData[index];
}
/**
*Returns the element at the specified location in the collection
*
*@ param index needs to return the index of the element
*@ return the element at the specified position in the set
*@ throws indexoutofboundsexception throws the exception when the index exceeds the length of the collection
*/
@Override
public E get(int index) {
//This step calls the method of checking index out of bounds
rangeCheck(index);
//This calls the elementData() method above; essentially it reads the element at the given index from the backing array
return elementData(index);
}
As you can see, the bottom layer is essentially implemented with an array. And speaking of array operations, we have to mention a method that shows up again and again in the source code:
System.arraycopy(Object[] src, int srcPos, Object[] dest, int destPos, int length)
Its parameters mean the following (a tiny usage sketch follows the list):
- src: the source array;
- srcPos: the starting position in the source array;
- dest: the destination array;
- destPos: the starting position in the destination array;
- length: the number of elements to copy.
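A tiny sketch of my own showing those parameters in action:
import java.util.Arrays;

public class ArrayCopyDemo {
    public static void main(String[] args) {
        int[] src = {1, 2, 3, 4, 5};
        int[] dest = new int[5];

        // Copy 3 elements of src, starting at index 1, into dest starting at index 0
        System.arraycopy(src, 1, dest, 0, 3);
        System.out.println(Arrays.toString(dest)); // [2, 3, 4, 0, 0]
    }
}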
We will also notice that ArrayList has two remove methods:
/**
*Delete an element at an index position
*
*@ param index the index of the element to be deleted
*@ return returns the deleted element
*@ throws indexoutofboundsexception throws the exception when the index exceeds the length of the collection
*/
@Override
public E remove(int index) {
//First, check the index out of bounds
rangeCheck(index);
//Since this operation will cause structural changes, we need to change modcount + 1
modCount++;
//Gets the element originally located at this location, which is used to return the
E oldValue = elementData(index);
//Number of elements after the deleted index that have to be shifted left
int numMoved = size - index - 1;
if (numMoved > 0) {
//The idea: elementData = {1, 2, 3, 4} =>
//deleting the element at index 1 shifts the numMoved elements after it ({3, 4}, starting at index + 1) left onto the original index position =>
//{1, 3, 4, 4}
System.arraycopy(elementData, index+1, elementData, index,
numMoved);
}
//Set the last slot to null ==> {1, 3, 4, null}, and decrement size by 1
elementData[--size] = null;
return oldValue;
}
/**
*Delete the specified element in the collection, and return true if the collection contains the element
*
*@ param o deleted element
*@ return returns true if the set contains the specified element
*/
@Override
public boolean remove(Object o) {
//There are two cases: null and not null
if (o == null) {
for (int index = 0; index < size; index++) {
//Use = = judgment when null
if (elementData[index] == null) {
//The method called here is actually similar to the above method, except that there is no return value and no index judgment
fastRemove(index);
return true;
}
}
} else {
for (int index = 0; index < size; index++) {
//Use equals to judge if it is not null
if (o.equals(elementData[index])) {
fastRemove(index);
return true;
}
}
}
return false;
}
/**
*No index bounds check and no return of the deleted value; otherwise the logic is the same as remove(int index)
*@ param index the index of the element to delete. No check is needed here because the callers have already validated it, so it cannot be out of bounds
*/
private void fastRemove(int index) {
modCount++;
int numMoved = size - index - 1;
if (numMoved > 0) {
System.arraycopy(elementData, index+1, elementData, index,
numMoved);
}
elementData[--size] = null;
}
As you can see, the difference between the two deletion methods is that one deletes by index and returns the removed element, while the other deletes a given element and returns whether the deletion succeeded. Speaking of deletion, we also notice a private method, batchRemove(Collection<?> c, boolean complement), which is called by removeAll(Collection<?> c) and retainAll(Collection<?> c). The difference between those two methods is that one removes the intersection while the other keeps only the intersection — exactly opposite operations.
/**
*Removes the intersection of the specified collection and collection
*
*@ param C collection that needs to be judged with the collection
*@ return returns true if this operation changes the structure of the collection
*@ throws ClassCastException if the element type of the collection is inconsistent with that of the collection, the exception is thrown
*@ throws NullPointerException if the parameter collection is null, a null pointer exception is thrown
*/
@Override
public boolean removeAll(Collection<?> c) {
//First, non null check is performed
Objects.requireNonNull(c);
//Call the encapsulated batch deletion method. When the parameter passed in here is false, the intersection is deleted
return batchRemove(c, false);
}
/**
*Delete elements other than the intersection of the collection element and the collection
*
*@ param C needs the collection object to keep the elements in the collection
*@ return if this operation changes the set, return true
*@ throws ClassCastException if the element type of the collection is inconsistent with that of the collection, the exception is thrown
*@ throws NullPointerException throw the exception if the element in the collection is empty
*/
@Override
public boolean retainAll(Collection<?> c) {
Objects.requireNonNull(c);
//Call the encapsulated batch deletion method. When the parameter passed in is true, the intersection is reserved
return batchRemove(c, true);
}
/**
*Methods of batch deletion
*@ param C collection object that needs to be compared with the original collection
*@ param complement when false, the intersection is deleted; when true, the intersection is kept and everything else is deleted
* @return
*/
private boolean batchRemove(Collection<?> c, boolean complement) {
//Now I write a small example to help you understand
//Suppose that the original set array is {1,2,3,4}, and C is {2,3}
final Object[] elementData = this.elementData;
int r = 0, w = 0;
boolean modified = false;
try {
//size = 4
for (; r < size; r++) {
//a. When complement is false, the loop body is entered for r = 0 and r = 3
//b. When complement is true, the loop body is entered for r = 1 and r = 2
if (c.contains(elementData[r]) == complement) {
//r = 0 w = 0 elementData[0] = elementData[0] {1,2,3,4}
//r = 3 w = 1 elementData[1] = elementData[3] {1,4,3,4}
// r = 1 w = 0 elementData[0] = elementData[1] {2,2,3,4}
//r = 2 w = 1 elementData[1] = elementData[2] {2,3,3,4}
elementData[w++] = elementData[r];
}
}
} finally {
//If contains() threw an exception partway through, copy the remaining unprocessed elements down so they are preserved; if no exception occurred, r == size and this block is skipped
if (r != size) {
System.arraycopy(elementData, r,
elementData, w,
size - r);
w += size - r;
}
// w = 2
if (w != size) {
for (int i = w; i < size; i++) {
//a. elementData[2] = null, elementData[3] = null -> {1, 4, null, null}, so the dropped elements can be reclaimed by the garbage collector
//b. elmentData[2] = null, elementData[3] = null {2,3,null,null}
elementData[i] = null;
}
//modCount increases by the number of removed elements (2 in this example)
modCount += size - w;
//The current number of arrays is the number of qualified elements
size = w;
//Returns the flag of successful operation
modified = true;
}
}
return modified;
}
Question 16: let’s give a brief introduction to map
A:
Map is an interface that represents an object mapping keys to values. A map cannot contain duplicate keys, and each key can map to at most one value.
The Map interface offers three collection views, which allow the contents of a map to be viewed as a set of keys, a collection of values, or a set of key-value mappings. The order of a map is defined as the order in which the iterators of these collection views return their elements. Some map implementations, such as TreeMap, explicitly guarantee their order; others, such as the HashMap class, do not.
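A brief sketch of the three collection views (my own example; the key order of a HashMap is not guaranteed, so the printed order may differ):
import java.util.HashMap;
import java.util.Map;

public class MapViewsDemo {
    public static void main(String[] args) {
        Map<String, Integer> map = new HashMap<>();
        map.put("a", 1);
        map.put("b", 2);

        System.out.println(map.keySet());   // the key set view, e.g. [a, b]
        System.out.println(map.values());   // the value collection view, e.g. [1, 2]
        for (Map.Entry<String, Integer> e : map.entrySet()) { // the key/value mapping view
            System.out.println(e.getKey() + "=" + e.getValue());
        }
    }
}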
Question 17: what kind of sparks can be produced by the combination of map and lambda
A:
Traversal:
/**
*Traverse the collection. The parameter here is a functional interface, which can be used gracefully with lambda expressions
*@ param action, functional interface
*/
default void forEach(BiConsumer<? super K, ? super V> action) {
Objects.requireNonNull(action);
//In fact, the essence is to use entryset() to get the key value pairs and then traverse them
for (Entry<K, V> entry : entrySet()) {
K k;
V v;
try {
k = entry.getKey();
v = entry.getValue();
} catch(IllegalStateException ise) {
throw new ConcurrentModificationException(ise);
}
action.accept(k, v);
}
}
sort
/**
*Sort by mapped key
*/
public static <K extends Comparable<? super K>, V> Comparator<Entry<K,V>> comparingByKey() {
return (Comparator<Entry<K, V>> & Serializable)
(c1, c2) -> c1.getKey().compareTo(c2.getKey());
}
/**
*Sort by mapped key through specified comparer
*/
public static <K, V> Comparator<Entry<K, V>> comparingByKey(Comparator<? super K> cmp) {
Objects.requireNonNull(cmp);
return (Comparator<Entry<K, V>> & Serializable)
(c1, c2) -> cmp.compare(c1.getKey(), c2.getKey());
}
First, Map's nested interface Entry provides the comparingByKey() method, which produces a comparator that sorts entries by key. Next, let's see how to use it:
public class Test {
public static void main(String[] args) {
Map<String, String> map = new HashMap<String,String>();
map.put("A","test1");
map.put("B","test2");
map.put("E","test5");
map.put("D","test4");
map.put("C","test3");
Stream<Map.Entry<String, String>> sorted = map.entrySet().stream().sorted(Map.Entry.comparingByKey());
Stream<Map.Entry<String, String>> sorted2 = map.entrySet().stream().sorted(Map.Entry.comparingByKey(String::compareTo));
sorted.forEach(entry -> System.out.println(entry.getValue()));
System.out.println("===============");
sorted2.forEach(entry -> System.out.println(entry.getValue()));
}
}
The output results are as follows
test1
test2
test3
test4
test5
===============
test1
test2
test3
test4
test5
Replacement:
/**
*All key value pairs in the map are evaluated and the returned result is overridden as value
* map.replaceAll((k,v)->((String)k).length());
*Operation performed by @ param function, functional interface
*/
default void replaceAll(BiFunction<? super K, ? super V, ? extends V> function) {
...
}
/**
*If and only if the key exists and the corresponding value is not equal to oldvalue, newvalue is used as the new associated value of the key, and the return value is whether it has been replaced.
*@ param key the key associated with the specified value
*@ param oldvalue expects the value associated with the specified key
*@ param newvalue the value associated with the specified key
*@ return returns true if the value is replaced
*/
default boolean replace(K key, V oldValue, V newValue) {
...
}
/**
*The entry for the specified key can only be replaced when the target is mapped to a value.
*@ param key the key associated with the specified value
*@ param value the value associated with the specified key
*@ return the previous value associated with the specified key. If there is no key mapping, null is returned
*/
default V replace(K key, V value) {
...
}
demo:
public static void main(String[] args) {
Map<String, String> map = new HashMap<String,String>();
map.put("A","test1");
map.put("B","test2");
map.replaceAll((s, s2) -> {
return s + s2;
});
printMap(map);
map.replace("A","test1");
printMap(map);
map.replace("A","test2","test1");
printMap(map);
map.replace("A","test1","test2");
printMap(map);
}
public static void printMap(Map<String,String> map){
map.forEach((key, value) -> System.out.print(key + ":" + value + " "));
System.out.println();
}
Print results:
A:Atest1 B:Btest2
A:test1 B:Btest2
A:test1 B:Btest2
A:test2 B:Btest2
compute:
/**
*If the specified key is not yet associated with a value (or mapped to null), try to use the given mapping function to calculate its value and enter it into this mapping, unless null.
*@ param key specifies the key with which the value is associated
*The @ param mappingfunction calculates the value of the function
*@ return the current (existing or calculated) value associated with the specified key. If the calculated value is empty, it is null
*/
default V computeIfAbsent(K key, Function<? super K, ? extends V> mappingFunction) {
...
}
/**
*If the value of the specified key exists and is not empty, a new mapping is attempted for the given key and its current mapping value.
*@ param key specifies the key with which the value is associated
*The @ param remappingfunction calculates the value of the function
*@ return the new value related to the specified key. If not, it will be null
*/
default V computeIfPresent(K key, BiFunction<? super K, ? super V, ? extends V> remappingFunction) {
...
}
/**
*Attempts to compute the mapping of the specified key and its current mapping value (null if there is no current mapping).
*@ param key specifies the key with which the value is associated
*The @ param remappingfunction calculates the value of the function
*@ return the new value related to the specified key. If not, it will be null
*/
default V compute(K key, BiFunction<? super K, ? super V, ? extends V> remappingFunction) {
...
}
Now, let’s take a look at how these three methods are used and their differences.
public static void main(String[] args) {
Map<String, String> map = new HashMap<String,String>();
map.put("A","test1");
map.put("B","test2");
map.compute("A", (key, value) -> { return key + value;});
printMap(map);
//Because there is "a" in the set, there is no corresponding operation here
map.computeIfAbsent("A", (key) -> { return key + 2;});
printMap(map);
//Here, because there is no "C" in the set, the assignment is performed
map.computeIfAbsent("C", (key) -> { return key + 2;});
printMap(map);
//Here, because "A" exists in the map, the remapping function is applied and its result replaces the original value, as the method definition says
map.computeIfPresent("A", (key, value) -> { return key + value;});
printMap(map);
//Here, because there is no "d", according to the method definition, no operation is done
map.computeIfPresent("D", (key, value) -> { return key + value;});
printMap(map);
}
public static void printMap(Map<String,String> map){
map.forEach((key, value) -> System.out.print(key + ":" + value + " "));
System.out.println();
}
Output results:
A:Atest1 B:test2
A:Atest1 B:test2
A:Atest1 B:test2 C:C2
A:AAtest1 B:test2 C:C2
A:AAtest1 B:test2 C:C2
Others
/**
*If the value of the key in the set is null or the key value pair does not exist, the parameter value is used to override
*@ param key if the key exists and is not null, return the value corresponding to the key. If it does not exist, call put (key, value)
*@ param value if the value corresponding to the key does not exist or is null, the value is corresponding to the key
*@ return returns the substituted value
*/
default V putIfAbsent(K key, V value) {
...
}
/**
*Delete only when both key and value match.
*@ param key the key of the deleted mapping relationship
*@ param value the value of the deleted mapping relationship
*@ return whether the entry was removed
*/
default boolean remove(Object key, Object value) {
...
}
/**
*If the specified key is not already associated with a value or null, it is associated with the given non null value.
*@ param key combines the key with which the value is associated
*@ param value is the non null value to merge with the existing value associated with the key, or if there is no existing value or null value associated with the key, it is associated with the key
*The @ param remappingfunction recalculates the value (if any)
*@ return the new value associated with the specified key. If no value is associated with the key, null is returned
*/
default V merge(K key, V value,
BiFunction<? super V, ? super V, ? extends V> remappingFunction) {
...
}
Next, let’s look at an example
public static void main(String[] args) {
Map<String, String> map = new HashMap<String,String>();
map.put("A","test1");
map.put("B","test2");
map.putIfAbsent("A","test2");
map.putIfAbsent("C","test3");
printMap(map);
map.remove("A","test1");
printMap(map);
map.merge("A","test1",(oldValue, newValue) ->{
return oldValue + newValue;
} );
printMap(map);
map.merge("A","test4",(oldValue, newValue) ->{
return newValue;
} );
printMap(map);
}
The output is:
A:test1 B:test2 C:test3
B:test2 C:test3
A:test1 B:test2 C:test3
A:test4 B:test2 C:test3
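A typical practical use of merge is counting occurrences. The sketch below is my own example and is not taken from the article:
import java.util.HashMap;
import java.util.Map;

public class MergeCountDemo {
    public static void main(String[] args) {
        String[] words = {"a", "b", "a", "c", "a"};
        Map<String, Integer> counts = new HashMap<>();
        for (String w : words) {
            // If w is absent, store 1; otherwise combine the old count with 1 via Integer::sum.
            counts.merge(w, 1, Integer::sum);
        }
        System.out.println(counts); // e.g. {a=3, b=1, c=1}
    }
}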
Question 18: what do you know about the source code of Set
A:
Set extends the Collection interface and is itself an interface. It represents a container type that cannot hold duplicate elements; more precisely, a set contains no pair of elements e1 and e2 such that e1.equals(e2) returns true.
By going through the Set source code we can find that the common Set implementations (HashSet, TreeSet, LinkedHashSet and so on) are backed by the corresponding Map implementations, so a Set does not guarantee that elements stay in the order they were stored in, and it does not allow duplicate elements (some implementations, such as TreeSet, also reject null). The small sketch below illustrates the duplicate and ordering behaviour; after that, let's look at what methods Set offers us.
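This is my own illustration (not part of the Set source), using HashSet and LinkedHashSet:
import java.util.HashSet;
import java.util.LinkedHashSet;
import java.util.Set;

public class SetBasicsDemo {
    public static void main(String[] args) {
        Set<String> hashSet = new HashSet<>();
        System.out.println(hashSet.add("banana")); // true, element added
        System.out.println(hashSet.add("banana")); // false, duplicates are rejected
        hashSet.add("apple");
        hashSet.add("cherry");
        // A HashSet iterates in an order derived from the hash values, not insertion order.
        System.out.println(hashSet);

        // A LinkedHashSet, by contrast, remembers the insertion order.
        Set<String> linkedSet = new LinkedHashSet<>();
        linkedSet.add("banana");
        linkedSet.add("apple");
        linkedSet.add("cherry");
        System.out.println(linkedSet); // [banana, apple, cherry]
    }
}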
First, Set provides some methods that describe its own state:
/**
*Returns the number of elements in this set
*@return the number of elements in this set
*/
int size();
/**
*Returns true if this set contains no elements
*@return true if this set contains no elements
*/
boolean isEmpty();
Of course, it also provides methods to check whether elements are present in the set:
/**
*Returns true if this set contains the specified element
*@param o the element whose presence in this set is to be tested
*@return true if this set contains the specified element
*/
boolean contains(Object o);
/**
*Returns true if this set contains all of the elements of the specified collection.
*If the specified collection is also a set, true is returned when that collection is a subset of this set.
*@param c the collection to be checked for containment in this set
*@return true if this set contains all of the elements of the specified collection
*/
boolean containsAll(Collection<?> c);
There are also several methods that structurally modify the set. Note that when adding an element, if the element already exists in the set, the addition fails and false is returned (a small usage sketch of the bulk operations follows this group of methods).
/**
*Adds the specified element to this set if it is not already present
*@param e the element to be added
*@return false if the element is already present (the addition fails), true otherwise
*/
boolean add(E e);
/**
*Adds all of the elements in the specified collection to this set if they are not already present.
*If the specified collection is also a set, the addAll operation effectively modifies this set so that its value is the union of the two sets.
*The behavior of the operation is undefined if the specified collection is modified while the operation is in progress.
*@param c the collection containing elements to be added to this set
*@return true if this set changed as a result of the call
*/
boolean addAll(Collection<? extends E> c);
/**
*Removes the specified element from this set if it is present (optional operation).
*@param o the element to be removed
*@return true if this set contained the specified element
*/
boolean remove(Object o);
/**
*Retains only the elements in this set that are contained in the specified collection. In other words, only the intersection of the two is kept and everything else is discarded.
*@param c the collection to check this set against
*@return true if this set changed as a result of the call
*/
boolean retainAll(Collection<?> c);
/**
*Removes from this set all of the elements that are contained in the specified collection, i.e. keeps everything except the intersection.
*@param c the collection to check this set against
*@return true if this set changed as a result of the call
*/
boolean removeAll(Collection<?> c);
/**
*Removes all of the elements from this set; the set will be empty after this call returns.
*/
void clear();
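To make retainAll and removeAll concrete, here is a small usage sketch of my own (the element order in the output may vary, since HashSet does not guarantee one):
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class SetBulkOpsDemo {
    public static void main(String[] args) {
        Set<Integer> a = new HashSet<>(Arrays.asList(1, 2, 3, 4));
        Set<Integer> b = new HashSet<>(Arrays.asList(3, 4, 5));

        Set<Integer> intersection = new HashSet<>(a);
        intersection.retainAll(b);        // keep only elements also contained in b
        System.out.println(intersection); // [3, 4]

        Set<Integer> difference = new HashSet<>(a);
        difference.removeAll(b);          // drop every element that is contained in b
        System.out.println(difference);   // [1, 2]
    }
}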
Set also provides a default spliterator() method, which returns a splittable iterator (Spliterator) over the set, built through the Spliterators utility class:
/**
*Creates a Spliterator (a splittable iterator) over the elements in this set
*@return a Spliterator over the elements in this set
*/
@Override
default Spliterator<E> spliterator() {
return Spliterators.spliterator(this, Spliterator.DISTINCT);
}
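The DISTINCT characteristic can be checked directly on the returned Spliterator. The following is my own small sketch, not part of the Set source:
import java.util.HashSet;
import java.util.Set;
import java.util.Spliterator;

public class SpliteratorDemo {
    public static void main(String[] args) {
        Set<String> set = new HashSet<>();
        set.add("a");
        set.add("b");

        Spliterator<String> sp = set.spliterator();
        // A Set's spliterator reports DISTINCT, mirroring the no-duplicates contract.
        System.out.println(sp.hasCharacteristics(Spliterator.DISTINCT)); // true
        // tryAdvance consumes one element at a time, much like an Iterator's next().
        sp.tryAdvance(e -> System.out.println("first element: " + e));
    }
}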
Question 19: what do you know about the source code of AbstractSet
A:
From the source code we can see that AbstractSet overrides three methods: equals, hashCode and removeAll. How equals and hashCode are overridden is not explained here; let's take a look at removeAll.
/**
*Removes all elements contained in the specified collection from this set
*If the specified collection is also a set, this operation effectively modifies the set so that its value becomes the asymmetric difference set of two sets.
*
*@ param C contains the collection of elements to be removed from this set
*@ return returns true if the set is changed due to a call
*/
@Override
public boolean removeAll(Collection<?> c) {
Objects.requireNonNull(c);
boolean modified = false;
//By calling size() on this set and on the specified collection, the implementation determines which of the two is smaller.
if (size() > c.size()) {
//If the specified collection has fewer elements, iterate over the specified collection and remove each of its elements from this set using this set's remove method.
for (Iterator<?> i = c.iterator(); i.hasNext(); ) {
//remove returns true if the element was present, so modified records whether this set changed.
modified |= remove(i.next());
}
} else {
//Otherwise, iterate over this set, check each element against the specified collection with contains, and remove matches through the iterator's remove method.
for (Iterator<?> i = iterator(); i.hasNext(); ) {
if (c.contains(i.next())) {
i.remove();
modified = true;
}
}
}
return modified;
}
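As a usage note (my own sketch, not from the article): HashSet inherits this removeAll from AbstractSet, and the branch taken depends on which side has fewer elements.
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class RemoveAllDemo {
    public static void main(String[] args) {
        Set<String> set = new HashSet<>(Arrays.asList("a", "b", "c"));
        List<String> toRemove = Arrays.asList("b", "c", "d", "x");

        // set.size() (3) is not greater than toRemove.size() (4), so the else branch runs:
        // the set is iterated and toRemove.contains(...) is checked for every element.
        boolean changed = set.removeAll(toRemove);
        System.out.println(changed); // true, the set was modified
        System.out.println(set);     // [a]
    }
}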
Question 20 (the last question): about HashMap
A:
Speaking of HashMap, it is probably the container class we use most often to store key-value (K-V) data. As its name suggests, it is based on a hash table. The power of a hash table is that lookups take O(1) time: every object corresponds to an index computed from its hash value, and we can access the object directly through that index.
In Java, the hash table is implemented as an array of linked lists; each linked list can be called a bucket. The position of an object is computed from the object's hash value together with the total number of buckets (that is, the length of the HashMap's table), and the result is the index of the bucket in which the element is stored. If two objects map to the same bucket, a hash collision occurs; in that case the new object is compared with the objects already in that linked list (bucket) to see whether it already exists, and if not, it is appended.
However, there is a problem: if the number of buckets is very limited (say only three) but the amount of data is large (say 10,000 entries), hash collisions become very serious. JDK 8 and later therefore introduced a new idea: when the length of a linked list reaches the treeify threshold of 8 (and the table capacity has reached MIN_TREEIFY_CAPACITY, 64; otherwise the table is simply resized), the bucket is converted into a red-black tree (a self-balancing binary search tree), which greatly improves query efficiency.
static final int TREEIFY_THRESHOLD = 8;
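To see a collision with ordinary keys, here is a small sketch of my own: the strings "Aa" and "BB" happen to have the same hashCode in Java, so they land in the same bucket and are distinguished there by equals:
import java.util.HashMap;
import java.util.Map;

public class CollisionDemo {
    public static void main(String[] args) {
        // Both strings hash to 2112, so they collide and share a bucket.
        System.out.println("Aa".hashCode()); // 2112
        System.out.println("BB".hashCode()); // 2112

        Map<String, Integer> map = new HashMap<>();
        map.put("Aa", 1);
        map.put("BB", 2);
        // Lookups still work: within the bucket, keys are compared with equals().
        System.out.println(map.get("Aa")); // 1
        System.out.println(map.get("BB")); // 2
    }
}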
Structure
First, let’s look at the constructor of the source code
As you can see, there are four constructors in the source code. The first takes an initial capacity (the number of buckets) and a load factor. The load factor determines when the hash table is rehashed: for example, with the default load factor of 0.75, once the number of entries exceeds 75% of the capacity, the table is rehashed into one with twice as many buckets. If no values are given, the defaults are used (capacity 16, load factor 0.75):
static final int DEFAULT_INITIAL_CAPACITY = 1 << 4; // aka 16
static final int MAXIMUM_CAPACITY = 1 << 30;
static final float DEFAULT_LOAD_FACTOR = 0.75f;
The fourth constructor builds a HashMap containing the mappings of a given Map.
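Since the constructor code itself is not reproduced above, the four forms can be sketched as follows (my own example of calling them, not the JDK source):
import java.util.HashMap;
import java.util.Map;

public class HashMapConstructorsDemo {
    public static void main(String[] args) {
        // 1. Explicit initial capacity (number of buckets) and load factor.
        Map<String, String> m1 = new HashMap<>(32, 0.5f);
        // 2. Explicit initial capacity, default load factor (0.75).
        Map<String, String> m2 = new HashMap<>(32);
        // 3. All defaults: capacity 16, load factor 0.75.
        Map<String, String> m3 = new HashMap<>();
        // 4. Copy the mappings of an existing Map.
        m3.put("A", "test1");
        Map<String, String> m4 = new HashMap<>(m3);
        System.out.println(m4); // {A=test1}
    }
}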
Node
By observing the source code we can find that HashMap is built around the inner class Node as its backbone, and Node is an implementation of the Map.Entry interface.
Node
OfhashCode()
The implementation of the method is as follows
public final int hashCode() {
return Objects.hashCode(key) ^ Objects.hashCode(value);
}
The XOR operation here is intended to make the hash more uniform and so reduce the number of hash collisions.
As for TreeNode, it is the red-black tree implementation. I won't spend more space explaining it here; the underlying data structure and algorithms will be covered in later articles.
On the implementation of get and put
Let's start with the get() method. Its implementation is as follows:
public V get(Object key) {
Node<K,V> e;
return (e = getNode(hash(key), key)) == null ? null : e.value;
}
Here we can see that we first locate the corresponding node using the key and its computed hash value, and then return that node's value, or null if no node is found.
static final int hash(Object key) {
int h;
return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
}
The hash function here likewise spreads the bits to make the distribution more uniform and reduce hash collisions.
The getNode lookup can be divided into the following steps:
- Compute the bucket index directly from the hash value as (n - 1) & hash (illustrated in the sketch after this list).
- Judge whether the key of the first existing node is equal to the key of the query. If equal, the node is returned directly.
- Otherwise, traverse the remaining nodes in the bucket:
- Judge whether the structure of the set is a linked list or a red black tree. If it is a red black tree, call the internal method to find the value corresponding to the key.
- If the structure is a linked list, the value corresponding to the corresponding key is obtained through traversal.
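The index computation from the first step can be reproduced on its own. The sketch below is mine, assuming (as HashMap guarantees) that the table length n is a power of two:
public class BucketIndexDemo {
    // The same bit-spreading trick as HashMap.hash(): XOR the high 16 bits into the low 16 bits.
    static int hash(Object key) {
        int h;
        return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
    }

    public static void main(String[] args) {
        int n = 16; // table length, always a power of two in HashMap
        // Because n is a power of two, (n - 1) & hash equals hash % n for non-negative
        // hashes, but is cheaper and never negative.
        int index = (n - 1) & hash("A");
        System.out.println(index); // 1, since "A".hashCode() is 65 and 65 & 15 == 1
    }
}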
Then let's look at the put() method. Its implementation is as follows:
public V put(K key, V value) {
return putVal(hash(key), key, value, false, true);
}
First, let's explain one of the parameters of putVal: boolean onlyIfAbsent. When it is true, the value is only stored if the key currently has no value (or a null value); in other words, an existing value will not be overwritten by the newly put element. The rest of the flow is largely similar in spirit to get(). One place deserves special attention: after a new node is appended to a linked list, we must decide whether the list should be converted to a red-black tree. Because the insertion has just increased the list length by 1 and binCount does not count the new node, the comparison is made against the treeify threshold minus 1 (TREEIFY_THRESHOLD - 1). When the new length satisfies the conversion condition, the treeifyBin method is called to convert the linked list into a red-black tree.
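The effect of onlyIfAbsent is easiest to observe through the public API: put passes onlyIfAbsent = false and overwrites, while putIfAbsent passes true and keeps an existing non-null value. A small sketch of my own:
import java.util.HashMap;
import java.util.Map;

public class OnlyIfAbsentDemo {
    public static void main(String[] args) {
        Map<String, String> map = new HashMap<>();
        map.put("A", "test1");

        // put overwrites and returns the previous value.
        System.out.println(map.put("A", "test2"));         // test1
        System.out.println(map.get("A"));                  // test2

        // putIfAbsent leaves an existing non-null value untouched and returns it.
        System.out.println(map.putIfAbsent("A", "test3")); // test2
        System.out.println(map.get("A"));                  // test2
    }
}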
Conclusion
The collections content comes to an end here. If you have worked through these twenty questions attentively, I believe you will have gained a lot. If you found them helpful, a like and a follow are the greatest support and help you can give the author.
Good-looking skins are all alike, but an interesting soul is one in ten thousand. This is Shanhe, a writer with a difference.