HashMap
HashMap
```java
public class HashMap<K,V> extends AbstractMap<K,V>
```
Hash table based implementation of the Map interface.
This implementation provides all of the optional map operations, and permits null values and the null key.
(The HashMap class is roughly equivalent to Hashtable, except that it is unsynchronized and permits nulls.)
This class makes no guarantees as to the order of the map; in particular, it does not guarantee that the order will remain constant over time.
AbstractMap
```java
public abstract class AbstractMap<K,V> implements Map<K,V>
```
To implement an unmodifiable map, the programmer needs only to extend this class and provide an implementation for the entrySet method, which returns a set-view of the map's mappings. Typically, the returned set will, in turn, be implemented atop AbstractSet. This set should not support the add or remove methods, and its iterator should not support the remove method. (In short: an unmodifiable Map only needs entrySet, and the returned Set and its Iterator must not support add or remove.)
To implement a modifiable map, the programmer must additionally override this class’s put method (which otherwise throws an UnsupportedOperationException), and the iterator returned by entrySet().iterator() must additionally implement its remove method.
unmodifiable: entrySet()
modifiable: entrySet(), put(), and the iterator returned by entrySet().iterator() must implement remove()
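To make the unmodifiable case concrete, here is a minimal sketch (not JDK code; SingleEntryMap is an illustrative name) of a one-entry map that extends AbstractMap and implements only entrySet():

```java
import java.util.AbstractMap;
import java.util.AbstractSet;
import java.util.Iterator;
import java.util.Map;
import java.util.NoSuchElementException;
import java.util.Set;

// Unmodifiable map with a single fixed mapping: only entrySet() is implemented.
// put() is inherited from AbstractMap and throws UnsupportedOperationException;
// the iterator does not override remove(), so it throws as well.
public class SingleEntryMap<K, V> extends AbstractMap<K, V> {
    private final K key;
    private final V value;

    public SingleEntryMap(K key, V value) {
        this.key = key;
        this.value = value;
    }

    @Override
    public Set<Map.Entry<K, V>> entrySet() {
        return new AbstractSet<Map.Entry<K, V>>() {
            @Override
            public Iterator<Map.Entry<K, V>> iterator() {
                return new Iterator<Map.Entry<K, V>>() {
                    private boolean done;

                    @Override
                    public boolean hasNext() {
                        return !done;
                    }

                    @Override
                    public Map.Entry<K, V> next() {
                        if (done) throw new NoSuchElementException();
                        done = true;
                        return new AbstractMap.SimpleImmutableEntry<>(key, value);
                    }
                };
            }

            @Override
            public int size() {
                return 1;
            }
        };
    }
}
```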
Code excerpt
```java
public abstract class AbstractMap<K,V> implements Map<K,V> {
```
The query operations are implemented on top of the iterator of entrySet, while entrySet itself is given no implementation here and must be supplied by the subclass.
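For example, get() in AbstractMap is written entirely in terms of entrySet().iterator(); condensed, it looks roughly like this (the real source splits the null-key and non-null-key cases into two loops):

```java
// Condensed from AbstractMap: the lookup walks the entry set, so a subclass
// only has to provide entrySet() to obtain a working (if O(n)) get().
public V get(Object key) {
    Iterator<Entry<K,V>> i = entrySet().iterator();
    while (i.hasNext()) {
        Entry<K,V> e = i.next();
        if (key == null ? e.getKey() == null : key.equals(e.getKey()))
            return e.getValue();
    }
    return null;
}
```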
HashMap
initial capacity and load factor
Multiple threads modifying the map structure –> the map must be synchronized externally.
This implementation provides constant-time performance for the basic operations (get and put), assuming the hash function disperses the elements properly among the buckets. (O(1) lookup only holds when the keys are spread evenly across the buckets.)
Iteration over collection views requires time proportional to the "capacity" of the HashMap instance (the number of buckets) plus its size (the number of key-value mappings). (Iterating a view costs time proportional to capacity + size.) Thus, it's very important **not to set the initial capacity too high (or the load factor too low)** if iteration performance is important.
An instance of HashMap has two parameters that affect its performance: initial capacity and load factor.
The capacity is the number of buckets in the hash table, and the initial capacity is simply the capacity at the time the hash table is created.
The load factor is a measure of how full the hash table is allowed to get before its capacity is automatically increased.
When the number of entries in the hash table exceeds the product of the load factor and the current capacity, the hash table is rehashed (that is, internal data structures are rebuilt) so that the hash table has approximately twice the number of buckets.
If many mappings are to be stored in a HashMap instance, creating it with a sufficiently large capacity will allow the mappings to be stored more efficiently than letting it perform automatic rehashing as needed to grow the table.
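As a quick worked example of when that rehash happens, assuming the default constants (16 buckets, load factor 0.75):

```java
// Illustrative arithmetic only, using the JDK default values.
int capacity = 16;            // DEFAULT_INITIAL_CAPACITY
float loadFactor = 0.75f;     // DEFAULT_LOAD_FACTOR
int threshold = (int) (capacity * loadFactor);   // 12
// Inserting the 13th mapping exceeds the threshold, so the table doubles to 32 buckets.
System.out.println("resize once size exceeds " + threshold);
```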
Note that using many keys with the same hashCode() is a sure way to slow down performance of any hash table.
To ameliorate impact, when keys are Comparable, this class may use comparison order among keys to help break ties.
Structural modification during iteration –> ConcurrentModificationException (the collection-view iterators are fail-fast).
If no such object exists, the map should be "wrapped" using the Collections.synchronizedMap method.
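The wrapping idiom suggested by the javadoc, done at creation time so that no unsynchronized reference to the map can leak out:

```java
// From the javadoc: wrap at creation time to prevent accidental unsynchronized access.
Map<String, Object> m = Collections.synchronizedMap(new HashMap<>());
```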
Code
Abstract (overview)
```java
public class HashMap<K,V> extends AbstractMap<K,V>
```
Node & Iterator (entrySet method)
```java
static class Node<K,V> implements Map.Entry<K,V> {
```
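More fully (abridged from the JDK 8 source), a node carries the hash, the key/value pair, and the link to the next node in the same bucket; the TreeNode variant used for long bins comes later:

```java
static class Node<K,V> implements Map.Entry<K,V> {
    final int hash;      // spread hash of the key, also used for the bucket index
    final K key;
    V value;
    Node<K,V> next;      // next node in the same bucket

    Node(int hash, K key, V value, Node<K,V> next) {
        this.hash = hash;
        this.key = key;
        this.value = value;
        this.next = next;
    }
    // getKey(), getValue(), setValue(), equals(), hashCode(), toString() omitted
}
```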
Entries are stored in the table array at index hash & (capacity - 1); for structural modifications, pay attention to expectedModCount and modCount.
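A small demo (illustrative, not from the JDK source) of what the modCount / expectedModCount check buys: modifying the map directly while iterating makes the two counters diverge, and the iterator fails fast.

```java
import java.util.ConcurrentModificationException;
import java.util.HashMap;
import java.util.Map;

public class FailFastDemo {
    public static void main(String[] args) {
        Map<String, Integer> map = new HashMap<>();
        map.put("a", 1);
        map.put("b", 2);

        try {
            for (String key : map.keySet()) {
                map.remove("a");   // structural change: modCount moves on, expectedModCount does not
            }
        } catch (ConcurrentModificationException e) {
            System.out.println("fail-fast: " + e);
        }

        // The safe route goes through the iterator (or removeIf), which itself calls
        // HashMap#removeNode and keeps expectedModCount in sync.
        map.keySet().removeIf("b"::equals);
    }
}
```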
LinkedHashMap and TreeNode are covered later.
Within EntrySet, the only modifying operation is remove –> it delegates to HashMap#removeNode.
Put–Remove
The actual work of put() and remove() is delegated to putVal() and removeNode().
Hash value
The hash used for a new node is computed from the key's hashCode, so Node.hash may be negative.
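The spreading function itself, from the JDK 8 source:

```java
// XOR the high 16 bits of hashCode() into the low 16 bits, because the bucket
// index hash & (capacity - 1) only looks at the low bits. A null key maps to 0.
// The result is a plain int, which is why Node.hash can be negative.
static final int hash(Object key) {
    int h;
    return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
}
```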
Put–Remove: concrete operations
When the table is first used, or when there is no longer enough room, it is rebuilt (resized).
For the details of resizing and how the elements of each bucket's list are moved, see: 深入解析HashMap原理(基于JDK1.8).
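The key fact behind moving the chain elements is that the new capacity is exactly double the old one, so every node either keeps its index or moves up by exactly oldCap. A hypothetical helper (newIndexAfterResize is not a JDK method) expressing that rule:

```java
// Because newCap == oldCap << 1, only one extra hash bit matters:
// (hash & oldCap) == 0  -> the node stays in the "low" list at oldIndex;
// otherwise             -> it joins the "high" list at oldIndex + oldCap.
static int newIndexAfterResize(int hash, int oldCap) {
    int oldIndex = hash & (oldCap - 1);
    return (hash & oldCap) == 0 ? oldIndex : oldIndex + oldCap;
}
```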
Avoiding unnecessary resizes:
Suppose 1000 mappings need to be stored.
A capacity of 1024 is not enough, because 1024 * 0.75 = 768 < 1000, so the table would have to grow to 2048 (see the sketch below).
….
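A sketch of that sizing argument in code (0.75 is the default load factor):

```java
// 1024 buckets give a threshold of 1024 * 0.75 = 768 < 1000, so the table would
// be resized to 2048 while filling. Passing ceil(1000 / 0.75) = 1334 as the
// initial capacity lets HashMap round it up to the next power of two (2048),
// and no resize happens during the 1000 insertions.
Map<String, String> map = new HashMap<>((int) Math.ceil(1000 / 0.75));
```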
Serializable (Write / Read)
```java
private void writeObject(java.io.ObjectOutputStream s) throws IOException {
```
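writeObject and readObject are the custom serialization hooks: the bucket array is transient, so writing stores the key-value mappings and reading rebuilds the table. A usage sketch (the demo class name is illustrative):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.util.HashMap;
import java.util.Map;

public class HashMapSerializationDemo {
    public static void main(String[] args) throws IOException, ClassNotFoundException {
        Map<String, Integer> original = new HashMap<>();
        original.put("one", 1);
        original.put("two", 2);

        // Serialize: HashMap.writeObject walks the table and writes each mapping.
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(original);
        }

        // Deserialize: HashMap.readObject re-inserts the mappings into a fresh table.
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray()))) {
            @SuppressWarnings("unchecked")
            Map<String, Integer> copy = (Map<String, Integer>) in.readObject();
            System.out.println(copy.equals(original));   // true
        }
    }
}
```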
To be covered later: TreeNode … …