[TOC]

Reference: https://tech.meituan.com/2016/06/24/java-hashmap.html

Reference: https://mp.weixin.qq.com/s/h5SJ1yuiqGVjfKqExQ3Zng

Before diving into the article, let's consider a few questions:

1. Why is the default initial capacity 16?
2. Why is the length of the backing array always a power of two?
3. Why were red-black trees introduced after JDK 7, and what do they buy us?
4. What is the conversion strategy between linked lists and red-black trees?
5. Why is the treeify threshold 8, and the untreeify threshold 6?
6. When does treeification happen, and what other conditions constrain it?

The short answer to all of them: performance, of course!

Overview

HashMap is a hash table implemented with separate chaining. It stores key-value mappings and is internally built from an array, linked lists, and red-black trees.

HashMap extends AbstractMap and implements the Map, Cloneable, and java.io.Serializable interfaces.
HashMap is not synchronized, which means it is not thread-safe. If we need synchronization, we can consider Hashtable or ConcurrentHashMap instead.

Both the keys and the values of a HashMap may be null. In addition, the mappings in a HashMap are unordered.
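
To see both properties quickly, here is a minimal sketch (the class name HashMapBasics is mine, purely for illustration):

    import java.util.HashMap;
    import java.util.Map;

    public class HashMapBasics {
        public static void main(String[] args) {
            Map<String, Integer> map = new HashMap<>();
            map.put(null, 0);   // a single null key is allowed
            map.put("b", null); // null values are allowed too
            map.put("a", 1);
            map.put("c", 3);
            // Iteration follows the bucket layout, not insertion order
            map.forEach((k, v) -> System.out.println(k + " -> " + v));
        }
    }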

The implementation differs between JDK 1.7 and JDK 1.8:

In JDK 1.7, the map is made of "array + linked list". The array is the body of the HashMap, and the linked lists exist mainly to resolve hash collisions.

In JDK 1.8, the map is made of "array + linked list + red-black tree". An overly long linked list severely hurts HashMap performance: a red-black tree lookup is O(log n) while a linked-list lookup is O(n). JDK 1.8 therefore optimized the data structure further by introducing red-black trees; lists and trees are converted into each other when certain conditions are met:

  • A list is converted to a red-black tree only when its length exceeds 8 and the array length reaches 64.
  • Before converting a list to a tree, the implementation checks the array length: if it is below 64, it prefers to grow the array rather than treeify, to reduce search time.


Next, let's walk through the HashMap source code to see how it works.

First, the constructors:

/**
 * The default load factor.
 */
static final float DEFAULT_LOAD_FACTOR = 0.75f;

// No-arg constructor
public HashMap() {
    // Initialize the load factor; defaults to 0.75f
    this.loadFactor = DEFAULT_LOAD_FACTOR;
}

// Constructor taking an initial capacity
public HashMap(int initialCapacity) {
    // Delegates to the (int, float) constructor
    this(initialCapacity, DEFAULT_LOAD_FACTOR);
}

// Constructor taking a custom initial capacity and load factor
public HashMap(int initialCapacity, float loadFactor) {
    // The initial capacity must not be negative, otherwise throw
    if (initialCapacity < 0)
        throw new IllegalArgumentException("Illegal initial capacity: " +
                                           initialCapacity);
    // The initial capacity must not exceed the maximum; clamp it if it does
    if (initialCapacity > MAXIMUM_CAPACITY)
        initialCapacity = MAXIMUM_CAPACITY;
    // The load factor must be positive and must not be NaN
    if (loadFactor <= 0 || Float.isNaN(loadFactor))
        throw new IllegalArgumentException("Illegal load factor: " +
                                           loadFactor);
    // Initialize the load factor
    this.loadFactor = loadFactor;
    // tableSizeFor rounds any integer up to the nearest power of two.
    // For example, an input of 10 returns 16.
    // You may wonder: isn't threshold supposed to be capacity * loadFactor?
    // It is, but on the first put, when the table is allocated, this threshold
    // is used as the table capacity and the real threshold is then recomputed.
    // We analyze tableSizeFor below.
    this.threshold = tableSizeFor(initialCapacity);
}

// Constructor taking another Map
public HashMap(Map<? extends K, ? extends V> m) {
    // Initialize the load factor
    this.loadFactor = DEFAULT_LOAD_FACTOR;
    // Copy all entries of m into this HashMap; putMapEntries is analyzed below
    putMapEntries(m, false);
}
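
To see the sizing behavior from outside, here is a small sketch. The table field is a private JDK internal, so we peek at it via reflection purely for illustration (this works on JDK 8, the version analyzed here; newer JDKs may block the access):

    import java.lang.reflect.Field;
    import java.util.HashMap;

    public class CapacityDemo {
        public static void main(String[] args) throws Exception {
            HashMap<String, String> map = new HashMap<>(10); // requested capacity: 10
            map.put("k", "v"); // the table is only allocated on the first put

            Field tableField = HashMap.class.getDeclaredField("table");
            tableField.setAccessible(true);
            Object[] table = (Object[]) tableField.get(map);
            System.out.println(table.length); // 16: tableSizeFor(10) rounds up to a power of two
        }
    }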

Next, the class fields:

// The default initial capacity is 16; we explain why later
static final int DEFAULT_INITIAL_CAPACITY = 1 << 4;
// The maximum capacity, 2^30
static final int MAXIMUM_CAPACITY = 1 << 30;
// The default load factor
static final float DEFAULT_LOAD_FACTOR = 0.75f;
// A bucket is treeified when its node count exceeds this value
static final int TREEIFY_THRESHOLD = 8;
// A tree bin is converted back to a list when its node count drops below this value
static final int UNTREEIFY_THRESHOLD = 6;

// The minimum table size at which buckets may be treeified
static final int MIN_TREEIFY_CAPACITY = 64;
// The array that stores the elements; its length is always a power of two. Node is the list node type.
transient Node<K,V>[] table;
// The cached set of stored entries
transient Set<Map.Entry<K,V>> entrySet;
// The number of stored entries; note this is not the array length.
transient int size;
// Counter incremented on every resize and structural modification of the map
transient int modCount;
// The resize threshold (capacity * load factor, except before the first resize);
// when the actual size exceeds it, the table is grown
int threshold;
// The load factor
final float loadFactor;
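
To make the threshold concrete: with the defaults, the first resize is triggered when the 13th entry goes in. A one-line check of the arithmetic (a sketch, not JDK code):

    public class ThresholdDemo {
        public static void main(String[] args) {
            int capacity = 16;        // DEFAULT_INITIAL_CAPACITY
            float loadFactor = 0.75f; // DEFAULT_LOAD_FACTOR
            // threshold = capacity * loadFactor; a resize runs once size exceeds it
            System.out.println((int) (capacity * loadFactor)); // 12
        }
    }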

4. Key methods of HashMap
Note that we analyze JDK version 1.8.0_181. Other JDK versions may differ in implementation details, but the basic ideas are the same.
4.1 HashMap.put(): storing an element

  • This is a very important method!

  • So let's analyze it carefully.

  • It associates the given value with the given key and stores the pair in the Map.

  • If the Map already contains a value for the key, it is replaced by the given value.

    public V put(K key, V value) {
        // Delegates to putVal(); the hash(key) call on the key is explained in the code below
        return putVal(hash(key), key, value, false, true);
    }

As we can see above, put() delegates to putVal(), whose extra parameters are:

  • onlyIfAbsent: when the key already exists, whether to overwrite its value. false means overwrite, true means keep the old value; see putIfAbsent().

  • evict: used by the subclass LinkedHashMap; it has no effect in HashMap itself.

Briefly, putVal() works as follows:

  1. If the array is empty, call resize() to allocate it.
  2. Compute the array index from the hash and fetch the head node at that index.
  3. If the head node is null (no collision), add the node directly at that index (bucket).
  4. If the head node is not null (collision), there are three cases:
     ① The key equals the head node's key: overwrite the old value (keys stay unique);
     otherwise ② or ③:
     ② If the head node is a red-black tree node (TreeNode), add the pair to the tree.
     ③ If the head node is a list node, append the pair to the list. Afterwards, if the list length reaches TREEIFY_THRESHOLD - 1, "try" to convert the list into a red-black tree.
  5. Finally, if the element count exceeds threshold, grow the array.

The full putVal() source with line-by-line comments appears in section 5.

4.2 HashMap.get(): retrieving an element

We look up the Node for a given key:

    public V get(Object key) {
        Node<K,V> e;
        // Compute the key's hash, then fetch the value via the hash and the key.
        // See getNode() below.
        return (e = getNode(hash(key), key)) == null ? null : e.value;
    }

    /**
     * Fetch a HashMap Node.
     */
    final Node<K,V> getNode(int hash, Object key) {
        // tab: the internal array; first: head node at the index; n: array length; k: key of the head node
        Node<K,V>[] tab; Node<K,V> first, e; int n; K k;

        // The array is non-null, its length is positive, and the head node at the computed index is non-null
        if ((tab = table) != null && (n = tab.length) > 0 &&

            // i = (n - 1) & hash: n is the array length, hash comes from the high/low-bit XOR in hash().
            // This expression is the hash modulo the length; as noted above, when the divisor
            // is a power of two the modulo can be done with a bitwise AND, which is faster.
            // This is also why the array length is kept at a power of two.

            (first = tab[(n - 1) & hash]) != null) {

            // The head node's hash equals the key's hash, and the key equals the head node's key
            if (first.hash == hash && // always check first node
                ((k = first.key) == key || (key != null && key.equals(k))))
                // return the head node
                return first;
            if ((e = first.next) != null) {
                if (first instanceof TreeNode)
                    // If it is a tree bin, search the red-black tree.
                    // getNode itself is simple; see getTreeNode below.
                    return ((TreeNode<K,V>)first).getTreeNode(hash, key);
                // Otherwise traverse the linked list; return e when found
                do {
                    if (e.hash == hash &&
                        ((k = e.key) == key || (key != null && key.equals(k))))
                        return e;
                } while ((e = e.next) != null);
            }
        }
        return null;
    }
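
The comment above claims that, for a power-of-two length n, the bitwise AND equals the modulo. A minimal sketch verifying this (Math.floorMod is used so that negative hashes compare correctly):

    import java.util.Random;

    public class IndexDemo {
        public static void main(String[] args) {
            int n = 16; // a power-of-two table length
            Random rnd = new Random(42);
            for (int i = 0; i < 5; i++) {
                int hash = rnd.nextInt(); // may be negative, like a real hash
                int byAnd = (n - 1) & hash;         // what HashMap does
                int byMod = Math.floorMod(hash, n); // the plain modulo
                System.out.println(byAnd + " == " + byMod);
            }
        }
    }
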
4.3 resize(): growing the array

The snippet below is the element-migration loop at the end of resize(); the full method is listed in section 5.

    // Iterate over the old array
    for (int j = 0; j < oldCap; ++j) {
        // Take the head node
        Node<K,V> e;
        if ((e = oldTab[j]) != null) {
            oldTab[j] = null;
            // A single-node bucket: just recompute the index and store the node in the new array.
            if (e.next == null)
                newTab[e.hash & (newCap - 1)] = e;
            // A red-black tree node: split() handles it, much like the list case below.
            else if (e instanceof TreeNode)
                ((TreeNode<K,V>)e).split(this, newTab, j, oldCap);

            // Otherwise the bucket is a linked list
            else { // preserve order
                // loHead/loTail: the list of nodes whose index is unchanged.
                Node<K,V> loHead = null, loTail = null;
                // hiHead/hiTail: the list of nodes that move to (original index + old capacity).
                Node<K,V> hiHead = null, hiTail = null;
                Node<K,V> next;

                // Walk the list
                do {
                    next = e.next;
                    // The newly significant bit is 0: the node goes to the "low" list.
                    if ((e.hash & oldCap) == 0) {
                        if (loTail == null)
                            loHead = e;
                        else
                            loTail.next = e;
                        loTail = e;
                    }
                    // The newly significant bit is 1: the node goes to the "high" list.
                    else {
                        if (hiTail == null)
                            hiHead = e;
                        else
                            hiTail.next = e;
                        hiTail = e;
                    }
                } while ((e = next) != null);
                // Store the low list back at the original index
                if (loTail != null) {
                    loTail.next = null;
                    newTab[j] = loHead;
                }
                // Store the high list at: original index + old capacity
                if (hiTail != null) {
                    hiTail.next = null;
                    newTab[j + oldCap] = hiHead;
                }
            }
        }
    }
    return newTab;
}
Growing the array is not as simple as doubling its length and copying the elements over.
Because an element's index is computed as hash & (n - 1), when the length goes from n to 2n the recomputed index may differ from the old one.
In JDK 1.7, every node of every bucket is visited and its index recomputed before being stored into the new array; this is the rehash operation.

JDK 1.8 optimizes this away: there is no rehash step. Because the array always grows by powers of two, the following invariant holds:

An element either stays at its original index or moves to (original index + oldCap). So when growing a HashMap we do not need to recompute the hash as JDK 1.7 did; we only need to look at the bit of the hash that newly became significant: if it is 0 the index is unchanged, and if it is 1 the index becomes "original index + oldCap".

resize() first computes the new array length and the new threshold, then migrates the old array's contents into the new array. Compared with 1.7 no rehash is needed, because with power-of-two growth the new bucket can be decided from that single extra bit.
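
A small numeric sketch of that rule, with oldCap = 16 growing to 32 (the two hash values are made up for illustration):

    public class ResizeSplitDemo {
        public static void main(String[] args) {
            int oldCap = 16, newCap = 32;
            int h1 = 0b0_0101; // the extra bit (value 16) is 0
            int h2 = 0b1_0101; // the extra bit (value 16) is 1
            // Old table: both hashes land in bucket 5
            System.out.println(((oldCap - 1) & h1) + ", " + ((oldCap - 1) & h2)); // 5, 5
            // New table: h1 stays at 5, h2 moves to 5 + 16 = 21
            System.out.println(((newCap - 1) & h1) + ", " + ((newCap - 1) & h2)); // 5, 21
            // The single extra bit decides it: 0 means stay, non-zero means move by oldCap
            System.out.println((h1 & oldCap) + ", " + (h2 & oldCap)); // 0, 16
        }
    }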

4.4 treeifyBin(): treeifying a bucket
    // Convert a bucket's linked list into a red-black tree
    final void treeifyBin(Node<K,V>[] tab, int hash) {
        int n, index; Node<K,V> e;
        // If the array is still too small (length below 64), give up on the conversion and grow the array instead.
        if (tab == null || (n = tab.length) < MIN_TREEIFY_CAPACITY)
            resize();
        else if ((e = tab[index = (n - 1) & hash]) != null) {
            // convert the list into a red-black tree ...
        }
    }
HashMap introduced red-black trees in JDK 1.8: when a bucket's list grows past 8 elements it is converted into a red-black tree, and when a tree bin shrinks to 6 elements or fewer it reverts to a list.
The average search length of a red-black tree is log(n): at length 8 that is log2(8) = 3 comparisons, while a list averages n/2 = 8/2 = 4, so converting pays off. At length 6 or less a list averages 6/2 = 3 comparisons, which is already fast, while building and maintaining a tree is not cheap, so the tree reverts to a list.

5. HashMap source code analysis

public class HashMap<K,V> extends AbstractMap<K,V>
    implements Map<K,V>, Cloneable, Serializable {
private static final long serialVersionUID = 362498820763181265L;

/**
 * The default initial capacity is 16; it must be a power of two (we explain why below).
 */
static final int DEFAULT_INITIAL_CAPACITY = 1 << 4; // aka 16

/**
 * The maximum capacity is 2^30.
 */
static final int MAXIMUM_CAPACITY = 1 << 30;

/**
 * The load factor used when none is specified in the constructor: 0.75.
 * As for why 0.75: it is an empirical trade-off between time and space.
 */
static final float DEFAULT_LOAD_FACTOR = 0.75f;

/**
 * Treeify threshold: a bucket is converted to a red-black tree when its node count exceeds this value.
 */
static final int TREEIFY_THRESHOLD = 8;

/**
 * Untreeify threshold: a tree bin is converted back to a list when its node count drops below this value.
 */
static final int UNTREEIFY_THRESHOLD = 6;

/**
 * The second condition for treeification. While the array length is below this
 * value, a bucket is not treeified even if the treeify threshold is reached;
 * the array is grown via resize() instead.
 */
static final int MIN_TREEIFY_CAPACITY = 64;

/**
 * The node object that stores one key-value pair of the HashMap.
 */
static class Node<K,V> implements Map.Entry<K,V> {
    final int hash;
    final K key;
    V value;
    // The next node in the list
    Node<K,V> next;
    // Node constructor
    Node(int hash, K key, V value, Node<K,V> next) {
        this.hash = hash;
        this.key = key;
        this.value = value;
        this.next = next;
    }

    public final K getKey()        { return key; }
    public final V getValue()      { return value; }
    public final String toString() { return key + "=" + value; }

    public final int hashCode() {
        return Objects.hashCode(key) ^ Objects.hashCode(value);
    }

    public final V setValue(V newValue) {
        V oldValue = value;
        value = newValue;
        return oldValue;
    }

    public final boolean equals(Object o) {
        if (o == this)
            return true;
        if (o instanceof Map.Entry) {
            Map.Entry<?,?> e = (Map.Entry<?,?>)o;
            if (Objects.equals(key, e.getKey()) &&
                Objects.equals(value, e.getValue()))
                return true;
        }
        return false;
    }
}

/* ---------------- Static utilities -------------- */

/**
 * hash() scrambles the key's hashCode further by XOR-ing the high 16 bits into
 * the low 16 bits, increasing its "randomness" and trying to reduce hash
 * collisions on insertion into the HashMap.
 * As for why this particular algorithm: it is a trade-off between the speed of
 * the bit operations, practicality, and spreading the hash values as evenly
 * as possible.
 */
static final int hash(Object key) {
    int h;
    return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
}
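
A quick sketch of why this spreading matters: two hashCodes that differ only in their high 16 bits would always collide in a small table without the XOR, but are separated with it (the sample values are mine):

    public class HashSpreadDemo {
        // The same formula hash() uses for non-null keys
        static int spread(int h) {
            return h ^ (h >>> 16);
        }

        public static void main(String[] args) {
            int n = 16;
            int h1 = 0x0000_0001;
            int h2 = 0x7FFF_0001; // differs from h1 only in the high 16 bits
            // Without spreading only the low 4 bits matter: both map to bucket 1
            System.out.println(((n - 1) & h1) + ", " + ((n - 1) & h2)); // 1, 1
            // With spreading the high bits are mixed in and the keys separate
            System.out.println(((n - 1) & spread(h1)) + ", " + ((n - 1) & spread(h2))); // 1, 14
        }
    }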

/**
 * Returns x's Class if it is of the form "class C implements
 * Comparable<C>", else null.
 */
static Class<?> comparableClassFor(Object x) {
    if (x instanceof Comparable) {
        Class<?> c; Type[] ts, as; Type t; ParameterizedType p;
        if ((c = x.getClass()) == String.class) // bypass checks
            return c;
        if ((ts = c.getGenericInterfaces()) != null) {
            for (int i = 0; i < ts.length; ++i) {
                if (((t = ts[i]) instanceof ParameterizedType) &&
                    ((p = (ParameterizedType)t).getRawType() ==
                     Comparable.class) &&
                    (as = p.getActualTypeArguments()) != null &&
                    as.length == 1 && as[0] == c) // type arg is c
                    return c;
            }
        }
    }
    return null;
}

/**
 * Returns k.compareTo(x) if x matches kc (k's screened comparable
 * class), else 0.
 */
@SuppressWarnings({"rawtypes","unchecked"}) // for cast to Comparable
static int compareComparables(Class<?> kc, Object k, Object x) {
    return (x == null || x.getClass() != kc ? 0 :
            ((Comparable)k).compareTo(x));
}

/**
 * Returns the smallest power of two greater than or equal to the given capacity;
 * used to compute the resize threshold.
 * Bit operations replace modulo arithmetic here (reportedly a 5-8x speedup).
 * The bit smearing below rounds any integer up to the nearest power of two.
 * >>> is the unsigned right shift: the sign bit is ignored and vacated bits are zero-filled.
 */
static final int tableSizeFor(int cap) {
    int n = cap - 1;
    n |= n >>> 1;
    n |= n >>> 2;
    n |= n >>> 4;
    n |= n >>> 8;
    n |= n >>> 16;
    return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
}
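
We can check the rounding with a free-standing copy of the method (extracted here only so we can call it directly):

    public class TableSizeForDemo {
        static final int MAXIMUM_CAPACITY = 1 << 30;

        // A verbatim copy of HashMap.tableSizeFor
        static int tableSizeFor(int cap) {
            int n = cap - 1;
            n |= n >>> 1;
            n |= n >>> 2;
            n |= n >>> 4;
            n |= n >>> 8;
            n |= n >>> 16;
            return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
        }

        public static void main(String[] args) {
            System.out.println(tableSizeFor(10)); // 16
            System.out.println(tableSizeFor(16)); // 16 (already a power of two)
            System.out.println(tableSizeFor(17)); // 32
        }
    }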

/* ---------------- Fields -------------- */

/**
 * The HashMap node array, allocated lazily on first use
 * and resized when necessary.
 */
transient Node<K,V>[] table;

/**
 * Holds cached entrySet(). Note that AbstractMap fields are used
 * for keySet() and values().
 */
transient Set<Map.Entry<K,V>> entrySet;

// The number of key-value mappings in this HashMap
transient int size;

/**
 * The number of times this HashMap has been structurally modified.
 * A structural modification is one that changes the number of mappings
 * or otherwise alters the internal structure (e.g., a rehash).
 */
transient int modCount;

/**
 * The resize threshold: total capacity * load factor. When the size reaches
 * this value, resize() runs and doubles the total capacity. For example, if
 * the current total capacity is 16, it becomes 32 after the resize.
 */
int threshold;

/**
 * The load factor
 */
final float loadFactor;

/* ---------------- Public operations -------------- */

/**
 * Called from the constructor that takes a Map (and also from clone()).
 */
final void putMapEntries(Map<? extends K, ? extends V> m, boolean evict) {
    // Number of entries in the given map
    int s = m.size();
    // If it contains entries, proceed
    if (s > 0) {
        // If the bucket array is still null, pre-size the threshold
        if (table == null) { // pre-size
            float ft = ((float)s / loadFactor) + 1.0F;
            int t = ((ft < (float)MAXIMUM_CAPACITY) ?
                     (int)ft : MAXIMUM_CAPACITY);
            if (t > threshold)
                threshold = tableSizeFor(t);
        }
        else if (s > threshold)
            resize();
        for (Map.Entry<? extends K, ? extends V> e : m.entrySet()) {
            K key = e.getKey();
            V value = e.getValue();
            putVal(hash(key), key, value, false, evict);
        }
    }
}

/**
 * Returns the number of key-value mappings in this map.
 *
 * @return the number of key-value mappings in this map
 */
public int size() {
    return size;
}

/**
 * Returns <tt>true</tt> if this map contains no key-value mappings.
 *
 * @return <tt>true</tt> if this map contains no key-value mappings
 */
public boolean isEmpty() {
    return size == 0;
}

/**
 * Look up the Node for the given key.
 */
public V get(Object key) {
    Node<K,V> e;
    // Compute the key's hash, then fetch the value via the hash and the key.
    // See getNode() below.
    return (e = getNode(hash(key), key)) == null ? null : e.value;
}

/**
 * Fetch a HashMap Node.
 */
final Node<K,V> getNode(int hash, Object key) {
    // tab: the internal array; first: head node at the index; n: array length; k: key of the head node
    Node<K,V>[] tab; Node<K,V> first, e; int n; K k;

    // The array is non-null, its length is positive, and the head node at the computed index is non-null
    if ((tab = table) != null && (n = tab.length) > 0 &&

        // i = (n - 1) & hash: n is the array length, hash comes from the high/low-bit XOR in hash().
        // This is the hash modulo the length; as noted above, when the divisor is a power of two
        // the modulo can be done with a bitwise AND, which is faster.
        // This is also why the array length is kept at a power of two.

        (first = tab[(n - 1) & hash]) != null) {

        // The head node's hash equals the key's hash, and the key equals the head node's key
        if (first.hash == hash && // always check first node
            ((k = first.key) == key || (key != null && key.equals(k))))
            // return the head node
            return first;
        if ((e = first.next) != null) {
            if (first instanceof TreeNode)
                // If it is a tree bin, search the red-black tree.
                // getNode itself is simple; see getTreeNode below.
                return ((TreeNode<K,V>)first).getTreeNode(hash, key);
            // Otherwise traverse the list; return e when found
            do {
                if (e.hash == hash &&
                    ((k = e.key) == key || (key != null && key.equals(k))))
                    return e;
            } while ((e = e.next) != null);
        }
    }
    return null;
}

/**
 * Returns <tt>true</tt> if this map contains a mapping for the
 * specified key.
 *
 * @param   key   The key whose presence in this map is to be tested
 * @return <tt>true</tt> if this map contains a mapping for the specified
 * key.
 */
public boolean containsKey(Object key) {
    return getNode(hash(key), key) != null;
}

/**
 * This is a very important method!
 * So let's analyze it carefully.
 * It associates the given value with the given key and stores the pair in the map.
 * If the map already contains a value for the key, it is replaced by the given value.
 */
public V put(K key, V value) {
    // Delegates to putVal(); the hash(key) call is explained in the code below
    return putVal(hash(key), key, value, false, true);
}
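
The return-value behavior (the old value comes back when a key is overwritten) in a short check (a sketch; the class name is mine):

    import java.util.HashMap;

    public class PutReturnDemo {
        public static void main(String[] args) {
            HashMap<String, Integer> map = new HashMap<>();
            System.out.println(map.put("a", 1)); // null: there was no previous mapping
            System.out.println(map.put("a", 2)); // 1: the old value is returned, then replaced
            System.out.println(map.get("a"));    // 2
        }
    }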

/**
 * As we saw above, put() delegates to this method.
 * onlyIfAbsent: when the key already exists, whether to overwrite its value.
 *               false means overwrite, true means keep the old value; see putIfAbsent().
 * evict: used by the subclass LinkedHashMap; it has no effect in HashMap itself.
 *
 * Briefly, this method works as follows:
 * 1. If the array is empty, call resize() to allocate it.
 * 2. Compute the array index from the hash and fetch the head node at that index.
 * 3. If the head node is null (no collision), add the node directly at that index (bucket).
 * 4. If the head node is not null (collision), there are three cases:
 *    ① The key equals the head node's key: overwrite the old value (keys stay unique);
 *    otherwise ② or ③:
 *    ② If the head node is a red-black tree node (TreeNode), add the pair to the tree.
 *    ③ If the head node is a list node, append the pair to the list. Afterwards, if the
 *       list length reaches TREEIFY_THRESHOLD - 1, "try" to convert the list into a tree.
 * 5. Finally, if the element count exceeds threshold, grow the array.
 */
final V putVal(int hash, K key, V value, boolean onlyIfAbsent,
               boolean evict) {
    // tab: temporary variable for the internal bucket array
    // p: head node at the index for this hash
    // n: length of the internal array; i: index for this hash
    Node<K,V>[] tab; Node<K,V> p; int n, i;
    // Case one: table is null, or its length is 0
    if ((tab = table) == null || (n = tab.length) == 0)
        n = (tab = resize()).length; // grow the array and record the new length
    // i = (n - 1) & hash computes the index; lookups compute the same index.
    // If the bucket is null, build the bucket's head node with newNode
    if ((p = tab[i = (n - 1) & hash]) == null)
        // Create a new node
        tab[i] = newNode(hash, key, value, null);
    else {
        Node<K,V> e; K k;
        // The key already exists at the head node: its value is overwritten below
        if (p.hash == hash &&
            ((k = p.key) == key || (key != null && key.equals(k))))
            e = p;
        // The bucket is a red-black tree
        else if (p instanceof TreeNode)
            // Perform the red-black tree insertion
            e = ((TreeNode<K,V>)p).putTreeVal(this, tab, hash, key, value);
        // The head node is a list node. If the list contains the key, its value is
        // overwritten below. Otherwise the pair is appended at the end, and the list
        // then "tries" to convert itself into a red-black tree. "Tries", because
        // treeifyBin also checks whether the array capacity has reached 64; if not,
        // it gives up on the conversion and grows the array instead.
        else {
            // p is neither null nor a tree node, so this is a plain list.
            // Reaching here means a hash collision: check whether the list contains
            // the key, or append the pair at the end of the list.
            for (int binCount = 0; ; ++binCount) {
                // p.next == null: end of the list; append a new node, and if the
                // list is long enough, convert it to a tree.
                if ((e = p.next) == null) {
                    p.next = newNode(hash, key, value, null);
                    // Convert to a red-black tree once binCount reaches TREEIFY_THRESHOLD - 1
                    if (binCount >= TREEIFY_THRESHOLD - 1) // -1 for 1st
                        treeifyBin(tab, hash);
                    break;
                }
                // The key already exists: just overwrite its value below
                if (e.hash == hash &&
                    ((k = e.key) == key || (key != null && key.equals(k))))
                    break;
                p = e;
            }
        }
        if (e != null) { // existing mapping for key
            V oldValue = e.value;
            if (!onlyIfAbsent || oldValue == null)
                e.value = value;
            afterNodeAccess(e);
            return oldValue;
        }
    }
    ++modCount;
    // If the size exceeds the resize threshold, grow the array
    if (++size > threshold)
        resize();
    afterNodeInsertion(evict);
    return null;
}

final Node<K,V>[] resize() {
    Node<K,V>[] oldTab = table;
    int oldCap = (oldTab == null) ? 0 : oldTab.length;
    int oldThr = threshold;
    int newCap, newThr = 0;
    if (oldCap > 0) {
        // If the array is already at the maximum length, do not grow it.
        if (oldCap >= MAXIMUM_CAPACITY) {
            threshold = Integer.MAX_VALUE;
            return oldTab;
        }
        // Otherwise double the capacity (keeping it a power of two).
        else if ((newCap = oldCap << 1) < MAXIMUM_CAPACITY &&
                 oldCap >= DEFAULT_INITIAL_CAPACITY)
            newThr = oldThr << 1; // double threshold
    }
    // The array has not been created yet, but threshold is already set (the object
    // was built with a sizing constructor): threshold holds the array length.
    // This was explained in the "constructors" part.
    else if (oldThr > 0) // initial capacity was placed in threshold
        newCap = oldThr;
    // The object was built with the no-arg constructor
    else { // zero initial threshold signifies using defaults
        newCap = DEFAULT_INITIAL_CAPACITY;
        newThr = (int)(DEFAULT_LOAD_FACTOR * DEFAULT_INITIAL_CAPACITY);
    }
    // newThr may be 0 because newThr = oldThr << 1 above shifted the high bit out.
    if (newThr == 0) {
        float ft = (float)newCap * loadFactor;
        newThr = (newCap < MAXIMUM_CAPACITY && ft < (float)MAXIMUM_CAPACITY ?
                  (int)ft : Integer.MAX_VALUE);
    }
    threshold = newThr;

    // At this point the new array length is known; create the new array.
    @SuppressWarnings({"rawtypes","unchecked"})
    Node<K,V>[] newTab = (Node<K,V>[])new Node[newCap];
    table = newTab;

    // The code below moves the old array's elements into the new array. The problem:
    // the array length changed, so the index computed from the hash may differ from
    // before. JDK 1.7 recomputed every element's index and re-inserted it into the
    // new array (the rehash operation); this is one reason HashMap is unordered.
    // JDK 1.8 optimizes this away very elegantly.
    if (oldTab != null) {

        // Iterate over the old array
        for (int j = 0; j < oldCap; ++j) {
            // Take the head node
            Node<K,V> e;
            if ((e = oldTab[j]) != null) {
                oldTab[j] = null;
                // A single-node bucket: just recompute the index and store the node in the new array.
                if (e.next == null)
                    newTab[e.hash & (newCap - 1)] = e;
                // A red-black tree node: split() handles it, much like the list case below.
                else if (e instanceof TreeNode)
                    ((TreeNode<K,V>)e).split(this, newTab, j, oldCap);

                // Otherwise the bucket is a linked list
                else { // preserve order
                    // loHead/loTail: the list of nodes whose index is unchanged.
                    Node<K,V> loHead = null, loTail = null;
                    // hiHead/hiTail: the list of nodes that move to (original index + old capacity).
                    Node<K,V> hiHead = null, hiTail = null;
                    Node<K,V> next;

                    // Walk the list
                    do {
                        next = e.next;
                        // The newly significant bit is 0: the node goes to the "low" list.
                        if ((e.hash & oldCap) == 0) {
                            if (loTail == null)
                                loHead = e;
                            else
                                loTail.next = e;
                            loTail = e;
                        }
                        // The newly significant bit is 1: the node goes to the "high" list.
                        else {
                            if (hiTail == null)
                                hiHead = e;
                            else
                                hiTail.next = e;
                            hiTail = e;
                        }
                    } while ((e = next) != null);
                    // Store the low list back at the original index
                    if (loTail != null) {
                        loTail.next = null;
                        newTab[j] = loHead;
                    }
                    // Store the high list at: original index + old capacity
                    if (hiTail != null) {
                        hiTail.next = null;
                        newTab[j + oldCap] = hiHead;
                    }
                }
            }
        }
    }
    return newTab;
}

/**
 * Converts a bucket's linked list into a red-black tree.
 * If the array is still too small (below 64), gives up and grows the array instead.
 */
final void treeifyBin(Node<K,V>[] tab, int hash) {
    int n, index; Node<K,V> e;
    // If the array is still too small (below 64), give up on the conversion and resize.
    if (tab == null || (n = tab.length) < MIN_TREEIFY_CAPACITY)
        resize();
    else if ((e = tab[index = (n - 1) & hash]) != null) {
        // Convert the list into a red-black tree
        TreeNode<K,V> hd = null, tl = null;
        do {
            TreeNode<K,V> p = replacementTreeNode(e, null);
            if (tl == null)
                hd = p;
            else {
                p.prev = tl;
                tl.next = p;
            }
            tl = p;
        } while ((e = e.next) != null);
        if ((tab[index] = hd) != null)
            hd.treeify(tab);
    }
}

/**
 * Copies all of the mappings from the specified map to this map.
 * These mappings will replace any mappings that this map had for
 * any of the keys currently in the specified map.
 *
 * @param m mappings to be stored in this map
 * @throws NullPointerException if the specified map is null
 */
public void putAll(Map<? extends K, ? extends V> m) {
    putMapEntries(m, true);
}

/**
 * Removes the mapping for the specified key from this map if present.
 *
 * @param  key key whose mapping is to be removed from the map
 * @return the previous value associated with <tt>key</tt>, or
 *         <tt>null</tt> if there was no mapping for <tt>key</tt>.
 *         (A <tt>null</tt> return can also indicate that the map
 *         previously associated <tt>null</tt> with <tt>key</tt>.)
 */
public V remove(Object key) {
    Node<K,V> e;
    return (e = removeNode(hash(key), key, null, false, true)) == null ?
        null : e.value;
}

/**
 * Implements Map.remove and related methods
 *
 * @param hash hash for key
 * @param key the key
 * @param value the value to match if matchValue, else ignored
 * @param matchValue if true only remove if value is equal
 * @param movable if false do not move other nodes while removing
 * @return the node, or null if none
 */
final Node<K,V> removeNode(int hash, Object key, Object value,
                           boolean matchValue, boolean movable) {
    Node<K,V>[] tab; Node<K,V> p; int n, index;
    if ((tab = table) != null && (n = tab.length) > 0 &&
        (p = tab[index = (n - 1) & hash]) != null) {
        Node<K,V> node = null, e; K k; V v;
        if (p.hash == hash &&
            ((k = p.key) == key || (key != null && key.equals(k))))
            node = p;
        else if ((e = p.next) != null) {
            if (p instanceof TreeNode)
                node = ((TreeNode<K,V>)p).getTreeNode(hash, key);
            else {
                do {
                    if (e.hash == hash &&
                        ((k = e.key) == key ||
                         (key != null && key.equals(k)))) {
                        node = e;
                        break;
                    }
                    p = e;
                } while ((e = e.next) != null);
            }
        }
        if (node != null && (!matchValue || (v = node.value) == value ||
                             (value != null && value.equals(v)))) {
            if (node instanceof TreeNode)
                ((TreeNode<K,V>)node).removeTreeNode(this, tab, movable);
            else if (node == p)
                tab[index] = node.next;
            else
                p.next = node.next;
            ++modCount;
            --size;
            afterNodeRemoval(node);
            return node;
        }
    }
    return null;
}

/**
 * Removes all of the mappings from this map.
 * The map will be empty after this call returns.
 */
public void clear() {
    Node<K,V>[] tab;
    modCount++;
    if ((tab = table) != null && size > 0) {
        size = 0;
        for (int i = 0; i < tab.length; ++i)
            tab[i] = null;
    }
}

/**
 * Returns <tt>true</tt> if this map maps one or more keys to the
 * specified value.
 *
 * @param value value whose presence in this map is to be tested
 * @return <tt>true</tt> if this map maps one or more keys to the
 *         specified value
 */
public boolean containsValue(Object value) {
    Node<K,V>[] tab; V v;
    if ((tab = table) != null && size > 0) {
        for (int i = 0; i < tab.length; ++i) {
            for (Node<K,V> e = tab[i]; e != null; e = e.next) {
                if ((v = e.value) == value ||
                    (value != null && value.equals(v)))
                    return true;
            }
        }
    }
    return false;
}

/**
 * Returns a {@link Set} view of the keys contained in this map.
 * The set is backed by the map, so changes to the map are
 * reflected in the set, and vice-versa.  If the map is modified
 * while an iteration over the set is in progress (except through
 * the iterator's own <tt>remove</tt> operation), the results of
 * the iteration are undefined.  The set supports element removal,
 * which removes the corresponding mapping from the map, via the
 * <tt>Iterator.remove</tt>, <tt>Set.remove</tt>,
 * <tt>removeAll</tt>, <tt>retainAll</tt>, and <tt>clear</tt>
 * operations.  It does not support the <tt>add</tt> or <tt>addAll</tt>
 * operations.
 *
 * @return a set view of the keys contained in this map
 */
public Set<K> keySet() {
    Set<K> ks = keySet;
    if (ks == null) {
        ks = new KeySet();
        keySet = ks;
    }
    return ks;
}

final class KeySet extends AbstractSet<K> {
    public final int size()                 { return size; }
    public final void clear()               { HashMap.this.clear(); }
    public final Iterator<K> iterator()     { return new KeyIterator(); }
    public final boolean contains(Object o) { return containsKey(o); }
    public final boolean remove(Object key) {
        return removeNode(hash(key), key, null, false, true) != null;
    }
    public final Spliterator<K> spliterator() {
        return new KeySpliterator<>(HashMap.this, 0, -1, 0, 0);
    }
    public final void forEach(Consumer<? super K> action) {
        Node<K,V>[] tab;
        if (action == null)
            throw new NullPointerException();
        if (size > 0 && (tab = table) != null) {
            int mc = modCount;
            for (int i = 0; i < tab.length; ++i) {
                for (Node<K,V> e = tab[i]; e != null; e = e.next)
                    action.accept(e.key);
            }
            if (modCount != mc)
                throw new ConcurrentModificationException();
        }
    }
}

/**
 * Returns a {@link Collection} view of the values contained in this map.
 * The collection is backed by the map, so changes to the map are
 * reflected in the collection, and vice-versa.  If the map is
 * modified while an iteration over the collection is in progress
 * (except through the iterator's own <tt>remove</tt> operation),
 * the results of the iteration are undefined.  The collection
 * supports element removal, which removes the corresponding
 * mapping from the map, via the <tt>Iterator.remove</tt>,
 * <tt>Collection.remove</tt>, <tt>removeAll</tt>,
 * <tt>retainAll</tt> and <tt>clear</tt> operations.  It does not
 * support the <tt>add</tt> or <tt>addAll</tt> operations.
 *
 * @return a view of the values contained in this map
 */
public Collection<V> values() {
    Collection<V> vs = values;
    if (vs == null) {
        vs = new Values();
        values = vs;
    }
    return vs;
}

final class Values extends AbstractCollection<V> {
    public final int size()                 { return size; }
    public final void clear()               { HashMap.this.clear(); }
    public final Iterator<V> iterator()     { return new ValueIterator(); }
    public final boolean contains(Object o) { return containsValue(o); }
    public final Spliterator<V> spliterator() {
        return new ValueSpliterator<>(HashMap.this, 0, -1, 0, 0);
    }
    public final void forEach(Consumer<? super V> action) {
        Node<K,V>[] tab;
        if (action == null)
            throw new NullPointerException();
        if (size > 0 && (tab = table) != null) {
            int mc = modCount;
            for (int i = 0; i < tab.length; ++i) {
                for (Node<K,V> e = tab[i]; e != null; e = e.next)
                    action.accept(e.value);
            }
            if (modCount != mc)
                throw new ConcurrentModificationException();
        }
    }
}

/**
 * Returns a {@link Set} view of the mappings contained in this map.
 * The set is backed by the map, so changes to the map are
 * reflected in the set, and vice-versa.  If the map is modified
 * while an iteration over the set is in progress (except through
 * the iterator's own <tt>remove</tt> operation, or through the
 * <tt>setValue</tt> operation on a map entry returned by the
 * iterator) the results of the iteration are undefined.  The set
 * supports element removal, which removes the corresponding
 * mapping from the map, via the <tt>Iterator.remove</tt>,
 * <tt>Set.remove</tt>, <tt>removeAll</tt>, <tt>retainAll</tt> and
 * <tt>clear</tt> operations.  It does not support the
 * <tt>add</tt> or <tt>addAll</tt> operations.
 *
 * @return a set view of the mappings contained in this map
 */
public Set<Map.Entry<K,V>> entrySet() {
    Set<Map.Entry<K,V>> es;
    return (es = entrySet) == null ? (entrySet = new EntrySet()) : es;
}

final class EntrySet extends AbstractSet<Map.Entry<K,V>> {
    public final int size()                 { return size; }
    public final void clear()               { HashMap.this.clear(); }
    public final Iterator<Map.Entry<K,V>> iterator() {
        return new EntryIterator();
    }
    public final boolean contains(Object o) {
        if (!(o instanceof Map.Entry))
            return false;
        Map.Entry<?,?> e = (Map.Entry<?,?>) o;
        Object key = e.getKey();
        Node<K,V> candidate = getNode(hash(key), key);
        return candidate != null && candidate.equals(e);
    }
    public final boolean remove(Object o) {
        if (o instanceof Map.Entry) {
            Map.Entry<?,?> e = (Map.Entry<?,?>) o;
            Object key = e.getKey();
            Object value = e.getValue();
            return removeNode(hash(key), key, value, true, true) != null;
        }
        return false;
    }
    public final Spliterator<Map.Entry<K,V>> spliterator() {
        return new EntrySpliterator<>(HashMap.this, 0, -1, 0, 0);
    }
    public final void forEach(Consumer<? super Map.Entry<K,V>> action) {
        Node<K,V>[] tab;
        if (action == null)
            throw new NullPointerException();
        if (size > 0 && (tab = table) != null) {
            int mc = modCount;
            for (int i = 0; i < tab.length; ++i) {
                for (Node<K,V> e = tab[i]; e != null; e = e.next)
                    action.accept(e);
            }
            if (modCount != mc)
                throw new ConcurrentModificationException();
        }
    }
}

// Overrides of JDK8 Map extension methods

@Override
public V getOrDefault(Object key, V defaultValue) {
    Node<K,V> e;
    return (e = getNode(hash(key), key)) == null ? defaultValue : e.value;
}

@Override
public V putIfAbsent(K key, V value) {
    return putVal(hash(key), key, value, true, true);
}
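
putIfAbsent() is exactly put() with onlyIfAbsent = true; note from the putVal code that a key mapped to null still receives the new value. A short contrast (sketch):

    import java.util.HashMap;

    public class PutIfAbsentDemo {
        public static void main(String[] args) {
            HashMap<String, Integer> map = new HashMap<>();
            map.put("a", 1);
            map.putIfAbsent("a", 99); // key exists with a non-null value: kept
            map.putIfAbsent("b", 2);  // key absent: stored
            System.out.println(map);  // {a=1, b=2}
        }
    }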

@Override
public boolean remove(Object key, Object value) {
    return removeNode(hash(key), key, value, true, true) != null;
}

@Override
public boolean replace(K key, V oldValue, V newValue) {
    Node<K,V> e; V v;
    if ((e = getNode(hash(key), key)) != null &&
        ((v = e.value) == oldValue || (v != null && v.equals(oldValue)))) {
        e.value = newValue;
        afterNodeAccess(e);
        return true;
    }
    return false;
}

@Override
public V replace(K key, V value) {
    Node<K,V> e;
    if ((e = getNode(hash(key), key)) != null) {
        V oldValue = e.value;
        e.value = value;
        afterNodeAccess(e);
        return oldValue;
    }
    return null;
}

@Override
public V computeIfAbsent(K key,
                         Function<? super K, ? extends V> mappingFunction) {
    if (mappingFunction == null)
        throw new NullPointerException();
    int hash = hash(key);
    Node<K,V>[] tab; Node<K,V> first; int n, i;
    int binCount = 0;
    TreeNode<K,V> t = null;
    Node<K,V> old = null;
    if (size > threshold || (tab = table) == null ||
        (n = tab.length) == 0)
        n = (tab = resize()).length;
    if ((first = tab[i = (n - 1) & hash]) != null) {
        if (first instanceof TreeNode)
            old = (t = (TreeNode<K,V>)first).getTreeNode(hash, key);
        else {
            Node<K,V> e = first; K k;
            do {
                if (e.hash == hash &&
                    ((k = e.key) == key || (key != null && key.equals(k)))) {
                    old = e;
                    break;
                }
                ++binCount;
            } while ((e = e.next) != null);
        }
        V oldValue;
        if (old != null && (oldValue = old.value) != null) {
            afterNodeAccess(old);
            return oldValue;
        }
    }
    V v = mappingFunction.apply(key);
    if (v == null) {
        return null;
    } else if (old != null) {
        old.value = v;
        afterNodeAccess(old);
        return v;
    }
    else if (t != null)
        t.putTreeVal(this, tab, hash, key, v);
    else {
        tab[i] = newNode(hash, key, v, first);
        if (binCount >= TREEIFY_THRESHOLD - 1)
            treeifyBin(tab, hash);
    }
    ++modCount;
    ++size;
    afterNodeInsertion(true);
    return v;
}

@Override
public V computeIfPresent(K key,
                          BiFunction<? super K, ? super V, ? extends V> remappingFunction) {
    if (remappingFunction == null)
        throw new NullPointerException();
    Node<K,V> e; V oldValue;
    int hash = hash(key);
    if ((e = getNode(hash, key)) != null &&
        (oldValue = e.value) != null) {
        V v = remappingFunction.apply(key, oldValue);
        if (v != null) {
            e.value = v;
            afterNodeAccess(e);
            return v;
        }
        else
            removeNode(hash, key, null, false, true);
    }
    return null;
}

@Override
public V compute(K key,
                 BiFunction<? super K, ? super V, ? extends V> remappingFunction) {
    if (remappingFunction == null)
        throw new NullPointerException();
    int hash = hash(key);
    Node<K,V>[] tab; Node<K,V> first; int n, i;
    int binCount = 0;
    TreeNode<K,V> t = null;
    Node<K,V> old = null;
    if (size > threshold || (tab = table) == null ||
        (n = tab.length) == 0)
        n = (tab = resize()).length;
    if ((first = tab[i = (n - 1) & hash]) != null) {
        if (first instanceof TreeNode)
            old = (t = (TreeNode<K,V>)first).getTreeNode(hash, key);
        else {
            Node<K,V> e = first; K k;
            do {
                if (e.hash == hash &&
                    ((k = e.key) == key || (key != null && key.equals(k)))) {
                    old = e;
                    break;
                }
                ++binCount;
            } while ((e = e.next) != null);
        }
    }
    V oldValue = (old == null) ? null : old.value;
    V v = remappingFunction.apply(key, oldValue);
    if (old != null) {
        if (v != null) {
            old.value = v;
            afterNodeAccess(old);
        }
        else
            removeNode(hash, key, null, false, true);
    }
    else if (v != null) {
        if (t != null)
            t.putTreeVal(this, tab, hash, key, v);
        else {
            tab[i] = newNode(hash, key, v, first);
            if (binCount >= TREEIFY_THRESHOLD - 1)
                treeifyBin(tab, hash);
        }
        ++modCount;
        ++size;
        afterNodeInsertion(true);
    }
    return v;
}

@Override
public V merge(K key, V value,
               BiFunction<? super V, ? super V, ? extends V> remappingFunction) {
    if (value == null)
        throw new NullPointerException();
    if (remappingFunction == null)
        throw new NullPointerException();
    int hash = hash(key);
    Node<K,V>[] tab; Node<K,V> first; int n, i;
    int binCount = 0;
    TreeNode<K,V> t = null;
    Node<K,V> old = null;
    if (size > threshold || (tab = table) == null ||
        (n = tab.length) == 0)
        n = (tab = resize()).length;
    if ((first = tab[i = (n - 1) & hash]) != null) {
        if (first instanceof TreeNode)
            old = (t = (TreeNode<K,V>)first).getTreeNode(hash, key);
        else {
            Node<K,V> e = first; K k;
            do {
                if (e.hash == hash &&
                    ((k = e.key) == key || (key != null && key.equals(k)))) {
                    old = e;
                    break;
                }
                ++binCount;
            } while ((e = e.next) != null);
        }
    }
    if (old != null) {
        V v;
        if (old.value != null)
            v = remappingFunction.apply(old.value, value);
        else
            v = value;
        if (v != null) {
            old.value = v;
            afterNodeAccess(old);
        }
        else
            removeNode(hash, key, null, false, true);
        return v;
    }
    if (value != null) {
        if (t != null)
            t.putTreeVal(this, tab, hash, key, value);
        else {
            tab[i] = newNode(hash, key, value, first);
            if (binCount >= TREEIFY_THRESHOLD - 1)
                treeifyBin(tab, hash);
        }
        ++modCount;
        ++size;
        afterNodeInsertion(true);
    }
    return value;
}
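
merge() is handy for the classic counting pattern; a minimal sketch:

    import java.util.HashMap;

    public class MergeDemo {
        public static void main(String[] args) {
            HashMap<String, Integer> counts = new HashMap<>();
            for (String w : new String[]{"a", "b", "a", "a"}) {
                // Absent key: store 1. Present key: combine the old count with 1 via Integer::sum.
                counts.merge(w, 1, Integer::sum);
            }
            System.out.println(counts); // {a=3, b=1}
        }
    }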

@Override
public void forEach(BiConsumer<? super K, ? super V> action) {
    Node<K,V>[] tab;
    if (action == null)
        throw new NullPointerException();
    if (size > 0 && (tab = table) != null) {
        int mc = modCount;
        for (int i = 0; i < tab.length; ++i) {
            for (Node<K,V> e = tab[i]; e != null; e = e.next)
                action.accept(e.key, e.value);
        }
        if (modCount != mc)
            throw new ConcurrentModificationException();
    }
}

@Override
public void replaceAll(BiFunction<? super K, ? super V, ? extends V> function) {
    Node<K,V>[] tab;
    if (function == null)
        throw new NullPointerException();
    if (size > 0 && (tab = table) != null) {
        int mc = modCount;
        for (int i = 0; i < tab.length; ++i) {
            for (Node<K,V> e = tab[i]; e != null; e = e.next) {
                e.value = function.apply(e.key, e.value);
            }
        }
        if (modCount != mc)
            throw new ConcurrentModificationException();
    }
}

/* ------------------------------------------------------------ */
// Cloning and serialization

/**
 * Returns a shallow copy of this <tt>HashMap</tt> instance: the keys and
 * values themselves are not cloned.
 *
 * @return a shallow copy of this map
 */
@SuppressWarnings("unchecked")
@Override
public Object clone() {
    HashMap<K,V> result;
    try {
        result = (HashMap<K,V>)super.clone();
    } catch (CloneNotSupportedException e) {
        // this shouldn't happen, since we are Cloneable
        throw new InternalError(e);
    }
    result.reinitialize();
    result.putMapEntries(this, false);
    return result;
}

// These methods are also used when serializing HashSets
final float loadFactor() { return loadFactor; }
final int capacity() {
    return (table != null) ? table.length :
        (threshold > 0) ? threshold :
        DEFAULT_INITIAL_CAPACITY;
}

/**
 * Save the state of the <tt>HashMap</tt> instance to a stream (i.e.,
 * serialize it).
 *
 * @serialData The <i>capacity</i> of the HashMap (the length of the
 *             bucket array) is emitted (int), followed by the
 *             <i>size</i> (an int, the number of key-value
 *             mappings), followed by the key (Object) and value (Object)
 *             for each key-value mapping.  The key-value mappings are
 *             emitted in no particular order.
 */
private void writeObject(java.io.ObjectOutputStream s)
    throws IOException {
    int buckets = capacity();
    // Write out the threshold, loadfactor, and any hidden stuff
    s.defaultWriteObject();
    s.writeInt(buckets);
    s.writeInt(size);
    internalWriteEntries(s);
}

/**
 * Reconstitute the {@code HashMap} instance from a stream (i.e.,
 * deserialize it).
 */
private void readObject(java.io.ObjectInputStream s)
    throws IOException, ClassNotFoundException {
    // Read in the threshold (ignored), loadfactor, and any hidden stuff
    s.defaultReadObject();
    reinitialize();
    if (loadFactor <= 0 || Float.isNaN(loadFactor))
        throw new InvalidObjectException("Illegal load factor: " +
                                         loadFactor);
    s.readInt();                // Read and ignore number of buckets
    int mappings = s.readInt(); // Read number of mappings (size)
    if (mappings < 0)
        throw new InvalidObjectException("Illegal mappings count: " +
                                         mappings);
    else if (mappings > 0) { // (if zero, use defaults)
        // Size the table using given load factor only if within
        // range of 0.25...4.0
        float lf = Math.min(Math.max(0.25f, loadFactor), 4.0f);
        float fc = (float)mappings / lf + 1.0f;
        int cap = ((fc < DEFAULT_INITIAL_CAPACITY) ?
                   DEFAULT_INITIAL_CAPACITY :
                   (fc >= MAXIMUM_CAPACITY) ?
                   MAXIMUM_CAPACITY :
                   tableSizeFor((int)fc));
        float ft = (float)cap * lf;
        threshold = ((cap < MAXIMUM_CAPACITY && ft < MAXIMUM_CAPACITY) ?
                     (int)ft : Integer.MAX_VALUE);

        // Check Map.Entry[].class since it's the nearest public type to
        // what we're actually creating.
        SharedSecrets.getJavaOISAccess().checkArray(s, Map.Entry[].class, cap);
        @SuppressWarnings({"rawtypes","unchecked"})
        Node<K,V>[] tab = (Node<K,V>[])new Node[cap];
        table = tab;

        // Read the keys and values, and put the mappings in the HashMap
        for (int i = 0; i < mappings; i++) {
            @SuppressWarnings("unchecked")
                K key = (K) s.readObject();
            @SuppressWarnings("unchecked")
                V value = (V) s.readObject();
            putVal(hash(key), key, value, false, false);
        }
    }
}

/* ------------------------------------------------------------ */
// iterators

abstract class HashIterator {
    Node<K,V> next;        // next entry to return
    Node<K,V> current;     // current entry
    int expectedModCount;  // for fast-fail
    int index;             // current slot

    HashIterator() {
        expectedModCount = modCount;
        Node<K,V>[] t = table;
        current = next = null;
        index = 0;
        if (t != null && size > 0) { // advance to first entry
            do {} while (index < t.length && (next = t[index++]) == null);
        }
    }

    public final boolean hasNext() {
        return next != null;
    }

    final Node<K,V> nextNode() {
        Node<K,V>[] t;
        Node<K,V> e = next;
        if (modCount != expectedModCount)
            throw new ConcurrentModificationException();
        if (e == null)
            throw new NoSuchElementException();
        if ((next = (current = e).next) == null && (t = table) != null) {
            do {} while (index < t.length && (next = t[index++]) == null);
        }
        return e;
    }

    public final void remove() {
        Node<K,V> p = current;
        if (p == null)
            throw new IllegalStateException();
        if (modCount != expectedModCount)
            throw new ConcurrentModificationException();
        current = null;
        K key = p.key;
        removeNode(hash(key), key, null, false, false);
        expectedModCount = modCount;
    }
}
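
The expectedModCount bookkeeping above is what makes these iterators fail fast. A sketch showing the safe removal path (and, commented out, the one that would throw):

    import java.util.HashMap;
    import java.util.Iterator;
    import java.util.Map;

    public class FailFastDemo {
        public static void main(String[] args) {
            Map<String, Integer> map = new HashMap<>();
            map.put("a", 1);
            map.put("b", 2);

            Iterator<String> it = map.keySet().iterator();
            while (it.hasNext()) {
                String k = it.next();
                if ("a".equals(k))
                    it.remove();  // safe: resyncs expectedModCount with modCount
                // map.remove(k); // unsafe: the next it.next() would throw ConcurrentModificationException
            }
            System.out.println(map); // {b=2}
        }
    }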

final class KeyIterator extends HashIterator
    implements Iterator<K> {
    public final K next() { return nextNode().key; }
}

final class ValueIterator extends HashIterator
    implements Iterator<V> {
    public final V next() { return nextNode().value; }
}

final class EntryIterator extends HashIterator
    implements Iterator<Map.Entry<K,V>> {
    public final Map.Entry<K,V> next() { return nextNode(); }
}

/* ------------------------------------------------------------ */
// spliterators

static class HashMapSpliterator<K,V> {
    final HashMap<K,V> map;
    Node<K,V> current;          // current node
    int index;                  // current index, modified on advance/split
    int fence;                  // one past last index
    int est;                    // size estimate
    int expectedModCount;       // for comodification checks

    HashMapSpliterator(HashMap<K,V> m, int origin,
                       int fence, int est,
                       int expectedModCount) {
        this.map = m;
        this.index = origin;
        this.fence = fence;
        this.est = est;
        this.expectedModCount = expectedModCount;
    }

    final int getFence() { // initialize fence and size on first use
        int hi;
        if ((hi = fence) < 0) {
            HashMap<K,V> m = map;
            est = m.size;
            expectedModCount = m.modCount;
            Node<K,V>[] tab = m.table;
            hi = fence = (tab == null) ? 0 : tab.length;
        }
        return hi;
    }

    public final long estimateSize() {
        getFence(); // force init
        return (long) est;
    }
}

static final class KeySpliterator<K,V>
    extends HashMapSpliterator<K,V>
    implements Spliterator<K> {
    KeySpliterator(HashMap<K,V> m, int origin, int fence, int est,
                   int expectedModCount) {
        super(m, origin, fence, est, expectedModCount);
    }

    public KeySpliterator<K,V> trySplit() {
        int hi = getFence(), lo = index, mid = (lo + hi) >>> 1;
        return (lo >= mid || current != null) ? null :
            new KeySpliterator<>(map, lo, index = mid, est >>>= 1,
                                    expectedModCount);
    }

    public void forEachRemaining(Consumer<? super K> action) {
        int i, hi, mc;
        if (action == null)
            throw new NullPointerException();
        HashMap<K,V> m = map;
        Node<K,V>[] tab = m.table;
        if ((hi = fence) < 0) {
            mc = expectedModCount = m.modCount;
            hi = fence = (tab == null) ? 0 : tab.length;
        }
        else
            mc = expectedModCount;
        if (tab != null && tab.length >= hi &&
            (i = index) >= 0 && (i < (index = hi) || current != null)) {
            Node<K,V> p = current;
            current = null;
            do {
                if (p == null)
                    p = tab[i++];
                else {
                    action.accept(p.key);
                    p = p.next;
                }
            } while (p != null || i < hi);
            if (m.modCount != mc)
                throw new ConcurrentModificationException();
        }
    }

    public boolean tryAdvance(Consumer<? super K> action) {
        int hi;
        if (action == null)
            throw new NullPointerException();
        Node<K,V>[] tab = map.table;
        if (tab != null && tab.length >= (hi = getFence()) && index >= 0) {
            while (current != null || index < hi) {
                if (current == null)
                    current = tab[index++];
                else {
                    K k = current.key;
                    current = current.next;
                    action.accept(k);
                    if (map.modCount != expectedModCount)
                        throw new ConcurrentModificationException();
                    return true;
                }
            }
        }
        return false;
    }

    public int characteristics() {
        return (fence < 0 || est == map.size ? Spliterator.SIZED : 0) |
            Spliterator.DISTINCT;
    }
}

static final class ValueSpliterator<K,V>
    extends HashMapSpliterator<K,V>
    implements Spliterator<V> {
    ValueSpliterator(HashMap<K,V> m, int origin, int fence, int est,
                     int expectedModCount) {
        super(m, origin, fence, est, expectedModCount);
    }

    public ValueSpliterator<K,V> trySplit() {
        int hi = getFence(), lo = index, mid = (lo + hi) >>> 1;
        return (lo >= mid || current != null) ? null :
            new ValueSpliterator<>(map, lo, index = mid, est >>>= 1,
                                      expectedModCount);
    }

    public void forEachRemaining(Consumer<? super V> action) {
        int i, hi, mc;
        if (action == null)
            throw new NullPointerException();
        HashMap<K,V> m = map;
        Node<K,V>[] tab = m.table;
        if ((hi = fence) < 0) {
            mc = expectedModCount = m.modCount;
            hi = fence = (tab == null) ? 0 : tab.length;
        }
        else
            mc = expectedModCount;
        if (tab != null && tab.length >= hi &&
            (i = index) >= 0 && (i < (index = hi) || current != null)) {
            Node<K,V> p = current;
            current = null;
            do {
                if (p == null)
                    p = tab[i++];
                else {
                    action.accept(p.value);
                    p = p.next;
                }
            } while (p != null || i < hi);
            if (m.modCount != mc)
                throw new ConcurrentModificationException();
        }
    }

    public boolean tryAdvance(Consumer<? super V> action) {
        int hi;
        if (action == null)
            throw new NullPointerException();
        Node<K,V>[] tab = map.table;
        if (tab != null && tab.length >= (hi = getFence()) && index >= 0) {
            while (current != null || index < hi) {
                if (current == null)
                    current = tab[index++];
                else {
                    V v = current.value;
                    current = current.next;
                    action.accept(v);
                    if (map.modCount != expectedModCount)
                        throw new ConcurrentModificationException();
                    return true;
                }
            }
        }
        return false;
    }

    public int characteristics() {
        return (fence < 0 || est == map.size ? Spliterator.SIZED : 0);
    }
}

static final class EntrySpliterator<K,V>
    extends HashMapSpliterator<K,V>
    implements Spliterator<Map.Entry<K,V>> {
    EntrySpliterator(HashMap<K,V> m, int origin, int fence, int est,
                     int expectedModCount) {
        super(m, origin, fence, est, expectedModCount);
    }

    public EntrySpliterator<K,V> trySplit() {
        int hi = getFence(), lo = index, mid = (lo + hi) >>> 1;
        return (lo >= mid || current != null) ? null :
            new EntrySpliterator<>(map, lo, index = mid, est >>>= 1,
                                      expectedModCount);
    }

    public void forEachRemaining(Consumer<? super Map.Entry<K,V>> action) {
        int i, hi, mc;
        if (action == null)
            throw new NullPointerException();
        HashMap<K,V> m = map;
        Node<K,V>[] tab = m.table;
        if ((hi = fence) < 0) {
            mc = expectedModCount = m.modCount;
            hi = fence = (tab == null) ? 0 : tab.length;
        }
        else
            mc = expectedModCount;
        if (tab != null && tab.length >= hi &&
            (i = index) >= 0 && (i < (index = hi) || current != null)) {
            Node<K,V> p = current;
            current = null;
            do {
                if (p == null)
                    p = tab[i++];
                else {
                    action.accept(p);
                    p = p.next;
                }
            } while (p != null || i < hi);
            if (m.modCount != mc)
                throw new ConcurrentModificationException();
        }
    }

    public boolean tryAdvance(Consumer<? super Map.Entry<K,V>> action) {
        int hi;
        if (action == null)
            throw new NullPointerException();
        Node<K,V>[] tab = map.table;
        if (tab != null && tab.length >= (hi = getFence()) && index >= 0) {
            while (current != null || index < hi) {
                if (current == null)
                    current = tab[index++];
                else {
                    Node<K,V> e = current;
                    current = current.next;
                    action.accept(e);
                    if (map.modCount != expectedModCount)
                        throw new ConcurrentModificationException();
                    return true;
                }
            }
        }
        return false;
    }

    public int characteristics() {
        return (fence < 0 || est == map.size ? Spliterator.SIZED : 0) |
            Spliterator.DISTINCT;
    }
}

/* ------------------------------------------------------------ */
// LinkedHashMap support
/*
* The following package-protected methods are designed to be
* overridden by LinkedHashMap, but not by any other subclass.
* Nearly all other internal methods are also package-protected
* but are declared final, so can be used by LinkedHashMap, view
* classes, and HashSet.
*/

// Create a regular (non-tree) node
Node<K,V> newNode(int hash, K key, V value, Node<K,V> next) {
return new Node<>(hash, key, value, next);
}

// For conversion from TreeNodes to plain nodes
Node<K,V> replacementNode(Node<K,V> p, Node<K,V> next) {
return new Node<>(p.hash, p.key, p.value, next);
}

// Create a tree bin node
TreeNode<K,V> newTreeNode(int hash, K key, V value, Node<K,V> next) {
return new TreeNode<>(hash, key, value, next);
}

// For treeifyBin
TreeNode<K,V> replacementTreeNode(Node<K,V> p, Node<K,V> next) {
return new TreeNode<>(p.hash, p.key, p.value, next);
}

/**
* Reset to initial default state. Called by clone and readObject.
*/
void reinitialize() {
table = null;
entrySet = null;
keySet = null;
values = null;
modCount = 0;
threshold = 0;
size = 0;
}

// Callbacks to allow LinkedHashMap post-actions
void afterNodeAccess(Node<K,V> p) { }
void afterNodeInsertion(boolean evict) { }
void afterNodeRemoval(Node<K,V> p) { }

// Called only from writeObject, to ensure compatible ordering.
void internalWriteEntries(java.io.ObjectOutputStream s) throws IOException {
Node<K,V>[] tab;
if (size > 0 && (tab = table) != null) {
for (int i = 0; i < tab.length; ++i) {
for (Node<K,V> e = tab[i]; e != null; e = e.next) {
s.writeObject(e.key);
s.writeObject(e.value);
}
}
}
}

/* ------------------------------------------------------------ */
// Tree bins

/**
* Entry for Tree bins. Extends LinkedHashMap.Entry (which in turn
* extends Node) so can be used as extension of either regular or
* linked node.
*/
static final class TreeNode<K,V> extends LinkedHashMap.Entry<K,V> {
TreeNode<K,V> parent; // red-black tree links
TreeNode<K,V> left;
TreeNode<K,V> right;
TreeNode<K,V> prev; // needed to unlink next upon deletion
boolean red;
TreeNode(int hash, K key, V val, Node<K,V> next) {
super(hash, key, val, next);
}

/**
* Returns root of tree containing this node.
*/
final TreeNode<K,V> root() {
for (TreeNode<K,V> r = this, p;;) {
if ((p = r.parent) == null)
return r;
r = p;
}
}

/**
* Ensures that the given root is the first node of its bin.
*/
static <K,V> void moveRootToFront(Node<K,V>[] tab, TreeNode<K,V> root) {
int n;
if (root != null && tab != null && (n = tab.length) > 0) {
int index = (n - 1) & root.hash;
TreeNode<K,V> first = (TreeNode<K,V>)tab[index];
if (root != first) {
Node<K,V> rn;
tab[index] = root;
TreeNode<K,V> rp = root.prev;
if ((rn = root.next) != null)
((TreeNode<K,V>)rn).prev = rp;
if (rp != null)
rp.next = rn;
if (first != null)
first.prev = root;
root.next = first;
root.prev = null;
}
assert checkInvariants(root);
}
}

/**
* Traverses the red-black tree from this node, looking for the node with hash h and key k.
*/
final TreeNode<K,V> find(int h, Object k, Class<?> kc) {
// Start the search from the current node
TreeNode<K,V> p = this;
do {
// ph: the current node's hash; dir: comparison direction; pk: the current node's key
int ph, dir; K pk;
// The current node's left and right children
TreeNode<K,V> pl = p.left, pr = p.right, q;
// If the current node's hash is greater than the target hash, descend left; if smaller, descend right
if ((ph = p.hash) > h)
p = pl;
else if (ph < h)
p = pr;
else if ((pk = p.key) == k || (k != null && k.equals(pk)))
return p;
else if (pl == null)
p = pr;
else if (pr == null)
p = pl;
else if ((kc != null ||
(kc = comparableClassFor(k)) != null) &&
(dir = compareComparables(kc, k, pk)) != 0)
p = (dir < 0) ? pl : pr;
else if ((q = pr.find(h, k, kc)) != null)
return q;
else
p = pl;
} while (p != null);
return null;
}

/**
* Calls find for root node.
* This is the entry point for looking a key up in a tree bin.
*/
final TreeNode<K,V> getTreeNode(int h, Object k) {
// If this node has a parent, walk up to the root first; otherwise this node is the root. Then search from the root.
return ((parent != null) ? root() : this).find(h, k, null);
}

/**
* Tie-breaking utility for ordering insertions when equal
* hashCodes and non-comparable. We don't require a total
* order, just a consistent insertion rule to maintain
* equivalence across rebalancings. Tie-breaking further than
* necessary simplifies testing a bit.
*/
static int tieBreakOrder(Object a, Object b) {
int d;
if (a == null || b == null ||
(d = a.getClass().getName().
compareTo(b.getClass().getName())) == 0)
d = (System.identityHashCode(a) <= System.identityHashCode(b) ?
-1 : 1);
return d;
}

/**
* Forms tree of the nodes linked from this node.
* @return root of tree
*/
final void treeify(Node<K,V>[] tab) {
TreeNode<K,V> root = null;
for (TreeNode<K,V> x = this, next; x != null; x = next) {
next = (TreeNode<K,V>)x.next;
x.left = x.right = null;
if (root == null) {
x.parent = null;
x.red = false;
root = x;
}
else {
K k = x.key;
int h = x.hash;
Class<?> kc = null;
for (TreeNode<K,V> p = root;;) {
int dir, ph;
K pk = p.key;
if ((ph = p.hash) > h)
dir = -1;
else if (ph < h)
dir = 1;
else if ((kc == null &&
(kc = comparableClassFor(k)) == null) ||
(dir = compareComparables(kc, k, pk)) == 0)
dir = tieBreakOrder(k, pk);

TreeNode<K,V> xp = p;
if ((p = (dir <= 0) ? p.left : p.right) == null) {
x.parent = xp;
if (dir <= 0)
xp.left = x;
else
xp.right = x;
root = balanceInsertion(root, x);
break;
}
}
}
}
moveRootToFront(tab, root);
}

/**
* Returns a list of non-TreeNodes replacing those linked from
* this node.
*/
final Node<K,V> untreeify(HashMap<K,V> map) {
Node<K,V> hd = null, tl = null;
for (Node<K,V> q = this; q != null; q = q.next) {
Node<K,V> p = map.replacementNode(q, null);
if (tl == null)
hd = p;
else
tl.next = p;
tl = p;
}
return hd;
}

/**
* Tree version of putVal.
* If this bin is already a red-black tree, inserts the key-value pair into the tree.
* Parameters: the owning HashMap, the bucket array, and the hash, key and value to insert.
*/
final TreeNode<K,V> putTreeVal(HashMap<K,V> map, Node<K,V>[] tab,
int h, K k, V v) {
Class<?> kc = null;
boolean searched = false;
// Locate the root of this tree bin
TreeNode<K,V> root = (parent != null) ? root() : this;

for (TreeNode<K,V> p = root;;) {
int dir, ph; K pk;
// Compare the current node's hash with the target hash to pick a direction
if ((ph = p.hash) > h)
dir = -1;
else if (ph < h)
dir = 1;
else if ((pk = p.key) == k || (k != null && k.equals(pk)))
return p;
else if ((kc == null &&
(kc = comparableClassFor(k)) == null) ||
(dir = compareComparables(kc, k, pk)) == 0) {
if (!searched) {
TreeNode<K,V> q, ch;
searched = true;
if (((ch = p.left) != null &&
(q = ch.find(h, k, kc)) != null) ||
((ch = p.right) != null &&
(q = ch.find(h, k, kc)) != null))
return q;
}
dir = tieBreakOrder(k, pk);
}

TreeNode<K,V> xp = p;
if ((p = (dir <= 0) ? p.left : p.right) == null) {
Node<K,V> xpn = xp.next;
TreeNode<K,V> x = map.newTreeNode(h, k, v, xpn);
if (dir <= 0)
xp.left = x;
else
xp.right = x;
xp.next = x;
x.parent = x.prev = xp;
if (xpn != null)
((TreeNode<K,V>)xpn).prev = x;
moveRootToFront(tab, balanceInsertion(root, x));
return null;
}
}
}

/**
* Removes the given node, that must be present before this call.
* This is messier than typical red-black deletion code because we
* cannot swap the contents of an interior node with a leaf
* successor that is pinned by "next" pointers that are accessible
* independently during traversal. So instead we swap the tree
* linkages. If the current tree appears to have too few nodes,
* the bin is converted back to a plain bin. (The test triggers
* somewhere between 2 and 6 nodes, depending on tree structure).
*/
final void removeTreeNode(HashMap<K,V> map, Node<K,V>[] tab,
boolean movable) {
int n;
if (tab == null || (n = tab.length) == 0)
return;
int index = (n - 1) & hash;
TreeNode<K,V> first = (TreeNode<K,V>)tab[index], root = first, rl;
TreeNode<K,V> succ = (TreeNode<K,V>)next, pred = prev;
if (pred == null)
tab[index] = first = succ;
else
pred.next = succ;
if (succ != null)
succ.prev = pred;
if (first == null)
return;
if (root.parent != null)
root = root.root();
if (root == null || root.right == null ||
(rl = root.left) == null || rl.left == null) {
tab[index] = first.untreeify(map); // too small
return;
}
TreeNode<K,V> p = this, pl = left, pr = right, replacement;
if (pl != null && pr != null) {
TreeNode<K,V> s = pr, sl;
while ((sl = s.left) != null) // find successor
s = sl;
boolean c = s.red; s.red = p.red; p.red = c; // swap colors
TreeNode<K,V> sr = s.right;
TreeNode<K,V> pp = p.parent;
if (s == pr) { // p was s's direct parent
p.parent = s;
s.right = p;
}
else {
TreeNode<K,V> sp = s.parent;
if ((p.parent = sp) != null) {
if (s == sp.left)
sp.left = p;
else
sp.right = p;
}
if ((s.right = pr) != null)
pr.parent = s;
}
p.left = null;
if ((p.right = sr) != null)
sr.parent = p;
if ((s.left = pl) != null)
pl.parent = s;
if ((s.parent = pp) == null)
root = s;
else if (p == pp.left)
pp.left = s;
else
pp.right = s;
if (sr != null)
replacement = sr;
else
replacement = p;
}
else if (pl != null)
replacement = pl;
else if (pr != null)
replacement = pr;
else
replacement = p;
if (replacement != p) {
TreeNode<K,V> pp = replacement.parent = p.parent;
if (pp == null)
root = replacement;
else if (p == pp.left)
pp.left = replacement;
else
pp.right = replacement;
p.left = p.right = p.parent = null;
}

TreeNode<K,V> r = p.red ? root : balanceDeletion(root, replacement);

if (replacement == p) { // detach
TreeNode<K,V> pp = p.parent;
p.parent = null;
if (pp != null) {
if (p == pp.left)
pp.left = null;
else if (p == pp.right)
pp.right = null;
}
}
if (movable)
moveRootToFront(tab, r);
}

/**
* Splits nodes in a tree bin into lower and upper tree bins,
* or untreeifies if now too small. Called only from resize;
* see above discussion about split bits and indices.
*
* @param map the map
* @param tab the table for recording bin heads
* @param index the index of the table being split
* @param bit the bit of hash to split on
*/
final void split(HashMap<K,V> map, Node<K,V>[] tab, int index, int bit) {
TreeNode<K,V> b = this;
// Relink into lo and hi lists, preserving order
TreeNode<K,V> loHead = null, loTail = null;
TreeNode<K,V> hiHead = null, hiTail = null;
int lc = 0, hc = 0;
for (TreeNode<K,V> e = b, next; e != null; e = next) {
next = (TreeNode<K,V>)e.next;
e.next = null;
if ((e.hash & bit) == 0) {
if ((e.prev = loTail) == null)
loHead = e;
else
loTail.next = e;
loTail = e;
++lc;
}
else {
if ((e.prev = hiTail) == null)
hiHead = e;
else
hiTail.next = e;
hiTail = e;
++hc;
}
}

if (loHead != null) {
if (lc <= UNTREEIFY_THRESHOLD)
tab[index] = loHead.untreeify(map);
else {
tab[index] = loHead;
if (hiHead != null) // (else is already treeified)
loHead.treeify(tab);
}
}
if (hiHead != null) {
if (hc <= UNTREEIFY_THRESHOLD)
tab[index + bit] = hiHead.untreeify(map);
else {
tab[index + bit] = hiHead;
if (loHead != null)
hiHead.treeify(tab);
}
}
}

/* ------------------------------------------------------------ */
// Red-black tree methods, all adapted from CLR

static <K,V> TreeNode<K,V> rotateLeft(TreeNode<K,V> root,
TreeNode<K,V> p) {
TreeNode<K,V> r, pp, rl;
if (p != null && (r = p.right) != null) {
if ((rl = p.right = r.left) != null)
rl.parent = p;
if ((pp = r.parent = p.parent) == null)
(root = r).red = false;
else if (pp.left == p)
pp.left = r;
else
pp.right = r;
r.left = p;
p.parent = r;
}
return root;
}

static <K,V> TreeNode<K,V> rotateRight(TreeNode<K,V> root,
TreeNode<K,V> p) {
TreeNode<K,V> l, pp, lr;
if (p != null && (l = p.left) != null) {
if ((lr = p.left = l.right) != null)
lr.parent = p;
if ((pp = l.parent = p.parent) == null)
(root = l).red = false;
else if (pp.right == p)
pp.right = l;
else
pp.left = l;
l.right = p;
p.parent = l;
}
return root;
}

static <K,V> TreeNode<K,V> balanceInsertion(TreeNode<K,V> root,
TreeNode<K,V> x) {
x.red = true;
for (TreeNode<K,V> xp, xpp, xppl, xppr;;) {
if ((xp = x.parent) == null) {
x.red = false;
return x;
}
else if (!xp.red || (xpp = xp.parent) == null)
return root;
if (xp == (xppl = xpp.left)) {
if ((xppr = xpp.right) != null && xppr.red) {
xppr.red = false;
xp.red = false;
xpp.red = true;
x = xpp;
}
else {
if (x == xp.right) {
root = rotateLeft(root, x = xp);
xpp = (xp = x.parent) == null ? null : xp.parent;
}
if (xp != null) {
xp.red = false;
if (xpp != null) {
xpp.red = true;
root = rotateRight(root, xpp);
}
}
}
}
else {
if (xppl != null && xppl.red) {
xppl.red = false;
xp.red = false;
xpp.red = true;
x = xpp;
}
else {
if (x == xp.left) {
root = rotateRight(root, x = xp);
xpp = (xp = x.parent) == null ? null : xp.parent;
}
if (xp != null) {
xp.red = false;
if (xpp != null) {
xpp.red = true;
root = rotateLeft(root, xpp);
}
}
}
}
}
}

static <K,V> TreeNode<K,V> balanceDeletion(TreeNode<K,V> root,
TreeNode<K,V> x) {
for (TreeNode<K,V> xp, xpl, xpr;;) {
if (x == null || x == root)
return root;
else if ((xp = x.parent) == null) {
x.red = false;
return x;
}
else if (x.red) {
x.red = false;
return root;
}
else if ((xpl = xp.left) == x) {
if ((xpr = xp.right) != null && xpr.red) {
xpr.red = false;
xp.red = true;
root = rotateLeft(root, xp);
xpr = (xp = x.parent) == null ? null : xp.right;
}
if (xpr == null)
x = xp;
else {
TreeNode<K,V> sl = xpr.left, sr = xpr.right;
if ((sr == null || !sr.red) &&
(sl == null || !sl.red)) {
xpr.red = true;
x = xp;
}
else {
if (sr == null || !sr.red) {
if (sl != null)
sl.red = false;
xpr.red = true;
root = rotateRight(root, xpr);
xpr = (xp = x.parent) == null ?
null : xp.right;
}
if (xpr != null) {
xpr.red = (xp == null) ? false : xp.red;
if ((sr = xpr.right) != null)
sr.red = false;
}
if (xp != null) {
xp.red = false;
root = rotateLeft(root, xp);
}
x = root;
}
}
}
else { // symmetric
if (xpl != null && xpl.red) {
xpl.red = false;
xp.red = true;
root = rotateRight(root, xp);
xpl = (xp = x.parent) == null ? null : xp.left;
}
if (xpl == null)
x = xp;
else {
TreeNode<K,V> sl = xpl.left, sr = xpl.right;
if ((sl == null || !sl.red) &&
(sr == null || !sr.red)) {
xpl.red = true;
x = xp;
}
else {
if (sl == null || !sl.red) {
if (sr != null)
sr.red = false;
xpl.red = true;
root = rotateLeft(root, xpl);
xpl = (xp = x.parent) == null ?
null : xp.left;
}
if (xpl != null) {
xpl.red = (xp == null) ? false : xp.red;
if ((sl = xpl.left) != null)
sl.red = false;
}
if (xp != null) {
xp.red = false;
root = rotateRight(root, xp);
}
x = root;
}
}
}
}
}

/**
* Recursive invariant check
*/
static <K,V> boolean checkInvariants(TreeNode<K,V> t) {
TreeNode<K,V> tp = t.parent, tl = t.left, tr = t.right,
tb = t.prev, tn = (TreeNode<K,V>)t.next;
if (tb != null && tb.next != t)
return false;
if (tn != null && tn.prev != t)
return false;
if (tp != null && t != tp.left && t != tp.right)
return false;
if (tl != null && (tl.parent != t || tl.hash > t.hash))
return false;
if (tr != null && (tr.parent != t || tr.hash < t.hash))
return false;
if (t.red && tl != null && tl.red && tr != null && tr.red)
return false;
if (tl != null && !checkInvariants(tl))
return false;
if (tr != null && !checkInvariants(tr))
return false;
return true;
}
}
}

Now that we have walked through the HashMap source code, let's answer the questions raised at the beginning of the article:

Question 1: From the resize() method we saw that HashMap always doubles its capacity, and that the default initial capacity is 16 (2 to the 4th power). Why powers of two?

Analysis: for performance, of course! The initial capacity is 16 and the table always grows in powers of two, for two reasons: a sufficiently large array keeps collisions rare, and a power-of-two length lets HashMap replace the modulo operation with a bitwise AND when computing bucket indices (reportedly a 5-8x speedup). Whether the table needs to grow is decided by the load factor: once the element count reaches capacity * 0.75, the table is expanded. 0.75 is the default load factor and can be overridden via the constructor; a load factor greater than 1 postpones resizing, trading lookup performance for memory.
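A quick self-contained demo (not JDK code; the class name is made up) showing why a power-of-two length lets `(n - 1) & hash` stand in for `hash % n`:

```java
public class PowerOfTwoIndexDemo {
    public static void main(String[] args) {
        int n = 16; // power-of-two table length, as in HashMap's default
        for (int hash : new int[]{10, 17, 33, 12345}) {
            // For power-of-two n, (n - 1) is a low-bit mask (0b1111 for 16),
            // so the AND keeps exactly the bits that hash % n would keep.
            System.out.printf("hash=%-6d  %%: %-2d  &: %-2d%n",
                    hash, hash % n, hash & (n - 1));
        }
    }
}
```

For lengths that are not powers of two, the two expressions diverge, which is one reason tableSizeFor() always rounds the requested capacity up to a power of two.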

Question 2: Why did JDK 8 introduce red-black trees into HashMap, and what do they buy us?

Analysis: for performance, of course! No matter how well the load factor and hash function are designed, some chains will inevitably grow long, and a long chain seriously degrades HashMap's performance. JDK 1.8 therefore optimized the data structure further by introducing red-black trees: when a chain grows too long (by default, longer than 8), it is converted into a red-black tree, whose fast O(log n) insertion, deletion and lookup keep performance predictable.
When a new element is inserted, the tree-related decision runs as follows (see the sketch after this list):

- Check whether table[i] is a TreeNode, i.e. whether the bin is already a red-black tree; if so, insert the key-value pair directly into the tree.
- Otherwise traverse the chain at table[i]; if the chain length exceeds 8, convert it into a red-black tree and insert there, otherwise insert into the chain. If the key is found during traversal, simply overwrite its value.
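A heavily simplified, runnable sketch of that decision, using plain lists to stand in for bins (the class and method names are made up; the real logic lives in putVal and treeifyBin):

```java
import java.util.LinkedList;

public class TreeifyDecisionDemo {
    static final int TREEIFY_THRESHOLD = 8;
    static final int MIN_TREEIFY_CAPACITY = 64;

    static void insert(LinkedList<Integer>[] table, int hash, int value) {
        int i = (table.length - 1) & hash;   // power-of-two index, as in HashMap
        if (table[i] == null)
            table[i] = new LinkedList<>();
        table[i].add(value);
        // Treeify only when the chain is long enough AND the table has
        // reached MIN_TREEIFY_CAPACITY; otherwise HashMap resizes first.
        if (table[i].size() >= TREEIFY_THRESHOLD) {
            if (table.length < MIN_TREEIFY_CAPACITY)
                System.out.println("bucket " + i + ": would resize() instead of treeifying");
            else
                System.out.println("bucket " + i + ": would convert the chain to a red-black tree");
        }
    }

    public static void main(String[] args) {
        @SuppressWarnings("unchecked")
        LinkedList<Integer>[] table = new LinkedList[16];
        for (int k = 0; k < 9; k++)
            insert(table, 0, k);             // force every value into bucket 0
    }
}
```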

Question 3: Why does a bin become a red-black tree only when the chain length reaches 8, and why does a tree revert to a chain only when it shrinks to 6 or fewer nodes?

Analysis: for performance, of course!
The average search length in a red-black tree is log(n): at length 8 that is log2(8) = 3 comparisons, while a linked list averages n/2 = 8/2 = 4, so only around that point does converting to a tree start to pay off. At length 6 a list averages 6/2 = 3 comparisons, which is already fast, while building and maintaining a tree costs non-trivial time of its own.
Picking 6 and 8 as the two thresholds leaves a gap (7) that prevents a bin from flapping between list and tree. If a single threshold of 8 triggered both conversions, a HashMap that keeps inserting and deleting elements so a chain's length hovers around 8 would constantly convert list to tree and back, which would be very inefficient.

Question 4: What are the common ways to resolve hash collisions, and which one does HashMap use?

Common collision-resolution strategies are: open addressing, rehashing (double/multiple hashing), separate chaining (the linked-list approach seen throughout HashMap), and establishing a public overflow area. HashMap uses separate chaining.

Open addressing (also called re-hashing within the table): if p = H(key) collides, probe again starting from p, p1 = H(p); if p1 collides, probe again from p1, and so on until a collision-free address pi is found. The table must therefore be at least as large as the number of elements to store, and because later probes depend on earlier slots, deleted entries can only be marked as deleted rather than physically removed.
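A minimal sketch of the idea using the linear-probing variant of open addressing (the class is illustrative; this is not how HashMap works):

```java
public class LinearProbingDemo {
    private final Integer[] keys = new Integer[16];   // power-of-two capacity

    void put(int key) {
        int i = key & (keys.length - 1);
        // On a collision, step to the next slot until a free one is found.
        while (keys[i] != null && keys[i] != key)
            i = (i + 1) & (keys.length - 1);
        keys[i] = key;
    }

    boolean contains(int key) {
        int i = key & (keys.length - 1);
        while (keys[i] != null) {
            if (keys[i] == key)
                return true;
            i = (i + 1) & (keys.length - 1);
        }
        return false;
    }

    public static void main(String[] args) {
        LinearProbingDemo t = new LinearProbingDemo();
        t.put(1);
        t.put(17);                           // 17 collides with 1 in a 16-slot table
        System.out.println(t.contains(17));  // true, found by probing past slot 1
    }
}
```

Note that deletion is the weak point: physically clearing a slot would break the probe chain for later keys, which is why real open-addressing tables use tombstone markers instead.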

Rehashing (double or multiple hashing): provide several different hash functions; when R1 = H1(key) collides, compute R2 = H2(key), and so on until no collision remains. This makes clustering much less likely, at the cost of extra computation.
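A sketch of one common formulation, double hashing, where a second hash function picks the probe step (again purely illustrative):

```java
public class DoubleHashingDemo {
    static int h1(int key, int m) { return key % m; }
    static int h2(int key, int m) { return 1 + (key % (m - 1)); } // step is never 0

    public static void main(String[] args) {
        int m = 11;                              // prime capacity keeps probes cycling
        boolean[] used = new boolean[m];
        for (int key : new int[]{5, 16, 27}) {   // all three collide at h1(key) = 5
            int step = h2(key, m);
            int i = h1(key, m);
            while (used[i])
                i = (i + step) % m;              // probe with a key-specific stride
            used[i] = true;
            System.out.println("key " + key + " -> slot " + i);
        }
    }
}
```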

Separate chaining (the "linked-list" method): elements with equal hash values form a synonym chain, and the head of that chain is stored in slot i of the table. Lookup, insertion and deletion then happen mostly inside the synonym chain, which makes chaining well suited to workloads with frequent insertions and deletions.

Public overflow area: split the structure into a primary table and an overflow table; whenever a collision occurs, the colliding entry goes into the shared overflow area.

Note the difference between open addressing and rehashing:

open addressing keeps reapplying the same hash function when probing, whereas rehashing can fall back on several different hash functions in turn.

A supplementary note on the treeification condition: before converting a chain into a red-black tree, HashMap also checks whether the table length has reached a threshold (MIN_TREEIFY_CAPACITY = 64). If it has not, the conversion is abandoned and the table is resized instead. That is why the chain-length threshold is sometimes quoted as "7 or 8": one treeification attempt may be given up in favor of a resize.
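The check in question, shown as a simplified excerpt of treeifyBin from the JDK source (only the branch we care about is kept):

```java
final void treeifyBin(Node<K,V>[] tab, int hash) {
    int n;
    // If the table is still smaller than MIN_TREEIFY_CAPACITY (64),
    // grow the table instead of converting the chain to a tree.
    if (tab == null || (n = tab.length) < MIN_TREEIFY_CAPACITY)
        resize();
    else {
        // ...otherwise replace the chain's Nodes with TreeNodes
        // and call treeify() to build the red-black tree.
    }
}
```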