├── README
├── README.cliffc
├── README.md
├── Testing
│   ├── CAT_Tester
│   │   ├── AtomicCounter.java
│   │   ├── BUILD.sh
│   │   ├── CATCounter.java
│   │   ├── Counter.java
│   │   ├── Harness.java
│   │   ├── LockCounter.java
│   │   ├── README.txt
│   │   ├── RaceyCounter.java
│   │   ├── StripeLockCounter.java
│   │   ├── StripeUnsafeCounter.java
│   │   ├── SyncCounter.java
│   │   ├── UnsafeCounter.java
│   │   ├── build.bat
│   │   └── hash_test.java
│   ├── JUnitSuite.java
│   ├── NBHM_Tester
│   │   ├── NBHMID_Tester2.java
│   │   ├── NBHML_Tester2.java
│   │   ├── NBHM_Tester.java
│   │   ├── NBHM_Tester2.java
│   │   ├── perf_hash_test.bat
│   │   ├── perf_hash_test.java
│   │   └── perf_hashlong_test.java
│   └── NBHS_Tester
│       ├── nbhs_tester.java
│       ├── nbsi_tester.java
│       └── perf_set_test.java
├── build.java
├── contrib
│   └── Testing
│       └── NBQ_Tester
│           └── NBQ_Tester.java
├── java
│   └── util
│       ├── Hashtable.java
│       └── concurrent
│           └── ConcurrentHashMap.java
├── lib
│   └── junit-4.4.jar
└── org
    └── cliffc
        └── high_scale_lib
            ├── AbstractEntry.java
            ├── ConcurrentAutoTable.java
            ├── Counter.java
            ├── NonBlockingHashMap.java
            ├── NonBlockingHashMapLong.java
            ├── NonBlockingHashSet.java
            ├── NonBlockingIdentityHashMap.java
            ├── NonBlockingSetInt.java
            └── UtilUnsafe.java

/README:
--------------------------------------------------------------------------------
1 | This is a drop-in replacement for java.util.Hashtable, written by
2 | Dr. Cliff Click and donated to the public domain. It is lock-free
3 | yet multi-threaded safe. It can scale linearly to 768 CPUs.
4 | 
5 | The repository was hosted on SourceForge and seems not to be as well
6 | known as it should be, so I moved it here in the hope of letting more
7 | Java developers use it.
8 | 
9 | To build:
10 |   javac build.java
11 |   java build all
12 | You can also check out README.cliffc for more instructions.
13 | There's also a video of Dr. Cliff Click talking about the library:
14 | http://www.youtube.com/watch?v=k5FltpgKcVk
15 | 
16 | Edward Wang

/README.cliffc:
--------------------------------------------------------------------------------
1 | 
2 | IF YOU ARE LOOKING for the drop-in replacement for java.util.Hashtable, it's
3 | in the lib directory, lib/java_util_hashtable.jar. It needs to be in your
4 | bootclasspath. Example:
5 | 
6 |   java -Xbootclasspath/p:lib/java_util_hashtable.jar my_java_app_goes_here
7 | 
8 | 
9 | ---
10 | 
11 | A collection of Concurrent and Highly Scalable Utilities. These are intended
12 | as direct replacements for the java.util.* or java.util.concurrent.*
13 | collections, but with better performance when many CPUs are using the
14 | collection concurrently. Single-threaded performance may be slightly lower.
15 | 
16 | The direct replacements match the API - but not all behaviors are covered by
17 | the API, and so they may not work for your program. In particular, the
18 | replacement for java.util.Hashtable is NOT synchronized (that is the point!),
19 | although it is multi-threaded safe. If you rely on the undocumented
20 | synchronization behavior of the JDK Hashtable, your program may not work.
21 | Similarly, the iteration order differs between this version and the JDK
22 | version (this exact issue broke the SpecJBB benchmark when the iteration
23 | order changed slightly - via a slightly different hash function - between
24 | JDK revs).
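For code that does not depend on such unspecified behaviors, the replacements
can also be used directly as ordinary ConcurrentMap implementations. A minimal
sketch (the Example class is illustrative, not part of the library):

  import java.util.concurrent.ConcurrentMap;
  import org.cliffc.high_scale_lib.NonBlockingHashMap;

  public class Example {
    public static void main( String[] args ) {
      // NonBlockingHashMap implements ConcurrentMap, so the usual
      // atomic compound operations are available, without locking.
      ConcurrentMap<String,String> map = new NonBlockingHashMap<String,String>();
      map.putIfAbsent("k1","v1");   // returns null: no previous mapping
      map.replace("k1","v1a");      // returns "v1"
      map.remove("k1");
    }
  }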
25 | 
26 | If you want to drop in the non-blocking versions of Hashtable, HashMap or
27 | ConcurrentHashMap, you'll need to alter your bootclasspath - these classes
28 | come directly from your JDK and so are found via the System loader before any
29 | class-path hacks can be done.
30 | 
31 | To replace the JDK implementation of Hashtable with a non-blocking version of
32 | Hashtable, add java_util_hashtable.jar to your java launch line:
33 | 
34 |   java -Xbootclasspath/p:lib/java_util_hashtable.jar my_app_goes_here
35 | 
36 | Similarly for ConcurrentHashMap, add java_util_concurrent_chm.jar:
37 | 
38 |   java -Xbootclasspath/p:lib/java_util_concurrent_chm.jar my_app_goes_here
39 | 
40 | 
41 | The other utilities do not have direct JDK replacements; you need to call them
42 | out directly and place high_scale_lib.jar in your classpath:
43 | 
44 | - NonBlockingHashMap - Fast, concurrent, lock-free HashMap. Linear scaling to 768 CPUs.
45 | - NonBlockingHashMapLong - Same as above, but using primitive 'long' keys
46 | - NonBlockingHashSet - A Set version of NBHM
47 | - NonBlockingSetInt - A fast, fully concurrent BitVector
48 | - Counter - A simple counter that scales linearly even when extremely hot.
49 |   Most simple counters are either unsynchronized (hence they drop counts,
50 |   often quite badly beyond 2 CPUs), or are locked (hence they bottleneck in
51 |   the 5-10 CPU range), or use Atomics (hence they bottleneck in the 25-50
52 |   CPU range). This version scales linearly to 768 CPUs.
53 | 
54 | 
55 | 
56 | Cliff Click
57 | 
58 | 

/README.md:
--------------------------------------------------------------------------------
1 | ## Introduction
2 | 
3 | This is a drop-in replacement for java.util.Hashtable, written by
4 | Dr. Cliff Click and donated to the public domain. It is lock-free yet
5 | multi-threaded safe. It can scale linearly to 768 CPUs.
6 | 
7 | The repository was hosted on SourceForge and seems not to be as well
8 | known as it should be, so I copied it here in the hope of letting more
9 | Java developers use it.
10 | 
11 | ## Build
12 | 
13 |     javac build.java
14 |     java build all
15 | 
16 | ## Reference
17 | 
18 | You can also check out README.cliffc for more instructions.
19 | 
20 | There's also a video of Dr. Cliff Click talking about the library: [http://www.youtube.com/watch?v=k5FltpgKcVk](http://www.youtube.com/watch?v=k5FltpgKcVk)
21 | 
22 | ### Edward Wang

/Testing/CAT_Tester/AtomicCounter.java:
--------------------------------------------------------------------------------
1 | 
2 | import java.util.concurrent.atomic.*;
3 | public final class AtomicCounter extends Counter {
4 |   public String name() { return "Atomic"; }
5 |   private final AtomicLong _cnt = new AtomicLong();
6 |   public long get(){ return _cnt.get(); }
7 |   public void add( long x ) { _cnt.getAndAdd(x); }
8 | }
9 | 

/Testing/CAT_Tester/BUILD.sh:
--------------------------------------------------------------------------------
1 | # Simple build line
2 | # (note: plain sh assignment; the original 'set JAVA_HOME=...' is DOS syntax
3 | # and would leave $JAVA_HOME empty in a Bourne shell)
3 | JAVA_HOME=/usr/local/j2sdk1.5.0_06
4 | javac -classpath $JAVA_HOME/jre/lib/rt.jar:. 
harness.java org/cliffc/high_scale_lib/*.java ../org/cliffc/high_scale_lib/*.java 5 | 6 | -------------------------------------------------------------------------------- /Testing/CAT_Tester/CATCounter.java: -------------------------------------------------------------------------------- 1 | 2 | import org.cliffc.high_scale_lib.*; 3 | public final class CATCounter extends Counter { 4 | public String name() { return "CAT"; } 5 | private final ConcurrentAutoTable _tab = new ConcurrentAutoTable(); 6 | public long get(){ return _tab.get(); } 7 | public void add( long x ) { _tab.add(x); } 8 | public void print() { _tab.print(); } 9 | public int internal_size() { return _tab.internal_size(); } 10 | } 11 | -------------------------------------------------------------------------------- /Testing/CAT_Tester/Counter.java: -------------------------------------------------------------------------------- 1 | //package org.cliffc.high_scale_lib; 2 | public abstract class Counter { 3 | public abstract String name(); 4 | public abstract long get(); 5 | public abstract void add( long x ); 6 | public long pre_add ( long x ) { long l = get(); add(x); return l; } 7 | public long post_add( long x ) { add(x); long l = get(); return l; } 8 | public long post_inc() { return post_add( 1); } 9 | public long pre_dec() { return pre_add(-1); } 10 | } 11 | 12 | -------------------------------------------------------------------------------- /Testing/CAT_Tester/Harness.java: -------------------------------------------------------------------------------- 1 | /* 2 | * Written by Cliff Click and released to the public domain, as explained at 3 | * http://creativecommons.org/licenses/publicdomain 4 | */ 5 | 6 | public class Harness extends Thread { 7 | static int _thread_min, _thread_max, _thread_incr; 8 | static int _ctr_impl; 9 | 10 | static Counter make_ctr( final int impl ) { 11 | switch( impl ) { 12 | case 1: return new RaceyCounter(); 13 | case 2: return new SyncCounter(); 14 | case 3: return new LockCounter(); 15 | case 4: return new AtomicCounter(); 16 | case 5: return new UnsafeCounter(); 17 | case 6: return new StripeLockCounter( 8); 18 | case 7: return new StripeUnsafeCounter( 8); 19 | case 8: return new StripeLockCounter( 64); 20 | case 9: return new StripeUnsafeCounter( 64); 21 | case 10: return new StripeLockCounter(256); 22 | case 11: return new StripeUnsafeCounter(256); 23 | case 12: return new CATCounter(); 24 | default: 25 | throw new Error("Bad imple"); 26 | } 27 | } 28 | 29 | static volatile boolean _start; 30 | static volatile boolean _stop; 31 | static final int NUM_CPUS = Runtime.getRuntime().availableProcessors(); 32 | 33 | static int check( String arg, String msg, int lower, int upper ) { 34 | return check( Integer.parseInt(arg), msg, lower, upper ); 35 | } 36 | static int check( int x, String msg, int lower, int upper ) { 37 | if( x < lower || x > upper ) 38 | throw new Error(msg+" must be from "+lower+" to "+upper); 39 | return x; 40 | } 41 | 42 | public static void main( String args[] ) { 43 | // Parse args 44 | try { 45 | _thread_min = check( args[0], "thread_min", 1, 100000 ); 46 | _thread_max = check( args[1], "thread_max", 1, 100000 ); 47 | _thread_incr = check( args[2], "thread_incr", 1, 100000 ); 48 | _ctr_impl = check( args[3], "implementation", -1, 13 ); 49 | 50 | int trips = (_thread_max - _thread_min)/_thread_incr; 51 | _thread_max = trips*_thread_incr + _thread_min; 52 | 53 | } catch( Error e ) { 54 | System.out.println("Usage: harness thread-min thread-max thread-incr impl[All=0]"); 
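// (Illustrative invocation: "java Harness 1 32 1 0" sweeps from 1 to 32 threads, one at a time, over all 12 counter implementations.)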
55 | throw e; 56 | } 57 | String name = _ctr_impl == 0 ? "ALL" : (_ctr_impl==-1 ? "Best" : make_ctr(_ctr_impl).name()); 58 | System.out.println("===== "+name+" ====="); 59 | System.out.println("Threads from "+_thread_min+" to "+_thread_max+" by "+_thread_incr); 60 | 61 | // Do some warmup 62 | System.out.println("==== Warmup -variance: "); 63 | run_till_stable(Math.min(_thread_min,2),1); 64 | 65 | // Now do the real thing 66 | int num_trials = 7; // Number of Trials 67 | System.out.print("==== Counter Threads Trial:"); 68 | for( int i=0; i 0 ) { 77 | run_till_stable(num_threads,num_trials,_ctr_impl); 78 | } else if( _ctr_impl == 0 ) { 79 | for( int impl=1;impl<13; impl++ ) 80 | run_till_stable(num_threads,num_trials,impl); 81 | System.out.println(); 82 | } else { 83 | run_till_stable(num_threads,num_trials,11); // big stripage Unsafe 84 | run_till_stable(num_threads,num_trials,12); // CAT 85 | } 86 | } 87 | 88 | static void run_till_stable( int num_threads, int num_trials, int impl ) { 89 | 90 | Counter C = make_ctr(impl); 91 | System.out.printf("=== %10.10s %3d cnts/sec=",C.name(),num_threads); 92 | long[] trials = new long[num_trials]; // Number of trials 93 | long total_ops = 0; // Total ops altogether 94 | long total_ops_sec = 0; // Sum of ops/sec for each run 95 | 96 | // Run some trials 97 | for( int j=0; j 2 ) { 112 | // Toss out low & high 113 | int lo=0; 114 | int hi=0; 115 | for( int j=1; j trials[j] ) hi=j; 118 | } 119 | long total2 = total_ops_sec - (trials[lo]+trials[hi]); 120 | trials[lo] = trials[trials.length-1]; 121 | trials[hi] = trials[trials.length-2]; 122 | // Print avg,stddev 123 | long avg = total2/(trials.length-2); 124 | long stddev = compute_stddev(trials,trials.length-2); 125 | long p = stddev*100/avg; // std-dev as a percent 126 | 127 | System.out.printf(" %10d",avg); 128 | System.out.printf(" (+/-%2d%%)",p); 129 | } 130 | 131 | long loss = total_ops - C.get(); 132 | if( loss != 0 ) { 133 | System.out.print(" Lossage="); 134 | int loss_per = (int)(loss*100/total_ops); 135 | System.out.print(loss_per == 0 ? 
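/* loss_per rounds down, so 0 means under 1% of ops were lost; print the raw count instead of a rounded-to-zero percentage */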
(""+loss) : (""+loss_per+"%")); 136 | } 137 | 138 | if( C instanceof CATCounter ) { 139 | CATCounter cat = (CATCounter)C; 140 | System.out.print(" autotable="+ cat.internal_size()); 141 | if( loss != 0 ) cat.print(); 142 | } 143 | 144 | System.out.println(); 145 | } 146 | 147 | static long compute_stddev(long[] trials, int len) { 148 | double sum = 0; 149 | double squ = 0.0; 150 | for( int i=0; i>>20) ^ (h2>>>12); 176 | //h2 ^= (h2>>> 7) ^ (h2>>> 4); 177 | //System.out.printf("%x ",h1&0xfff); 178 | } 179 | //System.out.println(""); 180 | for( int i=0; i 10 | Field f = null; 11 | try { 12 | f = UnsafeCounter.class.getDeclaredField("_cnt"); 13 | } catch( java.lang.NoSuchFieldException e ) { 14 | throw new Error(e); 15 | } 16 | CNT_OFFSET = _unsafe.objectFieldOffset(f); 17 | } 18 | } 19 | 20 | private long _cnt; 21 | public long get(){ return _cnt; } 22 | public void add( final long x ) { 23 | long cnt=0; 24 | do { 25 | cnt = _cnt; 26 | } while( !_unsafe.compareAndSwapLong(this,CNT_OFFSET,cnt,cnt+x) ); 27 | } 28 | 29 | 30 | private static class UtilUnsafe { 31 | private UtilUnsafe() { } // dummy private constructor 32 | public static Unsafe getUnsafe() { 33 | // Not on bootclasspath 34 | if( UtilUnsafe.class.getClassLoader() == null ) 35 | return Unsafe.getUnsafe(); 36 | try { 37 | final Field fld = Unsafe.class.getDeclaredField("theUnsafe"); 38 | fld.setAccessible(true); 39 | return (Unsafe) fld.get(UtilUnsafe.class); 40 | } catch (Exception e) { 41 | throw new RuntimeException("Could not obtain access to sun.misc.Unsafe", e); 42 | } 43 | } 44 | } 45 | 46 | } 47 | -------------------------------------------------------------------------------- /Testing/CAT_Tester/build.bat: -------------------------------------------------------------------------------- 1 | echo 2 | cd "c:\Documents and Settings\Cliff\Desktop\Highly Scalable Java\high-scale-lib\Testing\CAT_Tester" 3 | javac -classpath .;..\.. Harness.java CATCounter.java 4 | cd ..\.. 5 | java -classpath Testing\CAT_Tester;. -Xbootclasspath/p:Testing\CAT_Tester;. Harness 1 3 1 0 6 | -------------------------------------------------------------------------------- /Testing/CAT_Tester/hash_test.java: -------------------------------------------------------------------------------- 1 | /* 2 | * Written by Cliff Click and released to the public domain, as explained at 3 | * http://creativecommons.org/licenses/publicdomain 4 | * Big Chunks of code shamelessly copied from Doug Lea's test harness which is also public domain. 
5 | */ 6 | 7 | 8 | import org.cliffc.high_scale_lib.*; 9 | import java.util.*; 10 | import java.util.concurrent.*; 11 | import java.util.concurrent.atomic.*; 12 | 13 | public class hash_test extends Thread { 14 | static int _read_ratio, _gr, _pr; 15 | static int _thread_min, _thread_max, _thread_incr; 16 | static int _table_size; 17 | static int _map_impl; 18 | 19 | static Map make_map( int impl ) { 20 | switch( impl ) { 21 | case 1: return new Hashtable(0); 22 | case 2: return null; // new CliffWrapHerlihy(); // was a non-blocking HashSet implementation from Maurice Herlihy 23 | case 3: return new ConcurrentHashMap(16,0.75f, 16); // force to 16 striping 24 | case 4: return new ConcurrentHashMap(16,0.75f, 256); // force to 256 striping 25 | case 5: return new ConcurrentHashMap(16,0.75f,4096); // force to 4096 striping 26 | case 6: return new NonBlockingHashMap(); 27 | default: throw new Error("Bad imple"); 28 | } 29 | } 30 | static String names[] = { 31 | "ALL", 32 | "HashTable", 33 | "HerlihyHashSet", 34 | "CHM_16", 35 | "CHM_256", 36 | "CHM_4096", 37 | "NBHashMap", 38 | }; 39 | 40 | 41 | static String KEYS[]; 42 | static volatile boolean _start; 43 | static volatile boolean _stop; 44 | static final int NUM_CPUS = Runtime.getRuntime().availableProcessors(); 45 | 46 | static int check( String arg, String msg, int lower, int upper ) throws Exception { 47 | return check( Integer.parseInt(arg), msg, lower, upper ); 48 | } 49 | static int check( int x, String msg, int lower, int upper ) throws Exception { 50 | if( x < lower || x > upper ) 51 | throw new Error(msg+" must be from "+lower+" to "+upper); 52 | return x; 53 | } 54 | 55 | public static void main( String args[] ) throws Exception { 56 | // Parse args 57 | try { 58 | _read_ratio = check( args[0], "read%", 0, 100 ); 59 | _thread_min = check( args[1], "thread_min", 1, 100000 ); 60 | _thread_max = check( args[2], "thread_max", 1, 100000 ); 61 | _thread_incr = check( args[3], "thread_incr", 1, 100000 ); 62 | _table_size = check( args[4], "table_size", 1, 100000000 ); 63 | _map_impl = check( args[5], "implementation", -1, names.length ); 64 | 65 | _gr = (_read_ratio<<20)/100; 66 | _pr = (((1<<20) - _gr)>>1) + _gr; 67 | 68 | int trips = (_thread_max - _thread_min)/_thread_incr; 69 | _thread_max = trips*_thread_incr + _thread_min; 70 | 71 | } catch( Exception e ) { 72 | System.out.println("Usage: hash_test read%[0=churn test] thread-min thread-max thread-increment hash_table_size impl[All=0,Hashtable=1,HerlihyHashSet=2,CHM_16=3,CHM_256=4,CHM_4096=5,NonBlockingHashMap=6]"); 73 | throw e; 74 | } 75 | 76 | System.out.print( _read_ratio+"% gets, "+ 77 | ((100-_read_ratio)>>1)+"% inserts, "+ 78 | ((100-_read_ratio)>>1)+"% removes, " + 79 | "table_size="+_table_size); 80 | if( _read_ratio==0 ) 81 | System.out.print(" -- churn"); 82 | String name = _map_impl == -1 ? 
"Best" : names[_map_impl]; 83 | System.out.println(" "+name); 84 | System.out.println("Threads from "+_thread_min+" to "+_thread_max+" by "+_thread_incr); 85 | 86 | // Do some warmup 87 | int keymax = 1; 88 | while( keymax < _table_size ) keymax<<=1; 89 | if( _read_ratio == 0 ) keymax = 1024*1024; // The churn test uses a large key set 90 | KEYS = new String[keymax]; 91 | int [] histo = new int[64]; 92 | for( int i=0; i>>(32-6)]++; 95 | } 96 | // verify good key spread to help ConcurrentHashMap 97 | //for( int i=0; i 0 ) { 115 | run_till_stable(num_threads,num_trials,_map_impl); 116 | } else if( _map_impl == 0 ) { 117 | for( int i=1; i HM = make_map(impl); 127 | if( HM == null ) return; 128 | String name = names[impl]; 129 | System.out.printf("=== %10.10s %3d cnts/sec=",name,num_threads); 130 | 131 | // Quicky sanity check 132 | for( int i=0; i<100; i++ ) { 133 | HM.put(KEYS[i],KEYS[i]); 134 | for( int j=0; j 2010000000 || 160 | // ops[i] < 100000 ) 161 | // System.out.printf(" %d",ops[i]); 162 | //} 163 | 164 | } 165 | 166 | if( trials.length > 2 ) { 167 | // Toss out low & high 168 | int lo=0; 169 | int hi=0; 170 | for( int j=1; j trials[j] ) hi=j; 173 | } 174 | total -= (trials[lo]+trials[hi]); 175 | trials[lo] = trials[trials.length-1]; 176 | trials[hi] = trials[trials.length-2]; 177 | // Print avg,stddev 178 | long avg = total/(trials.length-2); 179 | long stddev = compute_stddev(trials,trials.length-2); 180 | long p = stddev*100/avg; // std-dev as a percent 181 | 182 | if( trials.length-2 > 2 ) { 183 | // Toss out low & high 184 | lo=0; 185 | hi=0; 186 | for( int j=1; j trials[j] ) hi=j; 189 | } 190 | total -= (trials[lo]+trials[hi]); 191 | trials[lo] = trials[trials.length-2-1]; 192 | trials[hi] = trials[trials.length-2-2]; 193 | // Print avg,stddev 194 | avg = total/(trials.length-2-2); 195 | stddev = compute_stddev(trials,trials.length-2-2); 196 | p = stddev*100/avg; // std-dev as a percent 197 | } 198 | System.out.printf(" %10d",avg); 199 | System.out.printf(" (+/-%2d%%) %d",p,HM.size()); 200 | } 201 | System.out.println(); 202 | } 203 | 204 | static long compute_stddev(long[] trials, int len) { 205 | double sum = 0; 206 | double squ = 0.0; 207 | for( int i=0; i _hash; // Shared hashtable 220 | final long[] _ops; 221 | final long[] _nanos; 222 | hash_test( int tnum, Map HM, long[] ops, long [] nanos ) { _tnum = tnum; _hash = HM; _ops = ops; _nanos = nanos; } 223 | 224 | static long run_once( int num_threads, Map HM, long[] ops, long [] nanos ) throws Exception { 225 | Random R = new Random(); 226 | _start = false; 227 | _stop = false; 228 | 229 | HM.put("Cliff","Cliff"); 230 | HM.remove("Cliff"); 231 | 232 | int sz = HM.size(); 233 | while( sz+1024 < _table_size ) { 234 | int idx = R.nextInt(); 235 | for( int i=0; i<1024; i++ ) { 236 | String key = KEYS[idx&(KEYS.length-1)]; 237 | HM.put(key,key); 238 | idx++; 239 | } 240 | sz = HM.size(); 241 | } 242 | 243 | while( sz < ((_table_size>>1)+(_table_size>>3)) ) { 244 | int trip = 0; 245 | int idx = R.nextInt(); 246 | while( true ) { 247 | String key = KEYS[idx&(KEYS.length-1)]; 248 | if( sz < _table_size ) { 249 | if( HM.put(key,key) == null ) { sz++; break; } 250 | } else { 251 | if( HM.remove(key ) != null ) { sz--; break; } 252 | } 253 | idx++; 254 | if( (trip & 15)==15 ) idx = R.nextInt(); 255 | if( trip++ > 1024*1024 ) { 256 | if( trip > 1024*1024+100 ) 257 | throw new Exception("barf trip "+sz+" "+HM.size()+" numkeys="+KEYS.length); 258 | System.out.println(key); 259 | } 260 | } 261 | } 262 | 263 | if( sz != HM.size() ) { 264 
| throw new Error("size does not match table contents sz="+sz+" size()="+HM.size()); 265 | } 266 | 267 | // Launch threads 268 | //long nanoz = System.nanoTime(); 269 | //System.out.println(" "+nanoz+" Create-Threads"); 270 | hash_test thrs[] = new hash_test[num_threads]; 271 | for( int i=0; i _nbhm; 23 | protected void setUp () { _nbhm = new NonBlockingIdentityHashMap(); } 24 | protected void tearDown() { _nbhm = null; } 25 | 26 | // Throw a ClassCastException if I see a tombstone during key-compares 27 | private static class KeyBonk { 28 | final int _x; 29 | KeyBonk( int i ) { _x=i; } 30 | public boolean equals( Object o ) { 31 | if( o == null ) return false; 32 | return ((KeyBonk)o)._x // Throw CCE here 33 | == this._x; 34 | } 35 | public int hashCode() { return (_x>>2); } 36 | public String toString() { return "Bonk_"+Integer.toString(_x); } 37 | } 38 | 39 | // Test some basic stuff; add a few keys, remove a few keys 40 | public void testBasic() { 41 | assertTrue ( _nbhm.isEmpty() ); 42 | assertThat ( _nbhm.putIfAbsent("k1","v1"), nullValue() ); 43 | checkSizes (1); 44 | assertThat ( _nbhm.putIfAbsent("k2","v2"), nullValue() ); 45 | checkSizes (2); 46 | assertTrue ( _nbhm.containsKey("k2") ); 47 | assertThat ( _nbhm.put("k1","v1a"), is("v1") ); 48 | assertThat ( _nbhm.put("k2","v2a"), is("v2") ); 49 | checkSizes (2); 50 | assertThat ( _nbhm.putIfAbsent("k2","v2b"), is("v2a") ); 51 | assertThat ( _nbhm.remove("k1"), is("v1a") ); 52 | assertFalse( _nbhm.containsKey("k1") ); 53 | checkSizes (1); 54 | assertThat ( _nbhm.remove("k1"), nullValue() ); 55 | assertThat ( _nbhm.remove("k2"), is("v2a") ); 56 | checkSizes (0); 57 | assertThat ( _nbhm.remove("k2"), nullValue() ); 58 | assertThat ( _nbhm.remove("k3"), nullValue() ); 59 | assertTrue ( _nbhm.isEmpty() ); 60 | 61 | assertThat ( _nbhm.put("k0","v0"), nullValue() ); 62 | assertTrue ( _nbhm.containsKey("k0") ); 63 | checkSizes (1); 64 | assertThat ( _nbhm.remove("k0"), is("v0") ); 65 | assertFalse( _nbhm.containsKey("k0") ); 66 | checkSizes (0); 67 | 68 | assertThat ( _nbhm.replace("k0","v0"), nullValue() ); 69 | assertFalse( _nbhm.containsKey("k0") ); 70 | assertThat ( _nbhm.put("k0","v0"), nullValue() ); 71 | assertEquals(_nbhm.replace("k0","v0a"), "v0" ); 72 | assertEquals(_nbhm.get("k0"), "v0a" ); 73 | assertThat ( _nbhm.remove("k0"), is("v0a") ); 74 | assertFalse( _nbhm.containsKey("k0") ); 75 | checkSizes (0); 76 | 77 | assertThat ( _nbhm.replace("k1","v1"), nullValue() ); 78 | assertFalse( _nbhm.containsKey("k1") ); 79 | assertThat ( _nbhm.put("k1","v1"), nullValue() ); 80 | assertEquals(_nbhm.replace("k1","v1a"), "v1" ); 81 | assertEquals(_nbhm.get("k1"), "v1a" ); 82 | assertThat ( _nbhm.remove("k1"), is("v1a") ); 83 | assertFalse( _nbhm.containsKey("k1") ); 84 | checkSizes (0); 85 | 86 | // Insert & Remove KeyBonks until the table resizes and we start 87 | // finding Tombstone keys- and KeyBonk's equals-call with throw a 88 | // ClassCastException if it sees a non-KeyBonk. 
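// (KeyBonk.hashCode() is (_x>>2), so each run of four consecutive keys shares a bucket, forcing collisions and reprobes.)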
89 | NonBlockingIdentityHashMap dumb = new NonBlockingIdentityHashMap(); 90 | for( int i=0; i<10000; i++ ) { 91 | final KeyBonk happy1 = new KeyBonk(i); 92 | assertThat( dumb.put(happy1,"and"), nullValue() ); 93 | if( (i&1)==0 ) dumb.remove(happy1); 94 | final KeyBonk happy2 = new KeyBonk(i); // 'equals' but not '==' 95 | dumb.get(happy2); 96 | } 97 | } 98 | 99 | // Check all iterators for correct size counts 100 | private void checkSizes(int expectedSize) { 101 | assertEquals( "size()", _nbhm.size(), expectedSize ); 102 | Collection vals = _nbhm.values(); 103 | checkSizes("values()",vals.size(),vals.iterator(),expectedSize); 104 | Set keys = _nbhm.keySet(); 105 | checkSizes("keySet()",keys.size(),keys.iterator(),expectedSize); 106 | Set> ents = _nbhm.entrySet(); 107 | checkSizes("entrySet()",ents.size(),ents.iterator(),expectedSize); 108 | } 109 | 110 | // Check that the iterator iterates the correct number of times 111 | private void checkSizes(String msg, int sz, Iterator it, int expectedSize) { 112 | assertEquals( msg, expectedSize, sz ); 113 | int result = 0; 114 | while (it.hasNext()) { 115 | result++; 116 | it.next(); 117 | } 118 | assertEquals( msg, expectedSize, result ); 119 | } 120 | 121 | 122 | public void testIteration() { 123 | assertTrue ( _nbhm.isEmpty() ); 124 | assertThat ( _nbhm.put("k1","v1"), nullValue() ); 125 | assertThat ( _nbhm.put("k2","v2"), nullValue() ); 126 | 127 | String str1 = ""; 128 | for( Iterator> i = _nbhm.entrySet().iterator(); i.hasNext(); ) { 129 | Map.Entry e = i.next(); 130 | str1 += e.getKey(); 131 | } 132 | assertThat("found all entries",str1,anyOf(is("k1k2"),is("k2k1"))); 133 | 134 | String str2 = ""; 135 | for( Iterator i = _nbhm.keySet().iterator(); i.hasNext(); ) { 136 | String key = i.next(); 137 | str2 += key; 138 | } 139 | assertThat("found all keys",str2,anyOf(is("k1k2"),is("k2k1"))); 140 | 141 | String str3 = ""; 142 | for( Iterator i = _nbhm.values().iterator(); i.hasNext(); ) { 143 | String val = i.next(); 144 | str3 += val; 145 | } 146 | assertThat("found all vals",str3,anyOf(is("v1v2"),is("v2v1"))); 147 | 148 | assertThat("toString works",_nbhm.toString(), anyOf(is("{k1=v1, k2=v2}"),is("{k2=v2, k1=v1}"))); 149 | } 150 | 151 | public void testSerial() { 152 | assertTrue ( _nbhm.isEmpty() ); 153 | final String k1 = "k1"; 154 | final String k2 = "k2"; 155 | assertThat ( _nbhm.put(k1,"v1"), nullValue() ); 156 | assertThat ( _nbhm.put(k2,"v2"), nullValue() ); 157 | 158 | // Serialize it out 159 | try { 160 | FileOutputStream fos = new FileOutputStream("NBHM_test.txt"); 161 | ObjectOutputStream out = new ObjectOutputStream(fos); 162 | out.writeObject(_nbhm); 163 | out.close(); 164 | } catch(IOException ex) { 165 | ex.printStackTrace(); 166 | } 167 | 168 | // Read it back 169 | try { 170 | File f = new File("NBHM_test.txt"); 171 | FileInputStream fis = new FileInputStream(f); 172 | ObjectInputStream in = new ObjectInputStream(fis); 173 | NonBlockingIdentityHashMap nbhm = (NonBlockingIdentityHashMap)in.readObject(); 174 | in.close(); 175 | assertThat("serialization works",nbhm.toString(), anyOf(is("{k1=v1, k2=v2}"),is("{k2=v2, k1=v1}"))); 176 | if( !f.delete() ) throw new IOException("delete failed"); 177 | } catch(IOException ex) { 178 | ex.printStackTrace(); 179 | } catch(ClassNotFoundException ex) { 180 | ex.printStackTrace(); 181 | } 182 | } 183 | 184 | public void testIterationBig2() { 185 | final int CNT = 10000; 186 | NonBlockingIdentityHashMap nbhm = new NonBlockingIdentityHashMap(); 187 | final String v = "v"; 188 | for( int i=0; 
i=0 && x<=(CNT-1)); 216 | } 217 | assertThat("Found 10000 ints",sz,is(CNT)); 218 | assertThat("Found all integers in list",sum,is(CNT*(CNT-1)/2)); 219 | 220 | assertThat( "can remove 3", _nbhm.remove(keys[3]), is(vals[3]) ); 221 | assertThat( "can remove 4", _nbhm.remove(keys[4]), is(vals[4]) ); 222 | sz =0; 223 | sum = 0; 224 | for( String s : _nbhm.keySet() ) { 225 | sz++; 226 | assertThat("",s.charAt(0),is('k')); 227 | int x = Integer.parseInt(s.substring(1)); 228 | sum += x; 229 | assertTrue(x>=0 && x<=(CNT-1)); 230 | String v = _nbhm.get(s); 231 | assertThat("",v.charAt(0),is('v')); 232 | assertThat("",s.substring(1),is(v.substring(1))); 233 | } 234 | assertThat("Found "+(CNT-2)+" ints",sz,is(CNT-2)); 235 | assertThat("Found all integers in list",sum,is(CNT*(CNT-1)/2 - (3+4))); 236 | } 237 | 238 | // Do some simple concurrent testing 239 | public void testConcurrentSimple() throws InterruptedException { 240 | final NonBlockingIdentityHashMap nbhm = new NonBlockingIdentityHashMap(); 241 | final String [] keys = new String[20000]; 242 | for( int i=0; i<20000; i++ ) 243 | keys[i]="k"+i; 244 | 245 | // In 2 threads, add & remove even & odd elements concurrently 246 | Thread t1 = new Thread() { public void run() { work_helper(nbhm,"T1",1,keys); } }; 247 | t1.start(); 248 | work_helper(nbhm,"T0",0,keys); 249 | t1.join(); 250 | 251 | // In the end, all members should be removed 252 | StringBuffer buf = new StringBuffer(); 253 | buf.append("Should be emptyset but has these elements: {"); 254 | boolean found = false; 255 | for( String x : nbhm.keySet() ) { 256 | buf.append(" ").append(x); 257 | found = true; 258 | } 259 | if( found ) System.out.println(buf+" }"); 260 | assertThat( "concurrent size=0", nbhm.size(), is(0) ); 261 | for( String x : nbhm.keySet() ) { 262 | assertTrue("No elements so never get here",false); 263 | } 264 | } 265 | 266 | void work_helper(NonBlockingIdentityHashMap nbhm, String thrd, int d, String[] keys) { 267 | final int ITERS = 20000; 268 | for( int j=0; j<10; j++ ) { 269 | long start = System.nanoTime(); 270 | for( int i=d; i entry : items.entrySet()) { 299 | if (entry.getKey().equals(Long.valueOf(100))) { 300 | assertEquals("entry[100].getValue()==\"100\"", "100", entry.getValue()); 301 | found100 = true; 302 | } else if (entry.getKey().equals(Long.valueOf(101))) { 303 | assertEquals("entry[101].getValue()==\"101\"", "101", entry.getValue()); 304 | found101 = true; 305 | } 306 | } 307 | assertTrue("entrySet().contains([100])", found100); 308 | assertTrue("entrySet().contains([101])", found101); 309 | } 310 | 311 | // Concurrent insertion & then iterator test. 
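// Several feeder threads insert disjoint key ranges behind a CyclicBarrier; the test then checks that size() and the iterators agree on the total item count.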
312 | static public void testNonBlockingIdentityHashMapIterator() throws InterruptedException { 313 | final int ITEM_COUNT1 = 1000; 314 | final int THREAD_COUNT = 5; 315 | final int PER_CNT = ITEM_COUNT1/THREAD_COUNT; 316 | final int ITEM_COUNT = PER_CNT*THREAD_COUNT; // fix roundoff for odd thread counts 317 | 318 | NonBlockingIdentityHashMap nbhml = new NonBlockingIdentityHashMap(); 319 | // use a barrier to open the gate for all threads at once to avoid rolling 320 | // start and no actual concurrency 321 | final CyclicBarrier barrier = new CyclicBarrier(THREAD_COUNT); 322 | final ExecutorService ex = Executors.newFixedThreadPool(THREAD_COUNT); 323 | final CompletionService co = new ExecutorCompletionService(ex); 324 | for( int i=0; i { 344 | static private final Random _rand = new Random(System.currentTimeMillis()); 345 | private final NonBlockingIdentityHashMap _map; 346 | private final int _count; 347 | private final CyclicBarrier _barrier; 348 | private final long _offset; 349 | public NBHMLFeeder(final NonBlockingIdentityHashMap map, final int count, final CyclicBarrier barrier, final long offset) { 350 | _map = map; 351 | _count = count; 352 | _barrier = barrier; 353 | _offset = offset; 354 | } 355 | public Object call() throws Exception { 356 | _barrier.await(); // barrier, to force racing start 357 | for( long j=0; j<_count; j++ ) 358 | _map.put(j+_offset, new TestKey(_rand.nextLong(),_rand.nextInt (), (short) _rand.nextInt(Short.MAX_VALUE))); 359 | return null; 360 | } 361 | } 362 | 363 | // --- TestKey --- 364 | // Funny key tests all sorts of things, has a pre-wired hashCode & equals. 365 | static private final class TestKey { 366 | public final int _type; 367 | public final long _id; 368 | public final int _hash; 369 | public TestKey(final long id, final int type, int hash) { 370 | _id = id; 371 | _type = type; 372 | _hash = hash; 373 | } 374 | public int hashCode() { return _hash; } 375 | public boolean equals(Object object) { 376 | if (null == object) return false; 377 | if (object == this) return true; 378 | if (object.getClass() != this.getClass()) return false; 379 | final TestKey other = (TestKey) object; 380 | return (this._type == other._type && this._id == other._id); 381 | } 382 | public String toString() { return String.format("%s:%d,%d,%d", getClass().getSimpleName(), _id, _type, _hash); } 383 | } 384 | 385 | // --- Customer Test Case 3 ------------------------------------------------ 386 | private TestKeyFeeder getTestKeyFeeder() { 387 | final TestKeyFeeder feeder = new TestKeyFeeder(); 388 | feeder.checkedPut(10401000001844L, 657829272, 680293140); // section 12 389 | feeder.checkedPut(10401000000614L, 657829272, 401326994); // section 12 390 | feeder.checkedPut(10400345749304L, 2095121916, -9852212); // section 12 391 | feeder.checkedPut(10401000002204L, 657829272, 14438460); // section 12 392 | feeder.checkedPut(10400345749234L, 1186831289, -894006017); // section 12 393 | feeder.checkedPut(10401000500234L, 969314784, -2112018706); // section 12 394 | feeder.checkedPut(10401000000284L, 657829272, 521425852); // section 12 395 | feeder.checkedPut(10401000002134L, 657829272, 208406306); // section 12 396 | feeder.checkedPut(10400345749254L, 2095121916, -341939818); // section 12 397 | feeder.checkedPut(10401000500384L, 969314784, -2136811544); // section 12 398 | feeder.checkedPut(10401000001944L, 657829272, 935194952); // section 12 399 | feeder.checkedPut(10400345749224L, 1186831289, -828214183); // section 12 400 | feeder.checkedPut(10400345749244L, 
2095121916, -351234120); // section 12 401 | feeder.checkedPut(10400333128994L, 2095121916, -496909430); // section 12 402 | feeder.checkedPut(10400333197934L, 2095121916, 2147144926); // section 12 403 | feeder.checkedPut(10400333197944L, 2095121916, -2082366964); // section 12 404 | feeder.checkedPut(10400336947684L, 2095121916, -1404212288); // section 12 405 | feeder.checkedPut(10401000000594L, 657829272, 124369790); // section 12 406 | feeder.checkedPut(10400331896264L, 2095121916, -1028383492); // section 12 407 | feeder.checkedPut(10400332415044L, 2095121916, 1629436704); // section 12 408 | feeder.checkedPut(10400345749614L, 1186831289, 1027996827); // section 12 409 | feeder.checkedPut(10401000500424L, 969314784, -1871616544); // section 12 410 | feeder.checkedPut(10400336947694L, 2095121916, -1468802722); // section 12 411 | feeder.checkedPut(10410002672481L, 2154973, 1515288586); // section 12 412 | feeder.checkedPut(10410345749171L, 2154973, 2084791828); // section 12 413 | feeder.checkedPut(10400004960671L, 2154973, 1554754674); // section 12 414 | feeder.checkedPut(10410009983601L, 2154973, -2049707334); // section 12 415 | feeder.checkedPut(10410335811601L, 2154973, 1547385114); // section 12 416 | feeder.checkedPut(10410000005951L, 2154973, -1136117016); // section 12 417 | feeder.checkedPut(10400004938331L, 2154973, -1361373018); // section 12 418 | feeder.checkedPut(10410001490421L, 2154973, -818792874); // section 12 419 | feeder.checkedPut(10400001187131L, 2154973, 649763142); // section 12 420 | feeder.checkedPut(10410000409071L, 2154973, -614460616); // section 12 421 | feeder.checkedPut(10410333717391L, 2154973, 1343531416); // section 12 422 | feeder.checkedPut(10410336680071L, 2154973, -914544144); // section 12 423 | feeder.checkedPut(10410002068511L, 2154973, -746995576); // section 12 424 | feeder.checkedPut(10410336207851L, 2154973, 863146156); // section 12 425 | feeder.checkedPut(10410002365251L, 2154973, 542724164); // section 12 426 | feeder.checkedPut(10400335812581L, 2154973, 2146284796); // section 12 427 | feeder.checkedPut(10410337345361L, 2154973, -384625318); // section 12 428 | feeder.checkedPut(10410000409091L, 2154973, -528258556); // section 12 429 | return feeder; 430 | } 431 | 432 | // --- 433 | static private class TestKeyFeeder { 434 | private final Hashtable> _items = new Hashtable>(); 435 | private int _size = 0; 436 | public int size() { return _size; } 437 | // Put items into the hashtable, sorted by 'type' into LinkedLists. 
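// Each distinct 'type' later gets its own feeder thread in getMapMultithreaded(), so keys of a given type are always inserted from a single thread.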
438 | public void checkedPut(final long id, final int type, final int hash) { 439 | _size++; 440 | final TestKey item = new TestKey(id, type, hash); 441 | if( !_items.containsKey(type) ) 442 | _items.put(type, new LinkedList()); 443 | _items.get(type).add(item); 444 | } 445 | 446 | public NonBlockingIdentityHashMap getMapMultithreaded() throws InterruptedException, ExecutionException { 447 | final int threadCount = _items.keySet().size(); 448 | final NonBlockingIdentityHashMap map = new NonBlockingIdentityHashMap(); 449 | 450 | // use a barrier to open the gate for all threads at once to avoid rolling start and no actual concurrency 451 | final CyclicBarrier barrier = new CyclicBarrier(threadCount); 452 | final ExecutorService ex = Executors.newFixedThreadPool(threadCount); 453 | final CompletionService co = new ExecutorCompletionService(ex); 454 | for( Integer type : _items.keySet() ) { 455 | // A linked-list of things to insert 456 | List items = _items.get(type); 457 | TestKeyFeederThread feeder = new TestKeyFeederThread(type, items, map, barrier); 458 | co.submit(feeder); 459 | } 460 | 461 | // wait for all threads to return 462 | int itemCount = 0; 463 | for( int retCount = 0; retCount < threadCount; retCount++ ) { 464 | final Future result = co.take(); 465 | itemCount += result.get(); 466 | } 467 | ex.shutdown(); 468 | return map; 469 | } 470 | } 471 | 472 | // --- TestKeyFeederThread 473 | static private class TestKeyFeederThread implements Callable { 474 | private final int _type; 475 | private final NonBlockingIdentityHashMap _map; 476 | private final List _items; 477 | private final CyclicBarrier _barrier; 478 | public TestKeyFeederThread(final int type, final List items, final NonBlockingIdentityHashMap map, final CyclicBarrier barrier) { 479 | _type = type; 480 | _map = map; 481 | _items = items; 482 | _barrier = barrier; 483 | } 484 | 485 | public Integer call() throws Exception { 486 | _barrier.await(); 487 | int count = 0; 488 | for( TestKey item : _items ) { 489 | if (_map.contains(item._id)) { 490 | System.err.printf("COLLISION DETECTED: %s exists\n", item.toString()); 491 | } 492 | final TestKey exists = _map.putIfAbsent(item._id, item); 493 | if (exists == null) { 494 | count++; 495 | } else { 496 | System.err.printf("COLLISION DETECTED: %s exists as %s\n", item.toString(), exists.toString()); 497 | } 498 | } 499 | return count; 500 | } 501 | } 502 | 503 | // --- 504 | public void testNonBlockingIdentityHashMapIteratorMultithreaded() throws InterruptedException, ExecutionException { 505 | TestKeyFeeder feeder = getTestKeyFeeder(); 506 | final int itemCount = feeder.size(); 507 | 508 | // validate results 509 | final NonBlockingIdentityHashMap items = feeder.getMapMultithreaded(); 510 | assertEquals("size()", itemCount, items.size()); 511 | 512 | assertEquals("values().size()", itemCount, items.values().size()); 513 | 514 | assertEquals("entrySet().size()", itemCount, items.entrySet().size()); 515 | 516 | int iteratorCount = 0; 517 | for( TestKey m : items.values() ) 518 | iteratorCount++; 519 | // sometimes a different result comes back the second time 520 | int iteratorCount2 = 0; 521 | for( Iterator it = items.values().iterator(); it.hasNext(); ) { 522 | iteratorCount2++; 523 | it.next(); 524 | } 525 | assertEquals("iterator counts differ", iteratorCount, iteratorCount2); 526 | assertEquals("values().iterator() count", itemCount, iteratorCount); 527 | } 528 | 529 | // This test is a copy of the JCK test Hashtable2027, which is incorrect. 
530 | // The test requires a particular order of values to appear in the esa 531 | // array - but this is not part of the spec. A different implementation 532 | // might put the same values into the array but in a different order. 533 | //public void testToArray() { 534 | // NonBlockingIdentityHashMap ht = new NonBlockingIdentityHashMap(); 535 | // 536 | // ht.put("Nine", new Integer(9)); 537 | // ht.put("Ten", new Integer(10)); 538 | // ht.put("Ten1", new Integer(100)); 539 | // 540 | // Collection es = ht.values(); 541 | // 542 | // Object [] esa = es.toArray(); 543 | // 544 | // ht.remove("Ten1"); 545 | // 546 | // assertEquals( "size check", es.size(), 2 ); 547 | // assertEquals( "iterator_order[0]", new Integer( 9), esa[0] ); 548 | // assertEquals( "iterator_order[1]", new Integer(10), esa[1] ); 549 | //} 550 | } 551 | -------------------------------------------------------------------------------- /Testing/NBHM_Tester/NBHML_Tester2.java: -------------------------------------------------------------------------------- 1 | /* 2 | * Written by Cliff Click and released to the public domain, as explained at 3 | * http://creativecommons.org/licenses/publicdomain 4 | * Additional test cases provided by Andy Martin of TeleAtlas. 5 | */ 6 | 7 | package Testing.NBHM_Tester; 8 | import java.io.*; 9 | import java.util.*; 10 | import java.util.Map.Entry; 11 | import java.util.concurrent.*; 12 | import junit.framework.TestCase; 13 | import org.cliffc.high_scale_lib.*; 14 | import static org.hamcrest.CoreMatchers.*; 15 | import static org.junit.Assert.*; 16 | 17 | // Test NonBlockingHashMapLong via JUnit 18 | public class NBHML_Tester2 extends TestCase { 19 | public static void main(String args[]) { 20 | org.junit.runner.JUnitCore.main("Testing.NBHM_Tester.NBHML_Tester2"); 21 | } 22 | 23 | private NonBlockingHashMapLong _nbhml; 24 | protected void setUp () { _nbhml = new NonBlockingHashMapLong(); } 25 | protected void tearDown() { _nbhml = null; } 26 | 27 | // Test some basic stuff; add a few keys, remove a few keys 28 | public void testBasic() { 29 | assertTrue ( _nbhml.isEmpty() ); 30 | assertThat ( _nbhml.put(1,"v1"), nullValue() ); 31 | checkSizes (1); 32 | assertThat ( _nbhml.putIfAbsent(2,"v2"), nullValue() ); 33 | checkSizes (2); 34 | assertTrue ( _nbhml.containsKey(2) ); 35 | assertThat ( _nbhml.put(1,"v1a"), is("v1") ); 36 | assertThat ( _nbhml.put(2,"v2a"), is("v2") ); 37 | checkSizes (2); 38 | assertThat ( _nbhml.putIfAbsent(2,"v2b"), is("v2a") ); 39 | assertThat ( _nbhml.remove(1), is("v1a") ); 40 | assertFalse( _nbhml.containsKey(1) ); 41 | checkSizes (1); 42 | assertThat ( _nbhml.remove(1), nullValue() ); 43 | assertThat ( _nbhml.remove(2), is("v2a") ); 44 | checkSizes (0); 45 | assertThat ( _nbhml.remove(2), nullValue() ); 46 | assertThat ( _nbhml.remove("k3"), nullValue() ); 47 | assertTrue ( _nbhml.isEmpty() ); 48 | 49 | assertThat ( _nbhml.put(0,"v0"), nullValue() ); 50 | assertTrue ( _nbhml.containsKey(0) ); 51 | checkSizes (1); 52 | assertThat ( _nbhml.remove(0), is("v0") ); 53 | assertFalse( _nbhml.containsKey(0) ); 54 | checkSizes (0); 55 | 56 | assertThat ( _nbhml.replace(0,"v0"), nullValue() ); 57 | assertFalse( _nbhml.containsKey(0) ); 58 | assertThat ( _nbhml.put(0,"v0"), nullValue() ); 59 | assertEquals(_nbhml.replace(0,"v0a"), "v0" ); 60 | assertEquals(_nbhml.get(0), "v0a" ); 61 | assertThat ( _nbhml.remove(0), is("v0a") ); 62 | assertFalse( _nbhml.containsKey(0) ); 63 | checkSizes (0); 64 | 65 | assertThat ( _nbhml.replace(1,"v1"), nullValue() ); 66 | assertFalse( 
_nbhml.containsKey(1) ); 67 | assertThat ( _nbhml.put(1,"v1"), nullValue() ); 68 | assertEquals(_nbhml.replace(1,"v1a"), "v1" ); 69 | assertEquals(_nbhml.get(1), "v1a" ); 70 | assertThat ( _nbhml.remove(1), is("v1a") ); 71 | assertFalse( _nbhml.containsKey(1) ); 72 | checkSizes (0); 73 | 74 | // Simple insert of simple keys, with no reprobing on insert until the 75 | // table gets full exactly. Then do a 'get' on the totally full table. 76 | NonBlockingHashMapLong map = new NonBlockingHashMapLong(32); 77 | for( int i = 1; i < 32; i++ ) 78 | map.put(i, new Object()); 79 | map.get(33); // this causes a NPE 80 | } 81 | 82 | // Check all iterators for correct size counts 83 | private void checkSizes(int expectedSize) { 84 | assertEquals( "size()", _nbhml.size(), expectedSize ); 85 | Collection vals = _nbhml.values(); 86 | checkSizes("values()",vals.size(),vals.iterator(),expectedSize); 87 | Set keys = _nbhml.keySet(); 88 | checkSizes("keySet()",keys.size(),keys.iterator(),expectedSize); 89 | Set> ents = _nbhml.entrySet(); 90 | checkSizes("entrySet()",ents.size(),ents.iterator(),expectedSize); 91 | } 92 | 93 | // Check that the iterator iterates the correct number of times 94 | private void checkSizes(String msg, int sz, Iterator it, int expectedSize) { 95 | assertEquals( msg, expectedSize, sz ); 96 | int result = 0; 97 | while (it.hasNext()) { 98 | result++; 99 | it.next(); 100 | } 101 | assertEquals( msg, expectedSize, result ); 102 | } 103 | 104 | 105 | public void testIterationBig2() { 106 | final int CNT = 10000; 107 | assertThat( _nbhml.size(), is(0) ); 108 | final String v = "v"; 109 | for( int i=0; i> i = _nbhml.entrySet().iterator(); i.hasNext(); ) { 125 | Map.Entry e = i.next(); 126 | str1 += e.getKey(); 127 | } 128 | assertThat("found all entries",str1,anyOf(is("12"),is("21"))); 129 | 130 | String str2 = ""; 131 | for( Iterator i = _nbhml.keySet().iterator(); i.hasNext(); ) { 132 | Long key = i.next(); 133 | str2 += key; 134 | } 135 | assertThat("found all keys",str2,anyOf(is("12"),is("21"))); 136 | 137 | String str3 = ""; 138 | for( Iterator i = _nbhml.values().iterator(); i.hasNext(); ) { 139 | String val = i.next(); 140 | str3 += val; 141 | } 142 | assertThat("found all vals",str3,anyOf(is("v1v2"),is("v2v1"))); 143 | 144 | assertThat("toString works",_nbhml.toString(), anyOf(is("{1=v1, 2=v2}"),is("{2=v2, 1=v1}"))); 145 | } 146 | 147 | public void testSerial() { 148 | assertTrue ( _nbhml.isEmpty() ); 149 | assertThat ( _nbhml.put(0x12345678L,"v1"), nullValue() ); 150 | assertThat ( _nbhml.put(0x87654321L,"v2"), nullValue() ); 151 | 152 | // Serialize it out 153 | try { 154 | FileOutputStream fos = new FileOutputStream("NBHML_test.txt"); 155 | ObjectOutputStream out = new ObjectOutputStream(fos); 156 | out.writeObject(_nbhml); 157 | out.close(); 158 | } catch(IOException ex) { 159 | ex.printStackTrace(); 160 | } 161 | 162 | // Read it back 163 | try { 164 | File f = new File("NBHML_test.txt"); 165 | FileInputStream fis = new FileInputStream(f); 166 | ObjectInputStream in = new ObjectInputStream(fis); 167 | NonBlockingHashMapLong nbhml = (NonBlockingHashMapLong)in.readObject(); 168 | in.close(); 169 | assertEquals(_nbhml.toString(),nbhml.toString()); 170 | if( !f.delete() ) throw new IOException("delete failed"); 171 | } catch(IOException ex) { 172 | ex.printStackTrace(); 173 | } catch(ClassNotFoundException ex) { 174 | ex.printStackTrace(); 175 | } 176 | 177 | } 178 | 179 | public void testIterationBig() { 180 | final int CNT = 10000; 181 | assertThat( _nbhml.size(), is(0) ); 182 | 
for( int i=0; i=0 && x<=(CNT-1)); 192 | } 193 | assertThat("Found 10000 ints",sz,is(CNT)); 194 | assertThat("Found all integers in list",sum,is(CNT*(CNT-1)/2)); 195 | 196 | assertThat( "can remove 3", _nbhml.remove(3), is("v3") ); 197 | assertThat( "can remove 4", _nbhml.remove(4), is("v4") ); 198 | sz =0; 199 | sum = 0; 200 | for( long x : _nbhml.keySet() ) { 201 | sz++; 202 | sum += x; 203 | assertTrue(x>=0 && x<=(CNT-1)); 204 | String v = _nbhml.get(x); 205 | assertThat("",v.charAt(0),is('v')); 206 | assertThat("",x,is(Long.parseLong(v.substring(1)))); 207 | } 208 | assertThat("Found "+(CNT-2)+" ints",sz,is(CNT-2)); 209 | assertThat("Found all integers in list",sum,is(CNT*(CNT-1)/2 - (3+4))); 210 | } 211 | 212 | // Do some simple concurrent testing 213 | public void testConcurrentSimple() throws InterruptedException { 214 | final NonBlockingHashMapLong nbhml = new NonBlockingHashMapLong(); 215 | 216 | // In 2 threads, add & remove even & odd elements concurrently 217 | final int num_thrds = 2; 218 | Thread ts[] = new Thread[num_thrds]; 219 | for( int i=1; i nbhml, int d, int num_thrds) { 245 | String thrd = "T"+d; 246 | final int ITERS = 20000; 247 | for( int j=0; j<10; j++ ) { 248 | //long start = System.nanoTime(); 249 | for( int i=d; i entry : items.entrySet()) { 280 | if (entry.getKey().equals(Long.valueOf(100))) { 281 | assertEquals("entry[100].getValue()==\"100\"", "100", entry.getValue()); 282 | found100 = true; 283 | } else if (entry.getKey().equals(Long.valueOf(101))) { 284 | assertEquals("entry[101].getValue()==\"101\"", "101", entry.getValue()); 285 | found101 = true; 286 | } 287 | } 288 | assertTrue("entrySet().contains([100])", found100); 289 | assertTrue("entrySet().contains([101])", found101); 290 | } 291 | 292 | // --- Customer Test Case 2 ------------------------------------------------ 293 | // Concurrent insertion & then iterator test. 294 | static public void testNonBlockingHashMapIterator() throws InterruptedException { 295 | final int ITEM_COUNT1 = 1000; 296 | final int THREAD_COUNT = 5; 297 | final int PER_CNT = ITEM_COUNT1/THREAD_COUNT; 298 | final int ITEM_COUNT = PER_CNT*THREAD_COUNT; // fix roundoff for odd thread counts 299 | 300 | NonBlockingHashMapLong nbhml = new NonBlockingHashMapLong(); 301 | // use a barrier to open the gate for all threads at once to avoid rolling 302 | // start and no actual concurrency 303 | final CyclicBarrier barrier = new CyclicBarrier(THREAD_COUNT); 304 | final ExecutorService ex = Executors.newFixedThreadPool(THREAD_COUNT); 305 | final CompletionService co = new ExecutorCompletionService(ex); 306 | for( int i=0; i { 326 | static private final Random _rand = new Random(System.currentTimeMillis()); 327 | private final NonBlockingHashMapLong _map; 328 | private final int _count; 329 | private final CyclicBarrier _barrier; 330 | private final long _offset; 331 | public NBHMLFeeder(final NonBlockingHashMapLong map, final int count, final CyclicBarrier barrier, final long offset) { 332 | _map = map; 333 | _count = count; 334 | _barrier = barrier; 335 | _offset = offset; 336 | } 337 | public Object call() throws Exception { 338 | _barrier.await(); // barrier, to force racing start 339 | for( long j=0; j<_count; j++ ) 340 | _map.put(j+_offset, new TestKey(_rand.nextLong(),_rand.nextInt (), (short) _rand.nextInt(Short.MAX_VALUE))); 341 | return null; 342 | } 343 | } 344 | 345 | // --- TestKey --- 346 | // Funny key tests all sorts of things, has a pre-wired hashCode & equals. 
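// hashCode() returns the pre-wired _hash, while equals() compares only _type and _id, so distinct keys can deliberately share a hash code and collide in the table.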
347 | static private final class TestKey { 348 | public final int _type; 349 | public final long _id; 350 | public final int _hash; 351 | public TestKey(final long id, final int type, int hash) { 352 | _id = id; 353 | _type = type; 354 | _hash = hash; 355 | } 356 | public int hashCode() { return _hash; } 357 | public boolean equals(Object object) { 358 | if (null == object) return false; 359 | if (object == this) return true; 360 | if (object.getClass() != this.getClass()) return false; 361 | final TestKey other = (TestKey) object; 362 | return (this._type == other._type && this._id == other._id); 363 | } 364 | public String toString() { return String.format("%s:%d,%d,%d", getClass().getSimpleName(), _id, _type, _hash); } 365 | } 366 | 367 | // --- Customer Test Case 3 ------------------------------------------------ 368 | private TestKeyFeeder getTestKeyFeeder() { 369 | final TestKeyFeeder feeder = new TestKeyFeeder(); 370 | feeder.checkedPut(10401000001844L, 657829272, 680293140); // section 12 371 | feeder.checkedPut(10401000000614L, 657829272, 401326994); // section 12 372 | feeder.checkedPut(10400345749304L, 2095121916, -9852212); // section 12 373 | feeder.checkedPut(10401000002204L, 657829272, 14438460); // section 12 374 | feeder.checkedPut(10400345749234L, 1186831289, -894006017); // section 12 375 | feeder.checkedPut(10401000500234L, 969314784, -2112018706); // section 12 376 | feeder.checkedPut(10401000000284L, 657829272, 521425852); // section 12 377 | feeder.checkedPut(10401000002134L, 657829272, 208406306); // section 12 378 | feeder.checkedPut(10400345749254L, 2095121916, -341939818); // section 12 379 | feeder.checkedPut(10401000500384L, 969314784, -2136811544); // section 12 380 | feeder.checkedPut(10401000001944L, 657829272, 935194952); // section 12 381 | feeder.checkedPut(10400345749224L, 1186831289, -828214183); // section 12 382 | feeder.checkedPut(10400345749244L, 2095121916, -351234120); // section 12 383 | feeder.checkedPut(10400333128994L, 2095121916, -496909430); // section 12 384 | feeder.checkedPut(10400333197934L, 2095121916, 2147144926); // section 12 385 | feeder.checkedPut(10400333197944L, 2095121916, -2082366964); // section 12 386 | feeder.checkedPut(10400336947684L, 2095121916, -1404212288); // section 12 387 | feeder.checkedPut(10401000000594L, 657829272, 124369790); // section 12 388 | feeder.checkedPut(10400331896264L, 2095121916, -1028383492); // section 12 389 | feeder.checkedPut(10400332415044L, 2095121916, 1629436704); // section 12 390 | feeder.checkedPut(10400345749614L, 1186831289, 1027996827); // section 12 391 | feeder.checkedPut(10401000500424L, 969314784, -1871616544); // section 12 392 | feeder.checkedPut(10400336947694L, 2095121916, -1468802722); // section 12 393 | feeder.checkedPut(10410002672481L, 2154973, 1515288586); // section 12 394 | feeder.checkedPut(10410345749171L, 2154973, 2084791828); // section 12 395 | feeder.checkedPut(10400004960671L, 2154973, 1554754674); // section 12 396 | feeder.checkedPut(10410009983601L, 2154973, -2049707334); // section 12 397 | feeder.checkedPut(10410335811601L, 2154973, 1547385114); // section 12 398 | feeder.checkedPut(10410000005951L, 2154973, -1136117016); // section 12 399 | feeder.checkedPut(10400004938331L, 2154973, -1361373018); // section 12 400 | feeder.checkedPut(10410001490421L, 2154973, -818792874); // section 12 401 | feeder.checkedPut(10400001187131L, 2154973, 649763142); // section 12 402 | feeder.checkedPut(10410000409071L, 2154973, -614460616); // section 12 403 | 
feeder.checkedPut(10410333717391L, 2154973, 1343531416); // section 12 404 | feeder.checkedPut(10410336680071L, 2154973, -914544144); // section 12 405 | feeder.checkedPut(10410002068511L, 2154973, -746995576); // section 12 406 | feeder.checkedPut(10410336207851L, 2154973, 863146156); // section 12 407 | feeder.checkedPut(10410002365251L, 2154973, 542724164); // section 12 408 | feeder.checkedPut(10400335812581L, 2154973, 2146284796); // section 12 409 | feeder.checkedPut(10410337345361L, 2154973, -384625318); // section 12 410 | feeder.checkedPut(10410000409091L, 2154973, -528258556); // section 12 411 | return feeder; 412 | } 413 | 414 | // --- 415 | static private class TestKeyFeeder { 416 | private final Hashtable> _items = new Hashtable>(); 417 | private int _size = 0; 418 | public int size() { return _size; } 419 | // Put items into the hashtable, sorted by 'type' into LinkedLists. 420 | public void checkedPut(final long id, final int type, final int hash) { 421 | _size++; 422 | final TestKey item = new TestKey(id, type, hash); 423 | if( !_items.containsKey(type) ) 424 | _items.put(type, new LinkedList()); 425 | _items.get(type).add(item); 426 | } 427 | 428 | public NonBlockingHashMapLong getMapMultithreaded() throws InterruptedException, ExecutionException { 429 | final int threadCount = _items.keySet().size(); 430 | final NonBlockingHashMapLong map = new NonBlockingHashMapLong(); 431 | 432 | // use a barrier to open the gate for all threads at once to avoid rolling start and no actual concurrency 433 | final CyclicBarrier barrier = new CyclicBarrier(threadCount); 434 | final ExecutorService ex = Executors.newFixedThreadPool(threadCount); 435 | final CompletionService co = new ExecutorCompletionService(ex); 436 | for( Integer type : _items.keySet() ) { 437 | // A linked-list of things to insert 438 | List items = _items.get(type); 439 | TestKeyFeederThread feeder = new TestKeyFeederThread(type, items, map, barrier); 440 | co.submit(feeder); 441 | } 442 | 443 | // wait for all threads to return 444 | int itemCount = 0; 445 | for( int retCount = 0; retCount < threadCount; retCount++ ) { 446 | final Future result = co.take(); 447 | itemCount += result.get(); 448 | } 449 | ex.shutdown(); 450 | return map; 451 | } 452 | } 453 | 454 | // --- TestKeyFeederThread 455 | static private class TestKeyFeederThread implements Callable { 456 | private final int _type; 457 | private final NonBlockingHashMapLong _map; 458 | private final List _items; 459 | private final CyclicBarrier _barrier; 460 | public TestKeyFeederThread(final int type, final List items, final NonBlockingHashMapLong map, final CyclicBarrier barrier) { 461 | _type = type; 462 | _map = map; 463 | _items = items; 464 | _barrier = barrier; 465 | } 466 | 467 | public Integer call() throws Exception { 468 | _barrier.await(); 469 | int count = 0; 470 | for( TestKey item : _items ) { 471 | if (_map.contains(item._id)) { 472 | System.err.printf("COLLISION DETECTED: %s exists\n", item.toString()); 473 | } 474 | final TestKey exists = _map.putIfAbsent(item._id, item); 475 | if (exists == null) { 476 | count++; 477 | } else { 478 | System.err.printf("COLLISION DETECTED: %s exists as %s\n", item.toString(), exists.toString()); 479 | } 480 | } 481 | return count; 482 | } 483 | } 484 | 485 | // --- 486 | public void testNonBlockingHashMapIteratorMultithreaded() throws InterruptedException, ExecutionException { 487 | TestKeyFeeder feeder = getTestKeyFeeder(); 488 | final int itemCount = feeder.size(); 489 | 490 | // validate results 491 | 
491 | final NonBlockingHashMapLong<TestKey> items = feeder.getMapMultithreaded();
492 | assertEquals("size()", itemCount, items.size());
493 |
494 | assertEquals("values().size()", itemCount, items.values().size());
495 |
496 | assertEquals("entrySet().size()", itemCount, items.entrySet().size());
497 |
498 | int iteratorCount = 0;
499 | for( TestKey m : items.values() )
500 | iteratorCount++;
501 | // sometimes a different result comes back the second time
502 | int iteratorCount2 = 0;
503 | for( Iterator<TestKey> it = items.values().iterator(); it.hasNext(); ) {
504 | iteratorCount2++;
505 | it.next();
506 | }
507 | assertEquals("iterator counts differ", iteratorCount, iteratorCount2);
508 | assertEquals("values().iterator() count", itemCount, iteratorCount);
509 | }
510 |
511 | }
512 |
--------------------------------------------------------------------------------
/Testing/NBHM_Tester/NBHM_Tester.java:
--------------------------------------------------------------------------------
1 | // A Java Program to more formally test the ideas in my Non-Blocking-Hash-Map
2 |
3 | import java.util.*;
4 | import java.util.concurrent.*;
5 |
6 | class NBHM_Tester {
7 | // Set of States for an individual State Machine.
8 | // Each State is really a pair of memory words.
9 | // The first word is only 0, K, X; the 2nd word is only 0, A/a, B/b, _ or x.
10 | enum S { // States
11 | BAD (0),
12 | MT (1), // No Key, No Value
13 | X0 (2), // Key is X'd out (slot is dead, nothing to copy)
14 | K0 (3), // Key only, Value is NULL
15 | KA (4), // Key/Value-A pair
16 | Ka (5), // Key/Value-A' pair
17 | KB (6), // Key/Value-B pair
18 | Kb (7), // Key/Value-B' pair
19 | K_ (8), // Key/Tombstone - deleted
20 | KX (9); // Key/X pair - copied
21 |
22 | // A field to let me cheapo map to integers
23 | final int _idx;
24 | S(int idx) { _idx=idx; }
25 | static final int MAX = values().length;
26 |
27 | // --- compute_reached ---------------------------------------------------
28 | // Used to test sanity of the allowed-transitions
29 | private void compute_reached(boolean [] reached) {
30 | if( reached[_idx] ) return; // Already reached this state
31 | reached[_idx] = true; // First time reached this state
32 | S[] T = _allowed_transitions; // Short handy name
33 | // Visit all transitions...
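// NOTE (editorial sketch, not part of the original source): _allowed_transitions
// is a flat array of {from,to} pairs, so pair i occupies indices 2*i and
// 2*i+1, with a trailing null sentinel.  Under that encoding, the
// reachability walk here is conceptually:
//
//   for( int i=0; i<T.length-1; i+=2 )
//     if( T[i] == this )                    // a transition leaving this state...
//       T[i+1].compute_reached(reached);    // ...reaches its target state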
34 | for( int i=0; i dead_slot 51 | S.MT, S.K0, // Empty -> Key insertion 52 | 53 | S.K0, S.KA, // Key -> Key/A pair 54 | S.K0, S.Ka, // Key -> Key/A' pair 55 | S.K0, S.KB, // Key -> Key/B pair 56 | S.K0, S.Kb, // Key -> Key/B' pair 57 | S.K0, S.K_, // Key -> deleted 58 | 59 | S.KA, S.KB, // Key/A -> Key/B 60 | S.KA, S.K_, // Key/A -> deleted 61 | 62 | S.KB, S.KA, // Key/B -> Key/A 63 | S.KB, S.K_, // Key/B -> deleted 64 | 65 | S.K_, S.KA, // deleted -> Key/A 66 | S.K_, S.KB, // deleted -> Key/B 67 | 68 | S.Ka, S.KA, // Key/A' -> Key/A (strip prime) 69 | S.Ka, S.Kb, // Key/A' -> Key/B' 70 | S.Ka, S.K0, // Key/A' -> Key alone (same as deleted-prime) 71 | S.Ka, S.KB, // Key/A' -> Key/B (last write overrides copy) 72 | S.Ka, S.K_, // Key/A' -> Key delete 73 | 74 | S.Kb, S.KB, // Key/B' -> Key/B (strip prime) 75 | S.Kb, S.Ka, // Key/B' -> Key/A' 76 | S.Kb, S.K0, // Key/B' -> Key alone (same as deleted-prime) 77 | S.Kb, S.KA, // Key/B' -> Key/A (last write overrides copy) 78 | S.Kb, S.K_, // Key/B' -> Key delete 79 | 80 | S.K0, S.KX, // Key -> copied 81 | S.KA, S.KX, // Key/A -> copied 82 | S.KB, S.KX, // Key/B -> copied 83 | S.K_, S.KX, // deleted -> copied 84 | 85 | null 86 | }; 87 | // power-of-2 larger than _allowed_transitions.length 88 | private static final int LOG2TRAN = 6; 89 | private static final int MAXTRAN = 1< hash = 397 | new ConcurrentHashMap(); 398 | private int _hash; 399 | public final int hashCode() { return _hash; } 400 | // Two Historys are 'equals' if they have the same state sequences. 401 | public final boolean equals( Object x ) { 402 | if( !(x instanceof History) ) return false; 403 | History h = (History)x; 404 | if( _events.length != h._events.length ) return false; 405 | for( int i=0; i<_events.length; i++ ) 406 | if( !_events[i].equals(h._events[i]) ) 407 | return false; 408 | return true; 409 | } 410 | 411 | // --- canonical ----------------------------------------------------------- 412 | // Return the canonical History here, using the hash table. 413 | // Allows Histories to be compared using pointer-equivalence. 414 | private History canonical( ) { 415 | if( _events.length > 0 ) { 416 | Event e = _events[_events.length-1]; 417 | S end = _allowed_transitions[e.tran()+1]; 418 | assert e.old_or_new() || (end != S.Ka && end != S.Kb) : 419 | "No Primes in old table: "+e.old_or_new()+" "+end; 420 | } 421 | History old = hash.putIfAbsent(this,this); 422 | if( old == null ) return this; 423 | // Combine thread-ids in the old History 424 | for( int i=0; i<_tids.length; i++ ) 425 | old._tids[i] |= _tids[i]; 426 | return old; 427 | } 428 | 429 | // --- History ----------------------------------------------------------- 430 | // The initial empty history 431 | private History() { 432 | _events = new Event[0]; 433 | _tids = new int[_events.length]; 434 | _hash = 1; 435 | _old = S.MT; 436 | _new = S.MT; 437 | } 438 | public static History make() { return new History().canonical(); } 439 | 440 | // --- History ----------------------------------------------------------- 441 | // Extend an existing history 442 | private History(History h, Event e, Thrd t) { 443 | assert e.is_copyread() || (e.old_or_new() ? 
h._new : h._old) == _allowed_transitions[e.tran()]; 444 | int idx = h._events.length; 445 | //assert (idx == 0) || !e.is_copyread() || !h._events[idx-1].is_copyread() : "no 2 copyreads in a row "+h; 446 | 447 | _events = new Event[idx+1]; 448 | System.arraycopy(h._events,0,_events,0,idx); 449 | _tids = new int [idx+1]; 450 | System.arraycopy(h._tids ,0,_tids ,0,idx); 451 | _events[idx] = e; 452 | _tids [idx] = t != null ? (1< 0 ) _tids[_tids.length-1] |= (1< 0 ) 476 | return add_at_goal(t); // BREAKS COPYREAD HASHING? 477 | Event ec = new Event( t, old_or_new ); // Add a coherent-copy-read 478 | return new History(this,ec,t).canonical(); 479 | } 480 | 481 | // --- last_read --------------------------------------------------------- 482 | // Last value read by this thread for the given FSM. Only interesting for 483 | // making changes in the OTHER FSM. 484 | public S last_read( Thrd_copy copy, boolean old_or_new ) { 485 | for( int i=_events.length-1; i>=0; i-- ) { 486 | Event e = _events[i]; 487 | if( e.old_or_new() == old_or_new ) { // Matching FSM 488 | boolean was = (_tids[i]&(1<=0 ; i-- ) { // Find last update to this FSM 492 | e = _events[i]; 493 | if( e.old_or_new() == old_or_new && !e.is_copyread() ) { 494 | return _allowed_transitions[e.tran()+1]; 495 | } 496 | } 497 | return S.MT; 498 | } 499 | // Or a normal copy-thread update 500 | if( was ) 501 | return _allowed_transitions[e.tran()+1]; 502 | } 503 | } 504 | return S.MT; // Not ever read before 505 | } 506 | 507 | // --- toString ---------------------------------------------------------- 508 | // Pretty print 509 | public String toString() { 510 | S s_old = S.MT; 511 | S s_new = S.MT; 512 | StringBuffer buf = new StringBuffer(); 513 | buf.append("(").append(s_old).append("/").append(s_new); 514 | for( int i=0; i<_events.length; i++ ) { 515 | buf.append(" --"); 516 | // Print all threads involved here 517 | long tids = _tids[i]; 518 | int t=0; 519 | boolean first = true; 520 | while( tids != 0 ) { 521 | if( (tids & (1< [").append(e.old_or_new()?"new ":"old ").append(s).append("]"); 534 | } else { 535 | assert _allowed_transitions[e.tran()] == s; 536 | s = _allowed_transitions[e.tran()+1]; // New State 537 | if( e.old_or_new() ) s_new = s; else s_old = s; 538 | // Print the New World Order 539 | buf.append("--> ").append(s_old).append("/").append(s_new); 540 | } 541 | } 542 | buf.append(")"); 543 | assert s_old == _old; 544 | assert s_new == _new; 545 | return buf.toString(); 546 | } 547 | 548 | // --- printAll ---------------------------------------------------------- 549 | // Pretty print ALL histories 550 | public static void printAll () { 551 | for( History h : hash.keySet() ) { 552 | System.out.println(h); 553 | } 554 | } 555 | 556 | public static void printComplete() { 557 | for( History h : hash.keySet() ) { 558 | if( h._complete ) 559 | System.out.println(h); 560 | } 561 | } 562 | 563 | // --- witness ----------------------------------------------------------- 564 | // Report back all the visible 'get' values possible 565 | public static void printWitness() { 566 | for( History h : hash.keySet() ) { 567 | if( h._complete ) 568 | System.out.println(h.witness()+" "+h); 569 | } 570 | } 571 | 572 | /** 573 | * Describe witness method here. 
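* (Editorial summary of the implementation below: replay this History's
* update events and record, after each one, the value a concurrent get()
* could observe -- reading the old table unless it is dead (X0/KX), and
* flattening primed states to their plain equivalents.  For example, a
* history that inserts value A and then deletes it witnesses "{KA MT }";
* the key-only K0 step flattens to MT and so is not printed.)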
574 | * 575 | * @return a String value 576 | */ 577 | public String witness() { 578 | S s_old = S.MT; 579 | S s_new = S.MT; 580 | S s_last = S.MT; 581 | StringBuffer buf = new StringBuffer(); 582 | buf.append("{"); 583 | for( int i=0; i<_events.length; i++ ) { 584 | Event e = _events[i]; 585 | if( !e.is_copyread() ) { 586 | // Update the states based on the transition 587 | S s = e.old_or_new() ? s_new : s_old; 588 | s = _allowed_transitions[e.tran()+1]; // New State 589 | if( e.old_or_new() ) s_new = s; else s_old = s; 590 | // Read from old first, or new if old is dead 591 | s = (s_old == S.X0 || s_old == S.KX) ? s_new : s_old; 592 | // 'flatten' answers 593 | if( s == S.K0 || s == S.K_ ) s = S.MT; 594 | if( s == S.Ka ) s = S.KA; 595 | if( s == S.Kb ) s = S.KB; 596 | if( s != s_last ) { 597 | buf.append(s).append(" "); 598 | s_last = s; 599 | } 600 | } 601 | } 602 | buf.append("}"); 603 | return buf.toString(); 604 | } 605 | 606 | // --- search ------------------------------------------------------------ 607 | // Search the state space for running N threads, each stepping to some 608 | // goal. Try stepping each thread 1 step from the current state. 609 | public void search(final Thrd[] q) { 610 | // Search 1 step for each thread 611 | boolean all_threads_done = true; 612 | for( int i=0; i make_map( int impl ) { 22 | switch( impl ) { 23 | case 1: return null; //new Hashtable(0); 24 | case 2: return null; // new CliffWrapHerlihy(); // was a non-blocking HashSet implementation from Maurice Herlihy 25 | case 3: return new ConcurrentHashMap(16,0.75f, 16); // force to 16 striping 26 | case 4: return new ConcurrentHashMap(16,0.75f, 256); // force to 256 striping 27 | case 5: return new ConcurrentHashMap(16,0.75f,4096); // force to 4096 striping 28 | case 6: return new NonBlockingHashMap(); 29 | default: throw new Error("Bad imple"); 30 | } 31 | } 32 | static String names[] = { 33 | "ALL", 34 | "HashTable", 35 | "HerlihyHashSet", 36 | "CHM_16", 37 | "CHM_256", 38 | "CHM_4096", 39 | "NBHashMap", 40 | }; 41 | 42 | 43 | static String KEYS[]; 44 | static volatile boolean _start; 45 | static volatile boolean _stop; 46 | static final int NUM_CPUS = Runtime.getRuntime().availableProcessors(); 47 | 48 | static int check( String arg, String msg, int lower, int upper ) throws Exception { 49 | return check( Integer.parseInt(arg), msg, lower, upper ); 50 | } 51 | static int check( int x, String msg, int lower, int upper ) throws Exception { 52 | if( x < lower || x > upper ) 53 | throw new Error(msg+" must be from "+lower+" to "+upper); 54 | return x; 55 | } 56 | 57 | public static void main( String args[] ) throws Exception { 58 | // Parse args 59 | try { 60 | _read_ratio = check( args[0], "read%", 0, 100 ); 61 | _thread_min = check( args[1], "thread_min", 1, 100000 ); 62 | _thread_max = check( args[2], "thread_max", 1, 100000 ); 63 | _thread_incr = check( args[3], "thread_incr", 1, 100000 ); 64 | _table_size = check( args[4], "table_size", 1, 100000000 ); 65 | _map_impl = check( args[5], "implementation", -1, names.length ); 66 | 67 | _gr = (_read_ratio<<20)/100; 68 | _pr = (((1<<20) - _gr)>>1) + _gr; 69 | 70 | int trips = (_thread_max - _thread_min)/_thread_incr; 71 | _thread_max = trips*_thread_incr + _thread_min; 72 | 73 | } catch( Exception e ) { 74 | System.out.println("Usage: perf_hash_test read%[0=churn test] thread-min thread-max thread-increment hash_table_size impl[All=0,Hashtable=1,HerlihyHashSet=2,CHM_16=3,CHM_256=4,CHM_4096=5,NonBlockingHashMap=6]"); 75 | throw e; 76 | } 77 | 78 | 
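// NOTE (editorial sketch, not part of the original source): _gr and _pr are
// thresholds in a 20-bit fixed-point space.  Each worker draws x in [0,2^20)
// and does a get when x < _gr, a putIfAbsent when _gr <= x < _pr, and a
// remove otherwise, so writes split the non-read fraction evenly.  For
// read%=75:
//
//   _gr = (75<<20)/100;                  // 786432 -> 75%   gets
//   _pr = (((1<<20) - _gr)>>1) + _gr;    // 917504 -> 12.5% puts
//   // remaining 12.5% of the 2^20 space ->        12.5% removes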
System.out.print( _read_ratio+"% gets, "+ 79 | ((100-_read_ratio)>>1)+"% inserts, "+ 80 | ((100-_read_ratio)>>1)+"% removes, " + 81 | "table_size="+_table_size); 82 | if( _read_ratio==0 ) 83 | System.out.print(" -- churn"); 84 | String name = _map_impl == -1 ? "Best" : names[_map_impl]; 85 | System.out.println(" "+name); 86 | System.out.println("Threads from "+_thread_min+" to "+_thread_max+" by "+_thread_incr); 87 | 88 | // Do some warmup 89 | int keymax = 1; 90 | while( keymax < _table_size ) keymax<<=1; 91 | if( _read_ratio == 0 ) keymax = 1024*1024; // The churn test uses a large key set 92 | KEYS = new String[keymax]; 93 | int [] histo = new int[64]; 94 | for( int i=0; i>>(32-6)]++; 97 | } 98 | // verify good key spread to help ConcurrentHashMap 99 | //for( int i=0; i 0 ) { 117 | run_till_stable(num_threads,num_trials,_map_impl); 118 | } else if( _map_impl == 0 ) { 119 | for( int i=1; i HM = make_map(impl); 129 | if( HM == null ) return; 130 | String name = names[impl]; 131 | System.out.printf("=== %10.10s %3d",name,num_threads); 132 | 133 | // Quicky sanity check 134 | for( int i=0; i<100; i++ ) { 135 | HM.put(KEYS[i],KEYS[i]); 136 | for( int j=0; j 0 ) 168 | System.out.printf("(%5.2f)",(double)reprobes/(double)sum_ops); 169 | } 170 | 171 | } 172 | 173 | if( trials.length > 2 ) { 174 | // Toss out low & high 175 | int lo=0; 176 | int hi=0; 177 | for( int j=1; j trials[j] ) hi=j; 180 | } 181 | total -= (trials[lo]+trials[hi]); 182 | trials[lo] = trials[trials.length-1]; 183 | trials[hi] = trials[trials.length-2]; 184 | // Print avg,stddev 185 | long avg = total/(trials.length-2); 186 | long stddev = compute_stddev(trials,trials.length-2); 187 | long p = stddev*100/avg; // std-dev as a percent 188 | 189 | if( trials.length-2 > 2 ) { 190 | // Toss out low & high 191 | lo=0; 192 | hi=0; 193 | for( int j=1; j trials[j] ) hi=j; 196 | } 197 | total -= (trials[lo]+trials[hi]); 198 | trials[lo] = trials[trials.length-2-1]; 199 | trials[hi] = trials[trials.length-2-2]; 200 | // Print avg,stddev 201 | avg = total/(trials.length-2-2); 202 | stddev = compute_stddev(trials,trials.length-2-2); 203 | p = stddev*100/avg; // std-dev as a percent 204 | } 205 | System.out.printf(" %10d",avg); 206 | System.out.printf(" (+/-%2d%%) %d",p,HM.size()); 207 | } 208 | System.out.println(); 209 | } 210 | 211 | static long compute_stddev(long[] trials, int len) { 212 | double sum = 0; 213 | double squ = 0.0; 214 | for( int i=0; i _hash; // Shared hashtable 227 | final long[] _ops; 228 | final long[] _nanos; 229 | perf_hash_test( int tnum, ConcurrentMap HM, long[] ops, long[] nanos ) { _tnum = tnum; _hash = HM; _ops = ops; _nanos = nanos; } 230 | 231 | static long run_once( int num_threads, ConcurrentMap HM, long[] ops, long[] nanos ) throws Exception { 232 | Random R = new Random(); 233 | _start = false; 234 | _stop = false; 235 | 236 | HM.put("Cliff","Cliff"); 237 | HM.remove("Cliff"); 238 | 239 | int sz = HM.size(); 240 | int xsz=0; 241 | while( sz+1024 < _table_size ) { 242 | int idx = R.nextInt(); 243 | for( int i=0; i<1024; i++ ) { 244 | String key = KEYS[idx&(KEYS.length-1)]; 245 | HM.put(key,key); 246 | idx++; 247 | } 248 | sz = HM.size(); 249 | } 250 | 251 | while( sz < ((_table_size>>1)+(_table_size>>3)) ) { 252 | int trip = 0; 253 | int idx = R.nextInt(); 254 | while( true ) { 255 | String key = KEYS[idx&(KEYS.length-1)]; 256 | if( sz < _table_size ) { 257 | if( HM.put(key,key) == null ) { sz++; break; } 258 | } else { 259 | if( HM.remove(key ) != null ) { sz--; break; } 260 | } 261 | idx++; 262 | if( 
(trip & 15)==15 ) idx = R.nextInt(); 263 | if( trip++ > 1024*1024 ) { 264 | if( trip > 1024*1024+100 ) 265 | throw new Exception("barf trip "+sz+" "+HM.size()+" numkeys="+KEYS.length); 266 | System.out.println(key); 267 | } 268 | } 269 | } 270 | 271 | if( sz != HM.size() ) { 272 | throw new Error("size does not match table contents sz="+sz+" size()="+HM.size()); 273 | } 274 | 275 | // Launch threads 276 | perf_hash_test thrs[] = new perf_hash_test[num_threads]; 277 | for( int i=0; i hm ) { 348 | SimpleRandom R = new SimpleRandom(); 349 | 350 | int get_ops = 0; 351 | int put_ops = 0; 352 | int del_ops = 0; 353 | while( !_stop ) { 354 | int x = R.nextInt()&((1<<20)-1); 355 | String key = KEYS[R.nextInt()&(KEYS.length-1)]; 356 | if( x < _gr ) { 357 | get_ops++; 358 | String val = hm.get(key); 359 | if( val != null && !val.equals(key) ) throw new IllegalArgumentException("Mismatched key="+key+" and val="+val); 360 | } else if( x < _pr ) { 361 | put_ops++; 362 | hm.putIfAbsent( key, key ); 363 | } else { 364 | del_ops++; 365 | hm.remove( key ); 366 | } 367 | } 368 | // We stopped; report results into shared result structure 369 | return get_ops+put_ops+del_ops; 370 | } 371 | 372 | public int run_normal( ConcurrentHashMap hm ) { 373 | SimpleRandom R = new SimpleRandom(); 374 | 375 | int get_ops = 0; 376 | int put_ops = 0; 377 | int del_ops = 0; 378 | while( !_stop ) { 379 | int x = R.nextInt()&((1<<20)-1); 380 | String key = KEYS[R.nextInt()&(KEYS.length-1)]; 381 | if( x < _gr ) { 382 | get_ops++; 383 | String val = hm.get(key); 384 | if( val != null && !val.equals(key) ) throw new IllegalArgumentException("Mismatched key="+key+" and val="+val); 385 | } else if( x < _pr ) { 386 | put_ops++; 387 | hm.putIfAbsent( key, key ); 388 | } else { 389 | del_ops++; 390 | hm.remove( key ); 391 | } 392 | } 393 | // We stopped; report results into shared result structure 394 | return get_ops+put_ops+del_ops; 395 | } 396 | 397 | public int run_normal( ConcurrentMap hm ) { 398 | SimpleRandom R = new SimpleRandom(); 399 | 400 | int get_ops = 0; 401 | int put_ops = 0; 402 | int del_ops = 0; 403 | while( !_stop ) { 404 | int x = R.nextInt()&((1<<20)-1); 405 | String key = KEYS[R.nextInt()&(KEYS.length-1)]; 406 | if( x < _gr ) { 407 | get_ops++; 408 | String val = hm.get(key); 409 | if( val != null && !val.equals(key) ) throw new IllegalArgumentException("Mismatched key="+key+" and val="+val); 410 | } else if( x < _pr ) { 411 | put_ops++; 412 | hm.putIfAbsent( key, key ); 413 | } else { 414 | del_ops++; 415 | hm.remove( key ); 416 | } 417 | } 418 | // We stopped; report results into shared result structure 419 | return get_ops+put_ops+del_ops; 420 | } 421 | 422 | // Fairly fast random numbers 423 | public static final class SimpleRandom { 424 | private final static long multiplier = 0x5DEECE66DL; 425 | private final static long addend = 0xBL; 426 | private final static long mask = (1L << 48) - 1; 427 | static final AtomicLong seq = new AtomicLong( -715159705); 428 | private long seed; 429 | SimpleRandom(long s) { seed = s; } 430 | SimpleRandom() { seed = System.nanoTime() + seq.getAndAdd(129); } 431 | public void setSeed(long s) { seed = s; } 432 | public int nextInt() { return next(); } 433 | public int next() { 434 | long nextseed = (seed * multiplier + addend) & mask; 435 | seed = nextseed; 436 | return ((int)(nextseed >>> 17)) & 0x7FFFFFFF; 437 | } 438 | } 439 | 440 | } 441 | -------------------------------------------------------------------------------- /Testing/NBHM_Tester/perf_hashlong_test.java: 
-------------------------------------------------------------------------------- 1 | /* 2 | * Written by Cliff Click and released to the public domain, as explained at 3 | * http://creativecommons.org/licenses/publicdomain 4 | * Big Chunks of code shamelessly copied from Doug Lea's test harness which is also public domain. 5 | */ 6 | 7 | 8 | import java.io.*; 9 | import org.cliffc.high_scale_lib.*; 10 | import java.util.*; 11 | import java.util.concurrent.*; 12 | import java.util.concurrent.atomic.*; 13 | import java.lang.reflect.*; 14 | 15 | public class perf_hashlong_test extends Thread { 16 | static int _read_ratio, _gr, _pr; 17 | static int _thread_min, _thread_max, _thread_incr; 18 | static int _table_size; 19 | 20 | static String KEYS[]; 21 | static volatile boolean _start; 22 | static volatile boolean _stop; 23 | static final int NUM_CPUS = Runtime.getRuntime().availableProcessors(); 24 | 25 | static int check( String arg, String msg, int lower, int upper ) throws Exception { 26 | return check( Integer.parseInt(arg), msg, lower, upper ); 27 | } 28 | static int check( int x, String msg, int lower, int upper ) throws Exception { 29 | if( x < lower || x > upper ) 30 | throw new Error(msg+" must be from "+lower+" to "+upper); 31 | return x; 32 | } 33 | 34 | public static void main( String args[] ) throws Exception { 35 | // Parse args 36 | try { 37 | _read_ratio = check( args[0], "read%", 0, 100 ); 38 | _thread_min = check( args[1], "thread_min", 1, 100000 ); 39 | _thread_max = check( args[2], "thread_max", 1, 100000 ); 40 | _thread_incr = check( args[3], "thread_incr", 1, 100000 ); 41 | _table_size = check( args[4], "table_size", 1, 100000000 ); 42 | 43 | _gr = (_read_ratio<<20)/100; 44 | _pr = (((1<<20) - _gr)>>1) + _gr; 45 | 46 | int trips = (_thread_max - _thread_min)/_thread_incr; 47 | _thread_max = trips*_thread_incr + _thread_min; 48 | 49 | } catch( Exception e ) { 50 | System.out.println("Usage: perf_hash_test read%[0=churn test] thread-min thread-max thread-increment hash_table_size impl[All=0,Hashtable=1,HerlihyHashSet=2,CHM_16=3,CHM_256=4,CHM_4096=5,NonBlockingHashMap=6]"); 51 | throw e; 52 | } 53 | 54 | System.out.print( _read_ratio+"% gets, "+ 55 | ((100-_read_ratio)>>1)+"% inserts, "+ 56 | ((100-_read_ratio)>>1)+"% removes, " + 57 | "table_size="+_table_size); 58 | if( _read_ratio==0 ) 59 | System.out.print(" -- churn"); 60 | String name = "NonBlockingHashMapLong"; 61 | System.out.println(" "+name); 62 | System.out.println("Threads from "+_thread_min+" to "+_thread_max+" by "+_thread_incr); 63 | 64 | // Do some warmup 65 | int keymax = 1; 66 | while( keymax < _table_size ) keymax<<=1; 67 | if( _read_ratio == 0 ) keymax = 1024*1024; // The churn test uses a large key set 68 | KEYS = new String[keymax]; 69 | KEYS[0] = "Cliff0"; 70 | for( int i=1; i HM = new NonBlockingHashMapLong(true); 89 | String name = "NonBlockingHashMapLong"; 90 | System.out.printf("=== %10.10s %3d cnts/sec=",name,num_threads); 91 | 92 | // Quicky sanity check 93 | for( int i=0; i<100; i++ ) { 94 | HM.put(i,KEYS[i]); 95 | for( int j=0; j 2010000000 || 121 | // ops[i] < 100000 ) 122 | // System.out.printf(" %d",ops[i]); 123 | //} 124 | 125 | } 126 | 127 | if( trials.length > 2 ) { 128 | // Toss out low & high 129 | int lo=0; 130 | int hi=0; 131 | for( int j=1; j trials[j] ) hi=j; 134 | } 135 | total -= (trials[lo]+trials[hi]); 136 | trials[lo] = trials[trials.length-1]; 137 | trials[hi] = trials[trials.length-2]; 138 | // Print avg,stddev 139 | long avg = total/(trials.length-2); 140 | long stddev = 
compute_stddev(trials,trials.length-2); 141 | long p = stddev*100/avg; // std-dev as a percent 142 | 143 | if( trials.length-2 > 2 ) { 144 | // Toss out low & high 145 | lo=0; 146 | hi=0; 147 | for( int j=1; j trials[j] ) hi=j; 150 | } 151 | total -= (trials[lo]+trials[hi]); 152 | trials[lo] = trials[trials.length-2-1]; 153 | trials[hi] = trials[trials.length-2-2]; 154 | // Print avg,stddev 155 | avg = total/(trials.length-2-2); 156 | stddev = compute_stddev(trials,trials.length-2-2); 157 | p = stddev*100/avg; // std-dev as a percent 158 | } 159 | System.out.printf(" %10d",avg); 160 | System.out.printf(" (+/-%2d%%) %d",p,HM.size()); 161 | } 162 | System.out.println(); 163 | } 164 | 165 | static long compute_stddev(long[] trials, int len) { 166 | double sum = 0; 167 | double squ = 0.0; 168 | for( int i=0; i _hash; // Shared hashtable 181 | final long[] _ops; 182 | final long[] _nanos; 183 | perf_hashlong_test( int tnum, NonBlockingHashMapLong HM, long[] ops, long [] nanos ) { _tnum = tnum; _hash = HM; _ops = ops; _nanos = nanos; } 184 | 185 | static long run_once( int num_threads, NonBlockingHashMapLong HM, long[] ops, long [] nanos ) throws Exception { 186 | Random R = new Random(); 187 | _start = false; 188 | _stop = false; 189 | 190 | HM.put(0,"Cliff0"); 191 | HM.remove(0); 192 | 193 | int sz = HM.size(); 194 | int xsz=0; 195 | while( sz+1024 < _table_size ) { 196 | int idx = R.nextInt(); 197 | for( int i=0; i<1024; i++ ) { 198 | int k = idx&(KEYS.length-1); 199 | HM.put(k,KEYS[k]); 200 | idx++; 201 | } 202 | sz = HM.size(); 203 | } 204 | 205 | while( sz < ((_table_size>>1)+(_table_size>>3)) ) { 206 | int trip = 0; 207 | int idx = R.nextInt(); 208 | while( true ) { 209 | int k = idx&(KEYS.length-1); 210 | String key = KEYS[k]; 211 | if( sz < _table_size ) { 212 | if( HM.put(k,key) == null ) { sz++; break; } 213 | } else { 214 | if( HM.remove(k) != null ) { sz--; break; } 215 | } 216 | idx++; 217 | if( (trip & 15)==15 ) idx = R.nextInt(); 218 | if( trip++ > 1024*1024 ) { 219 | if( trip > 1024*1024+100 ) 220 | throw new Exception("barf trip "+sz+" "+HM.size()+" numkeys="+KEYS.length); 221 | System.out.println(key); 222 | } 223 | } 224 | } 225 | 226 | if( sz != HM.size() ) { 227 | throw new Error("size does not match table contents sz="+sz+" size()="+HM.size()); 228 | } 229 | 230 | // Launch threads 231 | //long nanoz = System.nanoTime(); 232 | //System.out.println(" "+nanoz+" Create-Threads"); 233 | perf_hashlong_test thrs[] = new perf_hashlong_test[num_threads]; 234 | for( int i=0; i _nbhs; 21 | protected void setUp () { _nbhs = new NonBlockingHashSet(); } 22 | protected void tearDown() { _nbhs = null; } 23 | 24 | // Test some basic stuff; add a few keys, remove a few keys 25 | public void testBasic() { 26 | assertTrue ( _nbhs.isEmpty() ); 27 | assertTrue ( _nbhs.add("k1") ); 28 | checkSizes (1); 29 | assertTrue ( _nbhs.add("k2") ); 30 | checkSizes (2); 31 | assertFalse( _nbhs.add("k1") ); 32 | assertFalse( _nbhs.add("k2") ); 33 | checkSizes (2); 34 | assertThat ( _nbhs.remove("k1"), is(true ) ); 35 | checkSizes (1); 36 | assertThat ( _nbhs.remove("k1"), is(false) ); 37 | assertTrue ( _nbhs.remove("k2") ); 38 | checkSizes (0); 39 | assertFalse( _nbhs.remove("k2") ); 40 | assertFalse( _nbhs.remove("k3") ); 41 | assertTrue ( _nbhs.isEmpty() ); 42 | } 43 | 44 | // Check all iterators for correct size counts 45 | private void checkSizes(int expectedSize) { 46 | assertEquals( "size()", _nbhs.size(), expectedSize ); 47 | Iterator it = _nbhs.iterator(); 48 | int result = 0; 49 | while 
(it.hasNext()) { 50 | result++; 51 | it.next(); 52 | } 53 | assertEquals( "iterator missed", expectedSize, result ); 54 | } 55 | 56 | 57 | public void testIteration() { 58 | assertTrue ( _nbhs.isEmpty() ); 59 | assertTrue ( _nbhs.add("k1") ); 60 | assertTrue ( _nbhs.add("k2") ); 61 | 62 | StringBuffer buf = new StringBuffer(); 63 | for( Iterator i = _nbhs.iterator(); i.hasNext(); ) { 64 | String val = i.next(); 65 | buf.append(val); 66 | } 67 | assertThat("found all vals",buf.toString(),anyOf(is("k1k2"),is("k2k1"))); 68 | 69 | assertThat("toString works",_nbhs.toString(), anyOf(is("[k1, k2]"),is("[k2, k1]"))); 70 | } 71 | 72 | public void testIterationBig() { 73 | for( int i=0; i<100; i++ ) 74 | _nbhs.add("a"+i); 75 | assertThat( _nbhs.size(), is(100) ); 76 | 77 | int sz =0; 78 | int sum = 0; 79 | for( String s : _nbhs ) { 80 | sz++; 81 | assertThat("",s.charAt(0),is('a')); 82 | int x = Integer.parseInt(s.substring(1)); 83 | sum += x; 84 | assertTrue(x>=0 && x<=99); 85 | } 86 | assertThat("Found 100 ints",sz,is(100)); 87 | assertThat("Found all integers in list",sum,is(100*99/2)); 88 | 89 | assertThat( "can remove 3", _nbhs.remove("a3"), is(true) ); 90 | assertThat( "can remove 4", _nbhs.remove("a4"), is(true) ); 91 | sz =0; 92 | sum = 0; 93 | for( String s : _nbhs ) { 94 | sz++; 95 | assertThat("",s.charAt(0),is('a')); 96 | int x = Integer.parseInt(s.substring(1)); 97 | sum += x; 98 | assertTrue(x>=0 && x<=99); 99 | } 100 | assertThat("Found 98 ints",sz,is(98)); 101 | assertThat("Found all integers in list",sum,is(100*99/2 - (3+4))); 102 | 103 | } 104 | 105 | public void testSerial() { 106 | assertTrue ( _nbhs.isEmpty() ); 107 | assertTrue ( _nbhs.add("k1") ); 108 | assertTrue ( _nbhs.add("k2") ); 109 | 110 | // Serialize it out 111 | try { 112 | FileOutputStream fos = new FileOutputStream("NBHS_test.txt"); 113 | ObjectOutputStream out = new ObjectOutputStream(fos); 114 | out.writeObject(_nbhs); 115 | out.close(); 116 | } catch(IOException ex) { 117 | ex.printStackTrace(); 118 | } 119 | 120 | // Read it back 121 | try { 122 | File f = new File("NBHS_test.txt"); 123 | FileInputStream fis = new FileInputStream(f); 124 | ObjectInputStream in = new ObjectInputStream(fis); 125 | NonBlockingHashSet nbhs = (NonBlockingHashSet)in.readObject(); 126 | in.close(); 127 | assertEquals(_nbhs.toString(),nbhs.toString()); 128 | if( !f.delete() ) throw new IOException("delete failed"); 129 | } catch(IOException ex) { 130 | ex.printStackTrace(); 131 | } catch(ClassNotFoundException ex) { 132 | ex.printStackTrace(); 133 | } 134 | 135 | } 136 | } 137 | -------------------------------------------------------------------------------- /Testing/NBHS_Tester/nbsi_tester.java: -------------------------------------------------------------------------------- 1 | /* 2 | * Written by Cliff Click and released to the public domain, as explained at 3 | * http://creativecommons.org/licenses/publicdomain 4 | */ 5 | 6 | package Testing.NBHS_Tester; 7 | import java.io.*; 8 | import java.util.*; 9 | import junit.framework.TestCase; 10 | import org.cliffc.high_scale_lib.*; 11 | import static org.hamcrest.CoreMatchers.*; 12 | import static org.junit.Assert.*; 13 | 14 | // Test NonBlockingSetInt via JUnit 15 | public class nbsi_tester extends TestCase { 16 | public static void main(String args[]) { 17 | org.junit.runner.JUnitCore.main("Testing.NBHS_Tester.nbsi_tester"); 18 | } 19 | 20 | private NonBlockingSetInt _nbsi; 21 | protected void setUp () { _nbsi = new NonBlockingSetInt(); } 22 | protected void tearDown() { _nbsi = null; 
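// NOTE (editorial, not part of the original test): NonBlockingSetInt stores
// membership as bits packed into 64-bit words, which is why the tests below
// probe 63 (the last bit of the first word) and 10000/20000 (forcing the
// bit vector to grow).  Conceptually a membership test is:
//
//   boolean contains(int i) { return (word[i>>6] & (1L<<(i&63))) != 0; }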
} 23 | 24 | // Test some basic stuff; add a few keys, remove a few keys 25 | public void testBasic() { 26 | assertTrue ( _nbsi.isEmpty() ); 27 | assertTrue ( _nbsi.add(1) ); 28 | checkSizes (1); 29 | assertTrue ( _nbsi.add(2) ); 30 | checkSizes (2); 31 | assertFalse( _nbsi.add(1) ); 32 | assertFalse( _nbsi.add(2) ); 33 | checkSizes (2); 34 | assertThat ( _nbsi.remove(1), is(true ) ); 35 | checkSizes (1); 36 | assertThat ( _nbsi.remove(1), is(false) ); 37 | assertTrue ( _nbsi.remove(2) ); 38 | checkSizes (0); 39 | assertFalse( _nbsi.remove(2) ); 40 | assertFalse( _nbsi.remove(3) ); 41 | assertTrue ( _nbsi.isEmpty() ); 42 | assertTrue ( _nbsi.add(63) ); 43 | checkSizes (1); 44 | assertTrue ( _nbsi.remove(63) ); 45 | assertFalse( _nbsi.remove(63) ); 46 | 47 | 48 | assertTrue ( _nbsi.isEmpty() ); 49 | assertTrue ( _nbsi.add(10000) ); 50 | checkSizes (1); 51 | assertTrue ( _nbsi.add(20000) ); 52 | checkSizes (2); 53 | assertFalse( _nbsi.add(10000) ); 54 | assertFalse( _nbsi.add(20000) ); 55 | checkSizes (2); 56 | assertThat ( _nbsi.remove(10000), is(true ) ); 57 | checkSizes (1); 58 | assertThat ( _nbsi.remove(10000), is(false) ); 59 | assertTrue ( _nbsi.remove(20000) ); 60 | checkSizes (0); 61 | assertFalse( _nbsi.remove(20000) ); 62 | } 63 | 64 | // Check all iterators for correct size counts 65 | private void checkSizes(int expectedSize) { 66 | assertEquals( "size()", _nbsi.size(), expectedSize ); 67 | Iterator it = _nbsi.iterator(); 68 | int result = 0; 69 | while (it.hasNext()) { 70 | result++; 71 | it.next(); 72 | } 73 | assertEquals( "iterator missed", expectedSize, result ); 74 | } 75 | 76 | 77 | public void testIteration() { 78 | assertTrue ( _nbsi.isEmpty() ); 79 | assertTrue ( _nbsi.add(1) ); 80 | assertTrue ( _nbsi.add(2) ); 81 | 82 | StringBuffer buf = new StringBuffer(); 83 | for( Iterator i = _nbsi.iterator(); i.hasNext(); ) { 84 | Integer val = i.next(); 85 | buf.append(val); 86 | } 87 | assertThat("found all vals",buf.toString(),anyOf(is("12"),is("21"))); 88 | 89 | assertThat("toString works",_nbsi.toString(), anyOf(is("[1, 2]"),is("[2, 1]"))); 90 | } 91 | 92 | public void testIterationBig() { 93 | for( int i=0; i<100; i++ ) 94 | _nbsi.add(i); 95 | assertThat( _nbsi.size(), is(100) ); 96 | 97 | int sz =0; 98 | int sum = 0; 99 | for( Integer x : _nbsi ) { 100 | sz++; 101 | sum += x; 102 | assertTrue(x>=0 && x<=99); 103 | } 104 | assertThat("Found 100 ints",sz,is(100)); 105 | assertThat("Found all integers in list",sum,is(100*99/2)); 106 | 107 | assertThat( "can remove 3", _nbsi.remove(3), is(true) ); 108 | assertThat( "can remove 4", _nbsi.remove(4), is(true) ); 109 | sz =0; 110 | sum = 0; 111 | for( Integer x : _nbsi ) { 112 | sz++; 113 | sum += x; 114 | assertTrue(x>=0 && x<=99); 115 | } 116 | assertThat("Found 98 ints",sz,is(98)); 117 | assertThat("Found all integers in list",sum,is(100*99/2 - (3+4))); 118 | 119 | } 120 | 121 | public void testSerial() { 122 | assertTrue ( _nbsi.isEmpty() ); 123 | assertTrue ( _nbsi.add(1) ); 124 | assertTrue ( _nbsi.add(2) ); 125 | 126 | // Serialize it out 127 | try { 128 | FileOutputStream fos = new FileOutputStream("NBSI_test.txt"); 129 | ObjectOutputStream out = new ObjectOutputStream(fos); 130 | out.writeObject(_nbsi); 131 | out.close(); 132 | } catch(IOException ex) { 133 | ex.printStackTrace(); 134 | } 135 | 136 | // Read it back 137 | try { 138 | File f = new File("NBSI_test.txt"); 139 | FileInputStream fis = new FileInputStream(f); 140 | ObjectInputStream in = new ObjectInputStream(fis); 141 | NonBlockingSetInt nbsi = 
(NonBlockingSetInt)in.readObject(); 142 | in.close(); 143 | assertEquals(_nbsi.toString(),nbsi.toString()); 144 | if( !f.delete() ) throw new IOException("delete failed"); 145 | } catch(IOException ex) { 146 | ex.printStackTrace(); 147 | } catch(ClassNotFoundException ex) { 148 | ex.printStackTrace(); 149 | } 150 | } 151 | 152 | // Do some simple concurrent testing 153 | public void testConcurrentSimple() throws InterruptedException { 154 | final NonBlockingSetInt nbsi = new NonBlockingSetInt(); 155 | 156 | // In 2 threads, add & remove even & odd elements concurrently 157 | Thread t1 = new Thread() { public void run() { work_helper(nbsi,"T1",1); } }; 158 | t1.start(); 159 | work_helper(nbsi,"T0",1); 160 | t1.join(); 161 | 162 | // In the end, all members should be removed 163 | StringBuffer buf = new StringBuffer(); 164 | buf.append("Should be emptyset but has these elements: {"); 165 | boolean found = false; 166 | for( Integer x : nbsi ) { 167 | buf.append(" ").append(x); 168 | found = true; 169 | } 170 | if( found ) System.out.println(buf); 171 | assertThat( "concurrent size=0", nbsi.size(), is(0) ); 172 | for( Integer x : nbsi ) { 173 | assertTrue("No elements so never get here",false); 174 | } 175 | 176 | } 177 | 178 | void work_helper(NonBlockingSetInt nbsi, String thrd, int d) { 179 | final int ITERS = 100000; 180 | for( int j=0; j<10; j++ ) { 181 | long start = System.nanoTime(); 182 | for( int i=d; i upper ) 33 | throw new Error(msg+" must be from "+lower+" to "+upper); 34 | return x; 35 | } 36 | 37 | public static void main( String args[] ) throws Exception { 38 | // Parse args 39 | int impl; 40 | try { 41 | _read_ratio = check( args[0], "read%", 0, 100 ); 42 | _thread_min = check( args[1], "thread_min", 1, 100000 ); 43 | _thread_max = check( args[2], "thread_max", 1, 100000 ); 44 | _thread_incr = check( args[3], "thread_incr", 1, 100000 ); 45 | _table_size = check( args[4], "table_size", 1, 100000000 ); 46 | impl = check( args[5], "impl", 0, _names.length ); 47 | 48 | _gr = (_read_ratio<<20)/100; 49 | _pr = (((1<<20) - _gr)>>1) + _gr; 50 | 51 | int trips = (_thread_max - _thread_min)/_thread_incr; 52 | _thread_max = trips*_thread_incr + _thread_min; 53 | 54 | } catch( Exception e ) { 55 | System.out.print("Usage: perf_set_test read%[0=churn test] thread-min thread-max thread-increment set_size impl["); 56 | for( String s : _names ) 57 | System.out.print(s+","); 58 | System.out.println("]"); 59 | throw e; 60 | } 61 | 62 | System.out.print( _read_ratio+"% gets, "+ 63 | ((100-_read_ratio)>>1)+"% inserts, "+ 64 | ((100-_read_ratio)>>1)+"% removes, " + 65 | "table_size="+_table_size); 66 | if( _read_ratio==0 ) 67 | System.out.print(" -- churn"); 68 | 69 | String name = _names[impl]; 70 | System.out.println(" "+name); 71 | System.out.println("Threads from "+_thread_min+" to "+_thread_max+" by "+_thread_incr); 72 | 73 | // Do some warmup. 
Make an array of Integers as Keys 74 | KEYMAX = 1; 75 | while( KEYMAX < _table_size ) KEYMAX<<=1; 76 | if( _read_ratio == 0 ) KEYMAX = 1024*1024; // The churn test uses a large key set 77 | KEYS = new Integer[KEYMAX]; 78 | for( int i=0; i AS; 105 | switch( impl ) { 106 | case 1: AS = new HashSet(); break; 107 | case 2: AS = new NonBlockingHashSet(); break; 108 | case 3: AS = new NonBlockingSetInt(); break; 109 | default: throw new Error("unimplemented"); 110 | } 111 | System.out.printf("=== %10.10s %3d cnts/sec=",_names[impl],num_threads); 112 | 113 | // Quicky sanity check 114 | for( int i=0; i<100; i++ ) { 115 | AS.add(KEYS[i]); 116 | for( int j=0; j 2 ) { 140 | // Toss out low & high 141 | int lo=0; 142 | int hi=0; 143 | for( int j=1; j trials[j] ) hi=j; 146 | } 147 | total -= (trials[lo]+trials[hi]); 148 | trials[lo] = trials[trials.length-1]; 149 | trials[hi] = trials[trials.length-2]; 150 | // Print avg,stddev 151 | long avg = total/(trials.length-2); 152 | long stddev = compute_stddev(trials,trials.length-2); 153 | long p = stddev*100/avg; // std-dev as a percent 154 | 155 | if( trials.length-2 > 2 ) { 156 | // Toss out low & high 157 | lo=0; 158 | hi=0; 159 | for( int j=1; j trials[j] ) hi=j; 162 | } 163 | total -= (trials[lo]+trials[hi]); 164 | trials[lo] = trials[trials.length-2-1]; 165 | trials[hi] = trials[trials.length-2-2]; 166 | // Print avg,stddev 167 | avg = total/(trials.length-2-2); 168 | stddev = compute_stddev(trials,trials.length-2-2); 169 | p = stddev*100/avg; // std-dev as a percent 170 | } 171 | System.out.printf(" %10d",avg); 172 | System.out.printf(" (+/-%2d%%) %d",p,AS.size()); 173 | } 174 | System.out.println(); 175 | } 176 | 177 | static long compute_stddev(long[] trials, int len) { 178 | double sum = 0; 179 | double squ = 0.0; 180 | for( int i=0; i _set; 195 | perf_set_test( int tnum, Set set, long[] ops, long [] nanos ) { _tnum = tnum; _set = set; _ops = ops; _nanos = nanos; } 196 | 197 | static long run_once( int num_threads, Set AS, long[] ops, long [] nanos ) throws Exception { 198 | Random R = new Random(); 199 | _start = false; 200 | _stop = false; 201 | 202 | // Fill the Set with data 203 | AS.add(0); 204 | AS.remove(0); 205 | 206 | int sz = AS.size(); 207 | int xsz=0; 208 | while( sz+1024 < _table_size ) { 209 | int idx = R.nextInt(); 210 | for( int i=0; i<1024; i++ ) { 211 | int k = idx&(KEYMAX-1); 212 | AS.add(k); 213 | idx++; 214 | } 215 | sz = AS.size(); 216 | } 217 | 218 | while( sz < ((_table_size>>1)+(_table_size>>3)) ) { 219 | int trip = 0; 220 | int idx = R.nextInt(); 221 | while( true ) { 222 | int k = idx&(KEYMAX-1); 223 | if( sz < _table_size ) { 224 | if( AS.add(k) ) { sz++; break; } 225 | } else { 226 | if( AS.remove(k) ) { sz--; break; } 227 | } 228 | idx++; 229 | if( (trip & 15)==15 ) idx = R.nextInt(); 230 | if( trip++ > 1024*1024 ) { 231 | if( trip > 1024*1024+100 ) 232 | throw new Exception("barf trip "+sz+" "+AS.size()+" numkeys="+KEYMAX); 233 | System.out.println(k); 234 | } 235 | } 236 | } 237 | 238 | if( sz != AS.size() ) { 239 | throw new Error("size does not match table contents sz="+sz+" size()="+AS.size()); 240 | } 241 | 242 | // Launch threads 243 | perf_set_test thrs[] = new perf_set_test[num_threads]; 244 | for( int i=0; i)_set); 266 | } else { 267 | if( _set instanceof NonBlockingSetInt ) run_normal_prim((NonBlockingSetInt) _set); 268 | else run_normal_int ((AbstractSet)_set); 269 | } 270 | } 271 | 272 | // Force a large turnover of live keys, while keeping the total live-set 273 | // low. 
10 keys kept alive per thread, out of a set of a million or so. 274 | // constantly churned, so we constantly need to 'cleanse' the table to flush 275 | // old entries. 276 | public void run_churn_int( AbstractSet as ) { 277 | int reprobe = System.identityHashCode(Thread.currentThread()); 278 | int idx = reprobe; 279 | 280 | while( !_start ) // Spin till Time To Go 281 | try { Thread.sleep(1); } catch( Exception e ){} 282 | 283 | long nano1 = System.nanoTime(); 284 | int get_ops = 0; 285 | int put_ops = 0; 286 | int del_ops = 0; 287 | while( !_stop ) { 288 | // Insert a key 10 probes in the future, 289 | // remove a key 0 probes in the future, 290 | // Net result is the thread keeps 10 random keys in table 291 | int k1 = (idx+reprobe*10) & (KEYMAX-1); 292 | as.add(k1); 293 | put_ops++; 294 | 295 | // Remove a key 0 probes in the future 296 | int k2 = (idx+reprobe* 0) & (KEYMAX-1); 297 | as.remove(k2); 298 | del_ops++; 299 | 300 | idx += reprobe; 301 | } 302 | 303 | // We stopped; report results into shared result structure 304 | long nano2 = System.nanoTime(); 305 | int total = get_ops+put_ops+del_ops; 306 | _ops[_tnum] = total; 307 | _nanos[_tnum] = (nano2-nano1); 308 | } 309 | 310 | public void run_normal_prim( NonBlockingSetInt prim ) { 311 | SimpleRandom R = new SimpleRandom(); 312 | while( !_start ) // Spin till Time To Go 313 | try { Thread.sleep(1); } catch( Exception e ){} 314 | 315 | long nano1 = System.nanoTime(); 316 | int get_ops = 0; 317 | int put_ops = 0; 318 | int del_ops = 0; 319 | while( !_stop ) { 320 | int x = R.nextInt()&((1<<20)-1); 321 | int k = R.nextInt()&(KEYMAX-1); 322 | if( x < _gr ) { 323 | get_ops++; 324 | prim.contains(k); 325 | } else if( x < _pr ) { 326 | put_ops++; 327 | prim.add( k ); 328 | } else { 329 | del_ops++; 330 | prim.remove( k ); 331 | } 332 | } 333 | // We stopped; report results into shared result structure 334 | long nano2 = System.nanoTime(); 335 | int total = get_ops+put_ops+del_ops; 336 | _ops[_tnum] = total; 337 | _nanos[_tnum] = (nano2-nano1); 338 | } 339 | 340 | public void run_normal_int( AbstractSet as ) { 341 | SimpleRandom R = new SimpleRandom(); 342 | while( !_start ) // Spin till Time To Go 343 | try { Thread.sleep(1); } catch( Exception e ){} 344 | 345 | long nano1 = System.nanoTime(); 346 | int get_ops = 0; 347 | int put_ops = 0; 348 | int del_ops = 0; 349 | while( !_stop ) { 350 | int x = R.nextInt()&((1<<20)-1); 351 | int k = R.nextInt()&(KEYMAX-1); 352 | if( x < _gr ) { 353 | get_ops++; 354 | as.contains(KEYS[k]); 355 | } else if( x < _pr ) { 356 | put_ops++; 357 | as.add(KEYS[k]); 358 | } else { 359 | del_ops++; 360 | as.remove(KEYS[k]); 361 | } 362 | } 363 | // We stopped; report results into shared result structure 364 | long nano2 = System.nanoTime(); 365 | int total = get_ops+put_ops+del_ops; 366 | _ops[_tnum] = total; 367 | _nanos[_tnum] = (nano2-nano1); 368 | } 369 | 370 | // Fairly fast random numbers 371 | public static final class SimpleRandom { 372 | private final static long multiplier = 0x5DEECE66DL; 373 | private final static long addend = 0xBL; 374 | private final static long mask = (1L << 48) - 1; 375 | static final AtomicLong seq = new AtomicLong( -715159705); 376 | private long seed; 377 | SimpleRandom(long s) { seed = s; } 378 | SimpleRandom() { seed = System.nanoTime() + seq.getAndAdd(129); } 379 | public void setSeed(long s) { seed = s; } 380 | public int nextInt() { return next(); } 381 | public int next() { 382 | long nextseed = (seed * multiplier + addend) & mask; 383 | seed = nextseed; 384 | return 
((int)(nextseed >>> 17)) & 0x7FFFFFFF; 385 | } 386 | } 387 | 388 | } 389 | -------------------------------------------------------------------------------- /contrib/Testing/NBQ_Tester/NBQ_Tester.java: -------------------------------------------------------------------------------- 1 | package contrib.Testing.NBQ_Tester; 2 | import junit.framework.TestCase; 3 | import java.util.Queue; 4 | import java.util.LinkedList; 5 | import java.util.NoSuchElementException; 6 | import java.util.concurrent.LinkedBlockingQueue; 7 | 8 | /** 9 | * Created by Prashant Deva. 10 | * Date: Nov 1, 2008 11 | */ 12 | public class NBQ_Tester extends TestCase { 13 | public static void main(String args[]) { 14 | org.junit.runner.JUnitCore.main("contrib.Testing.NBQ_Tester.NBQ_Tester"); 15 | } 16 | 17 | private Queue queue; 18 | 19 | public void setUp() 20 | { 21 | queue = new LinkedList(); 22 | checkIfEmpty(); 23 | } 24 | 25 | protected void tearDown() { 26 | checkIfEmpty(); 27 | queue=null; 28 | } 29 | 30 | public void testPoll() 31 | { 32 | 33 | queue.offer(1); 34 | assertEquals(1, (int) queue.poll()); 35 | 36 | } 37 | 38 | public void testPeek() 39 | { 40 | 41 | queue.offer(2); 42 | assertEquals(2, (int) queue.peek()); 43 | 44 | assertEquals(2, (int) queue.poll()); 45 | 46 | } 47 | 48 | 49 | public void testRemove() 50 | { 51 | queue.offer(2); 52 | queue.offer(4); 53 | 54 | assertEquals(2, (int) queue.remove()); 55 | assertEquals(4, (int) queue.remove()); 56 | 57 | try{ 58 | queue.remove(); 59 | }catch(NoSuchElementException e) 60 | { 61 | return; 62 | } 63 | 64 | fail(); 65 | } 66 | 67 | 68 | public void testOffer() 69 | { 70 | queue.offer(1); 71 | queue.offer(2); 72 | queue.offer(3); 73 | assertEquals(1,(int)queue.poll()); 74 | 75 | queue.offer(4); 76 | 77 | assertEquals(2,(int)queue.remove()); 78 | assertEquals(3,(int)queue.remove()); 79 | assertEquals(4,(int)queue.remove()); 80 | } 81 | 82 | 83 | public void testElement() 84 | { 85 | queue.offer(2); 86 | queue.offer(4); 87 | 88 | assertEquals(2, (int) queue.element()); 89 | assertEquals(2, (int) queue.remove()); 90 | 91 | assertEquals(4, (int) queue.element()); 92 | assertEquals(4, (int) queue.remove()); 93 | 94 | try{ 95 | queue.element(); 96 | }catch(NoSuchElementException e) 97 | { 98 | return; 99 | } 100 | 101 | fail(); 102 | } 103 | 104 | private void checkIfEmpty() { 105 | assertNull(queue.peek()); 106 | } 107 | } 108 | -------------------------------------------------------------------------------- /java/util/Hashtable.java: -------------------------------------------------------------------------------- 1 | /* 2 | * Written by Cliff Click and released to the public domain, as explained at 3 | * http://creativecommons.org/licenses/publicdomain 4 | */ 5 | 6 | package java.util; 7 | import java.io.*; 8 | import java.util.Map; 9 | import org.cliffc.high_scale_lib.NonBlockingHashtable; 10 | 11 | /** 12 | * A plug-in replacement for JDK1.5 {@link java.util.Hashtable}. This version 13 | * is based on {@link org.cliffc.high_scale_lib.NonBlockingHashMap}. The 14 | * undocumented iteration order is different from Hashtable, as is the 15 | * undocumented (lack of) synchronization. Programs that rely on this 16 | * undocumented behavior may break. Otherwise this solution should be 17 | * completely compatible, including the serialized forms. 18 | * 19 | * This version is not synchronized, and correctly operates 20 | * as a thread-safe Hashtable. It does not provide the 21 | * same ordering guarantees as calling synchronized methods will. 
The old
22 | * Hashtable's methods were synchronized and would provide ordering. This
23 | * behavior is not part of Hashtable's spec. This version's methods are not
24 | * synchronized and will not force the same Java Memory Model orderings.
25 | *
26 | *
27 | * @since 1.5
28 | * @author Cliff Click
29 | * @param <K> the type of keys maintained by this map
30 | * @param <V> the type of mapped values
31 | */
32 | public class Hashtable<K,V> extends NonBlockingHashtable<K,V> {
33 | /** use serialVersionUID from JDK 1.0.2 for interoperability */
34 | private static final long serialVersionUID = 1421746759512286392L;
35 | // Field included strictly to pass the serialization JCK tests
36 | private final float loadFactor = 0.75f;
37 | private int threshold = (int)(loadFactor*4.0f);
38 |
39 | public Hashtable() { super(); }
40 | public Hashtable(int initialCapacity) { super(initialCapacity); }
41 | public Hashtable(int initialCapacity, float loadFactor) {
42 | super(initialCapacity);
43 | if (!(loadFactor > 0) )
44 | throw new IllegalArgumentException();
45 | threshold = (int)(initialCapacity * loadFactor);
46 | }
47 | public Hashtable(Map<? extends K, ? extends V> t) {
48 | super();
49 | putAll(t);
50 | }
51 |
52 | // Serialize. This format is painful in several ways; it requires the count
53 | // of K/V pairs ahead of time - but the Hashtable is undergoing rapid
54 | // concurrent modification, so we painfully clone the entire table to get a
55 | // stable local version. Another way to do this would be to write-lock the
56 | // table somehow until the serialization is done, or copy-on-write.
57 | // Another useless thing is that the threshold, loadfactor and current table
58 | // size are all meaningless to the underlying NBHM.
59 | private void writeObject(java.io.ObjectOutputStream s) throws IOException {
60 | // Clone, to guard against concurrent mod during the write messing with
61 | // the element count.
62 | Hashtable t = (Hashtable)this.clone();
63 | // Write out the threshold, loadfactor
64 | s.defaultWriteObject();
65 | // Write out length, count of elements and then the key/value objects
66 | s.writeInt((int)(t.size()/loadFactor));
67 | s.writeInt(t.size());
68 | for( Object K : keySet() ) {
69 | final Object V = get(K); // Do an official 'get'
70 | s.writeObject(K); // Write the pair
71 | s.writeObject(V);
72 | }
73 | }
74 |
75 | // Reconstitute the Hashtable from a stream (i.e., deserialize it).
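// NOTE (editorial sketch, not part of the original source -- the file name is
// hypothetical): because the stream layout above mirrors the JDK Hashtable's
// (threshold/loadFactor, then length, count, and K/V pairs), a serialization
// round trip is the intended use:
//
//   ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream("ht.ser"));
//   out.writeObject(ht);                 // uses writeObject above
//   out.close();
//   ObjectInputStream in = new ObjectInputStream(new FileInputStream("ht.ser"));
//   Hashtable<String,String> ht2 = (Hashtable<String,String>)in.readObject();
//   in.close();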
76 | private void readObject(java.io.ObjectInputStream s) throws IOException, ClassNotFoundException {
77 | // Read in the threshold, and loadfactor
78 | s.defaultReadObject();
79 | initialize(); // Setup the NBHM
80 |
81 | // Read the original length of the array and number of elements
82 | int origlength = s.readInt();
83 | int elements = s.readInt();
84 |
85 | // Read the number of elements and then all the key/value objects
86 | for( int i=0; i<elements; i++ ) { // (editorial reconstruction of the elided loop)
87 | K key = (K)s.readObject();
88 | V value = (V)s.readObject();
89 | put(key,value);
90 | }
91 | }
92 | }
--------------------------------------------------------------------------------
/java/util/concurrent/ConcurrentHashMap.java:
--------------------------------------------------------------------------------
1 | /*
2 | * Written by Cliff Click and released to the public domain, as explained at
3 | * http://creativecommons.org/licenses/publicdomain
4 | */
5 |
6 | package java.util.concurrent;
7 | import java.util.Map;
8 | import org.cliffc.high_scale_lib.NonBlockingHashMap;
9 |
10 | /**
11 | * A plug-in replacement for JDK1.5 {@link java.util.concurrent.ConcurrentHashMap},
12 | * based on {@link org.cliffc.high_scale_lib.NonBlockingHashMap}.  See the
13 | * Hashtable replacement above for the caveats that apply to both classes.
14 | * (Editorial reconstruction; the original header was elided in this dump.)
15 | *
16 | * @since 1.5
17 | * @author Cliff Click
18 | * @param <K> the type of keys maintained by this map
19 | * @param <V> the type of mapped values
20 | */
21 | public class ConcurrentHashMap<K,V> extends NonBlockingHashMap<K,V> {
22 | private static final long serialVersionUID = 7249069246763182397L;
23 | public ConcurrentHashMap() { super(); }
24 | public ConcurrentHashMap(int initialCapacity) { super(initialCapacity); }
25 | public ConcurrentHashMap(int initialCapacity, float loadFactor, int concurrencyLevel) {
26 | super(initialCapacity);
27 | if (!(loadFactor > 0) || concurrencyLevel <= 0)
28 | throw new IllegalArgumentException();
29 | }
30 | public ConcurrentHashMap(Map<? extends K, ? extends V> t) {
31 | super();
32 | putAll(t);
33 | }
34 | }
--------------------------------------------------------------------------------
/lib/junit-4.4.jar:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/edwardw/high-scale-java-lib/01eb822b6327fa3882c78333e4548b88d36a85d9/lib/junit-4.4.jar
--------------------------------------------------------------------------------
/org/cliffc/high_scale_lib/AbstractEntry.java:
--------------------------------------------------------------------------------
1 | /*
2 | * Written by Cliff Click and released to the public domain, as explained at
3 | * http://creativecommons.org/licenses/publicdomain
4 | */
5 |
6 | package org.cliffc.high_scale_lib;
7 | import java.util.*;
8 |
9 | /**
10 | * A simple implementation of {@link java.util.Map.Entry}.
11 | * Does not implement {@link java.util.Map.Entry#setValue}; that is done by users of the class.
12 | *
13 | * @since 1.5
14 | * @author Cliff Click
15 | * @param <TypeK> the type of keys maintained by this map
16 | * @param <TypeV> the type of mapped values
17 | */
18 |
19 | abstract class AbstractEntry<TypeK,TypeV> implements Map.Entry<TypeK,TypeV> {
20 | /** Strongly typed key */
21 | protected final TypeK _key;
22 | /** Strongly typed value */
23 | protected TypeV _val;
24 |
25 | public AbstractEntry(final TypeK key, final TypeV val) { _key = key; _val = val; }
26 | public AbstractEntry(final Map.Entry<TypeK,TypeV> e ) { _key = e.getKey(); _val = e.getValue(); }
27 | /** Return "key=val" string */
28 | public String toString() { return _key + "=" + _val; }
29 | /** Return key */
30 | public TypeK getKey () { return _key; }
31 | /** Return val */
32 | public TypeV getValue() { return _val; }
33 |
34 | /** Equal if the underlying key & value are equal */
35 | public boolean equals(final Object o) {
36 | if (!(o instanceof Map.Entry)) return false;
37 | final Map.Entry e = (Map.Entry)o;
38 | return eq(_key, e.getKey()) && eq(_val, e.getValue());
39 | }
40 |
41 | /** Compute "key.hashCode() ^ val.hashCode()" */
42 | public int hashCode() {
43 | return
44 | ((_key == null) ? 0 : _key.hashCode()) ^
45 | ((_val == null) ? 0 : _val.hashCode());
46 | }
47 |
48 | private static boolean eq(final Object o1, final Object o2) {
49 | return (o1 == null ?
o2 == null : o1.equals(o2)); 50 | } 51 | } 52 | 53 | -------------------------------------------------------------------------------- /org/cliffc/high_scale_lib/ConcurrentAutoTable.java: -------------------------------------------------------------------------------- 1 | /* 2 | * Written by Cliff Click and released to the public domain, as explained at 3 | * http://creativecommons.org/licenses/publicdomain 4 | */ 5 | 6 | package org.cliffc.high_scale_lib; 7 | import java.io.Serializable; 8 | import java.util.concurrent.atomic.*; 9 | import sun.misc.Unsafe; 10 | 11 | /** 12 | * An auto-resizing table of {@code longs}, supporting low-contention CAS 13 | * operations. Updates are done with CAS's to no particular table element. 14 | * The intent is to support highly scalable counters, r/w locks, and other 15 | * structures where the updates are associative, loss-free (no-brainer), and 16 | * otherwise happen at such a high volume that the cache contention for 17 | * CAS'ing a single word is unacceptable. 18 | * 19 | *

This API is overkill for simple counters (e.g. no need for the 'mask') 20 | * and is untested as an API for making a scalable r/w lock and so is likely 21 | * to change! 22 | * 23 | * @since 1.5 24 | * @author Cliff Click 25 | */ 26 | 27 | 28 | public class ConcurrentAutoTable implements Serializable { 29 | 30 | // --- public interface --- 31 | 32 | /** 33 | * Add the given value to current counter value. Concurrent updates will 34 | * not be lost, but addAndGet or getAndAdd are not implemented because the 35 | * total counter value (i.e., {@link #get}) is not atomically updated. 36 | * Updates are striped across an array of counters to avoid cache contention 37 | * and has been tested with performance scaling linearly up to 768 CPUs. 38 | */ 39 | public void add( long x ) { add_if_mask( x,0); } 40 | /** {@link #add} with -1 */ 41 | public void decrement() { add_if_mask(-1L,0); } 42 | /** {@link #add} with +1 */ 43 | public void increment() { add_if_mask( 1L,0); } 44 | 45 | /** Atomically set the sum of the striped counters to specified value. 46 | * Rather more expensive than a simple store, in order to remain atomic. 47 | */ 48 | public void set( long x ) { 49 | CAT newcat = new CAT(null,4,x); 50 | // Spin until CAS works 51 | while( !CAS_cat(_cat,newcat) ); 52 | } 53 | 54 | /** 55 | * Current value of the counter. Since other threads are updating furiously 56 | * the value is only approximate, but it includes all counts made by the 57 | * current thread. Requires a pass over the internally striped counters. 58 | */ 59 | public long get() { return _cat.sum(0); } 60 | /** Same as {@link #get}, included for completeness. */ 61 | public int intValue() { return (int)_cat.sum(0); } 62 | /** Same as {@link #get}, included for completeness. */ 63 | public long longValue() { return _cat.sum(0); } 64 | 65 | /** 66 | * A cheaper {@link #get}. Updated only once/millisecond, but as fast as a 67 | * simple load instruction when not updating. 68 | */ 69 | public long estimate_get( ) { return _cat.estimate_sum(0); } 70 | 71 | /** 72 | * Return the counter's {@code long} value converted to a string. 73 | */ 74 | public String toString() { return _cat.toString(0); } 75 | 76 | /** 77 | * A more verbose print than {@link #toString}, showing internal structure. 78 | * Useful for debugging. 79 | */ 80 | public void print() { _cat.print(); } 81 | 82 | /** 83 | * Return the internal counter striping factor. Useful for diagnosing 84 | * performance problems. 85 | */ 86 | public int internal_size() { return _cat._t.length; } 87 | 88 | // Only add 'x' to some slot in table, hinted at by 'hash', if bits under 89 | // the mask are all zero. The sum can overflow or 'x' can contain bits in 90 | // the mask. Value is CAS'd so no counts are lost. The CAS is retried until 91 | // it succeeds or bits are found under the mask. Returned value is the old 92 | // value - which WILL have zero under the mask on success and WILL NOT have 93 | // zero under the mask for failure. 
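// NOTE (editorial sketch, not part of the original source): the core trick
// below is that add() CASes one slot of a striped long[] chosen by a
// per-thread hash, while get() sums every slot -- contention-free writes in
// exchange for an O(length) read:
//
//   int idx = hash() & (t.length-1);     // per-thread slot hint
//   long old = t[idx];
//   CAS(t, idx, old, old+x);             // no single hot word
//   // sum: long s=0; for( long v : t ) s += v;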
94 | private long add_if_mask( long x, long mask ) { return _cat.add_if_mask(x,mask,hash(),this); } 95 | 96 | // The underlying array of concurrently updated long counters 97 | private volatile CAT _cat = new CAT(null,4/*Start Small, Think Big!*/,0L); 98 | private static final AtomicReferenceFieldUpdater _catUpdater = 99 | AtomicReferenceFieldUpdater.newUpdater(ConcurrentAutoTable.class,CAT.class, "_cat"); 100 | private boolean CAS_cat( CAT oldcat, CAT newcat ) { return _catUpdater.compareAndSet(this,oldcat,newcat); } 101 | 102 | // Hash spreader 103 | private static final int hash() { 104 | int h = System.identityHashCode(Thread.currentThread()); 105 | // You would think that System.identityHashCode on the current thread 106 | // would be a good hash fcn, but actually on SunOS 5.8 it is pretty lousy 107 | // in the low bits. 108 | h ^= (h>>>20) ^ (h>>>12); // Bit spreader, borrowed from Doug Lea 109 | h ^= (h>>> 7) ^ (h>>> 4); 110 | return h<<2; // Pad out cache lines. The goal is to avoid cache-line contention 111 | } 112 | 113 | // --- CAT ----------------------------------------------------------------- 114 | private static class CAT implements Serializable { 115 | 116 | // Unsafe crud: get a function which will CAS arrays 117 | private static final Unsafe _unsafe = UtilUnsafe.getUnsafe(); 118 | private static final int _Lbase = _unsafe.arrayBaseOffset(long[].class); 119 | private static final int _Lscale = _unsafe.arrayIndexScale(long[].class); 120 | private static long rawIndex(long[] ary, int i) { 121 | assert i >= 0 && i < ary.length; 122 | return _Lbase + i * _Lscale; 123 | } 124 | private final static boolean CAS( long[] A, int idx, long old, long nnn ) { 125 | return _unsafe.compareAndSwapLong( A, rawIndex(A,idx), old, nnn ); 126 | } 127 | 128 | volatile long _resizers; // count of threads attempting a resize 129 | static private final AtomicLongFieldUpdater _resizerUpdater = 130 | AtomicLongFieldUpdater.newUpdater(CAT.class, "_resizers"); 131 | 132 | private final CAT _next; 133 | private volatile long _sum_cache; 134 | private volatile long _fuzzy_sum_cache; 135 | private volatile long _fuzzy_time; 136 | private static final int MAX_SPIN=2; 137 | private long[] _t; // Power-of-2 array of longs 138 | 139 | CAT( CAT next, int sz, long init ) { 140 | _next = next; 141 | _sum_cache = Long.MIN_VALUE; 142 | _t = new long[sz]; 143 | _t[0] = init; 144 | } 145 | 146 | // Only add 'x' to some slot in table, hinted at by 'hash', if bits under 147 | // the mask are all zero. The sum can overflow or 'x' can contain bits in 148 | // the mask. Value is CAS'd so no counts are lost. The CAS is attempted 149 | // ONCE. 150 | public long add_if_mask( long x, long mask, int hash, ConcurrentAutoTable master ) { 151 | long[] t = _t; 152 | int idx = hash & (t.length-1); 153 | // Peel loop; try once fast 154 | long old = t[idx]; 155 | boolean ok = CAS( t, idx, old&~mask, old+x ); 156 | if( _sum_cache != Long.MIN_VALUE ) 157 | _sum_cache = Long.MIN_VALUE; // Blow out cache 158 | if( ok ) return old; // Got it 159 | if( (old&mask) != 0 ) return old; // Failed for bit-set under mask 160 | // Try harder 161 | int cnt=0; 162 | while( true ) { 163 | old = t[idx]; 164 | if( (old&mask) != 0 ) return old; // Failed for bit-set under mask 165 | if( CAS( t, idx, old, old+x ) ) break; // Got it! 
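// NOTE (editorial, not part of the original source): each trip through this
// retry loop bumps cnt; after the CAS finally succeeds, cnt >= MAX_SPIN is
// read as contention and the table is doubled (capped at 1M slots), roughly:
//
//   if( cnt >= MAX_SPIN && t.length < 1024*1024 )
//     master.CAS_cat(this, new CAT(this, t.length*2, 0));  // one attempt only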
166 | cnt++; 167 | } 168 | if( cnt < MAX_SPIN ) return old; // Allowable spin loop count 169 | if( t.length >= 1024*1024 ) return old; // too big already 170 | 171 | // Too much contention; double the array size in an effort to reduce contention 172 | long r = _resizers; 173 | int newbytes = (t.length<<1)<<3/*word to bytes*/; 174 | while( !_resizerUpdater.compareAndSet(this,r,r+newbytes) ) 175 | r = _resizers; 176 | r += newbytes; 177 | if( master._cat != this ) return old; // Already doubled, don't bother 178 | if( (r>>17) != 0 ) { // Too many allocation attempts already? 179 | // TODO - use a wait with timeout, so we'll wake up as soon as the new 180 | // table is ready, or after the timeout in any case. Annoyingly, this 181 | // breaks the non-blocking property - so for now we just briefly sleep. 182 | //synchronized( this ) { wait(8*megs); } // Timeout - we always wake up 183 | try { Thread.sleep(r>>17); } catch( InterruptedException e ) { } 184 | if( master._cat != this ) return old; 185 | } 186 | 187 | CAT newcat = new CAT(this,t.length*2,0); 188 | // Take 1 stab at updating the CAT with the new larger size. If this 189 | // fails, we assume some other thread already expanded the CAT - so we 190 | // do not need to retry until it succeeds. 191 | master.CAS_cat(this,newcat); 192 | return old; 193 | } 194 | 195 | 196 | // Return the current sum of all things in the table, stripping off mask 197 | // before the add. Writers can be updating the table furiously, so the 198 | // sum is only locally accurate. 199 | public long sum( long mask ) { 200 | long sum = _sum_cache; 201 | if( sum != Long.MIN_VALUE ) return sum; 202 | sum = _next == null ? 0 : _next.sum(mask); // Recursively get cached sum 203 | long[] t = _t; 204 | for( int i=0; i<t.length; i++ ) sum += t[i]&~mask; [...] -------------------------------------------------------------------------------- /org/cliffc/high_scale_lib/NonBlockingHashSet.java: -------------------------------------------------------------------------------- [...] 20 | public class NonBlockingHashSet<E> extends AbstractSet<E> implements Serializable { 21 | private static final Object V = ""; 22 | 23 | private final NonBlockingHashMap<E,Object> _map; 24 | 25 | /** Make a new empty {@link NonBlockingHashSet}. */ 26 | public NonBlockingHashSet() { super(); _map = new NonBlockingHashMap<E,Object>(); } 27 | 28 | /** Add {@code o} to the set. 29 | * @return true if {@code o} was added to the set, false 30 | * if {@code o} was already in the set. 31 | */ 32 | public boolean add ( final E o ) { return _map.putIfAbsent(o,V) != V; } 33 | 34 | /** 35 | * @return true if {@code o} is in the set. 36 | */ 37 | public boolean contains ( final Object o ) { return _map.containsKey(o); } 38 | /** Remove {@code o} from the set. 39 | * @return true if {@code o} was removed from the set, false 40 | * if {@code o} was not in the set. 41 | */ 42 | public boolean remove ( final Object o ) { return _map.remove(o) == V; } 43 | /** 44 | * Current count of elements in the set. Due to concurrent racing updates, 45 | * the size is only ever approximate. Updates due to the calling thread are 46 | * immediately visible to the calling thread. 47 | * @return count of elements. 48 | */ 49 | public int size ( ) { return _map.size(); } 50 | /** Empty the set. */ 51 | public void clear ( ) { _map.clear(); } 52 | 53 | public Iterator<E> iterator( ) { return _map.keySet().iterator(); } 54 | 55 | // --- 56 | 57 | /** 58 | * Atomically make the set immutable. Future calls to mutate will throw an 59 | * IllegalStateException. Existing mutator calls in other threads racing 60 | * with this thread will either throw IllegalStateException or their 61 | * update will be visible to this thread.
This implies that a simple flag 62 | * cannot make the Set immutable, because a late-arriving update in another 63 | * thread might see the immutable flag not yet set, then mutate the Set after 64 | * the {@link #readOnly} call returns. This call can be called concurrently 65 | * (and indeed until the operation completes, all calls on the Set from any 66 | * thread either complete normally or end up calling {@link #readOnly} 67 | * internally). 68 | * 69 | *
This call is useful in debugging multi-threaded programs where the 70 | * Set is constructed in parallel, but construction completes after some 71 | * time; and after construction the Set is only read. Making the Set 72 | * read-only will cause updates arriving after construction is supposedly 73 | * complete to throw an {@link IllegalStateException}. 74 | */ 75 | 76 | // (1) call _map's immutable() call 77 | // (2) get snapshot 78 | // (3) CAS down a local map, power-of-2 larger than _map.size()+1/8th 79 | // (4) start @ random, visit all snapshot, insert live keys 80 | // (5) CAS _map to null, needs happens-after (4) 81 | // (6) if Set call sees _map is null, needs happens-after (4) for readers 82 | public void readOnly() { 83 | throw new RuntimeException("Unimplemented"); 84 | } 85 | } 86 | -------------------------------------------------------------------------------- /org/cliffc/high_scale_lib/NonBlockingSetInt.java: -------------------------------------------------------------------------------- 1 | /* 2 | * Written by Cliff Click and released to the public domain, as explained at 3 | * http://creativecommons.org/licenses/publicdomain 4 | */ 5 | 6 | package org.cliffc.high_scale_lib; 7 | import java.io.IOException; 8 | import java.io.Serializable; 9 | import java.lang.reflect.*; 10 | import java.util.*; 11 | import java.util.concurrent.*; 12 | import java.util.concurrent.atomic.*; 13 | import sun.misc.Unsafe; 14 | 15 | /** 16 | * A multi-threaded bit-vector set, implemented as an array of primitive 17 | * {@code longs}. All operations are non-blocking and multi-threaded safe. 18 | * {@link #contains(int)} calls are roughly the same speed as a {load, mask} 19 | * sequence. {@link #add(int)} and {@link #remove(int)} calls are a tad more 20 | * expensive than a {load, mask, store} sequence because they must use a CAS. 21 | * The bit-vector is auto-sizing. 22 | * 23 | *
General note of caution: The Set API allows the use of {@link Integer} 24 | * with silent autoboxing - which can be very expensive if many calls are 25 | * being made. Since autoboxing is silent you may not be aware that this is 26 | * going on. The built-in API takes lower-case {@code ints} and is much more 27 | * efficient. 28 | * 29 | *
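A quick illustration of the boxing trap just described (a sketch, not part of the library source). All three calls below compile, but only the primitive forms stay on the fast path:

    NonBlockingSetInt live = new NonBlockingSetInt();
    live.add(4242);                     // lower-case int overload: no allocation
    live.add(Integer.valueOf(4242));    // Integer overload: boxes, then unboxes
    boolean hit = live.contains(4242);  // primitive path: roughly a load+mask

In a tight loop the boxed form can cost more than the set operation itself.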
Space: space is used in proportion to the largest element, as opposed to 30 | * the number of elements (as is the case with hash-table based Set 31 | * implementations). Space is approximately (largest_element/8 + 64) bytes. 32 | * 33 | * The implementation is a simple bit-vector using CAS for update. 34 | * 35 | * @since 1.5 36 | * @author Cliff Click 37 | */ 38 | 39 | public class NonBlockingSetInt extends AbstractSet<Integer> implements Serializable { 40 | private static final long serialVersionUID = 1234123412341234123L; 41 | private static final Unsafe _unsafe = UtilUnsafe.getUnsafe(); 42 | 43 | // --- Bits to allow atomic update of the NBSI 44 | private static final long _nbsi_offset; 45 | static { // 46 | Field f = null; 47 | try { 48 | f = NonBlockingSetInt.class.getDeclaredField("_nbsi"); 49 | } catch( java.lang.NoSuchFieldException e ) { 50 | } 51 | _nbsi_offset = _unsafe.objectFieldOffset(f); 52 | } 53 | private final boolean CAS_nbsi( NBSI old, NBSI nnn ) { 54 | return _unsafe.compareAndSwapObject(this, _nbsi_offset, old, nnn ); 55 | } 56 | 57 | // The actual Set of Joy, which changes during a resize event. The 58 | // Only Field for this class, so I can atomically change the entire 59 | // set implementation with a single CAS. 60 | private transient NBSI _nbsi; 61 | 62 | /** Create a new empty bit-vector */ 63 | public NonBlockingSetInt( ) { 64 | _nbsi = new NBSI(63, new Counter(), this); // The initial 1-word set 65 | } 66 | 67 | /** 68 | * Add {@code i} to the set. Uppercase {@link Integer} version of add, 69 | * requires auto-unboxing. When possible use the {@code int} version of 70 | * {@link #add(int)} for efficiency. 71 | * @throws IllegalArgumentException if i is negative. 72 | * @return true if i was added to the set. 73 | */ 74 | public boolean add ( final Integer i ) { 75 | return add(i.intValue()); 76 | } 77 | /** 78 | * Test if {@code o} is in the set. This is the uppercase {@link Integer} 79 | * version of contains, requires a type-check and auto-unboxing. When 80 | * possible use the {@code int} version of {@link #contains(int)} for 81 | * efficiency. 82 | * @return true if i was in the set. 83 | */ 84 | public boolean contains( final Object o ) { 85 | return o instanceof Integer ? contains(((Integer)o).intValue()) : false; 86 | } 87 | /** 88 | * Remove {@code o} from the set. This is the uppercase {@link Integer} 89 | * version of remove, requires a type-check and auto-unboxing. When 90 | * possible use the {@code int} version of {@link #remove(int)} for 91 | * efficiency. 92 | * @return true if i was removed from the set. 93 | */ 94 | public boolean remove( final Object o ) { 95 | return o instanceof Integer ? remove (((Integer)o).intValue()) : false; 96 | } 97 | 98 | /** 99 | * Add {@code i} to the set. This is the lower-case '{@code int}' version 100 | * of {@link #add} - no autoboxing. Negative values throw 101 | * IllegalArgumentException. 102 | * @throws IllegalArgumentException if i is negative. 103 | * @return true if i was added to the set. 104 | */ 105 | public boolean add( final int i ) { 106 | if( i < 0 ) throw new IllegalArgumentException(""+i); 107 | return _nbsi.add(i); 108 | } 109 | /** 110 | * Test if {@code i} is in the set. This is the lower-case '{@code int}' 111 | * version of {@link #contains} - no autoboxing. 112 | * @return true if i was in the set. 113 | */ 114 | public boolean contains( final int i ) { return i<0 ? false : _nbsi.contains(i); } 115 | /** 116 | * Remove {@code i} from the set.
This is the fast lower-case '{@code int}' 117 | * version of {@link #remove} - no autoboxing. 118 | * @return true if i was removed from the set. 119 | */ 120 | public boolean remove ( final int i ) { return i<0 ? false : _nbsi.remove (i); } 121 | 122 | /** 123 | * Current count of elements in the set. Due to concurrent racing updates, 124 | * the size is only ever approximate. Updates due to the calling thread are 125 | * immediately visible to the calling thread. 126 | * @return count of elements. 127 | */ 128 | public int size ( ) { return _nbsi.size( ); } 129 | /** Empty the bitvector. */ 130 | public void clear ( ) { 131 | NBSI cleared = new NBSI(63, new Counter(), this); // An empty initial NBSI 132 | while( !CAS_nbsi( _nbsi, cleared ) ) // Spin until clear works 133 | ; 134 | } 135 | 136 | /** Verbose printout of internal structure for debugging. */ 137 | public void print() { _nbsi.print(0); } 138 | 139 | /** 140 | * Standard Java {@link Iterator}. Not very efficient because it 141 | * auto-boxes the returned values. 142 | */ 143 | public Iterator<Integer> iterator( ) { return new iter(); } 144 | 145 | private class iter implements Iterator<Integer> { 146 | NBSI _nbsi2; 147 | int _idx = -1; 148 | int _prev = -1; 149 | iter() { _nbsi2 = _nbsi; advance(); } 150 | public boolean hasNext() { return _idx != -2; } 151 | private void advance() { 152 | while( true ) { 153 | _idx++; // Next index 154 | while( (_idx>>6) >= _nbsi2._bits.length ) { // Index out of range? 155 | if( _nbsi2._new == null ) { // New table? 156 | _idx = -2; // No, so must be all done 157 | return; // 158 | } 159 | _nbsi2 = _nbsi2._new; // Carry on, in the new table 160 | } 161 | if( _nbsi2.contains(_idx) ) return; 162 | } 163 | } 164 | public Integer next() { 165 | if( _idx == -2 ) throw new NoSuchElementException(); // advance() parks _idx at -2 when exhausted 166 | _prev = _idx; 167 | advance(); 168 | return _prev; 169 | } 170 | public void remove() { 171 | if( _prev == -1 ) throw new IllegalStateException(); 172 | _nbsi2.remove(_prev); 173 | _prev = -1; 174 | } 175 | } 176 | 177 | // --- writeObject ------------------------------------------------------- 178 | // Write a NBSI to a stream 179 | private void writeObject(java.io.ObjectOutputStream s) throws IOException { 180 | s.defaultWriteObject(); // Nothing to write 181 | final NBSI nbsi = _nbsi; // The One Field is transient 182 | final int len = _nbsi._bits.length<<6; 183 | s.writeInt(len); // Write max element 184 | for( int i=0; i<len; i++ ) s.writeBoolean( nbsi.contains(i) ); [...] 212 | private static long rawIndex( final long[] ary, final int idx ) { 213 | assert idx >= 0 && idx < ary.length; 214 | return _Lbase + idx * _Lscale; 215 | } 216 | private final boolean CAS( int idx, long old, long nnn ) { 217 | return _unsafe.compareAndSwapLong( _bits, rawIndex(_bits, idx), old, nnn ); 218 | } 219 | 220 | // --- Resize 221 | // The New Table, only set once to non-zero during a resize. 222 | // Must be atomically set.
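The same set-once-from-null publish pattern can be written without sun.misc.Unsafe. A rough equivalent (an illustrative sketch with hypothetical names Holder/NEW, in the style of ConcurrentAutoTable's _catUpdater above):

    import java.util.concurrent.atomic.AtomicReferenceFieldUpdater;

    class Holder {
      volatile Holder _new;   // set once, from null to non-null, during a resize
      static final AtomicReferenceFieldUpdater<Holder,Holder> NEW =
        AtomicReferenceFieldUpdater.newUpdater(Holder.class, Holder.class, "_new");
      boolean CAS_new( Holder nnn ) {  // exactly one racing caller can win
        return NEW.compareAndSet(this, null, nnn);
      }
    }

Note the field-updater insists the field be declared volatile, which the Unsafe-based original below does not require.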
223 | private NBSI _new; 224 | private static final long _new_offset; 225 | static { // 226 | Field f = null; 227 | try { 228 | f = NBSI.class.getDeclaredField("_new"); 229 | } catch( java.lang.NoSuchFieldException e ) { 230 | } 231 | _new_offset = _unsafe.objectFieldOffset(f); 232 | } 233 | private final boolean CAS_new( NBSI nnn ) { 234 | return _unsafe.compareAndSwapObject(this, _new_offset, null, nnn ); 235 | } 236 | 237 | private transient final AtomicInteger _copyIdx; // Used to count bits started copying 238 | private transient final AtomicInteger _copyDone; // Used to count words copied in a resize operation 239 | private transient final int _sum_bits_length; // Sum of all nested _bits.lengths 240 | 241 | private static final long mask( int i ) { return 1L<<(i&63); } 242 | 243 | // I need 1 free bit out of 64 to allow for resize. I do this by stealing 244 | // the high order bit - but then I need to do something with adding element 245 | // number 63 (and friends). I could use a mod63 function but it's more 246 | // efficient to handle the mod-64 case as an exception. 247 | // 248 | // Every 64th bit is put in its own recursive bitvector. If the low 6 bits 249 | // are all set, we shift them off and recursively operate on the _nbsi64 set. 250 | private final NBSI _nbsi64; 251 | 252 | private NBSI( int max_elem, Counter ctr, NonBlockingSetInt nonb ) { 253 | super(); 254 | _non_blocking_set_int = nonb; 255 | _size = ctr; 256 | _copyIdx = ctr == null ? null : new AtomicInteger(); 257 | _copyDone = ctr == null ? null : new AtomicInteger(); 258 | // The main array of bits 259 | _bits = new long[(int)(((long)max_elem+63)>>>6)]; 260 | // Every 64th bit is moved off to its own subarray, so that the 261 | // sign-bit is free for other purposes 262 | _nbsi64 = ((max_elem+1)>>>6) == 0 ? null : new NBSI((max_elem+1)>>>6, null, null); 263 | _sum_bits_length = _bits.length + (_nbsi64==null ? 0 : _nbsi64._sum_bits_length); 264 | } 265 | 266 | // Lower-case 'int' versions - no autoboxing, very fast. 267 | // 'i' is known positive. 268 | public boolean add( final int i ) { 269 | // Check for out-of-range for the current size bit vector. 270 | // If so we need to grow the bit vector. 271 | if( (i>>6) >= _bits.length ) 272 | return install_larger_new_bits(i). // Install larger pile-o-bits (duh) 273 | help_copy().add(i); // Finally, add to the new table 274 | 275 | // Handle every 64th bit using a nested array 276 | NBSI nbsi = this; // The bit array being added into 277 | int j = i; // The bit index being added 278 | while( (j&63) == 63 ) { // Bit 64? (low 6 bits are all set) 279 | nbsi = nbsi._nbsi64; // Recurse 280 | j = j>>6; // Strip off low 6 bits (all set) 281 | } 282 | 283 | final long mask = mask(j); 284 | long old; 285 | do { 286 | old = nbsi._bits[j>>6]; // Read old bits 287 | if( old < 0 ) // Not mutable? 288 | // Not mutable: finish copy of word, and retry on copied word 289 | return help_copy_impl(i).help_copy().add(i); 290 | if( (old & mask) != 0 ) return false; // Bit is already set? 291 | } while( !nbsi.CAS( j>>6, old, old | mask ) ); 292 | _size.add(1); 293 | return true; 294 | } 295 | 296 | public boolean remove( final int i ) { 297 | if( (i>>6) >= _bits.length ) // Out of bounds? Not in this array! 298 | return _new==null ? false : help_copy().remove(i); 299 | 300 | // Handle every 64th bit using a nested array 301 | NBSI nbsi = this; // The bit array being removed from 302 | int j = i; // The bit index being removed 303 | while( (j&63) == 63 ) { // Bit 64?
(low 6 bits are all set) 304 | nbsi = nbsi._nbsi64; // Recurse 305 | j = j>>6; // Strip off low 6 bits (all set) 306 | } 307 | 308 | final long mask = mask(j); 309 | long old; 310 | do { 311 | old = nbsi._bits[j>>6]; // Read old bits 312 | if( old < 0 ) // Not mutable? 313 | // Not mutable: finish copy of word, and retry on copied word 314 | return help_copy_impl(i).help_copy().remove(i); 315 | if( (old & mask) == 0 ) return false; // Bit is already clear? 316 | } while( !nbsi.CAS( j>>6, old, old & ~mask ) ); 317 | _size.add(-1); 318 | return true; 319 | } 320 | 321 | public boolean contains( final int i ) { 322 | if( (i>>6) >= _bits.length ) // Out of bounds? Not in this array! 323 | return _new==null ? false : help_copy().contains(i); 324 | 325 | // Handle every 64th bit using a nested array 326 | NBSI nbsi = this; // The bit array being searched 327 | int j = i; // The bit index being tested 328 | while( (j&63) == 63 ) { // Bit 64? (low 6 bits are all set) 329 | nbsi = nbsi._nbsi64; // Recurse 330 | j = j>>6; // Strip off low 6 bits (all set) 331 | } 332 | 333 | final long mask = mask(j); 334 | long old = nbsi._bits[j>>6]; // Read old bits 335 | if( old < 0 ) // Not mutable? 336 | // Not mutable: finish copy of word, and retry on copied word 337 | return help_copy_impl(i).help_copy().contains(i); 338 | // Yes mutable: test & return bit 339 | return (old & mask) != 0; 340 | } 341 | 342 | public int size() { return (int)_size.get(); } 343 | 344 | // Must grow the current array to hold an element of size i 345 | private NBSI install_larger_new_bits( final int i ) { 346 | if( _new == null ) { 347 | // Grow by powers of 2, to avoid minor grow-by-1's. 348 | // Note: must grow by exact powers-of-2 or the by-64-bit trick doesn't work right 349 | int sz = (_bits.length<<6)<<1; 350 | // CAS to install a new larger size. Did it work? Did it fail? We 351 | // don't know and don't care. Only One can be installed, so if 352 | // another thread installed a too-small size, we can't help it - we 353 | // must simply install our new larger size as a nested-resize table. 354 | CAS_new(new NBSI(sz, _size, _non_blocking_set_int)); 355 | } 356 | // Return self for 'fluid' programming style 357 | return this; 358 | } 359 | 360 | // Help any top-level NBSI to copy until completed. 361 | // Always return the _new version of *this* NBSI, in case we're nested. 362 | private NBSI help_copy() { 363 | // Pick some words to help with - but only help copy the top-level NBSI. 364 | // Nested NBSI waits until the top is done before we start helping. 365 | NBSI top_nbsi = _non_blocking_set_int._nbsi; 366 | final int HELP = 8; // Tuning number: how much copy pain are we willing to inflict? 367 | // We "help" by forcing individual bit indices to copy. However, bits 368 | // come in lumps of 64 per word, so we just advance the bit counter by 64's. 369 | int idx = top_nbsi._copyIdx.getAndAdd(64*HELP); 370 | for( int i=0; i<HELP; i++ ) [...] 406 | j = j>>6; // Strip off low 6 bits (all set) 407 | } 408 | 409 | // Transit from state 1: word is not immutable yet 410 | // Immutable is in bit 63, the sign bit. 411 | long bits = old._bits[j>>6]; 412 | while( bits >= 0 ) { // Still in state (1)?
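            // State (1): the old word is still mutable (sign bit clear).
            // Freeze it by CAS'ing the sign bit on; the transitions below then
            // copy the frozen bits into the new array and finally collapse the
            // old word to the immutable-zero state.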
413 | long oldbits = bits; 414 | bits |= mask(63); // Target state of bits: sign-bit means immutable 415 | if( old.CAS( j>>6, oldbits, bits ) ) { 416 | if( oldbits == 0 ) _copyDone.addAndGet(1); 417 | break; // Success - old array word is now immutable 418 | } 419 | bits = old._bits[j>>6]; // Retry if CAS failed 420 | } 421 | 422 | // Transit from state 2: non-zero in old and zero in new 423 | if( bits != mask(63) ) { // Non-zero in old? 424 | long new_bits = nnn._bits[j>>6]; 425 | if( new_bits == 0 ) { // New array is still zero 426 | new_bits = bits & ~mask(63); // Desired new value: a mutable copy of bits 427 | // One-shot CAS attempt, no loop, from 0 to non-zero. 428 | // If it fails, somebody else did the copy for us 429 | if( !nnn.CAS( j>>6, 0, new_bits ) ) 430 | new_bits = nnn._bits[j>>6]; // Since it failed, get the new value 431 | assert new_bits != 0; 432 | } 433 | 434 | // Transit from state 3: non-zero in old and non-zero in new 435 | // One-shot CAS attempt, no loop, from non-zero to 0 (but immutable) 436 | if( old.CAS( j>>6, bits, mask(63) ) ) 437 | _copyDone.addAndGet(1); // One more word finished copying 438 | } 439 | 440 | // Now in state 4: zero (and immutable) in old 441 | 442 | // Return the self bitvector for 'fluid' programming style 443 | return this; 444 | } 445 | 446 | private void print( int d, String msg ) { 447 | for( int i=0; i