volatile guarantees that a write to the variable is visible to subsequent reads by other threads; the JVM enforces this with memory barriers rather than by literally disabling CPU caching. Use when: 1) One thread writes, others read, 2) No atomic operations needed besides assignment, 3) Variable doesn't depend on its current value. Provides visibility guarantee but not atomicity. Common in flag variables for thread coordination.
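A minimal sketch of the flag-variable pattern above; the class and field names are illustrative, not from the text.

```java
// One thread writes the flag, the worker thread reads it; volatile guarantees visibility.
public class StopFlag {
    private volatile boolean running = true;   // visibility, not atomicity

    public void start() {
        Thread worker = new Thread(() -> {
            while (running) {                  // always sees the latest write
                // ... do one unit of work ...
            }
        });
        worker.start();
    }

    public void stop() {
        running = false;                       // plain assignment, no read-modify-write
    }
}
```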
Concurrent collections (ConcurrentHashMap, CopyOnWriteArrayList) are designed for concurrent access. Differences: 1) Better scalability through fine-grained locking, 2) Weakly consistent (or snapshot) iterators that don't throw ConcurrentModificationException, 3) Atomic compound operations (putIfAbsent, compute, merge), 4) No explicit synchronization needed for individual operations. Trade-offs: higher memory usage, slightly slower single-thread performance. Choose based on concurrent access patterns.
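A sketch of an atomic compound operation on ConcurrentHashMap; the word-count use case is illustrative.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class WordCounts {
    private final Map<String, Integer> counts = new ConcurrentHashMap<>();

    public void record(String word) {
        // merge() is atomic on ConcurrentHashMap: no check-then-act race,
        // no external synchronization required.
        counts.merge(word, 1, Integer::sum);
    }
}
```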
Lock striping divides a single lock into multiple locks covering different hashcode ranges (used by ConcurrentHashMap's segments prior to Java 8; Java 8+ synchronizes on individual bins). Lock coarsening is a JIT optimization that merges adjacent synchronized blocks to reduce lock/unlock overhead. Both trade contention against per-lock overhead. Monitor JVM behavior to verify optimizations.
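An illustrative lock-striping sketch (not ConcurrentHashMap's actual implementation): keys map to one of several locks so unrelated updates don't contend.

```java
import java.util.concurrent.locks.ReentrantLock;

public class StripedCounter {
    private static final int STRIPES = 16;
    private final ReentrantLock[] locks = new ReentrantLock[STRIPES];
    private final long[] counts = new long[STRIPES];

    public StripedCounter() {
        for (int i = 0; i < STRIPES; i++) {
            locks[i] = new ReentrantLock();
        }
    }

    public void increment(Object key) {
        int stripe = (key.hashCode() & 0x7fffffff) % STRIPES;  // pick a lock by hash range
        locks[stripe].lock();
        try {
            counts[stripe]++;                                   // only this stripe is contended
        } finally {
            locks[stripe].unlock();
        }
    }
}
```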
Interruption is a cooperative cancellation mechanism. Proper handling: 1) Check the interrupted status (and know that blocking methods clear it when they throw), 2) Throw InterruptedException or restore the flag with Thread.currentThread().interrupt(), 3) Clean up resources, 4) Propagate the interruption rather than swallowing it. Common in blocking operations, long-running tasks. Challenges: third-party code handling, partial results, cleanup. Design for cancellation from the start.
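A sketch of cooperative cancellation: the task polls the interrupt flag, handles InterruptedException from a blocking call, restores the flag, and cleans up. Method names are placeholders.

```java
public class InterruptibleWorker implements Runnable {
    @Override
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            try {
                doUnitOfWork();
                Thread.sleep(100);                    // blocking call may throw
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();   // restore the interrupt status
                break;                                // stop and fall through to cleanup
            }
        }
        cleanUp();                                    // release resources on the way out
    }

    private void doUnitOfWork() { /* ... */ }

    private void cleanUp() { /* ... */ }
}
```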
Race conditions occur when multiple threads access shared data concurrently and at least one modifies it. Prevention methods: 1) Synchronization (synchronized, locks), 2) Atomic classes, 3) Immutable objects, 4) Thread confinement, 5) Concurrent collections. Identification through code review, testing (stress tests), and thread analysis tools. Design for thread safety from the start.
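A sketch demonstrating the lost-update race: the unsynchronized increment is a read-modify-write, so the final count usually comes out below 20,000.

```java
public class RaceDemo {
    private static int count = 0;

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 10_000; i++) {
                count++;              // racy: threads can overwrite each other's update
            }
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        // Typically prints less than 20000; synchronized or AtomicInteger fixes it.
        System.out.println(count);
    }
}
```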
Fork/Join framework (Java 7+) implements divide-and-conquer parallelism. Uses work-stealing algorithm: idle threads steal tasks from busy ones. Components: ForkJoinPool, RecursiveTask (returns result), RecursiveAction (void). Best for CPU-intensive tasks that can be broken into smaller subtasks. Integrates with Stream API for parallel operations.
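A sketch of divide-and-conquer summation with RecursiveTask; the threshold and array size are arbitrary.

```java
import java.util.Arrays;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

public class SumTask extends RecursiveTask<Long> {
    private static final int THRESHOLD = 10_000;
    private final long[] data;
    private final int from, to;

    public SumTask(long[] data, int from, int to) {
        this.data = data;
        this.from = from;
        this.to = to;
    }

    @Override
    protected Long compute() {
        if (to - from <= THRESHOLD) {              // small enough: sum directly
            long sum = 0;
            for (int i = from; i < to; i++) sum += data[i];
            return sum;
        }
        int mid = (from + to) >>> 1;               // otherwise split in half
        SumTask left = new SumTask(data, from, mid);
        SumTask right = new SumTask(data, mid, to);
        left.fork();                               // schedule left half asynchronously
        return right.compute() + left.join();      // compute right here, then join left
    }

    public static void main(String[] args) {
        long[] data = new long[1_000_000];
        Arrays.fill(data, 1L);
        long sum = ForkJoinPool.commonPool().invoke(new SumTask(data, 0, data.length));
        System.out.println(sum);                   // 1000000
    }
}
```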
Happens-before relationship defines memory visibility rules between operations. Key relationships: 1) Program order within thread, 2) Monitor lock/unlock, 3) volatile write/read, 4) Thread start/join. Essential for understanding when updates become visible between threads. Forms basis for Java Memory Model's consistency guarantees. Critical for writing correct concurrent code.
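A sketch of the volatile write/read rule: every write made before the volatile write to ready is visible to a thread that reads ready as true. Field names are illustrative.

```java
public class Publication {
    private int payload;                     // plain field, published safely below
    private volatile boolean ready = false;

    public void writer() {
        payload = 42;    // (1) ordinary write
        ready = true;    // (2) volatile write: happens-before any read that observes it
    }

    public void reader() {
        if (ready) {                         // (3) volatile read
            System.out.println(payload);     // guaranteed to see 42, never a stale 0
        }
    }
}
```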
Thread pools maintain worker threads for task execution. Sizing considerations: 1) CPU cores (CPU-bound tasks: cores+1), 2) I/O waiting time (I/O-bound tasks: higher), 3) Memory constraints, 4) Task characteristics, 5) System resources. Monitor queue size, response time, rejection rate. Dynamic sizing possible but complex. Consider separate pools for different task types.
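A sketch of the sizing heuristics above; the wait/compute ratio is an assumed example value, not a measured one.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PoolSizing {
    public static void main(String[] args) {
        int cores = Runtime.getRuntime().availableProcessors();

        // CPU-bound work: roughly one thread per core, plus one spare.
        ExecutorService cpuPool = Executors.newFixedThreadPool(cores + 1);

        // I/O-bound work: threads ~ cores * (1 + wait/compute ratio).
        double waitComputeRatio = 9.0;   // assumption: tasks wait ~90% of the time
        ExecutorService ioPool =
                Executors.newFixedThreadPool((int) (cores * (1 + waitComputeRatio)));

        // ... submit tasks to the appropriate pool, then shut both down ...
        cpuPool.shutdown();
        ioPool.shutdown();
    }
}
```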
Phaser is flexible synchronization barrier supporting dynamic party registration. Features: 1) Multiple advance phases, 2) Tree structure for scalability, 3) Arrival/completion monitoring, 4) Termination condition. Use cases: complex phased operations, dynamic thread groups, parallel decomposition. More flexible than CyclicBarrier/CountDownLatch but more complex.
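A sketch of phased work with dynamic registration; the worker and phase counts are arbitrary.

```java
import java.util.concurrent.Phaser;

public class PhasedWork {
    public static void main(String[] args) {
        Phaser phaser = new Phaser(1);               // register the main thread as a party

        for (int i = 0; i < 3; i++) {
            phaser.register();                       // dynamically add a party per worker
            final int id = i;
            new Thread(() -> {
                for (int phase = 0; phase < 2; phase++) {
                    System.out.println("worker " + id + " in phase " + phase);
                    phaser.arriveAndAwaitAdvance();  // wait for all parties, then advance
                }
                phaser.arriveAndDeregister();        // leave the phaser when done
            }).start();
        }

        for (int phase = 0; phase < 2; phase++) {
            phaser.arriveAndAwaitAdvance();          // main thread participates in each phase
        }
        phaser.arriveAndDeregister();
    }
}
```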
Disruptor is high-performance inter-thread messaging library. Features: 1) Ring buffer for bounded queue, 2) Lock-free design, 3) Cache-friendly, 4) Multiple producer/consumer support. Use for high-throughput scenarios: financial trading, logging, event processing. Requires understanding of memory barriers, cache effects. Complex but highly efficient.
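A minimal sketch assuming the LMAX Disruptor 3.x DSL (com.lmax.disruptor); the LongEvent type and buffer size are illustrative.

```java
import com.lmax.disruptor.EventHandler;
import com.lmax.disruptor.RingBuffer;
import com.lmax.disruptor.dsl.Disruptor;
import com.lmax.disruptor.util.DaemonThreadFactory;

public class DisruptorSketch {
    // Mutable event: instances are pre-allocated in the ring buffer and reused.
    static class LongEvent {
        long value;
    }

    public static void main(String[] args) {
        int bufferSize = 1024;                       // must be a power of two

        Disruptor<LongEvent> disruptor =
                new Disruptor<>(LongEvent::new, bufferSize, DaemonThreadFactory.INSTANCE);

        EventHandler<LongEvent> handler =
                (event, sequence, endOfBatch) -> System.out.println("consumed " + event.value);
        disruptor.handleEventsWith(handler);         // wire the consumer
        disruptor.start();

        RingBuffer<LongEvent> ringBuffer = disruptor.getRingBuffer();
        for (long i = 0; i < 10; i++) {
            final long v = i;
            // Claim a slot, mutate the pre-allocated event in place, then publish it.
            ringBuffer.publishEvent((event, seq) -> event.value = v);
        }
    }
}
```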
A thread is a lightweight unit of execution within a process. Two main ways to create threads: 1) Extending Thread class, 2) Implementing Runnable interface (preferred for better flexibility). Creation methods include using anonymous classes, lambda expressions, or ExecutorService. Thread lifecycle: New, Runnable, Blocked, Waiting, Timed Waiting, Terminated.
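A sketch of the creation styles listed above, using an anonymous Thread subclass and a Runnable lambda.

```java
public class ThreadCreation {
    public static void main(String[] args) throws InterruptedException {
        // 1) Extending Thread (via an anonymous subclass here)
        Thread t1 = new Thread() {
            @Override
            public void run() {
                System.out.println("extended Thread");
            }
        };

        // 2) Implementing Runnable, here as a lambda (preferred: separates task from thread)
        Thread t2 = new Thread(() -> System.out.println("Runnable lambda"));

        t1.start();      // New -> Runnable
        t2.start();
        t1.join();       // current thread waits until t1 terminates
        t2.join();
    }
}
```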
synchronized methods lock entire method using 'this' (instance methods) or Class object (static methods). synchronized blocks allow finer-grained locking on specific objects. Blocks are more flexible: can choose lock object, reduce lock duration, improve concurrency. Both provide mutual exclusion and establish happens-before relationship between threads.
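A sketch contrasting the two forms; each counter is guarded by its own lock (in real code, pick one lock per piece of state).

```java
public class SyncStyles {
    private int methodCount;                    // guarded by the intrinsic lock on 'this'
    private final Object lock = new Object();   // dedicated lock object
    private int blockCount;                     // guarded by 'lock'

    public synchronized void incrementMethod() {   // locks 'this' for the whole method
        methodCount++;
    }

    public void incrementBlock() {
        // ... non-critical work can run without holding any lock ...
        synchronized (lock) {                      // hold the lock only for the critical section
            blockCount++;
        }
    }
}
```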
ExecutorService manages thread pools and task execution. Benefits: 1) Reuses threads, reducing creation overhead, 2) Controls number of concurrent threads, 3) Provides task queuing and scheduling, 4) Supports different execution policies, 5) Offers Future for async results. Types include: Fixed, Cached, Scheduled thread pools. Proper shutdown handling required to prevent resource leaks.
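A sketch of submitting a task, collecting its Future, and shutting the pool down cleanly; pool size and timeout are arbitrary.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class ExecutorDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        try {
            Future<Integer> result = pool.submit(() -> 6 * 7);   // Callable returns a value
            System.out.println(result.get());                    // blocks until done: 42
        } finally {
            pool.shutdown();                                     // stop accepting new tasks
            if (!pool.awaitTermination(5, TimeUnit.SECONDS)) {
                pool.shutdownNow();                              // cancel stragglers
            }
        }
    }
}
```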
BlockingQueue implementations (ArrayBlockingQueue, LinkedBlockingQueue) provide thread-safe operations with blocking behavior. put() blocks when full, take() when empty. Used in producer-consumer pattern, thread pools, work queues. Different implementations offer trade-offs between throughput, ordering, capacity. ArrayBlockingQueue also supports an optional fairness policy at the cost of throughput.
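A minimal producer-consumer sketch on a bounded ArrayBlockingQueue; the capacity and item counts are arbitrary.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ProducerConsumer {
    public static void main(String[] args) {
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(10);   // bounded buffer

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 100; i++) {
                    queue.put(i);                 // blocks while the queue is full
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < 100; i++) {
                    System.out.println("consumed " + queue.take());   // blocks while empty
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
    }
}
```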
CompletableFuture extends Future with composition, chaining, combining operations. Features: 1) Async computation pipelines, 2) Exception handling, 3) Timeout management, 4) Multiple completion stages, 5) Customizable execution. Simplifies async programming compared to callbacks or raw threads. Integrates with reactive programming patterns. Supports both async and sync operations.
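A sketch of a small async pipeline; fetchUserName is a placeholder for a remote call, and orTimeout requires Java 9+.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

public class AsyncPipeline {
    public static void main(String[] args) {
        CompletableFuture<String> pipeline =
                CompletableFuture.supplyAsync(() -> fetchUserName(42))   // async computation
                        .thenApply(String::toUpperCase)                  // chain a transformation
                        .orTimeout(2, TimeUnit.SECONDS)                  // fail if it takes too long
                        .exceptionally(ex -> "UNKNOWN");                 // recover from any failure

        System.out.println(pipeline.join());                             // wait for the result
    }

    private static String fetchUserName(int id) {
        return "user-" + id;                                             // stand-in for a remote call
    }
}
```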
ThreadLocal provides thread-isolated variables. Use cases: 1) Per-thread context (transaction, security), 2) Thread-safe caches/buffers, 3) Per-thread counters/IDs. Memory leak risk in thread pools: always call remove() in a finally block once the value is no longer needed. InheritableThreadLocal passes values to child threads. The map keys are weak references, but values are only reclaimed once the ThreadLocal itself is unreachable, so explicit cleanup remains the reliable option.
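A sketch of per-thread request context with mandatory cleanup; the class and method names are illustrative.

```java
public final class RequestContext {
    private static final ThreadLocal<String> CURRENT_USER = new ThreadLocal<>();

    public static void handleRequest(String userId, Runnable work) {
        CURRENT_USER.set(userId);            // bind per-thread context
        try {
            work.run();                      // code running on this thread can read it anywhere
        } finally {
            CURRENT_USER.remove();           // mandatory cleanup when threads are pooled
        }
    }

    public static String currentUser() {
        return CURRENT_USER.get();
    }
}
```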
Deadlock occurs when threads wait indefinitely for each other's locks. Prevention strategies: 1) Lock ordering (acquire locks in consistent order), 2) Lock timeouts, 3) Try-lock instead of blocking locks, 4) Avoid nested locks when possible, 5) Use higher-level concurrency utilities. Detection through thread dumps and monitoring. Recovery may require application restart.
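A sketch of deadlock avoidance through consistent lock ordering; identity hash codes decide the order (ties ignored for brevity).

```java
import java.util.concurrent.locks.ReentrantLock;

public class Transfer {
    static class Account {
        final ReentrantLock lock = new ReentrantLock();
        long balance;
    }

    public static void transfer(Account from, Account to, long amount) {
        // Always acquire the locks in the same global order to avoid circular waiting.
        Account first = System.identityHashCode(from) < System.identityHashCode(to) ? from : to;
        Account second = (first == from) ? to : from;

        first.lock.lock();
        try {
            second.lock.lock();
            try {
                from.balance -= amount;
                to.balance += amount;
            } finally {
                second.lock.unlock();
            }
        } finally {
            first.lock.unlock();
        }
    }
}
```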
Atomic variables (AtomicInteger, AtomicReference, etc.) use Compare-And-Swap (CAS) operations for lock-free thread-safety. More efficient than synchronization for single variables. Use when: 1) Single variable updates need atomicity, 2) High contention scenarios, 3) Performance critical code. Limited to single-variable operations; compound invariants across variables need locks or other approaches.
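A sketch of a lock-free counter using incrementAndGet and an explicit compareAndSet; names are illustrative.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class HitCounter {
    private final AtomicInteger hits = new AtomicInteger();

    public int recordHit() {
        return hits.incrementAndGet();            // lock-free atomic increment
    }

    public boolean resetIfAt(int expected) {
        // CAS: succeeds only if the current value still equals 'expected'.
        return hits.compareAndSet(expected, 0);
    }
}
```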
ReentrantLock advantages over synchronized: 1) Trylock with timeout, 2) Interruptible locking, 3) Fair queueing option, 4) Condition variables, 5) Non-block-structured locking. Disadvantages: explicit unlock required, more complex, easy to misuse. Use when advanced features needed, otherwise prefer synchronized for simplicity and automatic release.
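A sketch of tryLock with a timeout and fair queueing; the timeout value is arbitrary.

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class GuardedResource {
    private final ReentrantLock lock = new ReentrantLock(true);   // fair queueing (optional)

    public boolean updateWithTimeout() throws InterruptedException {
        // Give up instead of blocking forever; the caller can retry or back off.
        if (!lock.tryLock(500, TimeUnit.MILLISECONDS)) {
            return false;
        }
        try {
            // ... mutate shared state ...
            return true;
        } finally {
            lock.unlock();                       // explicit unlock is mandatory
        }
    }
}
```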
Memory consistency errors occur when threads have inconsistent views of shared memory due to caching/reordering. Prevention: 1) Proper synchronization, 2) volatile for visibility, 3) final fields for initialization safety, 4) immutable objects, 5) happens-before relationships. Java Memory Model defines when updates must become visible. Tools like JCStress help detect issues.
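A sketch of initialization safety through final fields: provided the reference does not escape during construction, every thread that later obtains the instance sees the final field values without extra synchronization.

```java
public final class ImmutablePoint {
    private final int x;   // final fields get initialization-safety guarantees from the JMM
    private final int y;

    public ImmutablePoint(int x, int y) {
        this.x = x;
        this.y = y;
    }

    public int x() { return x; }

    public int y() { return y; }
}
```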