Garbage collection (GC) automatically reclaims memory from objects no longer referenced by the program. Process: 1) Mark: identify live objects by tracing references from GC roots, 2) Sweep: reclaim unreferenced objects, 3) Compact: optionally reorganize memory to reduce fragmentation. Collectors exploit the generational hypothesis (most objects die young) by focusing collection effort on the younger generations.
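A minimal sketch of eligibility for collection; the class and variable names are illustrative:

```java
public class GcEligibility {
    public static void main(String[] args) {
        byte[] buffer = new byte[1024 * 1024]; // strongly referenced: kept alive
        buffer = null;                         // no reachable reference remains -> eligible for GC
        System.gc();                           // only a hint; the JVM may ignore it
    }
}
```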
Metaspace (Java 8+) stores class metadata in native memory, replacing PermGen. Differences from PermGen: 1) Grows automatically by default (bounded only by -XX:MaxMetaspaceSize or available native memory), 2) Lives in native memory instead of the JVM heap, 3) Better memory deallocation, 4) Less likely to cause OutOfMemoryError. Still requires monitoring and tuning, especially for applications loading many classes or using reflection heavily.
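An illustrative launch line; the sizes and the MyApp class name are assumptions, while -XX:MetaspaceSize, -XX:MaxMetaspaceSize, and -verbose:class are the relevant real flags:

```
java -XX:MetaspaceSize=128m -XX:MaxMetaspaceSize=512m -verbose:class MyApp
```

-verbose:class prints class loading and unloading, which helps correlate Metaspace growth with the classes an application actually loads.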
Escape analysis determines whether an object is visible outside the method (or thread) that creates it. If the object doesn't 'escape', the JIT compiler can: 1) Allocate it on the stack instead of the heap, 2) Perform scalar replacement (replace the object with its fields), 3) Eliminate synchronization (lock elision). These optimizations reduce GC pressure and improve performance. Enabled by default, but only applied when the JIT can prove the object does not escape.
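A sketch of a non-escaping allocation (class and method names are illustrative); in a hot, JIT-compiled loop the JVM may scalar-replace the Point and never allocate it on the heap:

```java
public class EscapeAnalysisDemo {
    static final class Point {
        final double x, y;
        Point(double x, double y) { this.x = x; this.y = y; }
    }

    static double distanceFromOrigin(double x, double y) {
        Point p = new Point(x, y);               // never stored in a field or returned
        return Math.sqrt(p.x * p.x + p.y * p.y); // JIT may replace p with two doubles
    }

    public static void main(String[] args) {
        double sum = 0;
        for (int i = 0; i < 1_000_000; i++) {    // hot loop so the method gets compiled
            sum += distanceFromOrigin(i, i);
        }
        System.out.println(sum);
    }
}
```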
DirectByteBuffers allocate memory outside the JVM heap (native memory) for better I/O performance. Implications: 1) The native memory is not managed by the GC; it is freed only when the owning buffer object is collected, 2) Memory must be managed manually (sizing, reuse, and the -XX:MaxDirectMemorySize limit), 3) Higher allocation/deallocation cost, 4) Risk of native memory leaks. Best used for long-lived buffers with significant I/O operations. Requires careful capacity planning and cleanup.
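A sketch of a long-lived direct buffer reused across reads; the file name and buffer size are illustrative:

```java
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class DirectBufferDemo {
    public static void main(String[] args) throws Exception {
        ByteBuffer buffer = ByteBuffer.allocateDirect(64 * 1024); // native memory, outside the heap
        try (FileChannel channel = FileChannel.open(Path.of("data.bin"), StandardOpenOption.READ)) {
            while (channel.read(buffer) != -1) {
                buffer.flip();
                // process buffer contents here
                buffer.clear();                                   // reuse the same native allocation
            }
        }
        // The native memory is released only after the buffer object itself is collected.
    }
}
```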
Object pooling reuses objects instead of creating new ones. Implementation: maintain pool of pre-allocated objects, checkout/return methods, size limits, cleanup strategies. Appropriate for: expensive object creation, memory pressure reduction, connection management. Drawbacks: complexity, potential memory leaks, synchronization overhead. Modern JVM optimizations often make pooling unnecessary except for specific cases.
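A minimal bounded-pool sketch built on ArrayBlockingQueue; the SimplePool class and its checkout/release methods are illustrative, not a standard API:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.function.Supplier;

public class SimplePool<T> {
    private final BlockingQueue<T> available;

    public SimplePool(int size, Supplier<T> factory) {
        available = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) {
            available.add(factory.get());       // pre-allocate up to the size limit
        }
    }

    public T checkout() throws InterruptedException {
        return available.take();                // blocks until an instance is free
    }

    public void release(T instance) {
        available.offer(instance);              // return the instance for reuse
    }
}
```

The blocking queue gives thread-safe checkout/return without extra locking; real pools add validation, eviction, and leak detection on top of this.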
Thread stack size (-Xss parameter) determines memory per thread stack. Default varies by platform. Tuning needed when: 1) Deep recursion used, 2) Many threads created, 3) StackOverflowError occurs, 4) Memory optimization required. Smaller stacks allow more threads but risk overflow, larger stacks consume more memory but handle deep call chains.
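A sketch that probes stack depth; run it with different -Xss values (e.g., -Xss256k vs -Xss2m, values illustrative) to see the reachable depth change:

```java
public class StackDepthProbe {
    private static int depth = 0;

    private static void recurse() {
        depth++;
        recurse();                               // each call adds a frame to the thread stack
    }

    public static void main(String[] args) {
        try {
            recurse();
        } catch (StackOverflowError e) {
            System.out.println("Overflowed at depth " + depth);
        }
    }
}
```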
JVM memory is divided into several areas: 1) Heap: object storage, divided into young/old generation, 2) Stack: thread-specific, stores method frames, local primitives, and references, 3) Metaspace: class metadata (replaced PermGen in Java 8+), 4) Code Cache: JIT-compiled code, 5) Native Memory: direct buffers and JNI allocations. Each area has a different lifecycle and different garbage-collection characteristics.
Stack: thread-specific, LIFO structure, stores method frames, local primitives, and object references; automatically managed, fixed size, faster access. Heap: shared across threads, stores objects, garbage collected, dynamic size, slower access. Stack memory is reclaimed automatically when a method returns, while heap memory requires garbage collection. StackOverflowError results from excessive recursion or deep call chains; heap exhaustion (OutOfMemoryError) from object accumulation.
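A small sketch of where values live for a single method call (names are illustrative):

```java
public class StackVsHeap {
    public static void main(String[] args) {
        int count = 42;                  // primitive local: stored in the main thread's stack frame
        int[] data = new int[1_000];     // the array object lives on the shared heap
        // 'data' (the reference) sits on the stack; when main returns it disappears,
        // the array becomes unreachable, and the GC can reclaim the heap memory.
        System.out.println(count + data.length);
    }
}
```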
Memory leaks occur when objects remain referenced but unused, preventing garbage collection. Common causes: 1) Unclosed resources, 2) Static collections/fields holding references, 3) Inner class instances holding references to their enclosing object, 4) ThreadLocal variables not removed (especially with thread pools), 5) Cache implementations without eviction. Prevention: proper resource cleanup, weak/soft references where appropriate, monitoring object lifecycles, regular profiling, and avoiding long-lived object references.
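A sketch of the static-collection leak pattern (the class and field names are illustrative); entries are added but never removed, so they stay strongly reachable forever:

```java
import java.util.ArrayList;
import java.util.List;

public class LeakyRegistry {
    private static final List<Object> CACHE = new ArrayList<>();

    public static void register(Object entry) {
        CACHE.add(entry);                // grows without bound: no eviction, no removal
    }

    public static void main(String[] args) {
        for (int i = 0; i < 1_000_000; i++) {
            register(new byte[1024]);    // every entry remains referenced by the static list
        }
    }
}
```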
String literals are stored in the String pool (a special memory area) and reused when the same literal appears multiple times. String objects created with 'new' are stored in the heap. String.intern() returns the pooled instance, adding the string to the pool if it is not already there. Since Java 7 the String pool has lived in the heap rather than PermGen. The pool reduces memory usage through sharing but can cause memory pressure if many unique strings are interned.
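A minimal sketch of pool vs heap Strings and intern():

```java
public class StringPoolDemo {
    public static void main(String[] args) {
        String a = "hello";              // literal: placed in (or reused from) the String pool
        String b = "hello";              // same literal -> same pooled instance
        String c = new String("hello");  // explicit 'new' -> distinct heap object
        String d = c.intern();           // returns the pooled instance for "hello"

        System.out.println(a == b);      // true  (same pooled object)
        System.out.println(a == c);      // false (pooled object vs heap object)
        System.out.println(a == d);      // true  (intern() resolves to the pooled object)
    }
}
```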
Key parameters: 1) -Xmx: maximum heap size, 2) -Xms: initial heap size, 3) -XX:NewRatio: old/young generation size ratio, 4) -XX:SurvivorRatio: Eden/Survivor space ratio, 5) -XX:MetaspaceSize: initial Metaspace threshold that triggers the first Metaspace collection. Best practices: set -Xms equal to -Xmx to avoid heap resizing, tune generation sizes based on object lifetime patterns, monitor GC logs for optimization.
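An illustrative launch line; the sizes and the MyApp class name are assumptions, and -Xlog:gc* is the unified GC logging flag available since Java 9:

```
java -Xms4g -Xmx4g -XX:NewRatio=2 -XX:SurvivorRatio=8 -XX:MetaspaceSize=256m -Xlog:gc* MyApp
```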
G1 divides the heap into equal-sized regions, each of which can serve as Eden, Survivor, or Old space. It collects incrementally, targeting the regions containing the most garbage first (hence 'Garbage-First'). Features: predictable pause times, concurrent marking, automatic region sizing, mixed collections. Advantages: reduced fragmentation, better predictability, suitable for large heaps. Requires more CPU resources than the Parallel collector.
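Typical G1 flags, with illustrative values and an assumed MyApp main class; -XX:+UseG1GC, -XX:MaxGCPauseMillis, and -XX:G1HeapRegionSize are the real options:

```
java -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -XX:G1HeapRegionSize=8m -Xmx8g MyApp
```

-XX:MaxGCPauseMillis sets the pause-time goal G1 tries to meet; the region size is normally chosen automatically from the heap size.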
The young generation consists of Eden space and two Survivor spaces (S0, S1). New objects are allocated in Eden. During a Minor GC, surviving objects are moved to the empty Survivor space and their age is incremented; objects surviving multiple collections are promoted to the old generation (tenuring). Uses copying collection for efficiency. Eden is larger than the Survivor spaces because most objects are expected to die young.
Reference types: 1) Strong: normal references, prevent GC, 2) Soft: cleared when memory is needed, good for caches, 3) Weak: cleared at the next GC that finds the object only weakly reachable, used for canonicalizing mappings (e.g., WeakHashMap keys), 4) Phantom: get() always returns null; used with a reference queue for resource-cleanup tracking. Each type provides a different level of reachability and GC behavior. Reference queues notify when references are cleared and enqueued.
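A sketch of the four reachability levels; the referents are throwaway objects and the exact output depends on when the GC actually runs:

```java
import java.lang.ref.PhantomReference;
import java.lang.ref.Reference;
import java.lang.ref.ReferenceQueue;
import java.lang.ref.SoftReference;
import java.lang.ref.WeakReference;

public class ReferenceDemo {
    public static void main(String[] args) throws InterruptedException {
        Object strong = new Object();                           // strong: kept alive while this reference is live

        SoftReference<byte[]> soft = new SoftReference<>(new byte[1024]); // cleared only under memory pressure
        WeakReference<Object> weak = new WeakReference<>(new Object());   // cleared at the next GC cycle

        ReferenceQueue<Object> queue = new ReferenceQueue<>();
        PhantomReference<Object> phantom =
                new PhantomReference<>(new Object(), queue);    // get() always returns null

        System.gc();                                            // hint only; clearing is not guaranteed
        Thread.sleep(100);

        System.out.println("weak cleared?   " + (weak.get() == null));
        System.out.println("soft cleared?   " + (soft.get() == null));
        Reference<?> enqueued = queue.poll();                   // non-null once the phantom referent is enqueued
        System.out.println("phantom queued? " + (enqueued != null));
        System.out.println("strong live?    " + (strong != null));
    }
}
```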
The card table is a data structure that tracks references from the old generation into the young generation, optimizing garbage collection. It divides the heap into small cards and marks a card dirty whenever a reference in it is updated (via a write barrier). This lets a young-generation collection scan only the dirty old-generation cards instead of the whole old generation. Critical for generational GC performance, at the cost of slight overhead on every reference update.
Major collectors: 1) Serial: single-thread, small heaps/applications, 2) Parallel: multiple threads, throughput priority, 3) G1: large heaps, balanced latency/throughput, default since Java 9, 4) ZGC: ultra-low latency, large heaps, 5) Shenandoah: similar to ZGC, different implementation. Choice depends on application requirements: latency, throughput, heap size, and available resources.
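The corresponding selection flags (all real HotSpot options; Shenandoah is only present in some JDK builds):

```
-XX:+UseSerialGC        # Serial: small heaps, single-core machines
-XX:+UseParallelGC      # Parallel: throughput-oriented batch workloads
-XX:+UseG1GC            # G1: balanced default since Java 9
-XX:+UseZGC             # ZGC: ultra-low pause times on large heaps
-XX:+UseShenandoahGC    # Shenandoah: low-pause collector in some JDK builds
```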
Memory barriers ensure memory-operation ordering in concurrent programs. Types: LoadLoad, StoreStore, LoadStore, and StoreLoad barriers. They affect performance by restricting CPU and compiler reordering. Used in volatile accesses, synchronization, and concurrent collections. Understanding them is crucial for high-performance concurrent code. The JMM (Java Memory Model) defines when barriers are required.
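A sketch of barriers in practice (class and field names are illustrative): the volatile write/read pair establishes the ordering the JMM requires, so a reader that sees ready == true is also guaranteed to see payload == 42.

```java
public class VolatileHandoff {
    private static int payload;
    private static volatile boolean ready;   // volatile write acts as a release, read as an acquire

    public static void main(String[] args) {
        Thread writer = new Thread(() -> {
            payload = 42;                    // ordinary write
            ready = true;                    // volatile write: earlier writes become visible
        });
        Thread reader = new Thread(() -> {
            while (!ready) { }               // volatile read: busy-wait until the flag is set
            System.out.println(payload);     // guaranteed to print 42
        });
        writer.start();
        reader.start();
    }
}
```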
Class loading stores class metadata (including bytecode) in Metaspace; the code cache holds JIT-compiled native code, not loaded bytecode. Impacts: 1) Memory footprint increases with the number of loaded classes, 2) Classes can be unloaded only when their classloader becomes unreachable, typically during a full GC, 3) Dynamic class loading (proxies, generated classes) can cause Metaspace growth. Best practices: avoid unnecessary class loading, use an appropriate classloader hierarchy, monitor Metaspace usage.
Finalization delays object reclamation, requiring at least two GC cycles to reclaim a finalizable object. Reference processing (Soft/Weak/Phantom) adds overhead to GC. Both can cause memory leaks if not handled properly. Best practices: avoid finalization (use try-with-resources), carefully manage reference queues, understand reference-processing order. Modern Java favors Cleaner over finalizers; finalization has been deprecated for removal since Java 18.
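A sketch of Cleaner-based cleanup; the NativeHandle name and the cleanup action are illustrative:

```java
import java.lang.ref.Cleaner;

public class NativeHandle implements AutoCloseable {
    private static final Cleaner CLEANER = Cleaner.create();

    // The cleanup action must not reference the outer object, or it would never become unreachable.
    private static final class State implements Runnable {
        @Override public void run() {
            System.out.println("releasing native resource");
        }
    }

    private final Cleaner.Cleanable cleanable;

    public NativeHandle() {
        this.cleanable = CLEANER.register(this, new State());
    }

    @Override public void close() {
        cleanable.clean();                   // deterministic cleanup; the Cleaner is only a safety net
    }

    public static void main(String[] args) {
        try (NativeHandle handle = new NativeHandle()) {
            // use the resource
        }                                    // close() runs here via try-with-resources
    }
}
```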