Week 2 | Lesson 7

Memory & Concurrency

JVM, JRE, JDK, Compiler, Memory Management, Multithreading, Atomic Variables, Synchronizers, Concurrent Utilities, Coroutines



© 2026 by Monika Protivová

Java Virtual Machine

Java Virtual Machine

"Write Once, Run Anywhere"
  • The main purpose of the JVM is to provide a runtime environment for Java applications that is independent of the underlying hardware and operating system.
  • Other programming languages that can run on JVM include Kotlin, Scala, Groovy and Clojure.
  • The JVM also provides tools for runtime performance optimization, memory management (garbage collection), monitoring, and more.
  • Java Virtual Machine is a part of the Java Runtime Environment (JRE).

Java Virtual Machine

There are three ways to look at the JVM:
  • Specification
    Defines how the JVM should be implemented.
  • Implementation
    The actual JVM implementation.
  • Instance
    A running JVM process (created every time you start a Java program).

Since the JVM is defined by a specification rather than a single codebase, it exists in more than one implementation. They all follow the specification, but may differ in performance, memory management, and other aspects.

Some of the most popular JVM implementations are:

  • Oracle HotSpot JVM
  • Eclipse OpenJ9
  • GraalVM

JRE, JDK, compiler

JRE, JDK, compiler

JRE - Java Runtime Environment
JRE is the part of Java required to run Java applications. It includes JVM, core libraries, and other components. If you only want to run Java applications, you only need JRE.

JDK - Java Development Kit
You need JDK if you want to develop Java applications. It includes JRE, compiler, and other development tools.

Java Compiler
Java source code is compiled into bytecode, which is then executed by JVM. To do so, you need a Java compiler.

Java Bytecode
Java bytecode is the instruction set for the Java Virtual Machine.

Java Bytecode

Write Once, Run Anywhere

Java bytecode is the intermediate representation of Java code which is output by the Java compiler (javac). It is not the machine code for any particular computer - it is not executed by the CPU of any computer.

Instead, the Java bytecode is executed by the Java Virtual Machine (JVM). You can say it is an instruction set for the JVM.

Java Bytecode

Write Once, Run Anywhere

When a Java program is compiled, each class is compiled into a separate bytecode file (with a .class extension). This bytecode is platform independent, which means the same bytecode can run on any device that has a JVM.

package lesson08;

public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello, World!");
    }
}

Compiles to the following bytecode:

Compiled from "HelloWorld.java"
public class lesson08.HelloWorld {
  public lesson08.HelloWorld();
    Code:
       0: aload_0
       1: invokespecial #1                  // Method java/lang/Object."<init>":()V
       4: return

  public static void main(java.lang.String[]);
    Code:
       0: getstatic     #7                  // Field java/lang/System.out:Ljava/io/PrintStream;
       3: ldc           #13                 // String Hello, World!
       5: invokevirtual #15                 // Method java/io/PrintStream.println:(Ljava/lang/String;)V
       8: return
}

Java Memory Management

Java Memory Management

Java memory management is to a large extent automatic.

Automatic memory management was one of the key features of Java when it was first introduced.

As your program runs, the JVM automatically allocates and de-allocates memory for variables, objects, methods and other data structures.

The deallocation process is known as garbage collection (GC), and the process responsible for it is called the garbage collector.

GC helps us to avoid memory leaks and optimize memory usage.

However, some memory leaks can still occur due to programming errors. (By not releasing references to objects that are no longer needed.)

Java Memory Layout

There are several types of memory spaces in Java, each serving a different purpose.
+---------------------------------------------+
|                  JVM Memory                 |
+---------------------------------------------+
|           Method Area (MetaSpace)           | ← Stores class metadata, method info, static variables
|                                             |   - Loaded class definitions
|                                             |   - Method and field descriptors
|                                             |   - Runtime constant pool
|                                             |   - Shared across all threads
+---------------------------------------------+
|                   Heap                      | ← Stores all class instances and arrays
|                                             |   - Object fields and values
|                                             |   - Managed by Garbage Collector
|                                             |   - Shared across all threads
+---------------------------------------------+
|             Stack (one per thread)          | ← Stores frames for active method calls
|                                             |   - Method arguments and return values
|                                             |   - Local variables and references
|                                             |   - Each thread has its own stack
+---------------------------------------------+
|          Native Method Stack                | ← Used by native (non-Java) methods
|                                             |   - Platform-specific, unmanaged by JVM GC
+---------------------------------------------+
|         Program Counter (PC) Register       | ← Tracks JVM bytecode instruction per thread
|                                             |   - One per thread
|                                             |   - Points to the current instruction
+---------------------------------------------+
            

MetaSpace

MetaSpace is a memory area in the JVM that stores class metadata. It is used to store information about classes, methods, and fields.

MetaSpace memory is allocated when:

  • Classes are loaded by the JVM when they are referenced in the code.
  • Methods are compiled by the JVM when they are called for the first time.
  • Static variables are initialized when the class is loaded.
  • ...

MetaSpace is garbage collected just like the heap memory.

The size of MetaSpace can be controlled using the following JVM options:

  • -XX:MetaspaceSize - sets the initial size of MetaSpace
  • -XX:MaxMetaspaceSize - sets the maximum size of MetaSpace

MetaSpace is a replacement for the Permanent Generation (PermGen) space in older JVM versions. It is allocated in native memory, which means it is not limited by the heap size.

Heap memory

Heap memory is where all objects (instances of classes) are stored.
Each time an object is created, memory is allocated on the heap at runtime.

Heap memory is shared among all threads, and it is the memory area that is maintained by the garbage collector.

You can control the size of the heap using the -Xms and -Xmx JVM options.

  • -Xms sets the initial size of the heap
  • -Xmx sets the maximum size of the heap
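As a sketch, these flags are passed on the java command line at startup (the jar name here is purely illustrative):

```shell
# start the JVM with a 512 MB initial heap, allowing growth up to 4 GB
java -Xms512m -Xmx4g -jar app.jar
```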

Stack memory

Stack memory is used to store method frames, local variables, and primitive values.
  • Stack memory is used to store:
    • Method frames (call stack)
    • Local variables and parameters of methods
    • Primitive values (int, float, etc.)
    • References to objects in heap space
  • Properties of stack are:
    • Fast access
    • Automatically managed (cleared when function ends)
    • Exists per thread
    • Cannot grow too big (overflow → StackOverflowError)
  • Stack memory is allocated when a method is called and deallocated when the method returns.
  • Each thread in Java has its own JVM stack which is created at the same time as the thread.
  • Stack memory has a specific size and is not directly controlled by the programmer.
Example of Method frame:
fun sum(a: Int, b: Int): Int {
    val result = a + b
    return result
}
Method Frame (for sum):
+-----------------------+
| return address        |
| a = 3                 |
| b = 4                 |
| result = 7            |
| operand stack         |
+-----------------------+
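When frames keep piling up faster than they are popped, the per-thread stack limit is exceeded. A minimal sketch (unbounded recursion; catching an Error like this is for demonstration only):

```kotlin
// each call adds a new frame to the thread's stack; there is no base case
fun recurse(depth: Int): Int = recurse(depth + 1)

fun main() {
    try {
        recurse(0)
    } catch (e: StackOverflowError) {
        println("Stack overflowed") // the per-thread stack limit was exceeded
    }
}
```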

Difference Between Stack and Heap

Stack

  • Stores method frames, local variables, and primitive values
  • Memory is allocated and deallocated automatically
  • Fast access, but limited size
  • Each thread has its own stack
  • Used for method calls and local variables

Heap

  • Stores objects and arrays
  • Memory is managed by the garbage collector
  • Slower access, but larger size
  • Shared among all threads
  • Used for dynamic memory allocation

Memory Allocation

Examples
1. Primitive Types → Stack (when local)
  • Variables are local to the main function → stored in the stack frame.
  • No object instantiation → no heap allocation.
fun main() {
    val a = 42   // Int – primitive value, allocated on stack
    val b = true // Boolean – also on stack
}
💡 ️ Remember: Kotlin's Int compiles to a JVM primitive (stack), but Int? must be boxed as Integer (heap) -- because primitives cannot be null. Making a type nullable changes where it lives in memory.
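The boxing effect can be made visible through reference identity; a small sketch (the second comparison relies on Integer boxing creating distinct objects outside the small-value cache):

```kotlin
fun main() {
    val big = 10_000
    val x: Int? = big // boxed as java.lang.Integer on the heap
    val y: Int? = big // a second, distinct box
    println(x == y)   // value equality
    println(x === y)  // reference equality: two different heap objects (value is outside the -128..127 cache)
}
```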
2. Reference Types (Classes) → Heap
  • user reference → stack
  • Actual User object (and its name field) → heap
  • The object stays on the heap until no references to it remain

class User(val name: String)

fun main() {
    val user = User("Alice") // user is a reference on stack, actual User object on heap
}

Memory Allocation

Examples
3. Function-local Objects → Heap
  • list reference → stack
  • actual List and its contents → heap
  • returned to caller → remains alive after function ends
fun createList(): List<Int> {
    val list = listOf(1, 2, 3) // list is a reference on stack, actual List on heap
    return list
}
4. Objects Captured in Lambdas → Heap
  • Lambdas that capture variables allocate a closure object on the heap.
  • If prefix were not captured, lambda could be compiled more efficiently.
fun main() {
    val prefix = "Item " // captured in lambda → stored on heap
    val printer = { i: Int -> println(prefix + i) }
    printer(5)
}

Memory Allocation

Examples
5. Top-Level / object Singleton → Heap (once)
  • The Logger object is allocated once and lives in heap/meta space.
  • Static-like structure with JVM guarantees.
object Logger {
    fun log(msg: String) {
        println(msg)
    }
}
6. Array Allocation → Heap
  • Arrays, even of primitives, live on heap.
  • Elements of primitive type (Int) are stored inline; objects would be references.
val nums = IntArray(5) // array is always allocated on heap

Memory Allocation

Examples
7. Inline Functions and Reified Types → Reduced Allocation
  • Inlined at call site → may reduce allocation.
  • No closure or lambda object if no capturing → no heap cost.
inline fun <reified T> printType() { println(T::class.java.name) }
8. Example of memory leak due to static reference
  • Static reference in Leaky object → cache lives as long as program.
object Leaky {
    var cache: MutableList<Any> = mutableListOf()
}

fun main() {
    repeat(1_000_000) {
        Leaky.cache.add(ByteArray(1024 * 1024)) // 1MB
    }
}

Garbage Collection (GC)

Garbage Collection

Garbage Collection (GC) is a process of automatically reclaiming memory.

The Garbage Collector automatically frees up heap space memory allocations that are no longer referenced by any running part of the program.

The process of GC is not predictable, and the programmer can't force garbage collection. System.gc() can be called as a hint to JVM for garbage collection, but it is not guaranteed that it will be performed.
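Unreachability can be observed with a WeakReference; a minimal sketch (whether the object is actually reclaimed after the hint is JVM-dependent, so the printed result may vary):

```kotlin
import java.lang.ref.WeakReference

fun main() {
    var data: ByteArray? = ByteArray(1024)
    val ref = WeakReference(data) // a weak reference does not keep the object alive
    data = null                   // drop the only strong reference
    System.gc()                   // a hint only – collection is not guaranteed
    // after a GC cycle the weak reference is usually cleared
    println(ref.get() == null)
}
```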

To make the garbage collection process more efficient, the heap is divided into generations.

  • Young Generation (Eden and Survivor spaces)
  • Old Generation (Tenured)
  • MetaSpace (formerly Permanent Generation)

Garbage Collection Generations

Java uses a generational garbage collection strategy that categorizes objects by age, because performing GC on the entire heap would be inefficient. Most objects in Java are short-lived, so the young generation can be collected frequently and cheaply.
  • Young Generation (Eden)
    This is where all new objects are created. It can be further divided into Eden space and Survivor spaces (FromSpace and ToSpace).
    • When it fills up, a Minor GC event occurs.
    • Objects are evaluated as either dead or alive.
    • Dead objects are removed, and the memory is compacted.
    • If an object survives a given number of minor GC cycles, it is promoted to the Old Generation.
  • Old Generation (Tenured)
    This contains objects that have survived the garbage collection from the Young Generation.
  • MetaSpace (formerly Permanent Generation)
    This is used to store metadata about classes and methods. It's garbage collected just like the other generations but usually at a slower rate.

Garbage Collection Strategies

There are a number of GC strategies that can be used in Java. Each strategy has its own advantages and disadvantages, and is suitable for different types of applications.
  • Serial Collector (-XX:+UseSerialGC)
  • Parallel Collector (-XX:+UseParallelGC and -XX:+UseParallelOldGC)
  • Z Garbage Collector (ZGC) (-XX:+UseZGC)
  • Concurrent Mark Sweep (CMS) Collector (-XX:+UseConcMarkSweepGC), deprecated since Java 9 and removed in Java 14
  • G1 (Garbage-First) Collector (-XX:+UseG1GC)
  • Shenandoah GC (-XX:+UseShenandoahGC)

Each JVM implementation may implement GC differently, and may have its own GC strategies, although they should all follow the JVM specification.
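A collector is selected with the corresponding flag at JVM startup; a hypothetical invocation that also enables basic GC logging:

```shell
# run with the G1 collector and print GC events as they happen (jar name is illustrative)
java -XX:+UseG1GC -Xlog:gc -jar app.jar
```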

GC Animation: Object Lifecycle

See how objects are created, referenced, and collected when they become unreachable.

Step 1: Empty heap and stack

Step 2: Create Object A, referenced by stack variable varA

Step 3: Create Object B, referenced by Object A

Step 4: Create Object C, referenced by stack variable varC

Step 5: Remove stack reference to A (A and B become unreachable)

Step 6: GC identifies unreachable objects - A and B are marked for collection

Step 7: GC collects unreachable objects - A and B are removed from heap

Step 8: Only Object C remains (still referenced by varC)

GC Animation: Generational Collection

Watch objects age and move through generations: Eden → Survivor → Old Generation

Objects created in Eden Space

Minor GC #1

Survivors (Obj1, Obj2) moved to S0 with age=1. Dead objects (Obj3) collected.

New objects (Obj4, Obj5) allocated in Eden

Minor GC #2

S0 objects moved to S1 with incremented age. Eden survivors join them.

This cycle continues: objects swap between Survivor spaces, aging with each Minor GC until they reach the promotion threshold (typically age 8-15).

Promotion

Obj1 reaches age threshold and is promoted to Old Generation, where it will stay until a Major GC.

GC Animation: Mark-and-Sweep Algorithm

Step-by-step visualization of how GC determines which objects to keep and which to remove

Phase 1: Mark - Traverse object graph from GC roots, marking all reachable objects


Starting from GC roots (stack variables, static fields), the GC traverses all object references...

Objects A and B are marked as reachable (blue) from Stack Var

Object C is marked as reachable (blue) from Static Field

Mark phase complete. Objects D and E remain white (unmarked) - they are unreachable!

Phase 2: Sweep

Sweep - Remove all unmarked (white) objects from heap


Unmarked objects D and E identified for removal (red)

Sweep complete! Memory occupied by D and E is reclaimed. Only reachable objects (A, B, C) remain in heap.

This two-phase approach ensures only truly unreachable objects are collected, preventing memory leaks while avoiding premature deletion.

GC Animation: Memory Compaction

See how GC eliminates memory fragmentation by compacting objects into contiguous space

The Problem: Memory Fragmentation

After many allocations and deallocations, memory becomes fragmented with gaps. Even with enough total free space, you may not have enough contiguous space for a large object.

Obj A (30KB)
free 20KB
Obj B (25KB)
free 15KB
Obj C (40KB)
free 25KB

Total free space: 60KB (20 + 15 + 25)

But if we need to allocate a 50KB object, it FAILS - no single gap is large enough!

Memory Compaction Begins

Solution: Compact - Slide all live objects together, eliminating gaps

Obj A (30KB)
Obj B (25KB)
Obj C (40KB)
free 60KB

Objects moved together (addresses updated). All 60KB free space is now contiguous!

Compaction complete! Now we can successfully allocate the 50KB object in the contiguous free space.

Obj A (30KB)
Obj B (25KB)
Obj C (40KB)
NEW (50KB)
free 10KB

Success! 50KB object allocated. Only 10KB free space remains.

Compaction happens during GC. The GC must update all references to moved objects, which takes time but significantly improves memory utilization and allocation performance.

Benefits:

  • Eliminates fragmentation → better memory utilization
  • Enables allocation of large objects
  • Improves allocation speed (bump-the-pointer allocation)
  • Reduces OutOfMemoryError due to fragmentation

Multithreading

Multithreading

Multithreading allows execution of multiple parts of a program concurrently, using lightweight processes called threads. It aims to maximize the use of CPU time.

Generally, there is always at least one thread running in a Java program - the main thread.

To create a new thread, create a new instance of the Thread class; to start it, call the start() method on that instance.

Another way to create a thread is by implementing the Runnable interface and passing an instance of it to a new thread.

Alternatively, we can use the Executor Framework provided by java.util.concurrent.

Synchronization in Java / Kotlin is an important feature that allows only one thread at a time to access a shared resource.

Thread

The Java way.

Thread is a class in Java that allows you to create and manage threads. You can create a thread directly, or by extending the Thread class and overriding its run() method.

fun main() {
    val thread = Thread {
        println("Hello from '${Thread.currentThread().name}' thread")
    }
    thread.start()
    println("Hello from '" + Thread.currentThread().name + "' thread")
}
fun main() {
    MyThread().start()
}

class MyThread : Thread() {
    override fun run() {
        println("Hello from '${currentThread().name}' thread")
    }
}

Thread

The Java way.

Thread is started by calling the start() method. When the start() method is called, the JVM calls the run() method of the thread.

When the run() method finishes, the thread is considered to be terminated.

If at any time you want to stop a thread, you can call its interrupt() method. Note that this only requests termination: the thread must check its interrupted flag (or catch InterruptedException) and finish cooperatively.

To wait for a thread to finish, you can call the join() method. However, beware that this will block the current thread until the other thread finishes.
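The cooperative nature of interrupt() and the blocking behavior of join() can be sketched like this:

```kotlin
fun main() {
    val worker = Thread {
        try {
            while (!Thread.currentThread().isInterrupted) {
                Thread.sleep(50) // a sleeping thread receives InterruptedException when interrupted
            }
        } catch (e: InterruptedException) {
            println("Worker interrupted, cleaning up")
        }
    }
    worker.start()
    Thread.sleep(200)
    worker.interrupt() // request termination – the worker decides how to react
    worker.join()      // blocks main until the worker terminates
    println("Worker finished")
}
```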

Thread

The Kotlin way

While the Java Thread class can be used from Kotlin directly, Kotlin also provides a more concise way to create and manage threads.

You can use the thread function to create a new thread.

The function simplifies the creation of a new thread by allowing you to configure the thread properties in a more concise way. The arguments available are:

  • start: Boolean = true - start the thread immediately
  • isDaemon: Boolean = false - create a daemon thread
  • contextClassLoader: ClassLoader? = null, - class loader to use for loading classes and resources
  • name: String? = null, - name of the thread
  • priority: Int = -1, - priority of the thread
  • block: () -> Unit - code to be executed in the thread
import kotlin.concurrent.thread

fun main() {
    // priority must be in the range 1..10 (a value like 999 throws IllegalArgumentException);
    // a daemon thread may be killed as soon as the main thread finishes
    thread(start = true, isDaemon = true, name = "my-thread", priority = 10) {
        println("Hello from '${Thread.currentThread().name}' thread")
    }
    println("Hello from '" + Thread.currentThread().name + "' thread")
}

Runnable

Runnable is an interface in Java that represents a task that can be executed by a thread.

To use Runnable, you pass an instance of it to a new Thread.

class MyRunnable : Runnable {
    override fun run() {
        println("Hello from '${Thread.currentThread().name}' thread")
    }
}

fun main() {
    val thread = Thread(MyRunnable(), "Runner 1")
    println("Starting thread '${thread.name}'")
    thread.start()
    try {
        // this will block the main thread until the other thread finishes
        thread.join()
    } catch (e: InterruptedException) {
        Thread.currentThread().interrupt()
        throw RuntimeException(e)
    }
    println("Thread '${thread.name}' finished")
}

Memory Synchronization

Memory synchronization ensures that the changes made by one thread to the shared data are visible to other threads.
  • @Volatile

    Used to mark a field as volatile to the JVM. It ensures that all reads of a volatile variable are read directly from main memory, and all writes to a volatile variable are written directly to main memory. By itself, volatile does not provide atomicity, but it ensures visibility.

    @Volatile private var flag: Boolean = true
  • @Synchronized

    If a method is synchronized, only one thread can execute it at a time. This ensures that the changes made by one thread to the shared data are visible to other threads.

    @Synchronized fun someMethod() { // ... }
  • synchronized block

    You can also use synchronized block to synchronize access to shared data within a block of code.

    The difference between @Synchronized and a synchronized block is that the former synchronizes a whole method, while the latter synchronizes only a block of code. Synchronizing at block level can be more efficient, because the lock is held only around the critical section instead of for the entire method call.

Memory Synchronization

Remember that incorrect synchronization can lead to issues like race conditions, deadlocks, or data inconsistency. It is advised to avoid shared mutable data between threads and to use thread confinement or immutability instead.

Memory Synchronization

No synchronization - NOT thread safe!
var sharedCounter = 0

fun main() {
    val thread1 = Thread(::incrementCounter)
    val thread2 = Thread(::incrementCounter)
    thread1.start()
    thread2.start()
    // wait for both threads to finish
    thread1.join()
    thread2.join()
    println("Final Counter Value: $sharedCounter")
}

fun incrementCounter() {
    repeat(1000) {
        sharedCounter++
    }
}

Memory Synchronization

With synchronization using @Synchronized - thread safe.
@Volatile // doesn't ensure atomicity, but ensures visibility
var sharedCounter = 0

fun main() {
    // kotlin.concurrent.thread starts the thread immediately,
    // so no separate start() call is needed (a second start() would throw)
    val thread1 = thread { incrementCounter() }
    val thread2 = thread { incrementCounter() }
    // wait for both threads to finish
    thread1.join()
    thread2.join()
    println("Final Counter Value: $sharedCounter")
}

@Synchronized // only one thread can execute this function at a time (lock on the whole function)
fun incrementCounter() {
    repeat(1000) {
        sharedCounter++
    }
}

Memory Synchronization

With synchronization with synchronized function - thread safe.
@Volatile // doesn't ensure atomicity, but ensures visibility
var sharedCounter = 0

// because incrementCounter is a top-level function, we need a separate object to lock on
private val lock = Any()

fun main() {
    // kotlin.concurrent.thread starts the thread immediately,
    // so no separate start() call is needed (a second start() would throw)
    val thread1 = thread { incrementCounter() }
    val thread2 = thread { incrementCounter() }
    // wait for both threads to finish
    thread1.join()
    thread2.join()
    println("Final Counter Value: $sharedCounter")
}

fun incrementCounter() {
    repeat(1000) {
        synchronized(lock) { // acquire the lock only around the shared update
            sharedCounter++
        }
    }
}

Concurrent Utilities

java.util.concurrent

Besides the low-level synchronization mechanisms such as volatile and synchronized keywords, Java provides a number of classes and interfaces in the java.util.concurrent package to help with multithreading.

The package includes:

  • Atomic Variables
    This includes classes that support atomic operations on single variables, such as AtomicInteger, AtomicLong and AtomicReference.
  • Synchronizers
    These are higher-level synchronization constructs such as CountDownLatch, CyclicBarrier, and Semaphore.
  • Concurrent Collections
    This includes thread-safe collection classes used in place of synchronized wrappers such as Hashtable or Collections.synchronizedMap(Map).
  • Locks
    More advanced and flexible locking mechanism compared to intrinsic locking.
  • Callable and Future
    Callable tasks are similar to Runnable tasks, but they can return a result and are capable of throwing checked exceptions. Futures represent the result of an asynchronous computation, providing a way to handle the results of Callable tasks.
  • Executor Framework
    This is a higher-level replacement for working with threads directly. Executors are capable of managing a pool of threads, so we do not need to manually create new threads and run tasks in an asynchronous fashion.
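The last three items can be combined in a short sketch: Callable tasks submitted to an executor return Futures (the pool size and task values here are arbitrary):

```kotlin
import java.util.concurrent.Executors
import java.util.concurrent.TimeUnit

fun main() {
    val pool = Executors.newFixedThreadPool(2) // reuses two worker threads for all tasks
    val futures = (1..4).map { i ->
        pool.submit<Int> { i * i } // Callable<Int>: returns a result, may throw
    }
    println(futures.map { it.get() }) // get() blocks until each result is ready
    pool.shutdown()
    pool.awaitTermination(1, TimeUnit.SECONDS)
}
```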

Thread Exercises

Exercise: Download Manager

Simulate downloading multiple files concurrently using threads.

Create a simple download simulator that:

  • Simulates downloading 2-3 files concurrently using Thread
  • Each file has a different size (number of chunks to download)
  • Print progress for each file (e.g., "File1: Downloaded chunk 1/5")
  • Use Thread.sleep(100) to simulate download time per chunk
  • Start all downloads and observe concurrent progress

Expected behavior: All downloads run in parallel, and progress messages are interleaved.

Exercise: Marathon Race

Create a Runner class that simulates runners with different speeds and rest strategies racing to a finish line.

Write a race simulation where:

  • Create a Runner class that implements Runnable
  • Constructor parameters: name, speed (ms per meter), maxDistance, pauseEvery (meters), pauseDuration (ms)
  • Runners move meter by meter, taking breaks when needed
  • Create at least 2 runners: one fast with breaks, one slow without breaks
  • Use join() to wait for all runners to finish

Example: Fast runner: speed=50ms, pauses every 5m for 300ms | Slow runner: speed=100ms, no pauses

class Runner( private val name: String, private val speed: Long, private val maxDistance: Int, private val pauseEvery: Int = 0, private val pauseDuration: Long = 0 ) : Runnable { // Implement the runner logic ... }

Exercise: Bank Account

Simulate concurrent deposits to a shared bank account and fix race conditions using @Synchronized.

Create a bank account simulation where:

  • A shared variable represents the account balance (starts at 0)
  • Create a deposit() function that adds money to the balance
  • Three threads (customers) each make 1000 deposits of $1
  • First, run without @Synchronized and observe money lost due to race conditions
  • Then add @Synchronized to the deposit function to fix it

Expected: Without sync, balance < $3000. With sync, balance = $3000 exactly.

Atomic Variables

Atomic Variables

The java.util.concurrent package defines classes that support atomic operations on single variables. All atomic operations are thread-safe.

There are several classes in this package, for example AtomicBoolean, AtomicInteger, AtomicLong, etc.

Here is what you can do with atomic variables ...

Atomic Variables

Atomic Read and Write
You can read or write the value of atomic variables in a thread-safe manner. When you update an atomic variable, it ensures that the new value is immediately visible to other threads.

val atomicInteger = AtomicInteger(0)
atomicInteger.set(78)
val value = atomicInteger.get()

Atomic Update
This allows you to atomically update the value of atomic variables. For Atomic integers and longs, it includes methods to increment, decrement, and add a certain value atomically.

val atomicInteger = AtomicInteger(0)
atomicInteger.incrementAndGet()
atomicInteger.addAndGet(46)

Atomic Variables

Compare and Set/Swap (CAS)
It enables you to update the value of a variable only when it has a certain expected value. It's a way of managing concurrency, without traditional lock-based synchronization. For example, to atomically update a value only if it's currently equal to 10, you can use:

val atomicInteger = AtomicInteger(10)
val updated = atomicInteger.compareAndSet(10, 78)

getAndIncrement, getAndDecrement, getAndAdd
These are atomic operations that atomically increment, decrement, or add the value and return the old value.

val atomicInteger = AtomicInteger(0)
val oldValue = atomicInteger.getAndIncrement()

Atomic Variables

import java.util.concurrent.atomic.AtomicInteger
import kotlin.concurrent.thread

private val counter = AtomicInteger(10)

fun main() {
    thread(name = "thread-1") {
        while (counter.getAndDecrement() > 0) {
            println("Hello from '${Thread.currentThread().name}' thread. Counter = ${counter.get()}")
        }
    }
    thread(name = "thread-2") {
        while (counter.getAndDecrement() > 0) {
            println("Hello from '${Thread.currentThread().name}' thread. Counter = ${counter.get()}")
        }
    }
}

Thread Exercises

Exercise: Ticket Booking System

Use AtomicInteger to manage a shared ticket inventory across multiple booking agents without explicit locks.

Create a ticket booking system where:

  • Use an AtomicInteger to represent available tickets (e.g., 50 tickets)
  • Multiple booking agents (threads) try to sell tickets concurrently
  • Each agent uses decrementAndGet() to book a ticket atomically
  • Agents stop when no tickets remain (value reaches 0)
  • Print which agent sold which ticket number
  • Add Thread.sleep() to simulate booking processing time

Key point: AtomicInteger ensures no two agents sell the same ticket, without needing synchronized blocks!

Note: No @Synchronized needed! AtomicInteger guarantees atomicity.

Synchronizers

Synchronizers

  • Semaphore
    It controls the access to a shared resource through the use of a counter. If the counter is greater than zero, the access is allowed, otherwise the access is denied. This is often used to limit the number of threads that can access a particular resource.
  • CountDownLatch
    It allows one or more threads to wait until a set of operations being performed in other threads completes. Once the count is zero, all waiting threads proceed. It's a one-time phenomenon, once the latch reaches zero it cannot be reset.
  • CyclicBarrier
    It's used when multiple threads carry out different sub tasks and the output of these sub tasks need to be combined to form the final output. It's called cyclic because it can be reused after waiting threads are released.
  • Phaser
    It's more flexible than both CountDownLatch and CyclicBarrier. It's called Phaser because it phases all the threads into stages of execution.
  • Exchanger
    It's used to exchange data between two threads. It waits for both the threads to reach the exchange point. If the threads do not appear simultaneously to exchange their objects, they'll be paused until the arrival of the other thread.

Semaphore

import java.util.concurrent.Semaphore
import kotlin.concurrent.thread

private val semaphore = Semaphore(1)

fun main() {
    thread(name = "A") { execute(semaphore) }
    thread(name = "B") { execute(semaphore) }
}

fun execute(semaphore: Semaphore) {
    try {
        semaphore.acquire()
        println("Thread '${Thread.currentThread().name}' acquired the semaphore")
    } catch (e: InterruptedException) {
        Thread.currentThread().interrupt()
        throw RuntimeException(e)
    } finally {
        println("Thread '${Thread.currentThread().name}' is releasing the semaphore")
        semaphore.release()
    }
}
Thread 'A' acquired the semaphore
Thread 'A' is releasing the semaphore
Thread 'B' acquired the semaphore
Thread 'B' is releasing the semaphore

CountDownLatch

import java.util.concurrent.CountDownLatch
import kotlin.concurrent.thread

private val latch = CountDownLatch(3)

fun main() {
    thread(name = "WAITING") {
        println("Thread '${Thread.currentThread().name}' started")
        try {
            latch.await()
        } catch (e: InterruptedException) {
            Thread.currentThread().interrupt()
            throw RuntimeException(e)
        }
        println("Thread '${Thread.currentThread().name}' finished")
    }
    thread(name = "COUNTING") {
        println("Thread '${Thread.currentThread().name}' started")
        while (latch.count > 0) {
            println("Thread '${Thread.currentThread().name}' counting down ${latch.count}...")
            latch.countDown()
        }
        println("Thread '${Thread.currentThread().name}' finished")
    }
}
Thread 'COUNTING' started
Thread 'WAITING' started
Thread 'COUNTING' counting down 3...
Thread 'COUNTING' counting down 2...
Thread 'COUNTING' counting down 1...
Thread 'COUNTING' finished
Thread 'WAITING' finished

CyclicBarrier

import java.util.concurrent.CyclicBarrier
import kotlin.concurrent.thread

private var barrier = CyclicBarrier(3) { println("Barrier reached") }

fun main() {
    thread(name = "A") { execute(barrier) }
    thread(name = "B") { execute(barrier) }
    thread(name = "C") { execute(barrier) }
}

fun execute(barrier: CyclicBarrier) {
    try {
        println("Thread '${Thread.currentThread().name}' is waiting on the barrier")
        barrier.await()
        println("Thread '${Thread.currentThread().name}' has passed the barrier")
    } catch (e: Exception) {
        throw RuntimeException(e)
    }
}
Thread 'C' is waiting on the barrier
Thread 'B' is waiting on the barrier
Thread 'A' is waiting on the barrier
Barrier reached
Thread 'A' has passed the barrier
Thread 'C' has passed the barrier
Thread 'B' has passed the barrier

Phaser

import java.util.concurrent.Phaser
import kotlin.concurrent.thread

private val phaser = Phaser(2)

fun main() {
    thread(name = "PRE-PROCESSOR") { preProcessor(phaser) }
    thread(name = "POST-PROCESSOR") { postProcessor(phaser) }
}

fun postProcessor(phaser: Phaser) {
    println("Thread '${Thread.currentThread().name}' has arrived. Waiting for others...")
    phaser.arriveAndAwaitAdvance()
    println("Thread '${Thread.currentThread().name}' has finished.")
}

fun preProcessor(phaser: Phaser) {
    try {
        Thread.sleep(1000)
    } catch (e: InterruptedException) {
        Thread.currentThread().interrupt()
        throw RuntimeException(e)
    }
    println("Thread '${Thread.currentThread().name}' has arrived.")
    phaser.arriveAndDeregister()
    println("Thread '${Thread.currentThread().name}' has finished.")
}
Thread 'POST-PROCESSOR' has arrived. Waiting for others...
Thread 'PRE-PROCESSOR' has arrived.
Thread 'PRE-PROCESSOR' has finished.
Thread 'POST-PROCESSOR' has finished.

Exchanger

import java.util.concurrent.Exchanger
import kotlin.concurrent.thread

private val exchanger = Exchanger<String>()

fun main() {
    thread(name = "A") { exchange(exchanger) }
    thread(name = "B") { exchange(exchanger) }
}

fun exchange(exchanger: Exchanger<String>) {
    try {
        val message = exchanger.exchange("Hello from ${Thread.currentThread().name}")
        println("Thread '${Thread.currentThread().name}' received message: $message")
    } catch (e: InterruptedException) {
        Thread.currentThread().interrupt()
        throw RuntimeException(e)
    }
}
Thread 'A' received message: Hello from B
Thread 'B' received message: Hello from A

Concurrent Utilities

Future example

Again, Future is one of the objects you may encounter when working with Java libraries in Kotlin.

import java.util.concurrent.Executors
import java.util.concurrent.Future

fun main() {
    val messenger = Messenger()
    val message: Future<String> = messenger.receiveMessage()
    while (!message.isDone) {
        println("Waiting for message...")
        try {
            Thread.sleep(500)
        } catch (e: InterruptedException) {
            Thread.currentThread().interrupt()
            throw RuntimeException(e)
        }
    }
    try {
        println("Received message: ${message.get()}")
    } catch (e: Exception) {
        throw RuntimeException(e)
    }
}

class Messenger {
    private val executor = Executors.newSingleThreadExecutor()

    fun receiveMessage(): Future<String> {
        return executor.submit<String> {
            Thread.sleep(3000)
            "Hello from future!"
        }
    }
}
Waiting for message...
Waiting for message...
Waiting for message...
Waiting for message...
Waiting for message...
Waiting for message...
Received message: Hello from future!

Coroutines

Coroutines

Coroutines are a lightweight alternative to threads, used for asynchronous programming.

A coroutine is an instance of a suspendable computation. It allows asynchronous code execution, like threads, but coroutines are not bound to any particular thread. A coroutine may start executing in one thread, suspend its execution, and resume in another one. This makes coroutines more efficient than threads - by design, they are non-blocking and reuse threads.
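As a rough illustration of how lightweight coroutines are, the following sketch (assuming the kotlinx-coroutines-core dependency used throughout this lesson) launches 10,000 coroutines at once - the same number of platform threads would be far more expensive:

```kotlin
import kotlinx.coroutines.*

fun main() = runBlocking {
    // 10,000 coroutines share a small pool of threads;
    // each one suspends at delay() without blocking a thread
    val jobs = List(10_000) {
        launch {
            delay(100)
        }
    }
    jobs.forEach { it.join() }
    println("All ${jobs.size} coroutines completed")
}
```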

Coroutines run in the context of a CoroutineScope, which defines the lifecycle of the coroutine. When the scope is cancelled, all coroutines started in that scope are cancelled. A CoroutineScope can only finish when all of its inner coroutines have finished.

Coroutines are one of the main features of Kotlin, and while working with them is straightforward, I believe it is a topic for Kotlin Advanced course.

Coroutines Basics

There are several principles for working with coroutines.

Let's explain them with an example:

import kotlinx.coroutines.*

fun main() = runBlocking {
    launch {
        delay(1000)
        println("Kotlin Coroutines World!")
    }
    println("Hello")
}

Coroutines follow the principle of structured concurrency, which means that new coroutines can only be launched in a specific coroutine scope that delimits the lifetime of the coroutine.

  • It ensures that all coroutines are properly managed and do not leak outside of their scope.
  • It also ensures that any errors in the code are properly reported and are never lost.
suspend fun task() = coroutineScope {
    delay(1000)
    println("Kotlin Coroutines World!")
}

Structured Concurrency

Sequential execution of coroutines
import kotlinx.coroutines.*
import kotlin.system.measureTimeMillis

fun main() = runBlocking {
    val time = measureTimeMillis {
        val task1 = task1()
        val task2 = task2()
        println("Result: $task1$task2")
    }
    println("It took $time ms")
}

suspend fun task1(): String {
    delay(1000)
    return "Hello"
}

suspend fun task2(): String {
    delay(2000)
    return "Coroutines"
}

Async Coroutines

Concurrent execution of coroutines
import kotlinx.coroutines.*
import kotlin.system.measureTimeMillis

fun main() = runBlocking {
    val time = measureTimeMillis {
        val task1 = async { task1() }
        val task2 = async { task2() }
        println("Result: ${task1.await()}${task2.await()}")
    }
    println("It took $time ms")
}

suspend fun task1(): String {
    delay(1000)
    return "Hello"
}

suspend fun task2(): String {
    delay(2000)
    return "Coroutines"
}

Coroutine Dispatchers

Coroutines are dispatched onto different threads using Dispatchers.

The coroutine context includes a CoroutineDispatcher that determines what thread or threads the corresponding coroutine uses for its execution. The coroutine dispatcher can confine coroutine execution to a specific thread, dispatch it to a thread pool, or let it run unconfined.

import kotlinx.coroutines.*

fun main() {
    runBlocking {
        launch(Dispatchers.IO) {
            // I/O operations
            println("I/O work on thread: ${Thread.currentThread().name}")
        }
        launch(Dispatchers.Default) {
            // CPU-intensive work
            println("CPU work on thread: ${Thread.currentThread().name}")
        }
    }
}
  • Dispatchers.Main
    - for UI updates (Android main thread)
  • Dispatchers.IO
    - for I/O operations (file, network, database)
  • Dispatchers.Default
    - for CPU-intensive computations
  • Dispatchers.Unconfined
    - not confined to any specific thread
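The dispatchers listed above are often used together with withContext, which runs a block on another dispatcher and returns its result. A minimal sketch (the printed thread name is an implementation detail of the shared worker pool):

```kotlin
import kotlinx.coroutines.*

fun main() = runBlocking {
    // withContext suspends, runs the block on Dispatchers.IO,
    // and resumes with the block's result
    val data = withContext(Dispatchers.IO) {
        // simulate a blocking I/O call on the IO thread pool
        "loaded on ${Thread.currentThread().name}"
    }
    println(data)
}
```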

Coroutine Cancellation, Timeouts and Exceptions

Handling cancellation, timeouts, and exceptions properly is crucial for building reliable coroutine-based applications.
import kotlinx.coroutines.*

fun main() {
    runBlocking {
        try {
            withTimeout(1000) {
                repeat(1000) { i ->
                    println("I'm sleeping $i ...")
                    delay(500)
                }
            }
        } catch (e: TimeoutCancellationException) {
            println("Timed out!")
        }
    }
}

Coroutines in Kotlin are cooperative, meaning they need to actively check for cancellation and respond appropriately.

Coroutines can be cancelled by calling the cancel() function on a coroutine's Job.
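A minimal cancellation sketch: delay() is a suspension point, so the coroutine cooperatively notices the cancellation requested by cancelAndJoin():

```kotlin
import kotlinx.coroutines.*

fun main() = runBlocking {
    val job = launch {
        repeat(1000) { i ->
            println("Working $i ...")
            delay(100) // suspension point: checks for cancellation
        }
    }
    delay(350)
    job.cancelAndJoin() // cancel the job and wait for it to finish
    println("Job cancelled")
}
```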

Coroutine Exception Handling

Handling Exceptions in Coroutines
import kotlinx.coroutines.*

fun main() {
    runBlocking {
        val supervisor = SupervisorJob()
        val scope = CoroutineScope(coroutineContext + supervisor)
        val job1 = scope.launch {
            println("Job 1 started")
            delay(1000)
            throw Exception("Job 1 failed")
        }
        val job2 = scope.launch {
            println("Job 2 started")
            delay(2000)
            println("Job 2 completed")
        }
        job1.join()
        job2.join() // Job 2 is not affected by Job 1's failure
    }
}

Exceptions in coroutines are handled differently depending on the coroutine builder (launch, async, etc.):

  • launch
    - exceptions propagate immediately to the parent scope and can crash it unless handled
  • async
    - exceptions are stored and thrown when await() is called
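The async behaviour can be sketched as follows; supervisorScope is used here so that the child's failure is not propagated to the parent and really only surfaces at await():

```kotlin
import kotlinx.coroutines.*

fun main() = runBlocking {
    supervisorScope {
        // the exception is stored inside the Deferred...
        val deferred = async {
            delay(100)
            throw IllegalStateException("boom")
        }
        // ...and is only thrown here, where it can be caught
        try {
            deferred.await()
        } catch (e: IllegalStateException) {
            println("Caught: ${e.message}")
        }
    }
}
```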

Coroutine Best Practices

  1. Use runBlocking only in limited scopes (typically main functions or tests).
  2. Prefer CoroutineScope to manage coroutines in more complex applications.
  3. Use withContext(Dispatchers.IO) for blocking I/O operations to prevent UI blocking.
  4. Use structured concurrency principles to manage coroutine lifecycles (avoid orphaned coroutines).
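Practice 2 can be sketched as a component that owns its own scope and cancels it on shutdown (the Downloader class and its API are hypothetical, for illustration only):

```kotlin
import kotlinx.coroutines.*

// A component that owns a CoroutineScope: all coroutines it launches
// are tied to this scope and are cancelled together on shutdown
class Downloader {
    private val scope = CoroutineScope(SupervisorJob() + Dispatchers.Default)

    fun start(): Job = scope.launch {
        delay(100) // simulate some work
        println("work done")
    }

    fun shutdown() {
        scope.cancel() // cancels every coroutine started in this scope
    }
}

fun main() = runBlocking {
    val downloader = Downloader()
    downloader.start().join()
    downloader.shutdown()
}
```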

Practice

Practice: Coffee Shop

In this assignment, you will implement a simulation of a coffee shop using Kotlin coroutines.

The simulation consists of the following components:

Order Generation

  • Orders are randomly generated at regular intervals while the coffee shop is open
  • The shop only accepts orders if they can be completed before closing
  • When the shop closes, new orders are rejected

Order Processing

  • The coffee shop has multiple baristas who process orders in parallel
  • Each barista picks up one order at a time from a shared queue
  • The barista must use a shared coffee machine, which only one person can use at a time
  • Once an order is completed, the barista moves on to the next available order

Concurrency Requirements

  • Use coroutines for parallel processing
  • Use channels for order distribution among baristas
  • Use mutex for shared resource access (coffee machine)
  • Use AtomicInteger for thread-safe counters
  • Handle proper shutdown when the shop closes