The Producer-Consumer pattern should help you out; your requirement matches it exactly. Java's java.util.concurrent package offers thread-safe collections (such as BlockingQueue). The producer creates the object and puts it on the BlockingQueue; the consumer then takes it off. Here is an example implementation.
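A minimal sketch of that implementation: a producer thread putting items onto a bounded ArrayBlockingQueue and a consumer thread taking them off. The ProducerConsumerDemo class name, the String payload, and the fixed item count are illustrative choices, not part of the pattern itself.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ProducerConsumerDemo {
    public static void main(String[] args) throws InterruptedException {
        // Bounded queue: put() blocks when full, take() blocks when empty.
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(10);

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 5; i++) {
                    String item = "item-" + i;
                    queue.put(item);            // blocks if the queue is full
                    System.out.println("Produced " + item);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < 5; i++) {
                    String item = queue.take(); // blocks until an item is available
                    System.out.println("Consumed " + item);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
        producer.join();
        consumer.join();
    }
}
```

Because the queue handles all of the blocking and hand-off internally, neither thread needs explicit locks or wait/notify logic.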
Can I safely use mutable objects in RDD.aggregate in PySpark?
There are several traditional ways around this issue:

- Deep copy - prevents mutations from affecting clients who are reading immutable objects; instead of copying for the client, you copy in order to update, and the client keeps its old reference.
- Custom iterator - you provide your own iterator / navigation interface, which is sensitive to a "version" field embedded in the data structure. Before visiting each element, it checks that the version has not changed since the iterator was created (the Java collections do this; a sketch follows this list).
- Strong synchronization - while a reader is reading, it holds a lock on the data structure, preventing updates. Generally a bad solution, but occasionally useful (included for completeness).
- Lazy copy - you construct an object that mostly references the original but is registered as a listener on it, so that when the original is mutated, the pre-mutation value is copied locally. This is essentially a lazy deep-copy strategy.
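A rough sketch of the version-field idea behind the custom-iterator approach. The VersionedList class and its modCount field here are hypothetical, loosely modeled on the fail-fast iterators in java.util collections:

```java
import java.util.ArrayList;
import java.util.ConcurrentModificationException;
import java.util.Iterator;
import java.util.List;

// Hypothetical container illustrating the "version field" idea: every
// mutation bumps modCount, and an iterator fails fast if it observes a
// version other than the one it was created with.
class VersionedList<T> implements Iterable<T> {
    private final List<T> data = new ArrayList<>();
    private int modCount = 0;   // the embedded "version" field

    public void add(T value) {
        data.add(value);
        modCount++;             // every structural change bumps the version
    }

    @Override
    public Iterator<T> iterator() {
        return new Iterator<T>() {
            private final int expectedModCount = modCount; // snapshot at creation
            private int index = 0;

            @Override
            public boolean hasNext() {
                return index < data.size();
            }

            @Override
            public T next() {
                if (modCount != expectedModCount) {
                    // The structure changed since this iterator was created.
                    throw new ConcurrentModificationException();
                }
                return data.get(index++);
            }
        };
    }
}
```

Note that this detects concurrent modification rather than preventing it; it turns a silent inconsistency into an explicit ConcurrentModificationException, which is the trade-off the Java collections make.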
What will happen in Rust if I create a mutable variable and a mutable reference to it, and change one of them in a separate thread?