I hope this is helpful for you. Try adding GC.WaitForPendingFinalizers(); after your call to GC.Collect(); I think that will get you what you are after. Complete source below:
    static void Main(string[] args)
    {
        for (int i = 0; i < 1000000; i++)
        {
            Thread CE = new Thread(SendCEcho);
            CE.Priority = ThreadPriority.Normal;
            CE.IsBackground = true;
            CE.Start();  // the thread must actually be started
            CE = null;
        }
        GC.Collect();
        GC.WaitForPendingFinalizers();
    }
    public static void SendCEcho()
    {
        int Counter = 0;
        for (int i = 0; i < 5; i++)
            Counter++;
    }
I think the explanation is the following: there is no "issue" here. This is all normal, expected behavior. If this is causing you some kind of problem, you haven't explained what it is. There's no reason to return virtual memory to the operating system, because virtual memory isn't a scarce resource; and there's no reason to return physical memory to the operating system, because the operating system will take it back if it has a better use for it anyway.
This should still fix the issue: substitute a ThreadPoolExecutor for your Executor so that you have control over the size of the pool. Since ThreadPoolExecutor is basically an Executor with its tuning methods exposed, it may just be a case where the default maximum pool size is set very high. See the official documentation for details; the relevant methods are:
    setCorePoolSize(int corePoolSize)
    //Sets the core number of threads.

    setKeepAliveTime(long time, TimeUnit unit)
    //Sets the time limit for which threads may remain idle before being terminated.

    setMaximumPoolSize(int maximumPoolSize)
    //Sets the maximum allowed number of threads.
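For illustration, here is a minimal sketch of configuring a bounded pool up front through the ThreadPoolExecutor constructor instead of calling the setters afterwards. The class name BoundedPoolDemo and the sizes (4 threads, 30 second keep-alive) are made-up values for the example, not anything from your code:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedPoolDemo {
    // A fixed pool of at most 4 threads; idle threads are reclaimed after 30 seconds.
    static ThreadPoolExecutor newBoundedPool() {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                4, 4, 30L, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>());
        pool.allowCoreThreadTimeOut(true); // let even core threads terminate when idle
        return pool;
    }

    public static void main(String[] args) throws InterruptedException {
        ThreadPoolExecutor pool = newBoundedPool();
        for (int i = 0; i < 100; i++) {
            pool.execute(() -> { /* simulated short task */ });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        // getLargestPoolSize() reports the most threads that were ever alive at once.
        System.out.println("largest pool size: " + pool.getLargestPoolSize());
    }
}
```

With an unbounded work queue like LinkedBlockingQueue, the pool never grows past the core size, so core size is the number that actually caps your thread count here.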
This will help you: the leak you see occurs because the terminating thread decrements the mutex-protected thread counter, then pauses for a second before the thread actually terminates. The main execution thread immediately sees that the thread counter has reached 0, and terminates before the detached threads have actually exited. Each running thread, even a detached thread, consumes a little bit of internally allocated memory, which does not get released until the thread actually terminates. That is the leak you see: it comes from execution threads that did not terminate before the main execution thread stopped.
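The original scenario is pthreads, but the same race can be sketched in Java, with daemon threads standing in for detached threads (EarlyExitDemo and raceObserved are hypothetical names for this sketch): the counter reaches zero while the threads, and their internal bookkeeping, are still alive.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class EarlyExitDemo {
    // Returns true if some worker thread was still alive after the shared
    // counter had already reached zero, i.e. the race described above.
    static boolean raceObserved() {
        AtomicInteger running = new AtomicInteger(3);
        Thread[] workers = new Thread[3];
        for (int i = 0; i < 3; i++) {
            workers[i] = new Thread(() -> {
                running.decrementAndGet();               // counter says "done"...
                try { Thread.sleep(1000); } catch (InterruptedException e) { }
                // ...but the thread itself lives on for another second.
            });
            workers[i].setDaemon(true);
            workers[i].start();
        }
        try {
            while (running.get() != 0) { Thread.sleep(1); } // main sees 0 almost at once
        } catch (InterruptedException e) { }
        boolean anyAlive = false;
        for (Thread t : workers) {
            anyAlive |= t.isAlive();
        }
        return anyAlive;
    }

    public static void main(String[] args) {
        System.out.println("worker still alive after counter hit 0: " + raceObserved());
    }
}
```

The fix, in either language, is to join each thread (pthread_join / Thread.join) instead of trusting the counter, so the main thread waits for the threads themselves rather than for a flag they set before dying.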
This will help you. After the n'th thread arrives, all n will immediately be allowed to cross the barrier and contend for the critical section. The (n+1)'th through (2n-1)'th threads will again wait until the 2n'th thread arrives at the barrier; once it arrives, all of the (n+1)'th through 2n'th will cross the barrier together and contend for the critical section.

Instead, you can have an AtomicInteger initialized to 0 and increment it every time just before your critical section; also check whether its value has reached n, and if so, block/exit/return all further threads. In fact, incrementing the AtomicInteger and checking whether it has reached n is by itself sufficient for n threads to be allowed and the rest rejected. What a CyclicBarrier would do here, if used, is cause the first n threads to contend for the critical section together. (If only one thread should be executing the code portion, call it a critical section; otherwise call it an "n threads allowed region".)

It's like people waiting at a dinner table who are not allowed to eat until there are at least n people; once there are n people, all of them are allowed to jump on the dinner together :)
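A minimal sketch of the AtomicInteger approach (the FirstNGate class and tryEnter method are my own names for this example, not from any library): exactly the first n callers get in, and every later caller is rejected, with no barrier involved.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class FirstNGate {
    private final AtomicInteger count = new AtomicInteger(0);
    private final int n;

    public FirstNGate(int n) {
        this.n = n;
    }

    // Returns true for the first n callers, false for everyone after.
    public boolean tryEnter() {
        // incrementAndGet is atomic, so exactly n callers can observe a value <= n,
        // even when many threads call this concurrently.
        return count.incrementAndGet() <= n;
    }
}
```

Each thread would call tryEnter() once before the guarded region and simply return if it gets false; no thread ever waits for the others, which is exactly the difference from CyclicBarrier.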