I'm sorry, I can't help you with the fragmentation issue, so I'm only going to address your second question. Vista introduced ASLR (address space layout randomization), which changes the way DLLs are loaded. For more info see this wiki entry, and for a more specific discussion this post may be useful.
You know, I somewhat doubt the memory profiler here. The memory management system in .NET actually tries to defragment the heap for you by moving memory around (that's why you need to pin memory before sharing it with an external DLL). Large allocations held over long periods of time are prone to more fragmentation, while small, short-lived (ephemeral) memory requests are unlikely to cause fragmentation in .NET.
.NET Memory issues loading ~40 images, memory not reclaimed, potentially due to LOH fragmentation
This blog post appears to describe what you are seeing. The proposed solution was to create an implementation of Stream that wraps another stream; the Dispose method of this wrapper class releases the wrapped stream so that it can be garbage collected. Once the BitmapImage is initialised with the wrapper stream, the wrapper can be disposed, releasing the underlying stream and allowing the large byte array itself to be freed. Hope that helps.
Well, you can't do it using the "top" command. The way to detect memory leaks is to use special debugging tools called memory debuggers; one example is Valgrind, but there are many of them. Another consideration is what language the program is written in. If it is a modern language with a garbage collector, classic memory leaks are much less likely (assuming the language interpreter/compiler is not buggy), though you can still leak memory by holding on to references you no longer need.
dealing with memory fragmentation for a simulation of dynamic memory allocation
The best approach depends on the modus operandi of your program (the user of your memory manager). If the usage pattern is to allocate many small fragments and free them frequently, you don't need to be overly aggressive with defragmentation; in that case the rare large-block allocations will pay for the defragmentation operation. Similarly, if large-block allocations are frequent, it might make sense to defragment more often. But the best strategy (assuming you still want to roll your own) is to program it in a general, tunable way and then measure the performance impact (in defragmentation operations or otherwise) on real program runs.