Why the shorter the memory latency, the smaller the cache block

Tag : caching , By : user184406
Date : November 29 2020, 04:01 AM

That is a simple consequence of the finite speed of light: signals need time to travel, roughly 20 cm/ns in a copper wire. If the memory chips sit 10 cm away from the CPU, you can exchange a request and its ACK at a rate of about 1 GHz (0.5 ns to send the data from the CPU to memory and 0.5 ns for the ACK back).
If you put the memory modules nearer to the CPU, say only 5 cm away, you can shrink the cache by some margin, because the round trip is already twice as fast and the benefit of the cache is smaller.
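The arithmetic behind that answer can be checked directly. A minimal sketch (class and method names are illustrative; 20 cm/ns is the answer's approximate figure for signal propagation in copper):

```java
public class WireLatency {
    // approximate signal speed in a copper wire, as cited in the answer
    static final double CM_PER_NS = 20.0;

    // round-trip time in ns for a memory module `distanceCm` away
    static double roundTripNs(double distanceCm) {
        return 2 * distanceCm / CM_PER_NS;
    }

    // maximum request/ACK rate in GHz implied by that round trip
    static double maxRateGHz(double distanceCm) {
        return 1.0 / roundTripNs(distanceCm);
    }

    public static void main(String[] args) {
        System.out.println(roundTripNs(10)); // 10 cm away: 1.0 ns round trip
        System.out.println(maxRateGHz(10));  // -> 1 GHz
        System.out.println(maxRateGHz(5));   // 5 cm: twice as fast, 2 GHz
    }
}
```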


C++ , Latency in loading data Structure from memory to cache

Tag : cpp , By : Adam
Date : March 29 2020, 07:55 AM
What happened in between your calls? The most likely explanation is that the CPU's L1 and L2 caches were filled with other data while you did nothing for several seconds. When you access the memory locations again, the data has to be loaded from main memory, which is much slower. C++ has no GC or similar machinery, so there is nothing between you and the machine except the OS and the hardware. Try measuring again after lunch, once your code and data have been moved out to the page file: the first call will then be over 1000 times slower.
Yours, Alois Kraus
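The eviction effect described above can be sketched with a simple timing experiment. This is an illustrative sketch, not a rigorous benchmark: the first pass also pays JIT warm-up cost, and the array size is chosen only to exceed typical cache sizes:

```java
public class CacheWarmth {
    // sum the array once; a cold pass must pull every line from main memory,
    // later passes find much of the recently touched data already cached
    static long sum(int[] data) {
        long s = 0;
        for (int v : data) s += v;
        return s;
    }

    public static void main(String[] args) {
        int[] data = new int[1 << 22]; // 16 MiB of ints
        java.util.Arrays.fill(data, 1);

        long t0 = System.nanoTime();
        long cold = sum(data);   // cold caches (and cold JIT)
        long t1 = System.nanoTime();
        long warm = sum(data);   // caches now hold recently touched lines
        long t2 = System.nanoTime();

        System.out.printf("cold: %d us, warm: %d us (sums %d/%d)%n",
                (t1 - t0) / 1_000, (t2 - t1) / 1_000, cold, warm);
    }
}
```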

Are memory address separate from data in cache block?

Tag : caching , By : techthumb
Date : March 29 2020, 07:55 AM
As your picture shows, the data portion is 32 bits (as you said, each set contains only one word).
The tag is a required part of each set that lets the cache decide whether the requested address is present (a "hit"). Your picture shows a 27-bit tag.
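Given those numbers (32-bit address, one 4-byte word per set, 27-bit tag), the remaining bits split into a 3-bit set index and a 2-bit byte offset. Here is a sketch of how hardware or a simulator would slice an address; the class and method names are illustrative:

```java
public class AddressSplit {
    // geometry from the question: 32-bit addresses, 1 word (4 bytes) per set,
    // 27-bit tag -> 32 - 27 - 2 = 3 index bits, i.e. 8 sets
    static final int OFFSET_BITS = 2;
    static final int INDEX_BITS  = 3;

    static int byteOffset(int addr) { return addr & ((1 << OFFSET_BITS) - 1); }
    static int index(int addr)      { return (addr >>> OFFSET_BITS) & ((1 << INDEX_BITS) - 1); }
    static int tag(int addr)        { return addr >>> (OFFSET_BITS + INDEX_BITS); }

    public static void main(String[] args) {
        int addr = 0xDEADBEEF;
        System.out.printf("tag=%x index=%d offset=%d%n",
                tag(addr), index(addr), byteOffset(addr));
    }
}
```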

How to allocate a memory block and place it into Cache?

Tag : cpp , By : nd27182
Date : March 29 2020, 07:55 AM

which is optimal a bigger block cache size or a smaller one?

Tag : algorithm , By : Peter Leung
Date : March 29 2020, 07:55 AM

Refreshing cache without impacting latency to access the cache

Tag : java , By : pttr
Date : March 29 2020, 07:55 AM
The cleanest approach is a functional one: keep the state immutable (in this case a Set), and instead of adding and removing elements in that set, create an entirely new Set every time the contents change (Java 9 makes building immutable sets easy with Set.of).
That can be awkward or infeasible for legacy code, however. The alternative is to keep the cache in a volatile field that the get/contains method reads, and have the refresh method build a new instance privately and then assign it to that field.
import java.util.HashSet;
import java.util.Set;

public class Test {

    // volatile: readers always see the most recently published set
    private volatile Set<Integer> cache = new HashSet<>();

    public boolean contain(int num) {
        return cache.contains(num);
    }

    public void refresh() {
        // build the new contents privately, then publish in one volatile write
        Set<Integer> privateCache = new HashSet<>();
        // ... populate privateCache ...
        cache = privateCache;
    }
}
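To make the volatile-swap pattern concrete, here is a hypothetical usage sketch (the class name and set contents are illustrative). Readers calling contain() are never blocked; a single volatile write publishes the fully built replacement set, so each reader sees either the old set or the complete new one:

```java
import java.util.Set;

public class ImmutableCacheDemo {
    // starts empty; each refresh publishes a brand-new immutable set
    private volatile Set<Integer> cache = Set.of();

    public boolean contain(int num) {
        return cache.contains(num);
    }

    public void refresh(Set<Integer> fresh) {
        // Set.copyOf (Java 10+) takes an immutable snapshot; the volatile
        // write then publishes it atomically to all reader threads
        cache = Set.copyOf(fresh);
    }

    public static void main(String[] args) {
        ImmutableCacheDemo d = new ImmutableCacheDemo();
        d.refresh(Set.of(1, 2, 3));
        System.out.println(d.contain(2)); // true
        System.out.println(d.contain(9)); // false
    }
}
```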