To get a better idea of what's going on, let's briefly compare the two-way associative cache to both the direct-mapped and the four-way cache. For the sake of comparison, let's assume that the cache size and the memory size both stay constant. And as you read the comparison, keep in mind that since each increase in the level of associativity (i.e. from two-way to four-way, or from four-way to eight-way) also increases the number of tags that must be checked in order to locate a specific block, an increase in associativity also means an increase in the cache's latency.

With the two-way cache, the number of potential collisions (and hence the miss rate) is reduced versus the direct-mapped scheme. However, the number of tags that must be searched for each fetch is twice as high. Depending on the relative sizes of the cache and main memory, this may or may not increase the cache's overall latency to the point that the decreased conflict miss rate is worth it.

Though a two-way cache's latency is less than that of the four-way scheme, its number of potential collisions (and hence its miss rate) is higher than that of the four-way cache. Just as with the preceding comparison, how a two-way associative cache compares to a four-way associative cache depends on just how much latency the increase in associativity ends up costing versus the decrease in conflict miss rate.

In general, it turns out that when you factor in current technological conditions (i.e. the speed of tag RAM, the range of sizes that most caches fall into, etc.), some level of associativity less than or equal to eight-way turns out to be optimal for most caches. Any more than eight-way associativity, and the cache's latency is increased so much that the decrease in miss rate isn't worth it. Any less than two-way associativity, and the number of collisions often increases the miss rate to the point that the decrease in latency isn't worth it. There are some direct-mapped caches out there, though.

Before I conclude the discussion of associativity, there are two minor bits of information I should include for the sake of completeness. First, though you've probably already figured it out, a direct-mapped cache is simply a one-way associative cache, and a fully associative cache is an n-way associative cache where n is equal to the total number of blocks in the cache.

The second thing you should know is the formula for computing the set in which an arbitrary block of memory should be stored in an n-way associative cache. From page 377 of Hennessy and Patterson, the cache placement formula is as follows:

(Block address) MOD (Number of sets in cache)

I recommend trying this formula out on some of the examples above. It seems like it might be a boring and pointless exercise, but if you take 5 minutes and place a few blocks using this simple formula in conjunction with the preceding diagrams, everything I've said in this section will really fall into place for you in a "big picture" kind of way. And it's actually more fun to do than it probably sounds.
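To make the formula concrete, here's a minimal C sketch that computes the target set for one block address at several levels of associativity. The 32-block cache and the block address 45 are made-up illustrative numbers, not values taken from the diagrams above.

```c
#include <stdio.h>

/* Cache placement formula from Hennessy and Patterson:
 *     set = (block address) MOD (number of sets in cache)
 * The 32-block cache geometry below is an illustrative assumption. */
int main(void)
{
    const unsigned cache_blocks = 32;        /* total blocks in the cache */
    const unsigned ways[] = { 1, 2, 4, 32 }; /* 1-way = direct-mapped, 32-way = fully associative */
    const unsigned block_addr = 45;          /* arbitrary memory block address */

    for (unsigned i = 0; i < sizeof ways / sizeof ways[0]; i++) {
        unsigned sets = cache_blocks / ways[i]; /* sets = blocks / associativity */
        printf("%2u-way: block %u -> set %u of %u\n",
               ways[i], block_addr, block_addr % sets, sets);
    }
    return 0;
}
```

Note how the one-way (direct-mapped) case maps the block to exactly one of 32 sets, while in the fully associative case there's only one set, set 0, so the formula places every block there.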
## Temporal and Spatial Locality Revisited: Replacement/Eviction Policies and Block Sizes

### Replacement/Eviction policies

Caches can increase the amount of benefit they derive from temporal locality by implementing an intelligent replacement policy (also called, conversely, an eviction policy). A replacement policy dictates which of the blocks currently in the cache will be replaced by any new block that gets fetched in. (Or, another way of putting it would be that an eviction policy dictates which of the blocks currently in the cache will be evicted in order to make room for any new block that is fetched in.)

One way to do things would be to pick a block at random to be replaced. Other alternatives would be FIFO (First In, First Out) or LIFO (Last In, First Out), or some other such variation. However, none of these policies takes into account the fact that any block that was recently used is likely to be used again soon. With these algorithms, we wind up evicting blocks that will be used again shortly, thereby increasing the cache miss rate and eating up valuable memory bus bandwidth with a bunch of unnecessary fetches.

The optimal replacement policy that doesn't involve predicting the future would be to evict the block that has gone the longest period of time without being used, or the Least Recently Used (LRU) block. The logic here is that if a block hasn't been used in a while, it's less likely to be part of the current working set than a block that was more recently used.

An LRU algorithm, though ideal, isn't quite feasible to implement in real life.
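In software, though, the bookkeeping is easy to sketch. Here's a toy C model of a fully associative, four-block cache implementing true LRU with per-slot timestamps; everything in it (the four-block size, the counter, the access trace) is an illustrative assumption, and real hardware approximates LRU rather than tracking exact ages like this.

```c
#include <stdio.h>

/* Toy model of a fully associative, four-block cache with true LRU
 * replacement. The geometry and access trace are illustrative
 * assumptions; real hardware approximates LRU instead of keeping
 * exact per-slot timestamps like this. */
#define NUM_BLOCKS 4

static long tag[NUM_BLOCKS];       /* which memory block each slot holds (-1 = empty) */
static long last_used[NUM_BLOCKS]; /* "timestamp" of each slot's most recent access */
static long now = 0;               /* global access counter */

static void access_block(long addr)
{
    /* Hit: the block is already cached, so just refresh its timestamp. */
    for (int i = 0; i < NUM_BLOCKS; i++) {
        if (tag[i] == addr) {
            last_used[i] = ++now;
            printf("hit  %ld (slot %d)\n", addr, i);
            return;
        }
    }
    /* Miss: evict the slot that has gone the longest without being used. */
    int lru = 0;
    for (int i = 1; i < NUM_BLOCKS; i++)
        if (last_used[i] < last_used[lru])
            lru = i;
    printf("miss %ld (evict %ld from slot %d)\n", addr, tag[lru], lru);
    tag[lru] = addr;
    last_used[lru] = ++now;
}

int main(void)
{
    for (int i = 0; i < NUM_BLOCKS; i++)
        tag[i] = -1;                        /* start with an empty cache */

    long trace[] = { 1, 2, 3, 4, 1, 5, 2 }; /* block 2 is LRU when 5 arrives */
    for (int i = 0; i < 7; i++)
        access_block(trace[i]);
    return 0;
}
```

In this trace, re-touching block 1 just before block 5 arrives refreshes its timestamp, so the cache evicts block 2, the least recently used line, instead. A FIFO policy would have thrown out block 1, which the program had just used, which is exactly the kind of mistake described above.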