PhD Student Junyao Yang, Computer Science, to Present Dissertation Proposal October 13

PhD student Junyao Yang, Computer Science, will present their PhD dissertation proposal on Friday, October 13, 2023, from 8-9:30 am in Fisher 131 and via Zoom online meeting. Yang’s PhD advisor is Professor Zhenlin Wang, Computer Science.


Proposal Title

Design and Modeling of Sampling-Based Caching Policies

Proposal Abstract

As modern web services continue to grow in size and complexity, key-value caches such as Redis and Memcached have become crucial components for ensuring low-latency, high-throughput service. To optimize cache performance, a multitude of works on caching optimization have emerged in the literature, spanning caching policy design and cache performance modeling.

While the significance of caching remains indisputable, the choice of an ideal caching strategy is far from straightforward. Recently, the random sampling-based replacement mechanism, as opposed to replacement relying on rigid data structures, has gained popularity due to its light weight and flexibility. A sampling-based replacement policy randomly samples K cached objects and evicts objects based on their priority scores at replacement time, eliminating the need for additional data structures to maintain object order. The sampling-based caching mechanism makes it easy for a caching system to support a wide set of caching strategies. For instance, Redis uses random sampling to approximate LRU (least recently used), LFU (least frequently used), and TTL (time to live) policies without relying on ordered data structures.
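To illustrate the mechanism described above, the following is a minimal sketch (not Redis's actual implementation) of a sampling-based LRU cache: on eviction, it samples up to K cached keys at random and evicts the one with the oldest last-access time, with no global ordering maintained. The class name, logical clock, and parameter names are illustrative assumptions.

```python
import random


class SamplingLRUCache:
    """Illustrative sketch of sampling-based LRU eviction.

    On eviction, sample up to K cached keys and evict the least recently
    used among the sample -- no ordered data structure is maintained.
    """

    def __init__(self, capacity, sample_size=5):
        self.capacity = capacity        # maximum number of cached objects
        self.sample_size = sample_size  # K: keys sampled per eviction
        self.data = {}                  # key -> value
        self.last_access = {}           # key -> last-access "time" (priority score)
        self.clock = 0                  # logical clock stands in for wall time

    def get(self, key):
        if key in self.data:
            self.clock += 1
            self.last_access[key] = self.clock
            return self.data[key]
        return None

    def put(self, key, value):
        if key not in self.data and len(self.data) >= self.capacity:
            self._evict()
        self.clock += 1
        self.data[key] = value
        self.last_access[key] = self.clock

    def _evict(self):
        # Randomly sample up to K keys; evict the sampled key with the
        # lowest priority score (oldest last access).
        sample = random.sample(list(self.data), min(self.sample_size, len(self.data)))
        victim = min(sample, key=lambda k: self.last_access[k])
        del self.data[victim]
        del self.last_access[victim]
```

Swapping the priority score (e.g., an access counter for LFU, or an expiry timestamp for TTL) changes the approximated policy without touching the eviction machinery, which is what makes the sampling approach flexible.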

Traditional cache replacement models predominantly focus on modeling the exact LRU cache (rather than sampling-based LRU), posing challenges when it comes to accurately monitoring the performance of caching systems configured with a random sampling-based replacement policy, especially for non-LRU policies. In our preliminary work, we designed an efficient cache modeling technique, namely KRR, which can accurately model a random sampling-based LRU cache under an arbitrary sampling size K. We also conducted a thorough analysis of sampling-based LRU's performance on both Redis and Memcached. On Redis, we explored the impact of the sampling size K on the cache's hit ratio, and further proposed a new replacement policy called DLRU (Dynamic LRU), which dynamically configures the sampling size K for Redis to achieve the best cache hit ratio. On Memcached, we explored the performance difference between Memcached's default multi-queue LRU and a sampling-based LRU implementation. In our evaluation, the lock-free nature of sampling-based LRU demonstrated near-linear scalability in Memcached's multi-threaded environment.

Our preliminary work focuses predominantly on the sampling-based LRU policy; this proposal extends it to non-LRU policies. We introduce two major contributions. First, we propose a lightweight sampling-based cache replacement policy based on Redis's LFU implementation, which demonstrates competitive cache hit performance compared to its peers. Second, we provide a novel technique that efficiently models cache hit performance beyond the scope of traditional LRU caches.