In-memory key-value cache systems are widely deployed in today's Internet services, accelerating queries by keeping the key-value objects most likely to be accessed in memory. Existing in-memory key-value cache systems rely on discrete GPUs to improve performance; however, the PCIe transfer overhead between the CPU and the GPU precludes fine-grained CPU-GPU cooperation, leaving the system's computing resources underutilized. This paper exploits the coupled CPU-GPU architecture of the APU, a new class of processors in which the CPU and the GPU share system memory, and proposes fine-grained task partitioning between the CPU and the GPU so that each can play to its computational strengths; on a processor of this architecture we implement, for the first time, an in-memory key-value cache system. The system explores fine-grained CPU-GPU cooperation tailored to the task characteristics of an in-memory key-value cache and resolves CPU-GPU data access conflicts under the shared-memory model. Experiments show that, under read-dominated workloads, the system outperforms existing in-memory key-value systems.
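The abstract does not describe the sharing mechanism in detail. As a minimal, hypothetical sketch of the kind of fine-grained CPU-GPU sharing an APU enables, the following C host program uses OpenCL 2.0 fine-grained SVM with atomics so that the CPU and the GPU operate on the same key buffer without any PCIe copies; the toy `probe` kernel, buffer sizes, and names are illustrative assumptions, not the paper's implementation.

```c
/* Hypothetical sketch (not the paper's code): on an APU, OpenCL 2.0
 * fine-grained SVM lets the CPU and the GPU touch the same key-value
 * buffer without explicit clEnqueueRead/WriteBuffer transfers. */
#include <CL/cl.h>
#include <stdio.h>

/* Toy GPU kernel: each work-item compares one cached key against the probe
 * key and, on a match, atomically publishes the slot index to the CPU. */
static const char *kSrc =
    "__kernel void probe(__global const ulong *keys, ulong probe,\n"
    "                    volatile __global atomic_int *hit) {\n"
    "    size_t i = get_global_id(0);\n"
    "    if (keys[i] == probe)\n"
    "        atomic_store_explicit(hit, (int)i, memory_order_relaxed,\n"
    "                              memory_scope_all_svm_devices);\n"
    "}\n";

int main(void) {
    cl_platform_id plat; cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);
    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueueWithProperties(ctx, dev, NULL, NULL);

    /* Fine-grained SVM with atomics: the CPU writes keys in place and the GPU
     * reads them directly; results come back through the same shared memory. */
    const size_t n = 1024;
    cl_ulong *keys = clSVMAlloc(ctx,
        CL_MEM_READ_WRITE | CL_MEM_SVM_FINE_GRAIN_BUFFER | CL_MEM_SVM_ATOMICS,
        n * sizeof(cl_ulong), 0);
    cl_int *hit = clSVMAlloc(ctx,
        CL_MEM_READ_WRITE | CL_MEM_SVM_FINE_GRAIN_BUFFER | CL_MEM_SVM_ATOMICS,
        sizeof(cl_int), 0);
    for (size_t i = 0; i < n; i++) keys[i] = (cl_ulong)i * 7;  /* CPU fills the index */
    *hit = -1;

    cl_program prog = clCreateProgramWithSource(ctx, 1, &kSrc, NULL, NULL);
    clBuildProgram(prog, 1, &dev, "-cl-std=CL2.0", NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "probe", NULL);

    cl_ulong probe = 49;  /* a GET request whose key lookup is offloaded to the GPU */
    clSetKernelArgSVMPointer(k, 0, keys);
    clSetKernelArg(k, 1, sizeof(probe), &probe);
    clSetKernelArgSVMPointer(k, 2, hit);
    clEnqueueNDRangeKernel(q, k, 1, NULL, &n, NULL, 0, NULL, NULL);
    clFinish(q);

    printf("probe %llu -> slot %d\n", (unsigned long long)probe, *hit);

    clSVMFree(ctx, keys); clSVMFree(ctx, hit);
    clReleaseKernel(k); clReleaseProgram(prog);
    clReleaseCommandQueue(q); clReleaseContext(ctx);
    return 0;
}
```

Under this assumed setup, CPU-side writers and GPU-side readers coordinate through platform atomics on the shared buffers, which is one plausible way to handle the CPU-GPU data access conflicts the abstract mentions; the paper's actual conflict-resolution scheme may differ.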