I am a software engineer. I earned my B.S. in Computer Science from Caltech in December 2010.
Towards an Adaptable Systems Architecture for Memory Tiering at Warehouse-Scale, Padmapriya Duraisamy, Wei Xu, Scott Hare, Ravi Rajwar, David Culler, Zhiyi Xu, Jianing Fan, Chris Kennelly, Bill McCloskey, Danijela Mijailovic, Brian Morris, Chiranjit Mukherjee, Jingliang Ren, Greg Thelen, Paul Turner, Carlos Villavieja, Parthasarathy Ranganathan, Amin Vahdat, International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS 23), March 2023 (to appear).
Carbink: Fault-Tolerant Far Memory, Yang Zhou, Hassan Wassel, Sihang Liu, Jiaqi Gao, James Mickens, Minlan Yu, Chris Kennelly, Paul Turner, David Culler, Hank Levy, Amin Vahdat, 16th USENIX Symposium on Operating Systems Design and Implementation (OSDI 22), July 2022. Paper
A Hardware Accelerator for Protocol Buffers, Sagar Karandikar, Chris Leary, Chris Kennelly, Jerry Zhao, Dinesh Parimi, Borivoje Nikolic, Krste Asanovic, Parthasarathy Ranganathan, 54th IEEE/ACM International Symposium on Microarchitecture (MICRO 2021), October 2021. Paper
Adaptive Huge-Page Subrelease for Non-moving Memory Allocators in Warehouse-Scale Computers, Martin Maas, Chris Kennelly, Khanh Nguyen, Darryl Gove, Kathryn S. McKinley, Paul Turner, International Symposium on Memory Management (ISMM '21), June 2021. Paper
automemcpy: A framework for automatic generation of fundamental memory operations, Guillaume Chatelet, Chris Kennelly, Sam Xi, Ondrej Sykora, Clement Courbet, David Li, Bruno De Backer, International Symposium on Memory Management (ISMM '21), June 2021. Paper
Beyond malloc efficiency to fleet efficiency: a hugepage-aware memory allocator, Andrew Hamilton Hunter, Chris Kennelly, Darryl Gove, Parthasarathy Ranganathan, Paul Jack Turner, Tipp James Moseley, 15th USENIX Symposium on Operating Systems Design and Implementation (OSDI 21), July 2021. Paper
From time to time, I've worked on various independent projects.
Panoptes provides on-the-fly instrumentation of pre-compiled programs that use NVIDIA's CUDA libraries to interact with GPUs. This instrumentation enables Valgrind-like memory error detection on the GPU at native levels of parallelism.
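As an illustrative sketch (not taken from Panoptes itself), the following hypothetical CUDA program contains the kind of out-of-bounds write that native execution silently tolerates but that instrumented, Valgrind-style execution can flag per thread:

```cuda
#include <cuda_runtime.h>

// Hypothetical off-by-one kernel: the bounds check uses <= instead of <,
// so the thread with i == n writes one element past the end of the buffer.
__global__ void scale(float *data, int n, float factor) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i <= n) {            // BUG: should be i < n
        data[i] *= factor;
    }
}

int main() {
    const int n = 255;       // buffer holds 255 floats
    float *d_data = nullptr;
    cudaMalloc(&d_data, n * sizeof(float));

    // A 256-thread block covers indices 0..255; thread 255 performs an
    // out-of-bounds write that a memory-checking instrumentation layer
    // would report, while an uninstrumented run typically proceeds silently.
    scale<<<1, 256>>>(d_data, n, 2.0f);
    cudaDeviceSynchronize();

    cudaFree(d_data);
    return 0;
}
```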
The source code is available on GitHub.
This project led to a talk at NVIDIA's GTC 2012 conference, Panoptes: A Binary Translation Framework for CUDA.
Other small projects are available on my GitHub account.
pub   4096R/74353699 2009-10-20 [expires: 2023-10-19]
      Key fingerprint = 35BF 0EA4 A4EF 8368 0442 FF29 4FC6 2B84 7435 3699