DATE: Nov 6, 2020 (Friday)
TIME: 12:00 pm EST
SPEAKER: Rathish Das
EVENT: PhD Thesis Defense
TITLE: Algorithmic Foundation of Parallel Paging and Scheduling under Memory Constraints

ABSTRACT:
Efficient management of memory shared by multiple processors is critical to achieving good performance in parallel computation. The contribution of this dissertation is two-fold: (1) we develop an algorithmic foundation for the automated management of shared memory in multilevel-memory systems, also known as the parallel paging problem, and (2) we give provably good algorithms that schedule space/memory in a parallel computation to achieve optimal or near-optimal span (parallel running time) in many parallel computation models.

The first problem we consider is parallel paging: p processors share a memory of size k, and the goal is to partition the memory among the processors over time so as to minimize the average completion time. We resolve this long-standing open problem by designing online algorithms with tight upper and lower bounds of \Theta(\log p) on the competitive ratio with O(1) resource augmentation. We further extend the parallel paging problem to a new kind of multilevel-memory system in which the memory levels differ in bandwidth. This kind of memory, called high-bandwidth memory (HBM), is common in new supercomputers. Systems equipped with HBM do not fit the classical memory-hierarchy models because of HBM's atypical characteristics, so natural and classical paging policies perform poorly on them. We present a simple but counterintuitive constant-competitive online algorithm for HBM management.

The second problem we consider is memory-constrained scheduling. In parallel computation, race conditions often lead to reduced parallelism. Locks prevent races but serialize the update operations. One can instead use reducers, which allow parallel race-free updates at the expense of some extra space. This raises the following question: given a fixed budget of extra memory/space for mitigating the cost of races in a parallel program, how should that space be used or scheduled to minimize the overall running time? We provide NP-hardness results and constant-factor single-criterion and bicriteria approximation algorithms for this problem. We also investigate space-parallelism tradeoffs for many important problems in the binary-forking model. We design efficient parallel algorithms in the binary-forking model for fundamental problems such as Strassen's matrix multiplication and Strassen-like algorithms, achieving near-optimal span and total work for a given amount of memory.
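
A toy illustration of the reducer idea mentioned in the abstract (race-free parallel updates bought with extra space): each worker accumulates into its own private view, and the views are combined once after the parallel phase. This sketch is not from the thesis; the worker count and summation workload are placeholders, and std::thread stands in for a real parallel runtime.

#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    const int num_workers = 4;                       // hypothetical worker count
    std::vector<long long> partial(num_workers, 0);  // one private view per worker: the "extra space"
    std::vector<std::thread> workers;

    for (int w = 0; w < num_workers; ++w) {
        workers.emplace_back([w, num_workers, &partial] {
            // Race-free without a lock: each worker writes only to its own slot.
            for (long long i = w; i < 1000000; i += num_workers) partial[w] += i;
        });
    }
    for (auto& t : workers) t.join();

    // Combine the private views serially, after the join.
    long long total = std::accumulate(partial.begin(), partial.end(), 0LL);
    std::cout << total << "\n";
}

The design choice is exactly the tradeoff the abstract describes: one extra memory slot per worker buys back the parallelism that a lock would have serialized away.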
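A minimal sketch of the binary-forking model referenced in the last paragraph, under assumed conventions: a task may fork two child tasks and must join them before returning. The toy reduction below has the same fork-join shape that Strassen-style algorithms use at much larger scale; the array size and grain size are arbitrary, and std::async stands in for a work-stealing scheduler.

#include <future>
#include <iostream>
#include <vector>

// Sums a[lo, hi) with O(n) work and O(log n) span by binary forking.
long long sum(const std::vector<long long>& a, size_t lo, size_t hi) {
    if (hi - lo <= (1 << 16)) {          // base case: small range, sum serially
        long long s = 0;
        for (size_t i = lo; i < hi; ++i) s += a[i];
        return s;
    }
    size_t mid = lo + (hi - lo) / 2;
    // Fork: left half in a new task, right half in the current task.
    auto left = std::async(std::launch::async, sum, std::cref(a), lo, mid);
    long long right = sum(a, mid, hi);
    return left.get() + right;           // join before returning
}

int main() {
    std::vector<long long> a(1 << 20, 1);
    std::cout << sum(a, 0, a.size()) << "\n";  // prints 1048576
}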