Abstract
The demand for storing data has been growing at a high rate. In addition,
the shift towards data-centric applications that will process all stored information
results in the need to improve the performance of the I/O path between application
memory and physical devices in future servers. Although traditionally this path
has been limited by device technology, today technologies such as solid-state disks
(SSDs) can be used to increase throughput and reduce latency. However, with the
advent of multicore CPUs and the increasing number of cores in servers, bottlenecks
have shifted from devices to the host processor. The system software that runs on modern
CPUs has not been designed for the levels of spatial parallelism that future servers
will exhibit in terms of storage devices, cores, memory, and related interconnects.
In addition, resource sharing between different workloads in multi-tenant setups
results in increased interference as the amount of physical resources managed by
the I/O path grows and applications become more I/O-intensive.
In this thesis we examine how partitioning the I/O path can address both
contention due to spatial parallelism and workload interference. We present
bladefs, a kernel-level file system that supports partitioning of the I/O path. bladefs
is a transparent, VFS-compliant file system that provides the minimum required
functionality to handle file I/O and execute real applications and workloads. It
relies on three underlying layers, a partitioned allocator, a partitioned cache, and
a partitioned journal, which provide complementary functionality. We present the design of
bladefs and the division of functionality across layers to build a partitioned I/O
path. Our main contribution in the design of bladefs is that, by enabling partitioning
of the I/O path, we achieve both reduced contention and isolation between
workloads. Eliminating contention via partitions allows performance to scale
as their number increases.
We evaluate our approach using real-life workloads, including OLAP and OLTP,
along with microbenchmarks. Our results show that our approach to partitioning
the I/O path can isolate workloads from interfering over host-level resources (cores,
storage devices, and memory) in the I/O path, eliminating performance
variations in multi-tenant workloads. In addition, our approach
scales as the number of partitions increases.