Doctoral theses
Identifier |
000438370 |
Title |
Memory-mapped I/O for fast storage |
Alternative Title |
Πρόσβαση σε γρήγορες συσκευές αποθήκευσης μέσω απεικόνισης στη μνήμη |
Author |
Παπαγιάννης, Αναστάσιος Ελευθέριος |
Author |
Μπίλας, Άγγελος |
Reviewer |
Κατεβαίνης, Μανώλης |
Πρατικάκης, Πολύβιος |
Μαγκούτης, Κωνσταντίνος |
Νικολόπουλος, Δημήτριος |
Κοζυράκης, Χρήστος |
Αμβροσιάδης, Γεώργιος |
Αργυρός, Αντώνης |
Abstract |
Applications typically access storage devices using read/write system calls.
Additionally, they use a storage cache to reduce expensive accesses to the devices.
Fast storage devices provide high sequential throughput and low access latency.
Consequently, the cost of cache lookups and system calls in the I/O path becomes
significant at high I/O rates.
In this dissertation, we propose the use of memory-mapped I/O to manage storage
caches and remove software overheads in the case of hits. With memory-mapped I/O
(i.e. mmap), a user can map a file in the process virtual address space and access its
data using processor load/store instructions. In this case, the operating system is
responsible for moving data between DRAM and the storage devices,
creating/destroying memory mappings, and handling page evictions/writebacks. Hits
in memory-mapped I/O are handled entirely in hardware through the virtual memory
mappings.
First, we design and implement a persistent key-value store that uses
memory-mapped I/O to interact with storage devices, and we show the advantages
of memory-mapped I/O for hits compared to explicit lookups in the storage
cache. Then we show
that the Linux memory-mapped I/O path suffers from several issues in the case of
data-intensive applications over fast storage devices when the dataset does not fit in
memory. These include: (1) the lack of user control over evictions and the
resulting I/O, especially in the case of writes, (2) poor scalability as the
number of threads increases, and (3)
the high cost of page faults that happen in the common path for misses.
Next, we propose techniques to deal with these shortcomings. We propose a
mechanism that handles evictions in memory-mapped I/O based on application
needs. To demonstrate its applicability, we build an efficient memory-mapped
I/O persistent key-value store on top of this mechanism. Subsequently, we
remove all centralized contention points and provide scalable performance with
increasing I/O concurrency and number of threads. Finally, we separate protection
and common operations in the memory-mapped I/O path. We leverage CPU
virtualization extensions to reduce the overhead of page faults and maintain the
protection semantics of the OS.
We evaluate the proposed extensions mainly using persistent key-value stores,
which are a central component of many analytics processing frameworks and data serving
systems. We show significant benefits in terms of CPU consumption, performance
(throughput and average latency), and predictability (tail latency).
|
Language |
English |
Subject |
Key-Value Store |
mmap |
Memory mapping of devices |
High-speed storage devices |
Key-value pair storage system |
Issue date |
2021-03-26 |
Collection |
School/Department--School of Sciences and Engineering--Department of Computer Science--Doctoral theses |
Type of Work--Doctoral theses |
Permanent Link |
https://elocus.lib.uoc.gr//dlib/d/a/d/metadata-dlib-1615972815-306070-18599.tkl |