Doctoral theses
Identifier: 000355149
Title: Distributed algorithms for support vector machine training in wireless sensor networks
Alternative Title: Κατανεμημένοι αλγόριθμοι εκμάθησης μηχανών εδραίων διανυσμάτων σε ασύρματα δίκτυα αισθητήρων
Author: Φλουρή, Καλλιρρόη Εμμανουήλ (Flouri, Kallirroi Emmanouil)
Thesis advisor: Τσακαλίδης, Παναγιώτης (Tsakalidis, Panagiotis)
Abstract: Advances in low-power electronic devices with integrated wireless communication capabilities and sensors have opened up an exciting new field in computer science. Wireless sensor networks (WSNs) can be built at relatively low cost and deployed in a wide variety of settings. The emergence of WSNs has generated significant interest in decentralized detection, estimation, and classification for monitoring, surveillance, location sensing, and distributed learning applications.
Processing sensor data locally requires considerably less energy than communicating it to a distant node, which creates an interesting communication-computation tradeoff. To reduce global communication requirements, key information must be extracted from the sensed signals in a distributed fashion and without loss of fidelity. In this dissertation, we focus on the distributed training of Support Vector Machine (SVM) classifiers, taking advantage of the sparse representation of their decision boundary, which is determined by only a small subset of the training samples, the so-called support vectors.
First, we present two incremental algorithms for distributed SVM training in which the classifier updates are diffused sequentially through the network. We show that after a single complete pass through all the clusters, a good approximation of the optimal separating hyperplane is achieved.
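The sequential pass can be sketched as follows, under the assumption (mine, for illustration) that each cluster trains on its own samples plus the support vectors forwarded by the previous cluster; the solver and support-vector heuristic are simplified stand-ins.

```python
import numpy as np

def train_svm(X, y, lam=0.01, epochs=300, seed=0):
    """Pegasos-style subgradient solver for a bias-free linear SVM (illustrative)."""
    rng = np.random.default_rng(seed)
    w, t = np.zeros(X.shape[1]), 0
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            t += 1
            eta = 1.0 / (lam * t)
            w *= 1.0 - eta * lam
            if y[i] * (X[i] @ w) < 1:
                w += eta * y[i] * X[i]
    return w

def support_vectors(X, y, w, slack=0.5):
    """Samples closest to the margin, approximating the support-vector set."""
    m = y * (X @ w)
    keep = m <= m.min() + slack
    return X[keep], y[keep]

def sequential_pass(clusters):
    """One pass through the clusters: each trains on its own samples plus
    the support vectors handed over by the previous cluster, then forwards
    its new support-vector set."""
    d = clusters[0][0].shape[1]
    Xc, yc = np.empty((0, d)), np.empty(0)
    for X, y in clusters:
        Xtr, ytr = np.vstack([Xc, X]), np.concatenate([yc, y])
        w = train_svm(Xtr, ytr)
        Xc, yc = support_vectors(Xtr, ytr, w)
    return w

# Three clusters, each holding a slice of the same two-class distribution.
rng = np.random.default_rng(2)
def cluster(n=20):
    Xp, Xn = rng.normal(2.0, 0.6, (n, 2)), rng.normal(-2.0, 0.6, (n, 2))
    return np.vstack([Xp, Xn]), np.concatenate([np.ones(n), -np.ones(n)])

clusters = [cluster() for _ in range(3)]
w = sequential_pass(clusters)

X_all = np.vstack([c[0] for c in clusters])
y_all = np.concatenate([c[1] for c in clusters])
acc = np.mean(np.sign(X_all @ w) == y_all)
print(f"accuracy after one pass: {acc:.2f}")
```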
Next, we present two gossip-based algorithms that are robust to unexpected node failures and to changes in the network topology. In the first scheme, each sensor updates its hyperplane at every iteration by combining its own support vectors with those communicated by its neighbors, resulting in an efficient, close-to-optimal distributed scheme. In the second approach, the information exchanged between sensors uniquely and completely describes the convex hulls of the two classes. This approach guarantees convergence to the optimal classifier, at the expense of the increased complexity of constructing the convex hulls.
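A hedged sketch of the first gossip scheme on a four-node ring (the topology, solver, and support-vector heuristic are my illustrative choices, not the dissertation's exact construction): each node repeatedly retrains on its own samples plus the support vectors last shared by its neighbors.

```python
import numpy as np

def train_svm(X, y, lam=0.01, epochs=300, seed=0):
    """Pegasos-style subgradient solver for a bias-free linear SVM (illustrative)."""
    rng = np.random.default_rng(seed)
    w, t = np.zeros(X.shape[1]), 0
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            t += 1
            eta = 1.0 / (lam * t)
            w *= 1.0 - eta * lam
            if y[i] * (X[i] @ w) < 1:
                w += eta * y[i] * X[i]
    return w

def support_vectors(X, y, w, slack=0.5):
    """Samples closest to the margin, approximating the support-vector set."""
    m = y * (X @ w)
    keep = m <= m.min() + slack
    return X[keep], y[keep]

def gossip_round(local, sv, neighbors):
    """Each node retrains on its own samples plus the support vectors last
    published by its neighbors, then publishes its new support-vector set."""
    ws, new_sv = {}, {}
    for i, (X, y) in enumerate(local):
        Xi = np.vstack([X] + [sv[j][0] for j in neighbors[i]])
        yi = np.concatenate([y] + [sv[j][1] for j in neighbors[i]])
        w = train_svm(Xi, yi)
        ws[i] = w
        new_sv[i] = support_vectors(Xi, yi, w)
    return ws, new_sv

rng = np.random.default_rng(3)
def node_data(n=15):
    return (np.vstack([rng.normal(2.0, 0.6, (n, 2)),
                       rng.normal(-2.0, 0.6, (n, 2))]),
            np.concatenate([np.ones(n), -np.ones(n)]))

local = [node_data() for _ in range(4)]
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}   # ring topology

# Initialize each node's shared set from a purely local SVM.
sv = {i: support_vectors(X, y, train_svm(X, y)) for i, (X, y) in enumerate(local)}

for _ in range(3):                                         # a few gossip rounds
    ws, sv = gossip_round(local, sv, neighbors)

X_all = np.vstack([d[0] for d in local])
y_all = np.concatenate([d[1] for d in local])
accs = [np.mean(np.sign(X_all @ ws[i]) == y_all) for i in range(4)]
print("per-node accuracy:", [f"{a:.2f}" for a in accs])
```

Because each round only depends on whichever neighbor sets are currently available, a failed node simply drops out of its neighbors' unions, which is the source of the robustness claimed above.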
Finally, by exploiting notions of convex optimization and duality theory, we derive a novel mathematical characterization of the sparse set of most important measurements that neighboring sensors should exchange in order to reach agreement on the optimal linear classifier. We propose a function that ranks the training vectors by their importance in the learning process. The amount of information exchanged is controlled by a user-defined threshold, chosen according to the desired tradeoff between classification accuracy and power consumption. We prove that a threshold value exists for which the proposed algorithm converges to the optimal classifier. We also examine the effect of the network topology on convergence and study the performance through simulation experiments.
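The ranking function itself is derived in the dissertation from duality theory; as a loudly hypothetical stand-in, the sketch below scores each training vector by its functional margin (smaller margin = more informative) and shows how a user-defined threshold trades the number of exchanged vectors against retained information.

```python
import numpy as np

def train_svm(X, y, lam=0.01, epochs=300, seed=0):
    """Pegasos-style subgradient solver for a bias-free linear SVM (illustrative)."""
    rng = np.random.default_rng(seed)
    w, t = np.zeros(X.shape[1]), 0
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            t += 1
            eta = 1.0 / (lam * t)
            w *= 1.0 - eta * lam
            if y[i] * (X[i] @ w) < 1:
                w += eta * y[i] * X[i]
    return w

def importance_score(X, y, w):
    """Hypothetical stand-in for the thesis's duality-based ranking: a smaller
    functional margin marks a sample as more likely to shape the classifier."""
    return y * (X @ w)

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(2.0, 0.6, (40, 2)), rng.normal(-2.0, 0.6, (40, 2))])
y = np.concatenate([np.ones(40), -np.ones(40)])

w = train_svm(X, y)
score = importance_score(X, y, w)

counts = []
for thresh in (1.1, 1.5, 2.0):          # user-defined threshold
    n_exchanged = int(np.sum(score <= thresh))
    counts.append(n_exchanged)
    print(f"threshold {thresh}: exchange {n_exchanged} of {len(y)} samples")
```

Tightening the threshold shrinks the exchanged set (saving transmission energy) at the risk of omitting a vector the optimal classifier depends on, which is exactly the accuracy-power tradeoff described above.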
Language: English
Issue date: 2009-11-05
Collection: School/Department--School of Sciences and Engineering--Department of Computer Science--Doctoral theses
Type of Work: Doctoral theses
Permanent Link: https://elocus.lib.uoc.gr//dlib/c/f/d/metadata-dlib-649495bc90986191e8a52355b5380cf0_1275557195.tkl