Equipe BD
Laboratoire d'InfoRmatique en Images et Systèmes d'information
UMR 5205 CNRS/INSA de Lyon/Université Claude Bernard Lyon 1/Université Lumière Lyon 2/Ecole Centrale de Lyon


Managing Very Large Data Sets in a Cloudy World

Who:
Jorge Quiané
When:
Friday, November 18, 2011 - 14:00 to 16:00
Where:
Blaise Pascal - Amphi FC

Nowadays, many enterprises and organizations are faced with large volumes of data that have to be analyzed on a daily basis. In particular, scientific datasets are growing at unprecedented rates and are likely to continue growing to the order of exabytes. These data management needs require applications to run over a large number of computing nodes. However, database management systems (DBMSs) have proven inefficient at dealing with very large datasets, as well as at scaling out to a large number of computing nodes. In this context, MapReduce and Cloud computing are two alternative technologies that respond to this challenge. While MapReduce allows enterprises, organizations, and researchers to easily process very large volumes of data, the Cloud provides the computing infrastructure required to scale applications out to a large number of computing nodes. The beauty of these approaches is their ease-of-use and almost-free administration cost. However, this simplicity comes at a price: the performance of MapReduce applications in the Cloud often does not match that of a well-configured parallel DBMS. In this talk, we present some of the main features that allow DBMSs to achieve orders-of-magnitude better performance than MapReduce applications. Then, we analyze how our Hadoop++ project allows MapReduce applications to match DBMS performance in the Cloud. We also discuss the design choices we made in the Hadoop++ project in order to preserve the ease-of-use and almost-free administration cost of MapReduce applications in the Cloud. Finally, we conclude this talk by discussing some of the challenges the Cloud imposes on efficient data management.
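
For readers unfamiliar with the programming model the talk builds on, the sketch below is the standard Hadoop word-count job. It illustrates the ease-of-use the abstract refers to: the programmer writes only a map and a reduce function, and the framework handles partitioning, scheduling, and fault tolerance across the cluster. This is a generic illustrative example, not code from the Hadoop++ project discussed in the talk; class names such as TokenizerMapper and IntSumReducer are conventional choices.

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Map phase: emit (word, 1) for every token in the input split.
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

  // Reduce phase: sum the counts collected for each word.
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    private IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);  // local pre-aggregation on each mapper
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

The entire distributed execution is driven by these two functions; there is no schema definition, index creation, or tuning step, which is precisely the administration cost that a parallel DBMS adds and that the talk argues Hadoop++ avoids while recovering DBMS-like performance.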