Data Science or not Data Science?
Current version as of 31 May 2017 at 08:21

Welcome to LIPN Wiki on Big Data and Cloud Computing

With more and more data produced every day, we need to pay special attention to the technologies required to analyze large amounts of data. Big Data is often characterized by the four Vs, Volume, Variety, Velocity and Veracity, which constitute challenges for the required tools.

Machine learning extracts knowledge from data. In short, it is a family of algorithms that transform data into a model or description in order to predict or categorize data. In this field we also use analytics tools, which present information in a more readable way, as in the Square Predict project. Other projects related to big data and cloud computing are the Wendelin and Resilience projects. This research has been supported by the French foundation FSN under the PIA "Big Data - Investissements d'Avenir" grant.

This wiki documents our experience with the Grid5000, Teralab and CIRRUS testbeds for studying the Software, Platform, Infrastructure and Network layers that push the Data Science field forward, following an experimental scientific method. The 'scientific problem' must not be confused with the 'systems scientific problem' that is solved in order to serve it.

General discussion on Systems for Big-Data

  Infrastructure, programming models, frameworks
  Tutorial on moving data around Grid'5000
  Clustering geolocated data using Spark and DBSCAN (illustrative example)
  Spark and Scikit-Learn (illustrative example)
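The DBSCAN tutorial linked above covers the distributed Spark version; as a minimal, illustrative sketch of the algorithm itself, here is a toy sequential DBSCAN over (lat, lon) points in plain Python, using the haversine distance. The O(n²) neighborhood scan and the sample coordinates are assumptions for illustration, not the tutorial's actual code:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(p, q):
    """Great-circle distance in km between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (*p, *q))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def dbscan(points, eps_km, min_pts):
    """Toy DBSCAN: returns one label per point (cluster id, or -1 for noise)."""
    labels = [None] * len(points)
    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        neighbors = [j for j in range(len(points))
                     if haversine_km(points[i], points[j]) <= eps_km]
        if len(neighbors) < min_pts:
            labels[i] = -1            # noise (may be relabeled as a border point later)
            continue
        cluster += 1                  # i is a core point: start a new cluster
        labels[i] = cluster
        queue = [j for j in neighbors if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster   # noise becomes a border point, no expansion
            if labels[j] is not None:
                continue
            labels[j] = cluster
            nbrs = [k for k in range(len(points))
                    if haversine_km(points[j], points[k]) <= eps_km]
            if len(nbrs) >= min_pts:  # j is a core point: keep expanding
                queue.extend(k for k in nbrs if labels[k] is None)
    return labels
```

The Spark versions replace the quadratic neighbor search with a spatial partitioning of the data and merge partial clusters across partitions.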

Our experience is with the following tools:

   Spark-notebook
   Apache Spark
   Rosettahub
   Apache Flink
   TensorFlow
   Wendelin See also this link
   SlapOS
   IBM Bluemix
   

Testbeds we use in conjunction with our experimental method:

   Grid5000
   CIRRUS @ Université Sorbonne Paris Cité
   Teralab
   Amazon

Machine Learning and Apache Spark: From models to platform

  NEW: Of particular interest: Spark-clustering-notebook [1]
  Some Apache Spark implementations (since 2011/2012)
  How to use Spark on Grid5000
  How to use Spark on Magi ([2])
  Spark on your own machine
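The pages above describe Spark implementations of clustering models. As a reminder of what the distributed versions compute, here is a minimal sequential sketch of Lloyd's k-means in plain Python. The naive "first k points" initialization is an assumption for illustration only; production implementations use randomized or k-means|| seeding:

```python
def kmeans(points, k, n_iter=20):
    """Toy sequential k-means (Lloyd's algorithm) on 2-D points.

    Naive deterministic init: the first k points serve as initial centers.
    """
    centers = list(points[:k])
    for _ in range(n_iter):
        # Assignment step: attach each point to its nearest center.
        groups = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: (p[0] - centers[c][0]) ** 2
                                + (p[1] - centers[c][1]) ** 2)
            groups[i].append(p)
        # Update step: move each center to the mean of its group.
        for i, g in enumerate(groups):
            if g:  # keep the old center if its group is empty
                centers[i] = (sum(p[0] for p in g) / len(g),
                              sum(p[1] for p in g) / len(g))
    return centers
```

In a Spark implementation both steps are parallelized: the assignment is a map over partitions and the update a reduce per cluster, with the centers broadcast to the workers at each iteration.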


SlapOS cloud

  General information on SlapOS
  BOINC as a Service for the SlapOS Cloud: Tools and Methods
  Deploying the SlapOS platform in the Grid'5000 environment
  A Cloud-Hosted SaaS with SlapOS for BLAST Benchmark on Grid5000 (see also this link and this journal paper)
  Volunteer Cloud for e-Science
  Synthesis of the LIPN work for the FUI Resilience project

TeraLab

  General information on TeraLab
  How to use TeraLab
  TeraLab, SlapOS and VFIB fees for managing your infrastructure

Thesis corner

  Leila Abidi: Revisiter les grilles de PCs avec des technologies du Web et le cloud computing (2015)
  Walid Saad: Gestion de données pour le calcul scientifique dans les environnements grilles et cloud (2016)
  Mohammed Ghesmoune: Fouille de flux de données massives. Application aux “BigData” d’assurance (2016)
  Hippolyte Léger: Apprentissage Relationnel Massif (2018)

Contact

  Christophe Cerin (http://www-lipn.univ-paris13.fr/~cerin/)
  Mustapha Lebbah (https://sites.google.com/site/lebbah)