
Computing on an urban heat network

Speakers: Yanik Ngoko, Grégoire Sirou -- Qarnot computing, France

Qarnot computing promotes a utility computing model in which computing and heating are delivered from a single cloud infrastructure. The model is implemented by means of a geo-distributed cloud platform based on special server nodes named digital heaters (see https://www.qarnot.com/computing-heater_qh-1/). Each of these heaters embeds three to four processors or GPU cards connected to a heat diffusion system. Qarnot servers are deployed in homes, offices, schools, etc., and the network of space heaters they constitute is the physical infrastructure of a distributed data center.

The Qarnot model is based on a new, custom resource manager named Q.ware. In comparison with traditional resource managers, Q.ware supports a service provisioning model that distinguishes two types of requests: requests for heating and requests for computing. In addition, computing requirements must be balanced against heating requirements. Q.ware also exposes a REST computing API used to submit and monitor batch computing jobs on the platform.
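As a first illustration of this provisioning model, the sketch below submits a small batch job through the Python SDK that wraps the REST computing API and prints its output. It is a minimal sketch, not tutorial material: the token, profile name and command are placeholders, and the exact SDK calls may differ from the version used in the tutorial.

    # Minimal sketch: submitting and monitoring a batch job on the Qarnot
    # platform through the Python SDK wrapping the REST computing API.
    # Token, profile name and command are illustrative placeholders.
    import qarnot

    # Authenticate against the platform with an API token.
    conn = qarnot.Connection(client_token="MY_API_TOKEN")

    # Create a single-instance task running a containerized command.
    task = conn.create_task("hello-qarnot", "docker-batch", 1)
    task.constants["DOCKER_CMD"] = "echo Hello from a digital heater"

    # Submit the task, wait for completion and print its output.
    task.run()
    print(task.stdout())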

The goal of this tutorial is to present the Qarnot concepts and to teach attendees how to use the platform. More precisely, attendees will learn:

  • The processing architecture of the platform (how to create a geo-distributed cloud that is also an urban heating network)
  • To create, submit and monitor tasks on a geo-distributed cloud (we will consider machine learning jobs and 3D rendering jobs)
  • To run parallel jobs (we will focus on embarrassingly parallel jobs defined with the SPMD paradigm; see the sketch after this list)
  • To get a cluster from the Qarnot API (typically, such a cluster could be used to run MPI jobs)
  • To define resource constraints when defining jobs
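For the embarrassingly parallel case mentioned above, the sketch below launches the same containerized command on several instances at once, each instance picking its share of the work from its instance identifier. This is a sketch under assumptions: the profile name, the render.py script and the frame-per-instance split are hypothetical, and the resource-constraint mechanism itself is left to the tutorial.

    # Minimal SPMD sketch: the same program runs on N instances, and each
    # instance selects its slice of the work from its instance identifier.
    # Profile name, command and instance count are illustrative placeholders.
    import qarnot

    conn = qarnot.Connection(client_token="MY_API_TOKEN")

    # Ask for 8 instances of the same Docker-based profile.
    task = conn.create_task("render-frames", "docker-batch", 8)

    # ${INSTANCE_ID} is substituted per instance (0..7), so each heater
    # renders a different frame of the same hypothetical scene.
    task.constants["DOCKER_CMD"] = "python render.py --frame ${INSTANCE_ID}"

    # Submit, wait for all instances to finish and print their output.
    task.run()
    print(task.stdout())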