Revisions of `public:support:labex-efl-gpu`: 2021/11/30 14:16 and 2023/02/13 14:57, both by garciaflores.
# GPU accelerators for Deep Learning at @LipnLab
@LipnLab has two servers dedicated to GPU-accelerated computation for Deep Learning research tasks:
## LIPN GPU (`lipn-gpu`)

This server has **11 Nvidia Quadro RTX 5000 GPUs with 16 GB of RAM each**, distributed across three nodes. Access to this server is reserved to @LipnLab researchers.
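On a shared multi-GPU node like these, it is common courtesy not to let a job claim every device. A minimal sketch, assuming a standard CUDA setup (this is an illustration, not an official @LipnLab policy; the GPU indices are hypothetical):

```python
import os

# Restrict this process to specific GPUs *before* importing any
# deep-learning framework, so it does not claim all GPUs on the node.
# The indices "0,1" are illustrative; pick free devices (check nvidia-smi).
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"

# Frameworks such as PyTorch or TensorFlow will now see only those two
# devices, renumbered inside the process as cuda:0 and cuda:1.
print(os.environ["CUDA_VISIBLE_DEVICES"])  # → 0,1
```

Setting the variable from Python works only if done before the framework initializes CUDA; setting it in the shell (`export CUDA_VISIBLE_DEVICES=0,1`) before launching the job is equally valid.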
You can find documentation on how to run your code on the [[public:
## TAL/Labex EFL (`tal-labex-gpu`)

The server gives access to **8 Nvidia GeForce RTX 2080 GPUs with 8 GB of RAM each** in one node. It is reserved to external @LipnLab research partners.

You can read [tal-labex-gpu documentation here](https://
## LIPN-L2TI

We are currently deploying a new server with **8 Nvidia A40 GPU accelerators with 48 GB of RAM each** (documentation soon).

(Actually, here is some [[public: