Setting CUDA_VISIBLE_DEVICES for TensorFlow in a Jupyter notebook

Date: 2022-03-15 01:22:08

I have two GPUs and would like to run two different networks via ipynb simultaneously, however the first notebook always allocates both GPUs.

Using CUDA_VISIBLE_DEVICES, I can hide devices for python files, however I am unsure of how to do so within a notebook.

Is there any way to hide different GPUs from notebooks running on the same server?

2 Answers

#1 (score: 80)

You can set environment variables in the notebook using os.environ. Do the following before initializing TensorFlow to limit TensorFlow to the first GPU.

import os
os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID"   # see issue #152
os.environ["CUDA_VISIBLE_DEVICES"]="0"

You can double-check which devices are visible to TF:

from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())
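
On TensorFlow 2.x the same check can be done through the public API (a minimal sketch; tf.config.list_physical_devices is available from TF 2.1 onward):

import tensorflow as tf
print(tf.config.list_physical_devices("GPU"))   # should list exactly one GPU when CUDA_VISIBLE_DEVICES="0"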

I tend to use it from a utility module such as notebook_util:

import notebook_util
notebook_util.pick_gpu_lowest_memory()   # picks the GPU with the lowest memory usage and hides the others
import tensorflow as tf                  # import TensorFlow only after the GPU has been picked
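
notebook_util is not a standard package; a minimal sketch of such a helper, assuming nvidia-smi is on the PATH (the function name and parsing here are illustrative, not the original module), could look like:

import os
import subprocess

def pick_gpu_lowest_memory():
    """Expose only the GPU with the least memory currently in use."""
    query = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=memory.used", "--format=csv,noheader,nounits"]
    )
    memory_used = [int(x) for x in query.decode().strip().splitlines()]
    best_gpu = min(range(len(memory_used)), key=lambda i: memory_used[i])
    os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
    os.environ["CUDA_VISIBLE_DEVICES"] = str(best_gpu)
    return best_gpu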

#2 (score: 16)

You can do it faster without any imports just by using magics:

%env CUDA_DEVICE_ORDER=PCI_BUS_ID
%env CUDA_VISIBLE_DEVICES=0

Note that all environment variables are strings, so there is no need to quote the value. You can verify that a variable is set by running %env <name_of_var>, or list all of them with a bare %env.
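
For example, after running the magics above, a fresh cell can echo the value back (a minimal sketch of the check described here):

%env CUDA_VISIBLE_DEVICES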
