HPC:Tensorflow
Revision as of 18:42, 21 April 2018
Basic Anaconda Tensorflow GPU installation
As of April 21st, 2018, Anaconda 3 (i.e. running Python 3.5 or above) has matured enough to provide functional GPU-based instances of Tensorflow, running in iPython or a Jupyter notebook.
On the worker nodes it is recommended to use non-interactive scripts; nevertheless, providing Jupyter-notebook and/or iPython directly on the worker nodes is under study (not recommended). A better alternative to be addressed is installing Jupyter-hub on the frontend to control Python slave processes on the worker nodes.
To test the GPU-based Tensorflow functionality on the worker nodes, the following procedure was performed:
- Create a test user:
# adduser test
# passwd test
# su test
$ cd ~
- Download and install Anaconda3 v. 5.1.0
$ wget https://repo.anaconda.com/archive/Anaconda3-5.1.0-Linux-x86_64.sh
$ bash Anaconda3-5.1.0-Linux-x86_64.sh
$ cd anaconda3/bin
$ ./conda install tensorflow-gpu
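Before running the full GPU test, it can be useful to confirm that the `tensorflow-gpu` package is importable at all. A minimal sketch of such a check (the helper name `tensorflow_status` is our own, not part of the wiki's procedure):

```python
# Hypothetical helper: report whether the tensorflow package installed by
# "./conda install tensorflow-gpu" can be imported from this environment.
import importlib.util


def tensorflow_status():
    """Return a short status string describing the TensorFlow install."""
    if importlib.util.find_spec("tensorflow") is None:
        return "tensorflow not found: re-run ./conda install tensorflow-gpu"
    import tensorflow as tf
    return "tensorflow %s found" % tf.__version__


print(tensorflow_status())
```

This only checks importability; whether the GPU is actually used is verified by the device-placement test below.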
- Test in iPython the GPU functionality and Tensorflow general behavior
$ ./ipython
- Code for iPython:
import tensorflow as tf
# Creates a graph.
a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
c = tf.matmul(a, b)
# Creates a session with log_device_placement set to True.
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
# Runs the op.
print(sess.run(c))
- Test output should look like the following (alongside warnings and other messages)
Device mapping:
/job:localhost/replica:0/task:0/device:GPU:0 -> device: 0, name: Tesla K40c, pci bus id: 0000:05:00.0
b: /job:localhost/replica:0/task:0/device:GPU:0
a: /job:localhost/replica:0/task:0/device:GPU:0
MatMul: /job:localhost/replica:0/task:0/device:GPU:0
[[ 22.  28.]
 [ 49.  64.]]
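The numbers in the expected output can be sanity-checked without TensorFlow or a GPU: the test multiplies a 2x3 matrix by a 3x2 matrix, both filled row-major with 1.0 through 6.0. A pure-Python check of the same product:

```python
# Pure-Python reproduction of the MatMul result quoted above; needs no
# TensorFlow or GPU, so it can run anywhere to verify the expected numbers.
a = [[1.0, 2.0, 3.0],
     [4.0, 5.0, 6.0]]          # shape [2, 3], as in the test code
b = [[1.0, 2.0],
     [3.0, 4.0],
     [5.0, 6.0]]               # shape [3, 2]

# Standard row-by-column matrix multiplication.
c = [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(2)]
     for i in range(2)]
print(c)  # [[22.0, 28.0], [49.0, 64.0]]
```

If the GPU run prints different numbers, something other than device placement is wrong with the installation.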