Google Colab

Monday, February 19, 2018

On my laptop it takes forever to train my TensorFlow models, so I was looking for cheap online services where I could run the code, without any luck finding anything. Google Cloud Computing does give you $300 worth of free processing time, but that's not really free. I did find Google Colab, a Python-notebook-based environment where you can run code for free, and it includes GPU support!

It took me a little while to get everything set up, but it was relatively easy and it runs incredibly fast. The tricky part was getting my data into the notebook. While Colab saves the notebooks to your Google Drive, they do not run on your Google Drive, so you can't just put the data on the Drive and then access it from the notebook.

I used wget to download the data from a URL to wherever the notebook is running, then unzipped it with Python, and then I was able to read the data, so it wasn't all that complicated. When I tried to follow the instructions on importing data from Google Drive via an API I was unable to get it to work - I kept getting errors about directories and files not existing, despite the fact that they showed up when I ran !ls.
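That wget-and-unzip step can also be done entirely from Python inside a notebook cell. A minimal sketch, assuming the data is a zip archive hosted at some URL (the URL, file names, and function name below are placeholders, not from my actual setup):

```python
import urllib.request
import zipfile

def fetch_and_unzip(url, zip_path="data.zip", dest="data"):
    """Download a zip archive to the machine the notebook is running on
    and extract its contents into the dest directory."""
    urllib.request.urlretrieve(url, zip_path)
    with zipfile.ZipFile(zip_path) as archive:
        archive.extractall(dest)
    return dest

# Hypothetical URL -- replace with wherever your dataset is hosted:
# fetch_and_unzip("https://example.com/cifar_data.zip")
```

Since this downloads to the notebook's own filesystem, the extracted files can be read with ordinary Python file I/O afterwards.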

They have Tesla K80 GPUs available and the code runs incredibly fast. I'm still training my first model, but it seems like it's going to finish in about 20 minutes whereas it would have taken 3+ hours to train it locally. This difference in speed makes it possible to do things like tune the learning rate and hyperparameters, which are not practical to do locally if it takes hours to train the model.

This is an amazing service from Google and I am already using it heavily, just hours after having discovered it.

Labels: coding, python, machine_learning, google

Update on TensorFlow GPU Windows Errors

Friday, February 16, 2018

After playing with TensorFlow GPU on Windows for a few days I have more information on the errors. I am running TensorFlow 1.6, currently the latest version, with Python 3.6 and Nvidia CUDA 9.0 on an Nvidia GeForce GT 750M.

When the Python Windows process crashes with an error that says CUDA_ERROR_LAUNCH_FAILED, the problem can be solved by reducing the fraction of the GPU memory available with:

import tensorflow as tf

config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.7
sess = tf.Session(config=config)

If the Python script fails with an error about exhausted resources or being unable to allocate enough memory, then you need to use a smaller batch size. This problem does not kill the Python process; Python raises an exception, but the process itself keeps running.
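Using a smaller batch size just means slicing the data before feeding it to the session. A framework-independent sketch of the batching itself (the helper name is mine, for illustration):

```python
def batches(data, batch_size):
    """Yield successive slices of data, each no larger than batch_size."""
    for start in range(0, len(data), batch_size):
        yield data[start:start + batch_size]

# 10 samples with batch_size 4 -> slices of sizes 4, 4, 2
sizes = [len(b) for b in batches(list(range(10)), 4)]
```

If a given batch size still exhausts GPU memory, halving it and retrying is a cheap way to find one that fits.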

Since figuring these out, I have had no problems running models on the GPU at all.

Labels: python, machine_learning, tensorflow

Batch Normalization with TensorFlow

Tuesday, February 13, 2018

I was trying to use batch normalization to improve the accuracy of my CIFAR classifier with tf.layers.batch_normalization, but it seemed to have little to no effect. According to this StackOverflow post, you need to do something extra, which is not mentioned in the documentation, to get batch normalization to work.

# The ops that update the moving mean/variance live in the UPDATE_OPS collection
extra_update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
# Run them alongside the training op so the statistics actually get updated
sess.run([train_op, extra_update_ops], ...)

The batch norm update operations (which maintain the moving mean and variance) are added to the UPDATE_OPS collection, so you need to fetch that collection and run it in the session along with the training op. Before I added the extra_update_ops the batch normalization was definitely not running; now it is, though whether it helps remains to be seen.

Also make sure to pass training=[BOOLEAN | TENSOR] in the call to batch_normalization(), so that it uses the current batch's statistics during training and the accumulated moving averages during evaluation. I use a placeholder and pass in whether it is training via the feed_dict:

training = tf.placeholder(dtype=tf.bool)

And then use this in my batch norm and dropout layers:

training=training

There were a few other things I had to do to get batch normalization to work properly:

  1. I had been using local response normalization, which apparently doesn't help that much. I removed those layers and replaced them with batch normalization layers.
  2. Remove the activation from the conv2d layers. I run the output through the batch normalization layer and then apply the relu.
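For reference, what a batch normalization layer computes at training time is simple enough to write out by hand. A pure-Python sketch of the normalize-then-relu ordering from point 2, with the learned scale and shift (gamma and beta) left at their usual initial values of 1 and 0:

```python
import math

def batch_norm(values, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize a batch of values to zero mean / unit variance over the batch,
    then apply the learned scale (gamma) and shift (beta)."""
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    return [gamma * (v - mean) / math.sqrt(var + eps) + beta for v in values]

def relu(values):
    """Rectified linear unit: clamp negatives to zero."""
    return [max(0.0, v) for v in values]

# Normalize the raw conv output first, then apply the activation:
activations = relu(batch_norm([2.0, 4.0, 6.0, 8.0]))
```

In the real layer, gamma and beta are trainable and the normalization runs per feature map, but the per-batch arithmetic is exactly this.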

Before I made these changes the model with batch normalization didn't seem to be training at all; the accuracy just bounced up and down around the 0.10 baseline. After these changes it seems to be training properly.

Labels: data_science, machine_learning, tensor_flow

TensorFlow GPU Errors on Windows

Monday, February 12, 2018

I have been loving TensorFlow lately and have installed tensorflow-gpu on my Windows 10 laptop. Given that the GPU on my laptop is not a really great one, I have run into quite a few issues, most of which I have solved. My GPU is an Nvidia GeForce GT 750M with 2GB of RAM, and I am running the latest release of TensorFlow as of February 2018, with Python 3.6.

If you are running into errors I would suggest you try these things in this order:

  1. Try reducing the batch size for training AND validation. I always use batches for training but would evaluate on the validation data all at once. By using batches for validation and averaging the results I am able to avoid most of the memory errors.
  2. If that doesn't work, try to restrict the amount of GPU RAM available to TensorFlow with config.gpu_options.per_process_gpu_memory_fraction = 0.7, which restricts the amount available to 70%. Note that I am never able to run the GPU with the memory fraction above 0.7.
  3. If all else fails, turn the GPU off and use the CPU:
    config = tf.ConfigProto(device_count={'GPU': 0})
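For point 1, the per-batch validation results need to be combined with a weighted average so a smaller final batch doesn't skew the number. A minimal sketch (the function name is mine, not TensorFlow's):

```python
def averaged_metric(metric_values, batch_sizes):
    """Combine per-batch metric values (e.g. accuracy) into one overall value,
    weighting each batch by its size so the result matches evaluating the
    whole validation set at once."""
    total = sum(m * n for m, n in zip(metric_values, batch_sizes))
    return total / sum(batch_sizes)

# Two validation batches: accuracy 1.0 on 4 samples, 0.5 on 2 samples
overall = averaged_metric([1.0, 0.5], [4, 2])
```

A plain (unweighted) mean of the per-batch accuracies only gives the same answer when every batch is the same size.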

The difference between using the CPU and the GPU is like night and day... With the CPU it takes all day to train through 20 epochs; with the GPU the same can be done in a few hours. I think the main roadblock with my GPU is the amount of RAM, which can easily be managed by controlling the batch size and the config settings above. Just remember to feed the config into the session: sess = tf.Session(config=config).

Labels: python, data_science, machine_learning, tensor_flow

Why I Stopped Programming in PHP

Monday, February 12, 2018

A few months ago I went to a university to interview for a job working on their website. Up until that day I had been programming in PHP with Laravel and Symfony. These MVC frameworks are object-oriented, and they make PHP seem like a "real" programming language. I love them for that. Everything is nicely organized and segmented and you can do almost all the programming in an object oriented fashion.

The university was using a CMS, and they asked me to take a little test by looking at some of the code. The code was written in non-object-oriented PHP, which means that everything was done in the page. If you need data from the database, you do the query right there in the page, do whatever manipulation you need, then loop through the results, with everything embedded right in the HTML via <?php tags.

I was shocked and dismayed looking at the code. It was about as elegant as the BASIC code I was writing when I was 10 years old, but with HTML mixed in for good measure. It was horrifying to realize that this was the language I was programming in, despite the fact that OOP frameworks make it into something that resembles nicely written code.

The other thing that happened that day was they asked me what my ideal job was. I said I would really like to be involved in some sort of research, but there was no need for PHP in that field. When I got home I thought about it for a few hours, then decided that if that was what I wanted to do I should learn whatever I needed to learn to do it. And that is what I did, and that is why I stopped programming in PHP.

Don't get me wrong - I'm not saying PHP is a horrible language, it can be used very well. But the majority of employers here in Switzerland looking for PHP programmers are not looking for people who do it very well, they are looking for people who have done a three month bootcamp and will work for very low pay. For me, it just wasn't worth the effort to do something I don't really even like that much.

Labels: coding, php
