Image Style Transfer

As part of my Deep Learning course at Georgia Tech, I used a pretrained convolutional neural network and several loss functions to transfer the style of one image onto another, based on "Image Style Transfer Using Convolutional Neural Networks" by Gatys et al. (CVPR 2016).

The idea is to have two images: one that determines the content, and a second that determines the style of the output image. For each of these we can define a loss function. The two losses are then combined into a weighted sum, and gradient descent is performed directly on the pixels of the output image.
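Concretely, Gatys et al. minimize a weighted sum of the two losses with respect to the output image x:

L_total(x) = α · L_content(x, c) + β · L_style(x, s)

where c is the content image, s is the style image, and the ratio of α to β controls how strongly the result favors content over style.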

I used the following two images: the first is a picture of me surfing and provides the content, and the second is a painting of the Golden Gate Bridge and provides the style:

Content Image – Surfing
Style Image – Painting of Golden Gate Bridge

The resulting image clearly shows the content of the surfing photo. While the style did not exactly match the style image, it definitely looked like a blurry version of the painting's style:

What I did as part of the project:

  • Pre-processed the input images
  • Implemented the style loss function
  • Implemented the content loss function
  • Implemented a total variation loss function (for smoothness within the image)
  • Generated images and tuned hyperparameters using PyTorch (see the sketches below)
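
Since I can't share my actual solution (see the note at the end), here is a minimal, generic PyTorch sketch of the three losses. The function names are my own for this post: the content loss is a mean squared error between feature maps, the style loss compares Gram matrices so that only texture statistics are matched, and the total variation loss penalizes differences between neighboring pixels.

```python
import torch
import torch.nn.functional as F

def content_loss(gen_features, content_features):
    # MSE between the feature maps of the generated and content
    # images at a chosen VGG layer.
    return F.mse_loss(gen_features, content_features)

def gram_matrix(features):
    # features: (batch, channels, height, width)
    b, c, h, w = features.shape
    flat = features.view(b, c, h * w)
    # Channel-to-channel correlations, normalized by layer size.
    return flat @ flat.transpose(1, 2) / (c * h * w)

def style_loss(gen_features, style_features):
    # Comparing Gram matrices instead of raw activations matches
    # texture statistics rather than spatial layout.
    return F.mse_loss(gram_matrix(gen_features), gram_matrix(style_features))

def tv_loss(img):
    # Total variation: sum of squared differences between
    # horizontally and vertically adjacent pixels.
    dh = (img[:, :, 1:, :] - img[:, :, :-1, :]).pow(2).sum()
    dw = (img[:, :, :, 1:] - img[:, :, :, :-1]).pow(2).sum()
    return dh + dw
```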
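
The generation step then optimizes the output pixels directly. The sketch below assumes the loss functions above, a frozen VGG-19 from torchvision (using the content and style layers suggested by Gatys et al.), and Adam instead of the paper's L-BFGS. The file names and loss weights are placeholders, not the values I actually used.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

# Pre-processing: resize and normalize with the ImageNet statistics
# that VGG-19 was trained on.
preprocess = T.Compose([
    T.Resize(512),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def load_image(path):
    return preprocess(Image.open(path).convert("RGB")).unsqueeze(0).to(device)

content_img = load_image("surfing.jpg")      # placeholder file names
style_img = load_image("golden_gate.jpg")

# Frozen VGG-19 feature extractor. Indices 0, 5, 10, 19, 28 are
# conv1_1 .. conv5_1 (style); index 21 is conv4_2 (content).
vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)

STYLE_LAYERS = {0, 5, 10, 19, 28}
CONTENT_LAYER = 21

def extract(img):
    styles, content = [], None
    x = img
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in STYLE_LAYERS:
            styles.append(x)
        if i == CONTENT_LAYER:
            content = x
    return styles, content

style_targets, _ = extract(style_img)
_, content_target = extract(content_img)

# Optimize the pixels of the output image, starting from the
# content image.
output = content_img.clone().requires_grad_(True)
optimizer = torch.optim.Adam([output], lr=0.02)

ALPHA, BETA, GAMMA = 1.0, 1e5, 1e-6  # placeholder loss weights

for step in range(500):
    optimizer.zero_grad()
    styles, content = extract(output)
    loss = ALPHA * content_loss(content, content_target)
    loss = loss + BETA * sum(style_loss(s, t) for s, t in zip(styles, style_targets))
    loss = loss + GAMMA * tv_loss(output)
    loss.backward()
    optimizer.step()
```

Most of the tuning comes down to the three loss weights and the learning rate: a heavier style weight gives more pronounced brush strokes at the cost of content fidelity.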

Unfortunately, I cannot share my actual code for the project: the course's code of conduct forbids it, to prevent plagiarism by future students.