Disclaimer: This page is free of lactose, gluten, and microblogging. May contain traces of cat. Link to the original work here.
Interactive Image Translation with pix2pix-tensorflow
Trained on about 2k stock cat photos and edges automatically generated from those photos. It generates cat-colored objects, some with nightmare faces. The best one I've seen yet was a cat-beholder.
Some of the pictures look especially creepy; I think it's because it's easier to notice when an animal looks wrong, especially around the eyes. The automatically detected edges are not very good, and in many cases they missed the cat's eyes entirely, which makes the training data a bit worse for the image translation model.
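To make the "automatically generated edges" step concrete, here is a minimal sketch of threshold-based edge extraction using only NumPy. This is a toy stand-in: the real pipeline used a proper edge detector, which, as noted above, still missed fine details like the cats' eyes. The function name and threshold value are illustrative choices, not the actual code.

```python
import numpy as np

def edge_map(gray, threshold=0.2):
    """Return a binary edge map from a grayscale image in [0, 1].

    Marks pixels where the local gradient magnitude exceeds
    `threshold`. A toy stand-in for the automatic edge detector
    used to build the training pairs.
    """
    gy, gx = np.gradient(gray.astype(float))
    magnitude = np.hypot(gx, gy)
    return magnitude > threshold

# A tiny image with a sharp vertical boundary: edges show up at the step.
img = np.zeros((4, 4))
img[:, 2:] = 1.0
print(edge_map(img))
```

Whatever detector is used, the key property is that it runs unsupervised over the whole photo collection, so the quality of the resulting edge maps directly bounds the quality of the learned translation.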
Trained on a database of ~50k shoe pictures collected from Zappos, along with edges automatically generated from those pictures. If you're really good at drawing the edges of shoes, you can try producing some new designs. Keep in mind that the model is trained on real objects, so drawings that suggest 3D form seem to work better.
Similar to the previous one: trained on a database of ~137k handbag pictures collected from Amazon, with edges automatically generated from those pictures. If you draw a shoe here instead of a handbag, you get a very oddly textured shoe.
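All three models are trained on paired data: an edge map as input and the matching photo as target. pix2pix-style tooling typically stores each pair as a single image with the two halves side by side, which the training script then splits. A minimal sketch of building that combined format (the left/right order and shapes here are assumptions for illustration):

```python
import numpy as np

def combine_pair(edges, photo):
    """Concatenate an edge map and its photo side by side into one
    training image, the paired side-by-side format commonly used for
    pix2pix-style training data.
    """
    if edges.shape != photo.shape:
        raise ValueError("edge map and photo must have the same shape")
    return np.concatenate([edges, photo], axis=1)

edges = np.zeros((256, 256, 3), dtype=np.uint8)          # black edge canvas
photo = np.full((256, 256, 3), 127, dtype=np.uint8)      # gray stand-in photo
pair = combine_pair(edges, photo)
print(pair.shape)  # (256, 512, 3)
```

Because the pairing is fully automatic (photo in, edges out, concatenate), it scales to the ~50k- and ~137k-image datasets above without any manual annotation.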
The models were trained and exported with the pix2pix.py script from pix2pix-tensorflow, an open-source TensorFlow implementation of pix2pix (not a hosted service run by Google).
The pre-trained models are available in the Datasets section on GitHub; all the models released alongside the original pix2pix implementation should be there. Exported versions can be produced from the pre-trained checkpoints with the pix2pix.py script, and the exported models are linked from the server README.
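Serving an exported model requires mapping a user's drawing into the numeric range the generator was trained on. pix2pix normalizes images to [-1, 1]; the function name and batch layout below are assumptions about what such serving code might look like, not the actual server implementation.

```python
import numpy as np

def preprocess(drawing):
    """Map a uint8 RGB drawing (H, W, 3) into the float range a
    pix2pix generator expects, with a leading batch dimension.
    """
    x = drawing.astype(np.float32) / 255.0  # scale to [0, 1]
    x = x * 2.0 - 1.0                       # shift to [-1, 1]
    return x[np.newaxis]                    # add batch dimension

drawing = np.full((256, 256, 3), 255, dtype=np.uint8)  # blank white canvas
batch = preprocess(drawing)
print(batch.shape)  # (1, 256, 256, 3)
```

The inverse mapping (`(x + 1) / 2 * 255`) would turn the generator's output back into a displayable image.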
All code samples on this site are in the public domain unless otherwise stated.