Monkey shape word cloud generator

The deep generative adversarial network is expressive and searchable by a genetic algorithm. Related to Figure 1.

To qualitatively estimate the expressiveness of the deep generative network, we selected arbitrary images in various styles and categories outside of the training set of the network (first row). To find an image code that would approximately generate each target image (second row), we used either 1) backpropagation to optimize a zero-initialized image code to minimize pixel-space distance (left group; STAR Methods, Initial generation), or 2) the CaffeNet fc6 representations of the target image, as the generator was originally trained to use (right group; Dosovitskiy and Brox, 2016). The existence of codes that produced the images in the second row, regardless of how they are found, demonstrates that the deep generative network is able to encode a variety of images. We then asked whether, given that these images can be approximately encoded by the generator, a genetic algorithm searching in code space (‘XDREAM’) is able to recover them. To do so, we created dummy ‘neurons’ that calculated the Euclidean distance between the target image and any given image in pixel space (left group) or CaffeNet pool5 space (right group), and used XDREAM to maximize the ‘neuron responses’ (thereby minimizing distance to the target), similar to how this network could be used to maximize firing of real neurons in electrophysiology experiments. The genetic algorithm is also able to find codes that produced images (third row) similar to the target images, indicating that not only is the generator expressive, its latent space can also be searched with a genetic algorithm.

Images reproduced from published work with permission: ‘Curvature-position’: (Pasupathy and Connor, 2002); ‘3D shape’: (Hung et al., 2012). ‘Monkey face’: ILSVRC2012 (Russakovsky et al., 2015). Public domain artwork: ‘Monet’: ‘The Bridge at Argenteuil’ (National Gallery of Art); ‘Object’: ‘Moon jar’ (The Metropolitan Museum of Art). Public domain image: ‘Neptune’ (NASA).
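As a rough illustration of option 1) above, the following minimal Python sketch fits an image code by gradient descent on pixel-space distance to a target. A random linear map stands in for the deep generator (the real experiments used the fc6 DeePSiM generator with a 4,096-dimensional code), so every name, dimension, and learning rate here is an assumption made for illustration, not the authors' implementation.

import numpy as np

rng = np.random.default_rng(0)
CODE_DIM, IMG_DIM = 256, 32 * 32                  # toy sizes; the real code is 4,096-D
G = rng.standard_normal((IMG_DIM, CODE_DIM)) / np.sqrt(CODE_DIM)  # stand-in "generator"

target = rng.random(IMG_DIM)                      # arbitrary flattened target "image"
code = np.zeros(CODE_DIM)                         # zero-initialized image code
learning_rate = 0.1
for step in range(500):
    residual = G @ code - target                  # generated image minus target
    code -= learning_rate * (G.T @ residual)      # gradient of 0.5 * ||G @ code - target||^2
print("pixel-space distance after fitting:", np.linalg.norm(G @ code - target))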

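The code-space search itself can be sketched in the same spirit: a simple genetic algorithm over generator codes that maximizes a dummy ‘neuron’ whose response is the negative pixel-space distance to a target image, so maximizing the response minimizes the distance. The random linear map again merely stands in for the deep generator, and the selection rule, mutation noise, and generation count are illustrative guesses rather than the published XDREAM settings; only the population of 40 codes echoes the caption.

import numpy as np

rng = np.random.default_rng(0)
CODE_DIM, IMG_DIM = 256, 32 * 32
G = rng.standard_normal((IMG_DIM, CODE_DIM)) / np.sqrt(CODE_DIM)  # stand-in "generator"

def generate(codes):
    # Map a batch of codes to flattened "images".
    return codes @ G.T

def dummy_neuron(images, target):
    # Response = negative Euclidean distance to the target image.
    return -np.linalg.norm(images - target, axis=1)

def evolve(target, pop_size=40, n_generations=200, n_parents=10, mutation_sd=0.05):
    # Minimal genetic algorithm over code space: selection, crossover, mutation.
    population = rng.standard_normal((pop_size, CODE_DIM))        # random initial codes
    for _ in range(n_generations):
        responses = dummy_neuron(generate(population), target)
        parents = population[np.argsort(responses)[-n_parents:]]  # keep the best codes
        moms = parents[rng.integers(n_parents, size=pop_size)]
        dads = parents[rng.integers(n_parents, size=pop_size)]
        mask = rng.random((pop_size, CODE_DIM)) < 0.5              # uniform crossover
        population = np.where(mask, moms, dads)
        population += mutation_sd * rng.standard_normal(population.shape)
        population[0] = parents[-1]                                # elitism: keep the best code
    return population[np.argmax(dummy_neuron(generate(population), target))]

target_image = generate(rng.standard_normal((1, CODE_DIM)))[0]     # a target the generator can reach
best_image = generate(evolve(target_image)[None])[0]
print("distance of best evolved image to target:", np.linalg.norm(best_image - target_image))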
Figure S4. Quantification of image similarity across repeated experiments using simple measures.

(A,B) XDREAM-synthesized stimuli for (A) 3 CaffeNet fc8 units: ‘goldfish’, ‘ambulance’, and ‘loud-speaker’, and (B) the corresponding 3 ResNet-101 fc1000 units, starting from different initial populations (random draws of 40 initial codes from a bank of 1,000). Each row corresponds to a unit and each column to a random initialization. Activation for each image is noted on the top-right of the image.

(C) Distribution of similarity between images evolved for the same unit vs. …
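Panel (C) compares how similar evolved images are when they come from the same unit versus different units. The caption does not say which ‘simple measures’ were used, so the sketch below assumes plain pixel-wise Pearson correlation purely for illustration, with random arrays standing in for the synthesized stimuli; the unit count echoes the 3 units of (A,B), and the initialization count is arbitrary.

import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
n_units, n_inits, n_pixels = 3, 5, 64 * 64          # 3 units, several random initializations each
images = rng.random((n_units, n_inits, n_pixels))    # stand-in for the evolved images

def pixel_correlation(a, b):
    # Pearson correlation between two flattened images (one assumed 'simple measure').
    return float(np.corrcoef(a, b)[0, 1])

conditions = [(u, i) for u in range(n_units) for i in range(n_inits)]
same_unit, different_unit = [], []
for (u1, i1), (u2, i2) in combinations(conditions, 2):
    r = pixel_correlation(images[u1, i1], images[u2, i2])
    (same_unit if u1 == u2 else different_unit).append(r)

print(f"same unit:      mean r = {np.mean(same_unit):.3f} over {len(same_unit)} pairs")
print(f"different unit: mean r = {np.mean(different_unit):.3f} over {len(different_unit)} pairs")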