An example of the original scene (top left) and three Deep-Dreamed scenes (top right, bottom left and bottom right). The top right image was generated by selecting a higher DCNN layer that responds selectively to higher-level categorical features (layer = 'inception_4d/pool', octaves = 3, octave scale = 1.8, iterations = 32, jitter = 32, zoom = 1, step size = 1.5, blending ratio for optical flow = 0.9, blending ratio for background = 0.1; for more detail see ref. 46). We used these higher-layer parameters to generate the Deep Dream video used throughout the reported experiments. The bottom left image was generated by selecting a lower DCNN layer that responds selectively to geometric image features (layer = 'conv2/3x3', other parameters as above). The bottom right image was generated by selecting a middle DCNN layer that responds selectively to parts of objects (layer = 'inception_3b/output', other parameters as above).
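The multi-octave gradient-ascent procedure implied by these parameters (octaves, octave scale, iterations, jitter, step size) can be sketched as follows. This is a minimal NumPy/SciPy illustration, not the authors' pipeline: it maximizes the squared response of a single hand-supplied convolution kernel instead of a GoogLeNet layer activation, and it omits the optical-flow and background blending steps. The function name `dream` and the single-kernel loss are assumptions made for illustration only.

```python
import numpy as np
from scipy.signal import correlate2d, convolve2d
from scipy.ndimage import zoom


def dream(img, kernel, octaves=3, octave_scale=1.8, iterations=32,
          step_size=1.5, jitter=32, seed=0):
    """Multi-octave gradient ascent on a toy 'activation' for one 2D image.

    The loss being maximized is sum(y**2), where y is the valid
    cross-correlation of the image with `kernel` (a stand-in for a
    DCNN layer response).
    """
    rng = np.random.default_rng(seed)

    # Build an image pyramid, coarsest octave first.
    pyramid = [img.astype(np.float64)]
    for _ in range(octaves - 1):
        pyramid.insert(0, zoom(pyramid[0], 1.0 / octave_scale, order=1))

    detail = np.zeros_like(pyramid[0])
    for octave, base in enumerate(pyramid):
        if octave > 0:
            # Upscale the accumulated detail to the next octave's size.
            factors = np.array(base.shape) / np.array(detail.shape)
            detail = zoom(detail, factors, order=1)
        x = base + detail
        for _ in range(iterations):
            # Random jitter shift to reduce grid artifacts (undone below).
            ox, oy = rng.integers(-jitter, jitter + 1, size=2)
            x = np.roll(np.roll(x, ox, axis=0), oy, axis=1)

            # Forward pass and analytic gradient of sum(y**2) w.r.t. x:
            # grad = 2 * full-convolution of y with the kernel.
            y = correlate2d(x, kernel, mode='valid')
            g = 2.0 * convolve2d(y, kernel, mode='full')

            # Normalized ascent step, as in standard Deep Dream recipes.
            x += step_size * g / (np.abs(g).mean() + 1e-8)

            x = np.roll(np.roll(x, -ox, axis=0), -oy, axis=1)
        detail = x - base
    return x
```

Choosing a coarser kernel here plays the role of picking a lower DCNN layer in the caption: the structures amplified by gradient ascent are only as complex as the feature whose response is being maximized.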