From humans to fruit flies, the ability to identify individual objects by their features is crucial for visual behaviour and for the fitness of seeing animals. However, how information processing in the network of neurons in the eye makes object recognition possible remains unsolved. While fly and human eyes have very different architectures, both must extract distinct object features from natural scenes and link this information to internal activity maps to execute goal-oriented behaviour. Using genetic, electrophysiological, and optical imaging tools, I aim to determine how distinct object features are processed and represented by neurons in the first visual neuropil of the fruit fly Drosophila melanogaster, the lamina output neurons L1–L5, and how the representation of these objects changes when physical conditions change.