Disparity of Abstract Representations in Convolutional Networks

Mikkel Pedersen, Henrik Bulskov

Research output: Contribution to conference › Paper › Research › peer-review

Abstract

A common practice in the deep learning community is to theorize about the abstract representation that data takes on inside a network. The aim of this paper is to demonstrate that the representation arising from a simple, well-defined problem takes on forms other than what human reasoning would lead us to expect.
The experiment uses CIFAR-10 and a FRUITS dataset as its basis and compares models trained on the RGB images to models trained on the separated red, green, and blue color channels. A simplified FRUITS variant poses a much simpler problem: classifying tomatoes and capsicum based on their distinctive red and green colors.
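
The abstract does not name a framework or an architecture, so the following is only a minimal sketch of the channel-separation setup, assuming PyTorch/torchvision, a toy CNN, and the CIFAR-10 part of the experiment; every name in it (single_channel, small_cnn) is illustrative rather than taken from the paper.

```python
# Minimal sketch (assumption: the paper does not name its framework; PyTorch/torchvision
# is used here purely for illustration) of training one model per color channel.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

def single_channel(c):
    """Transform that keeps only color channel c (0=R, 1=G, 2=B) of a CIFAR-10 image."""
    return transforms.Compose([
        transforms.ToTensor(),                    # HxWxC uint8 -> CxHxW float in [0, 1]
        transforms.Lambda(lambda x: x[c:c + 1]),  # slice out one channel, shape 1x32x32
    ])

def small_cnn(in_channels):
    """Toy CNN; the architecture actually used in the paper is not specified here."""
    return nn.Sequential(
        nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(), nn.Linear(32 * 8 * 8, 10),
    )

# One RGB model plus one model per separated channel, mirroring the comparison above.
variants = {"rgb": transforms.ToTensor(), "r": single_channel(0),
            "g": single_channel(1), "b": single_channel(2)}

for name, tf in variants.items():
    train = datasets.CIFAR10("data", train=True, download=True, transform=tf)
    loader = DataLoader(train, batch_size=128, shuffle=True)
    model = small_cnn(3 if name == "rgb" else 1)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    x, y = next(iter(loader))                     # one batch, just to show the training step
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
    print(f"{name}: one-step loss = {loss.item():.3f}")
```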

Generally, the RGB model variant outperformed the separated variants, but for the simplified FRUITS experiment we observed the blue-channel model outperforming the red and green model variants, with performance on par with the RGB variant. This suggests that the model has chosen a blue filter to represent the classification features. We also confirm that the blue color channel is the most prominent feature contributor by cross-testing the RGB model variant.
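
The abstract does not spell out the cross-testing protocol; the sketch below shows one plausible reading, in which the RGB-trained model is evaluated with all but one color channel zeroed out, so per-channel accuracy indicates that channel's contribution. The function names and the zero-filling choice are assumptions, not the paper's method.

```python
# Hypothetical cross test of an RGB-trained model: zero out two of the three color
# channels and measure accuracy with only the remaining channel visible. Zero-filling
# is an assumption; the paper's exact cross-testing procedure is not given in the abstract.
import torch

def keep_only_channel(batch, c):
    """Zero every color channel of a BxCxHxW batch except channel c (0=R, 1=G, 2=B)."""
    masked = torch.zeros_like(batch)
    masked[:, c] = batch[:, c]
    return masked

@torch.no_grad()
def channel_contribution(model, loader, c):
    """Accuracy of an RGB-trained model when only color channel c carries information."""
    model.eval()
    correct = total = 0
    for x, y in loader:
        preds = model(keep_only_channel(x, c)).argmax(dim=1)
        correct += (preds == y).sum().item()
        total += y.numel()
    return correct / total

# Usage (with an RGB-trained model and a test loader, e.g. from the previous sketch):
# for c, name in enumerate("RGB"):
#     print(name, channel_contribution(rgb_model, test_loader, c))
```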

For a simple classification problem, the models chose to represent the classes in a less intuitive form than the simplicity of the data representation led us to expect. We aim to prompt further discussion of the disconnect between the internal data representation of deep learning and its human counterpart.

Original language: Danish
Publication date: 2022
Number of pages: 5
Publication status: Submitted - 2022
