Structural Constraints in Neural Network Representations
This thesis examines structural constraints on neural network representations as a way of encoding prior knowledge. Neural networks have proved remarkably effective at processing perceptual data: they map between perceptual entities and predict missing or future information. Despite this modeling prowess, neural networks do not encode or represent general knowledge or concepts, and they generally provide little understanding of, or insight into, the object being modeled. One possible route toward employing neural networks as tools for scientific analysis and understanding is to combine prior conceptual knowledge with the perceptual information extracted from data. To this end, the thesis examines graph partitions, subsets, discrete variables, and differential equations as specific structural constraints on neural network representations for encoding prior knowledge, with the aim of making neural networks more interpretable and analyzable.
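To make the notion of a structural constraint concrete, consider the case of discrete variables from the list above: a layer's output can be forced to be a one-hot code, so the learned representation is restricted to a small, enumerable set of symbols. The sketch below is a minimal, illustrative PyTorch example, not the thesis's implementation; the class name `DiscreteBottleneck` and all hyperparameters are assumptions, and the straight-through Gumbel-softmax is just one standard way to keep such a discrete constraint differentiable.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DiscreteBottleneck(nn.Module):
    """Encoder whose representation is structurally constrained to a
    one-hot (discrete) code via the straight-through Gumbel-softmax."""

    def __init__(self, in_dim: int, num_categories: int):
        super().__init__()
        self.to_logits = nn.Linear(in_dim, num_categories)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        logits = self.to_logits(x)
        # hard=True yields exact one-hot codes in the forward pass
        # while gradients flow through the soft relaxation.
        return F.gumbel_softmax(logits, tau=1.0, hard=True)

# Usage: every input is mapped to one of 10 discrete symbols,
# a structural constraint on the latent space.
z = DiscreteBottleneck(in_dim=32, num_categories=10)(torch.randn(4, 32))
print(z)  # each row is a one-hot vector
```

Because the representation is confined to a finite set of codes, each latent state can be enumerated and inspected directly, which illustrates the kind of interpretability and analyzability the thesis aims for.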