Mitigating bias in multi-task learning
This thesis focuses on multi-task learning (MTL) in computer vision. The goal of MTL is to train several related but distinct tasks jointly, with bidirectional knowledge transfer among tasks, so that the model performs well on each of them. We aim to address challenges associated with biases in MTL, namely data insufficiency, category shifts, and task imbalance. Previous MTL methods often require extensive, complete training data from all tasks; when such data is unavailable, they suffer from overfitting and suboptimal performance. The thesis is structured around four pivotal research questions:

1) Mitigating data insufficiency by leveraging task relatedness through a variational Bayesian framework, VMTL.

2) Exploiting historical information to further address data insufficiency by developing Heterogeneous Neural Processes (HNPs) in an episodic training setup.

3) Handling category shifts with an association graph learning (AGL) method that transfers knowledge among different tasks and classes to maintain the model's discriminative ability.

4) Mitigating task imbalance effectively and efficiently via GO4Align, a novel optimization approach that aligns task optimizations through a group risk minimization strategy.

Each chapter introduces a novel method for one specific MTL bias, with a detailed methodology and experimental results, together forming a comprehensive approach to improving MTL systems.
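To make the task-imbalance problem concrete, the sketch below shows one generic way a multi-task objective can reweight per-task losses so that no single task dominates the shared optimization. This is an illustrative toy example only, not the thesis's GO4Align algorithm or its group risk minimization strategy; the task names and the proportional-weighting rule are assumptions for demonstration.

```python
# Illustrative sketch of adaptive multi-task loss weighting (NOT the thesis's
# GO4Align method). Tasks with larger current losses receive proportionally
# larger weights, nudging optimization toward more balanced progress.

def balance_weights(losses):
    """Assign each task a weight proportional to its current loss.

    `losses` maps task name -> current scalar loss; weights sum to 1.
    """
    total = sum(losses.values())
    return {task: loss / total for task, loss in losses.items()}

def combined_objective(losses, weights):
    """Scalar training objective: weighted sum of per-task losses."""
    return sum(weights[t] * losses[t] for t in losses)

# Hypothetical per-task losses for three common vision tasks.
losses = {"segmentation": 2.0, "depth": 0.5, "normals": 1.5}
weights = balance_weights(losses)          # segmentation gets the largest weight
objective = combined_objective(losses, weights)
```

In practice, weighting schemes like this are recomputed each training step from the latest per-task losses, whereas a naive unweighted sum would let the highest-loss task dominate the shared gradients throughout training.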