\begin{abstract}
Transformers, particularly Vision Transformers (ViTs), have achieved state-of-the-art performance in large-scale image classification.
However, they often require large amounts of data and can exhibit biases, such as center or size bias, that limit their robustness and generalizability.
This paper introduces \schemename, a novel data augmentation operation that addresses these challenges by explicitly encoding invariances into the training data that would otherwise have to be built into the neural network architecture.
\schemename is constructed by using pretrained foundation models to separate and recombine foreground objects with different backgrounds.
This recombination step gives us fine-grained control over object position and size, as well as background selection.
We demonstrate that using \schemename significantly improves the accuracy of ViTs and other architectures by up to 4.5 percentage points (p.p.) on ImageNet, which translates to 7.3 p.p. on downstream tasks.
Importantly, \schemename not only improves accuracy but also opens new ways to analyze model behavior and quantify biases.
Namely, we introduce metrics for background robustness, foreground focus, center bias, and size bias, and show that using \schemename during training substantially reduces these biases.
In summary, \schemename provides a valuable tool for analyzing and mitigating biases, enabling the development of more robust and reliable computer vision models.
Our code and dataset are publicly available at \url{https://github.com/tobna/ForAug}.
\end{abstract}
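To make the recombination step concrete, below is a minimal pure-Python sketch of the core idea: alpha-compositing a segmented foreground cutout onto a different background at a chosen position. All names and the list-of-tuples image representation are illustrative assumptions; the actual \schemename pipeline segments foregrounds with pretrained foundation models and operates on full-resolution images with size and background control as well.

```python
# Illustrative sketch (not the ForAug implementation): paste an RGBA
# foreground cutout onto an RGB background with explicit control over
# where the object lands. Images are nested lists of (R, G, B) or
# (R, G, B, A) tuples in row-major order.

def recombine(foreground, background, top_left):
    """Alpha-composite an RGBA foreground onto an RGB background.

    top_left: (row, col) of the foreground's upper-left corner
              in background coordinates.
    """
    out = [row[:] for row in background]  # copy so the background is untouched
    r0, c0 = top_left
    for i, fg_row in enumerate(foreground):
        for j, (r, g, b, a) in enumerate(fg_row):
            y, x = r0 + i, c0 + j
            if 0 <= y < len(out) and 0 <= x < len(out[0]):
                br, bgc, bb = out[y][x]
                alpha = a / 255.0  # blend foreground over background by alpha
                out[y][x] = (round(r * alpha + br * (1 - alpha)),
                             round(g * alpha + bgc * (1 - alpha)),
                             round(b * alpha + bb * (1 - alpha)))
    return out
```

Varying `top_left` (and, in a fuller version, a scale factor and the background image itself) is what lets training data counteract center and size bias.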