A Data-centric Approach to Class-specific Bias in Image Data Augmentation: Appendices A-L

Data augmentation enhances model generalization in computer vision but may introduce biases, impacting class accuracy unevenly.


This content originally appeared on HackerNoon and was authored by Computational Technology for All

:::info Authors:

(1) Athanasios Angelakis, Amsterdam University Medical Center, University of Amsterdam - Data Science Center, Amsterdam Public Health Research Institute, Amsterdam, Netherlands

(2) Andrey Rass, Den Haag, Netherlands.

:::

Appendices

Appendix A: Image dimensions (in pixels) of training images after being randomly cropped and before being resized

[32x32, 31x31, 30x30, 29x29, 28x28, 27x27, 26x26, 25x25, 24x24, 22x22, 21x21, 20x20, 19x19, 18x18, 17x17, 16x16, 15x15, 14x14, 13x13, 12x12, 11x11, 10x10, 9x9, 8x8, 6x6, 5x5, 4x4, 3x3]
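The crop-then-resize scheme above can be illustrated with a minimal pure-Python sketch. This is not the authors' code: it assumes square crops at uniformly random offsets, and the helper name `random_crop` is ours. A real TensorFlow pipeline would typically use `tf.image.random_crop` followed by `tf.image.resize`.

```python
import random

# Candidate crop sizes from Appendix A (pixels). The training
# pipeline crops to one of these sizes, then resizes back to 32x32.
CROP_SIZES = [32, 31, 30, 29, 28, 27, 26, 25, 24, 22, 21, 20,
              19, 18, 17, 16, 15, 14, 13, 12, 11, 10, 9, 8,
              6, 5, 4, 3]

def random_crop(image, size, rng=random):
    """Crop a size x size window at a random offset from a square
    image represented as a list of rows (lists of pixel values)."""
    h = len(image)
    assert 0 < size <= h
    top = rng.randrange(h - size + 1)
    left = rng.randrange(h - size + 1)
    return [row[left:left + size] for row in image[top:top + size]]

# Usage: crop a 32x32 dummy image to a randomly chosen size.
image = [[(r, c) for c in range(32)] for r in range(32)]
size = random.choice(CROP_SIZES)
crop = random_crop(image, size)
```

Smaller crop sizes obscure a larger proportion of the original image, which is the augmentation-intensity axis varied throughout the experiments.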

Appendix B: Dataset samples corresponding to the Fashion-MNIST segment used in training

Appendix C: Dataset samples corresponding to the CIFAR-10 segment used in training

Appendix D: Dataset samples corresponding to the CIFAR-100 segment used in training

Appendix E: Full collection of class accuracy plots for CIFAR-100

(a) The results in all figures employ official ResNet50 models from TensorFlow trained from scratch on the CIFAR-100 dataset with random crop data augmentation applied. All results in this figure are averaged over 4 runs. During training, the proportion of the original image obscured by the augmentation varies from 100% to 10%. The vertical dotted lines denote the best test accuracy for every class.


(a) The results in all figures employ official ResNet50 models from TensorFlow trained from scratch on the CIFAR-100 dataset with random crop and random horizontal flip data augmentations applied. All results in this figure are averaged over 4 runs. During training, the proportion of the original image obscured by the augmentation varies from 100% to 10%. The vertical dotted lines denote the best test accuracy for every class.
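The per-class accuracies underlying these plots amount to a simple tally over the test set. Here is a minimal pure-Python sketch; the helper name `per_class_accuracy` is ours, not from the paper.

```python
from collections import defaultdict

def per_class_accuracy(y_true, y_pred):
    """Fraction of correctly classified test samples per class."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        correct[t] += int(t == p)
    return {c: correct[c] / total[c] for c in total}

# Toy example with three classes.
y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 1, 1, 1, 2, 0]
acc = per_class_accuracy(y_true, y_pred)
# acc[0] == 0.5, acc[1] == 1.0, acc[2] == 0.5
```

Tracking this per-class breakdown, rather than overall accuracy alone, is what makes the class-specific bias of an augmentation visible.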

Appendix F: Full collection of best test performances for CIFAR-100

Without Random Horizontal Flip:

With Random Horizontal Flip:

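Selecting the "best test performance" per class across augmentation intensities, as tabulated in this appendix, can be sketched as follows. This is an illustrative helper of ours, assuming results are keyed by the retained-image proportion; the class names are hypothetical.

```python
def best_per_class(acc_by_intensity):
    """Given {intensity: {class: accuracy}}, return for each class
    the (intensity, accuracy) pair with the highest test accuracy."""
    best = {}
    for intensity, per_class in acc_by_intensity.items():
        for cls, acc in per_class.items():
            if cls not in best or acc > best[cls][1]:
                best[cls] = (intensity, acc)
    return best

# Toy results for two augmentation intensities.
runs = {
    1.0: {"apple": 0.60, "bear": 0.70},
    0.5: {"apple": 0.75, "bear": 0.65},
}
best = best_per_class(runs)
# best == {"apple": (0.5, 0.75), "bear": (1.0, 0.70)}
```

Note that different classes can peak at different intensities, which is the class-specific bias the paper documents.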

Appendix G: Per-class and overall test set performance samples for the Fashion-MNIST + ResNet50 + Random Cropping + Random Horizontal Flip experiment

Appendix H: Per-class and overall test set performance samples for the CIFAR-10 + ResNet50 + Random Cropping + Random Horizontal Flip experiment

Appendix I: Per-class and overall test set performance samples for the Fashion-MNIST + EfficientNetV2S + Random Cropping + Random Horizontal Flip experiment

Appendix J: Per-class and overall test set performance samples for the Fashion-MNIST + ResNet50 + Random Cropping experiment

Appendix K: Per-class and overall test set performance samples for the CIFAR-10 + ResNet50 + Random Cropping experiment

Appendix L: Per-class and overall test set performance samples for the Fashion-MNIST + Swin Transformer + Random Cropping + Random Horizontal Flip experiment
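The random horizontal flip used in several of the experiments above is the simplest augmentation in the mix. A minimal pure-Python sketch (the helper name is ours; TensorFlow pipelines would use `tf.image.random_flip_left_right`):

```python
import random

def random_horizontal_flip(image, p=0.5, rng=random):
    """Mirror each row of the image left-to-right with probability p."""
    if rng.random() < p:
        return [list(reversed(row)) for row in image]
    return image

# Usage: force the flip with p=1.0 to see the mirrored result.
img = [[1, 2], [3, 4]]
flipped = random_horizontal_flip(img, p=1.0)
# flipped == [[2, 1], [4, 3]]
```

Unlike random cropping, flipping discards no pixels, which is one reason the two augmentations affect per-class accuracy differently in these experiments.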

:::info This paper is available on arXiv under the CC BY 4.0 DEED license.

:::
