I am using pre-trained, fine-tuned models for cell classification. None of the images were seen by any of the transfer-learning models during pre-training. In the process, I found that a fine-tuned VGG19 model outperforms a fine-tuned Inception-V3 model, even though both models were pre-trained on the ImageNet data. What contributes to this difference? Is it the model architecture?
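For context, the kind of comparison described could look roughly like the sketch below. This is a hypothetical setup, assuming Keras: the two backbones are frozen and the same small classification head is attached to each. Note that `weights=None` is used here only to keep the example download-free; actual transfer learning would pass `weights="imagenet"`. The input shapes, head, and class count are illustrative assumptions, not details from the question.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG19, InceptionV3

def build_finetune_model(base_cls, input_shape, num_classes):
    # weights=None avoids the ImageNet download in this sketch;
    # for real transfer learning use weights="imagenet".
    base = base_cls(weights=None, include_top=False, input_shape=input_shape)
    base.trainable = False  # freeze the pre-trained convolutional layers
    x = layers.GlobalAveragePooling2D()(base.output)
    out = layers.Dense(num_classes, activation="softmax")(x)
    return models.Model(base.input, out)

# Hypothetical 4-class cell-classification heads on each backbone.
vgg = build_finetune_model(VGG19, (224, 224, 3), 4)
inc = build_finetune_model(InceptionV3, (299, 299, 3), 4)

# The architectures differ substantially in depth, parameter count,
# and receptive-field structure even with identical heads.
print("VGG19 params:", vgg.count_params())
print("Inception-V3 params:", inc.count_params())
```

Even with the same head and the same pre-training data, the two backbones have very different depths, filter designs, and parameter counts, which is part of why their fine-tuned performance can diverge.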