Describe the bug
I was computing some inferences with my trained models and I was surprised to see that the min and max values stored during training didn't match the values I was seeing during inference. Based on anomalib's code, those "min" and "max" (which correspond to the src.anomalib.utils.metrics.min_max.MinMax metric) are, respectively, the min and max values of the anomaly maps computed over the validation set during training. So, if my validation set is the same, why is the metadata's stored "max" much higher than the maximum of the anomaly maps I get at inference time? The same goes for "min" and also for "image_threshold".
Then I realized it is actually expected that the "max" stored in metadata is higher than the real "max" of the anomaly maps of the already-trained model, and the "min" much lower. These values are updated whenever an anomaly map has values higher than the stored "max" or lower than the stored "min" (self.max = torch.max(self.max, torch.max(predictions))), and **in the first epochs, when the network is not yet well fitted, the "max" will be very high and the "min" very low. That makes it very difficult for these values to ever be updated again, so at the end of training we are left with values computed during the first epochs by an "early" model.**
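To make the effect concrete, here is a minimal pure-Python sketch of the running update rule quoted above (anomalib's MinMax metric does the same thing with torch tensors); the class name and the score values are illustrative, not taken from anomalib:

```python
class RunningMinMax:
    """Minimal sketch of a running min/max metric, mirroring the
    update rule self.max = torch.max(self.max, torch.max(predictions))."""

    def __init__(self):
        self.min = float("inf")
        self.max = float("-inf")

    def update(self, predictions):
        # Running extremes: an outlier from an early epoch can never be
        # replaced by the smaller values a fitted model produces later.
        self.min = min(self.min, min(predictions))
        self.max = max(self.max, max(predictions))


metric = RunningMinMax()
metric.update([0.1, 50.0])  # epoch 1: unfit model, extreme scores
metric.update([0.2, 1.5])   # later epochs: fitted model, modest scores
print(metric.min, metric.max)  # 0.1 50.0 -- stuck at epoch-1 extremes
```

The stored range never shrinks back toward what the final model actually produces, which is exactly the mismatch described above.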
The point here is that the metadata's "min", "max", and "image_threshold" when no masks are provided (in that case the image threshold is the max of the good anomaly maps) are not very useful for inference with the already-trained model. These values are normally used within the post_process step inside the predict method of the Inferencers, like TorchInferencer.
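A quick sketch of why stale values hurt at that step, assuming the usual min-max normalization that shifts the threshold to 0.5 and scales by the stored range (the function and numbers here are illustrative, not anomalib's exact code):

```python
def normalize(scores, threshold, min_val, max_val):
    # Sketch of min-max post-processing: map the threshold to 0.5 and
    # scale by the stored (min, max) range, clamping to [0, 1].
    out = []
    for s in scores:
        v = (s - threshold) / (max_val - min_val) + 0.5
        out.append(min(max(v, 0.0), 1.0))
    return out


# Stale metadata captured in early epochs vs. a fitted model's scores:
# with max_val=50 the denominator is huge, so normalized scores barely
# move away from 0.5 and everything looks equally "uncertain".
result = normalize([0.2, 1.5], threshold=5.0, min_val=0.0, max_val=50.0)
print(result)  # [0.404, 0.43] -- compressed into a narrow band
```

With a range taken from the final model (say max around 1.5) the same scores would spread across most of [0, 1].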
Maybe they should be reset at each epoch start?
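A sketch of what that reset would change, using the same illustrative running-min/max class as above (resetting could be done, for example, from a Lightning on_validation_epoch_start hook; the hook wiring is omitted here):

```python
class RunningMinMax:
    """Illustrative running min/max with a reset, to show the effect of
    clearing the state at each validation-epoch start."""

    def __init__(self):
        self.reset()

    def reset(self):
        self.min, self.max = float("inf"), float("-inf")

    def update(self, predictions):
        self.min = min(self.min, min(predictions))
        self.max = max(self.max, max(predictions))


# Resetting before each epoch keeps only the final epoch's statistics:
metric = RunningMinMax()
for epoch_scores in ([0.1, 50.0], [0.2, 1.5]):
    metric.reset()
    metric.update(epoch_scores)
print(metric.min, metric.max)  # 0.2 1.5 -- final-epoch range only
```

The exported metadata would then describe the final model's anomaly maps rather than the early-epoch extremes.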
Thank you so much for your valuable work!!
Dataset
MVTec
Model
N/A
Steps to reproduce the behavior
Train an anomalib model
Perform inference with the trained model on the same validation set that was used during training to validate and compute the metrics
See how "min", "max" and (if no masks are provided for anomalous classes) "image_threshold" don't match the anomaly maps you are inferencing
OS information
OS: Windows 10
Python version: 3.10.14
Anomalib version: 1.0.1
PyTorch version: 2.2.0+cu121
CUDA/cuDNN version: 12.4/8.9.2.26
GPU models and configuration: 1x GeForce RTX 3090
Expected behavior
"min", "max" and (if no masks are provided for anomalous classes) "image_threshold" should match the anomaly maps obtained when inferencing with an anomalib-trained model.
Screenshots
No response
Pip/GitHub
GitHub
What version/branch did you use?
No response
Configuration YAML
Not relevant
Logs
Not relevant
Code of Conduct
I agree to follow this project's Code of Conduct