
[Bug]: Modified model and data configuration parameters are ignored; the config file generated after training still shows the default values, not the values I modified #2064

Open
MMYY-yy opened this issue May 16, 2024 · 0 comments


MMYY-yy commented May 16, 2024

Describe the bug

The modified data and model configuration parameters are not used during training.

Dataset

Other (please specify in the text field below)

Model

PADiM

Steps to reproduce the behavior

Training via CLI
anomalib train --model Padim --data anomalib.data.MVTec
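A likely explanation (my assumption, not confirmed in this thread): the command above never points the CLI at the edited YAML file, so anomalib rebuilds the configuration from the class defaults of `Padim` and `MVTec`. Passing the edited file explicitly should apply the overrides; `config.yaml` below is a placeholder for wherever the edited file was saved:

```shell
# Sketch, assuming the edited configuration was saved as config.yaml:
# point the CLI at the file instead of relying on the class defaults.
anomalib train --config config.yaml
```

Since anomalib's CLI is built on jsonargparse/LightningCLI, individual values should also be overridable directly on the command line, e.g. `--data.init_args.train_batch_size 16` (again an assumption based on general LightningCLI behaviour, not verified here).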

OS information

  • OS: Windows 11

  • Python version: 3.10

  • Anomalib version: 1.1.0.dev0

  • Hardware: CPU

  • Any other relevant information: [e.g. I'm using a custom dataset]

Expected behavior

I want the modified data and model config parameters to be applied directly during training.


Pip/GitHub

pip

What version/branch did you use?

0.7.0

Configuration YAML

I modified the following values:

image_size: [64, 64]
train_batch_size: 16
eval_batch_size: 16
backbone: wide_resnet50_2


data:
  class_path: anomalib.data.MVTec
  init_args:
    root: ./datasets/MVTec
    category: bottle
    image_size: [64, 64]
    train_batch_size: 16
    eval_batch_size: 16
    num_workers: 8
    task: segmentation
    transform: null
    train_transform: null
    eval_transform: null
    test_split_mode: from_dir
    test_split_ratio: 0.2
    val_split_mode: same_as_test
    val_split_ratio: 0.5
    seed: null

model:
  class_path: anomalib.models.Padim
  init_args:
    layers:
      - layer1
      - layer2
      - layer3
    backbone: wide_resnet50_2     #  resnet18
    pre_trained: true
    n_features: null

metrics:
  pixel: AUROC

Logs

But the generated config file shows that these parameters are still at their default values, e.g.:

image_size: null
train_batch_size: 32
eval_batch_size: 32
backbone: resnet18


# anomalib==1.1.0dev
seed_everything: true
trainer:
  accelerator: auto
  strategy: auto
  devices: 1
  num_nodes: 1
  precision: null
  logger: null
  callbacks: null
  fast_dev_run: false
  max_epochs: null
  min_epochs: null
  max_steps: -1
  min_steps: null
  max_time: null
  limit_train_batches: null
  limit_val_batches: null
  limit_test_batches: null
  limit_predict_batches: null
  overfit_batches: 0.0
  val_check_interval: null
  check_val_every_n_epoch: 1
  num_sanity_val_steps: null
  log_every_n_steps: null
  enable_checkpointing: null
  enable_progress_bar: null
  enable_model_summary: null
  accumulate_grad_batches: 1
  gradient_clip_val: null
  gradient_clip_algorithm: null
  deterministic: null
  benchmark: null
  inference_mode: true
  use_distributed_sampler: true
  profiler: null
  detect_anomaly: false
  barebones: false
  plugins: null
  sync_batchnorm: false
  reload_dataloaders_every_n_epochs: 0
normalization:
  normalization_method: MIN_MAX
task: SEGMENTATION
metrics:
  image:
  - F1Score
  - AUROC
  pixel: null
  threshold:
    class_path: anomalib.metrics.F1AdaptiveThreshold
    init_args:
      default_value: 0.5
      thresholds: null
      ignore_index: null
      validate_args: true
      compute_on_cpu: false
      dist_sync_on_step: false
      sync_on_compute: true
      compute_with_cache: true
logging:
  log_graph: false
default_root_dir: results
ckpt_path: null
model:
  class_path: anomalib.models.Padim
  init_args:
    backbone: resnet18
    layers:
    - layer1
    - layer2
    - layer3
    pre_trained: true
    n_features: null
data:
  class_path: anomalib.data.MVTec
  init_args:
    root: datasets\MVTec
    category: bottle
    train_batch_size: 32
    eval_batch_size: 32
    num_workers: 8
    image_size: null
    transform: null
    train_transform: null
    eval_transform: null
    test_split_mode: FROM_DIR
    test_split_ratio: 0.2
    val_split_mode: SAME_AS_TEST
    val_split_ratio: 0.5
    seed: null

Code of Conduct

  • I agree to follow this project's Code of Conduct