13. How to run hyperparameter search

This guide focuses on the benchmark runner's search workflow, which lets you sweep method parameters in a controlled way while keeping the rest of the experiment fixed. For config fields and schema details, see the Configuration reference.

13.1 Problem statement

You want to run grid or random search over method parameters using the benchmark runner. [1][2] The search space only touches method.params.*, so data, sampling, and preprocessing stay fixed while you compare trials. [1]

13.2 When to use

Use this to tune method.params.* for an inductive or transductive method while keeping the rest of the experiment fixed. [1][2]

13.3 Steps

1) Create a bench config with a search block. [1]

2) Run the benchmark runner with that config. [3]

3) Inspect runs/<run>/hpo/trials.jsonl and the best patch in run.json. [2][4]
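Step 3's trials.jsonl is a JSON-lines file (one JSON object per line). A minimal reader sketch — the per-trial record keys are not documented here, so inspect your own runs for the actual fields:

```python
import json
from pathlib import Path

def load_trials(run_dir: str) -> list[dict]:
    """Parse one JSON object per line from the run's HPO trials log."""
    trials = []
    path = Path(run_dir) / "hpo" / "trials.jsonl"
    with path.open(encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:  # skip blank lines defensively
                trials.append(json.loads(line))
    return trials
```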

13.4 Copy-paste example

Use the CLI command when you want to execute a full search from a YAML config. Use the Python example when you want to build or inspect a search space in code. [3][5][6]

CLI:

python -m bench.main --config bench/configs/experiments/toy_inductive_hpo.yaml

Python:

from modssc.hpo import Space

# Build a grid over two method parameters; each list holds candidate values.
space = Space.from_dict(
    {"method": {"params": {"max_iter": [5, 10], "confidence_threshold": [0.7, 0.9]}}}
)
for trial in space.iter_grid():
    print(trial.index, trial.params)

The full HPO example config is in bench/configs/experiments/toy_inductive_hpo.yaml. The space primitives are in src/modssc/hpo/space.py. [5][6]
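The example config itself is not reproduced here. As a rough orientation, a search block constrained to method.params.* paths might look like the following hypothetical sketch — only search.space, search.seed, and search.n_trials are documented in this guide; everything else (including the exact nesting) is an assumption, so check bench/schema.py for the authoritative fields:

```yaml
# Hypothetical sketch only — verify field names against bench/schema.py.
search:
  space:
    method.params.max_iter: [5, 10]
    method.params.confidence_threshold: [0.7, 0.9]
  # For random search, both of these are required:
  seed: 0
  n_trials: 8
```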

13.5 Pitfalls

Warning

The bench schema restricts search.space to method.params.* paths in v1. [1]

Tip

Random search requires both search.seed and search.n_trials. [1]
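The Tip above can be illustrated with a plain-Python sketch of reproducible random sampling. This is not the modssc API or implementation — it only demonstrates the semantics of seed plus n_trials:

```python
import random

def iter_random(space: dict, seed: int, n_trials: int):
    # Hypothetical illustration, not the modssc implementation:
    # a fixed seed makes the sampled trials reproducible, and
    # n_trials bounds how many parameter combinations are drawn.
    rng = random.Random(seed)
    keys = sorted(space)
    for index in range(n_trials):
        yield index, {k: rng.choice(space[k]) for k in keys}

space = {"max_iter": [5, 10], "confidence_threshold": [0.7, 0.9]}
for index, params in iter_random(space, seed=0, n_trials=4):
    print(index, params)
```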

Sources
  1. bench/schema.py
  2. bench/orchestrators/hpo.py
  3. bench/main.py
  4. bench/orchestrators/reporting.py
  5. bench/configs/experiments/toy_inductive_hpo.yaml
  6. src/modssc/hpo/space.py