Logging & Visualization
Logging in Pipelines
There are three options to log your experiments when working with Pipelines:
Tensorboard — active by default if no `logger` is specified in the config. It logs charts (losses, metrics, statistics) and visualisations of the model's mistakes for further analysis. If you want to change the logging directory, you may configure the logger explicitly:

```yaml
...

logger:
  name: tensorboard
  args:
    save_dir: "."

...
```
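Once training is running, you can inspect these logs locally with the standard TensorBoard CLI (this is plain TensorBoard usage, not OML-specific):

```bash
tensorboard --logdir .
```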
Neptune — an option for advanced logging & collaboration with a team. It logs everything logged by Tensorboard, but also the original source code, all the configs for easier reproducibility, and telemetry such as GPU, CPU and memory utilisation. Your config and run command may look like this:

```yaml
...

logger:
  name: neptune  # requires <NEPTUNE_API_TOKEN> as a global env variable
  args:
    project: "oml-team/test"

...
```

```bash
export NEPTUNE_API_TOKEN=your_token; python train.py
```
Weights and Biases — an option for advanced logging & collaboration with a team. Its functionality and usage are similar to Neptune's. Your config and run command may look like this:

```yaml
...

logger:
  name: wandb
  args:
    project: "test_project"

...
```

```bash
export WANDB_API_KEY=your_token; python train.py
```
Let’s consider an example of what you get using Neptune for the feature extractor pipeline.
[Figure: Neptune dashboard showing metric charts and loss statistics]
In the example above you can observe graphs of:

- Metrics such as `CMC@1`, `Precision@5` and `MAP@5`, which were provided in the config file as `metric_args`. Note that you can set `metric_args.return_only_overall_category: False` to log metrics independently for each of the categories (if your dataset has them); see the config sketch after this list.
- Loss values averaged over batches and epochs. Some of the built-in OML losses come with their own additional statistics, which are also logged. We used `TripletLossWithMargin` in our example, which tracks positive distances, negative distances and the fraction of active triplets (those for which the loss is greater than zero).
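For instance, a sketch of the corresponding config fragment (following the schema of the configs above; check your pipeline's config reference for the exact fields):

```yaml
...

metric_args:
  return_only_overall_category: False  # also log metrics per category

...
```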
[Figure: visualisation of the model's worst predictions by MAP@5]
The image above shows the model's worst predictions in terms of the MAP@5 metric. In particular, each row contains:

- A query (blue)
- The five gallery items closest to the query, along with the corresponding distances (all red here because they are irrelevant to the query)
- At most two ground truths (grey), to give an idea of what the model should have returned

There is also a slider that helps you track the model's progress from epoch to epoch.
Logging in Python
Using Lightning
The easiest way is to use Lightning's integrations with Tensorboard, Neptune or Weights and Biases.
Take a look at the following example: Training + Validation [Lightning and logging].
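For instance, wiring one of these loggers into a Trainer may look like the following sketch (the Lightning module and dataloader come from your own OML setup, as in the linked example):

```python
import pytorch_lightning as pl
from pytorch_lightning.loggers import NeptuneLogger, TensorBoardLogger, WandbLogger

# Pick one of Lightning's built-in loggers:
logger = TensorBoardLogger(save_dir=".")
# logger = NeptuneLogger(project="oml-team/test")  # expects NEPTUNE_API_TOKEN in env
# logger = WandbLogger(project="test_project")     # expects WANDB_API_KEY in env

trainer = pl.Trainer(max_epochs=3, logger=logger)
# trainer.fit(pl_module, train_dataloaders=train_loader)  # your module & loader
```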
Using plain Python
Log whatever information you want using the tool of your choice. We just provide some tips on how to get this information. There are two main sources of logs:
- Criterion (loss). Some of the built-in OML losses have their own additional statistics, which are stored in the `last_logs` field. See Training in the examples; a runnable sketch also follows this list.
- Metrics calculator — `EmbeddingMetrics`. It has plenty of methods useful for logging. See Validation in the examples.
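The snippet below is a runnable sketch of the first pattern. `ToyLossWithLogs` is a hypothetical stand-in, but real OML losses expose their statistics through the same `last_logs` field:

```python
import torch
from torch import nn


class ToyLossWithLogs(nn.Module):
    """Stand-in mimicking OML's pattern: extra stats land in `last_logs`."""

    def __init__(self, margin: float = 0.2):
        super().__init__()
        self.margin = margin
        self.last_logs = {}

    def forward(self, dist_pos: torch.Tensor, dist_neg: torch.Tensor) -> torch.Tensor:
        losses = torch.relu(dist_pos - dist_neg + self.margin)  # triplet loss
        self.last_logs = {
            "avg_dist_pos": dist_pos.mean().item(),
            "avg_dist_neg": dist_neg.mean().item(),
            "active_tri": (losses > 0).float().mean().item(),  # fraction of active triplets
        }
        return losses.mean()


criterion = ToyLossWithLogs()
loss = criterion(torch.rand(32), torch.rand(32))
print(loss.item(), criterion.last_logs)  # pass these values to the logger of your choice
```

`EmbeddingMetrics` is consumed analogously: after validation, its computed metrics form a nested dictionary that you can flatten and send to your logger the same way (see the Validation example for the exact calls).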
We also recommend you take a look at:
- The Visualisation notebook, for interactive error analysis and visualising attention maps.