Log tables
Use `wandb.Table` to log data to visualize and query with W&B. In this guide, learn how to create tables, add data, retrieve data, and save tables.
Create tables
To define a Table, specify the columns you want to see for each row of data. Each row might be a single item in your training dataset, a particular step or epoch during training, a prediction made by your model on a test item, an object generated by your model, etc. Each column has a fixed type: numeric, text, boolean, image, video, audio, etc. You do not need to specify the type in advance. Give each column a name, and make sure to only pass data of that type into that column index. For a more detailed example, see this report.
Use the `wandb.Table` constructor in one of two ways:
- List of rows: Log named columns and rows of data. For example, the following code snippet generates a table with two rows and three columns.
- Pandas DataFrame: Log a DataFrame using `wandb.Table(dataframe=my_df)`. Column names are extracted from the DataFrame.
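A minimal sketch of the list-of-rows form; the column names and values here are purely illustrative:

```python
import wandb

# Define the columns, then pass rows of data matching those columns.
columns = ["a", "b", "c"]
data = [
    ["1a", "1b", "1c"],
    ["2a", "2b", "2c"],
]
table = wandb.Table(columns=columns, data=data)
```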
From an existing array or dataframe
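A sketch of constructing a table from an existing DataFrame or array; the contents are made up for illustration:

```python
import numpy as np
import pandas as pd
import wandb

# From a pandas DataFrame: column names are taken from the DataFrame.
my_df = pd.DataFrame({"a": ["1a", "2a"], "b": ["1b", "2b"], "c": ["1c", "2c"]})
df_table = wandb.Table(dataframe=my_df)

# From a 2D array: supply the column names explicitly.
my_array = np.array([["1a", "1b", "1c"], ["2a", "2b", "2c"]])
array_table = wandb.Table(columns=["a", "b", "c"], data=my_array)
```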
Add data
Tables are mutable. As your script executes, you can add more data to your table, up to 200,000 rows. There are two ways to add data to a table:
- Add a row: `table.add_data("3a", "3b", "3c")`. Note that the new row is not passed as a list. If your row is in list format, use the star notation, `*`, to expand the list into positional arguments: `table.add_data(*my_row_list)`. The row must contain the same number of entries as there are columns in the table.
- Add a column: `table.add_column(name="col_name", data=col_data)`. Note that the length of `col_data` must equal the table's current number of rows. Here, `col_data` can be a list or a NumPy NDArray.
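For example, a sketch combining both forms on an illustrative table; the column names and values are placeholders:

```python
import numpy as np
import wandb

table = wandb.Table(columns=["a", "b", "c"])

# Add rows one at a time, positionally matching the columns.
table.add_data("1a", "1b", "1c")
table.add_data("2a", "2b", "2c")

# Expand an existing list into positional arguments.
my_row_list = ["3a", "3b", "3c"]
table.add_data(*my_row_list)

# Add a column; its length must match the current row count (3 here).
table.add_column(name="d", data=np.array(["1d", "2d", "3d"]))
```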
Adding data incrementally
This code sample shows how to create and populate a W&B table incrementally. You define the table with predefined columns, including confidence scores for all possible labels, and add data row by row during inference. You can also add data to tables incrementally when resuming runs.
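As an illustration of that pattern, the sketch below populates a table row by row during a mocked inference loop; the project name, class labels, dummy images, and random scores are all stand-ins for your own model and data:

```python
import numpy as np
import wandb

# Hypothetical class labels and dummy inputs; replace with your own model and data.
class_names = ["cat", "dog", "bird"]
test_images = [np.random.randint(0, 255, (32, 32, 3), dtype=np.uint8) for _ in range(4)]

with wandb.init(project="table-demo") as run:
    # Predefine the columns, including one confidence score per possible label.
    columns = ["id", "image", "prediction"] + [f"score_{c}" for c in class_names]
    table = wandb.Table(columns=columns)

    for img_id, img in enumerate(test_images):
        # Stand-in for model inference: random scores that sum to 1.
        scores = np.random.dirichlet(np.ones(len(class_names)))
        pred = class_names[int(scores.argmax())]
        table.add_data(img_id, wandb.Image(img), pred, *scores)

    run.log({"predictions": table})
```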
Adding data to resumed runs
You can incrementally update a W&B table in resumed runs by loading an existing table from an artifact, retrieving the last row of data, and adding the updated metrics. Then, reinitialize the table for compatibility and log the updated version back to W&B.
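A rough sketch of that flow, assuming the table was originally logged under the key `metrics` and that its backing artifact follows a `run-<run_id>-<key>` naming convention (both assumptions here; adjust to your own project):

```python
import wandb

# Resume an earlier run; the run id and resume mode are placeholders.
with wandb.init(project="table-demo", id="YOUR_RUN_ID", resume="must") as run:
    # Load the existing table from the artifact created by the earlier log call.
    artifact = run.use_artifact(f"run-{run.id}-metrics:latest")
    old_table = artifact.get("metrics")

    # Retrieve the last row of data (here, assuming the first column is a step counter).
    last_step = old_table.data[-1][0]

    # Reinitialize the table for compatibility, then append the updated metrics.
    new_table = wandb.Table(columns=old_table.columns, data=old_table.data)
    new_table.add_data(last_step + 1, 0.42)  # illustrative step and metric value

    # Log the updated version back to W&B under the same key.
    run.log({"metrics": new_table})
```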
Retrieve data
Once data is in a Table, access it by column or by row:
- Row iterator: Use the Table's row iterator, for example `for ndx, row in table.iterrows(): ...`, to efficiently iterate over the data's rows.
- Get a column: Retrieve a column of data with `table.get_column("col_name")`. As a convenience, pass `convert_to="numpy"` to convert the column to a NumPy NDArray of primitives. This is useful if your column contains media types such as `wandb.Image`, so that you can access the underlying data directly.
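For example, a small self-contained sketch of both access patterns; the table contents are illustrative:

```python
import wandb

table = wandb.Table(columns=["a", "b"], data=[[1, 10], [2, 20]])

# Iterate over rows; each row is returned with its index.
for ndx, row in table.iterrows():
    print(ndx, row)

# Retrieve a single column, optionally converted to a NumPy array.
col_a = table.get_column("a")
col_a_np = table.get_column("a", convert_to="numpy")
```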
Save tables
After you generate a table of data in your script, for example a table of model predictions, save it to W&B to visualize the results live.
Log a table to a run
Use `wandb.log()` to save your table to the run, like so:
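For example (the project name and table key below are placeholders):

```python
import wandb

with wandb.init(project="table-demo") as run:
    table = wandb.Table(columns=["a", "b"], data=[["1a", "1b"], ["2a", "2b"]])
    run.log({"my_table": table})
```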
Each time a table is logged to the same key, a new version of the table is created and stored in the backend. This means you can log the same table across multiple training steps to see how model predictions improve over time, or compare tables across different runs, as long as they’re logged to the same key. You can log up to 200,000 rows.
To log more than 200,000 rows, you can override the limit with `wandb.Table.MAX_ARTIFACT_ROWS = X`. However, this would likely cause performance issues, such as slower queries, in the UI.
Access tables programmatically
In the backend, Tables are persisted as Artifacts. If you are interested in accessing a specific version, you can do so with the artifact API:
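A sketch of one way to do this; the artifact name, version, and table key below are placeholders:

```python
import wandb

with wandb.init(project="table-demo") as run:
    # Fetch a specific version of the table artifact created by an earlier log call.
    artifact = run.use_artifact("run-abc123-my_table:v0")
    table = artifact.get("my_table")
```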
For more information on Artifacts, see the Artifacts Chapter in the Developer Guide.
Visualize tables
Any table logged this way will show up in your Workspace on both the Run Page and the Project Page. For more information, see Visualize and Analyze Tables.
Artifact tables
Use `artifact.add()` to log tables to the Artifacts section of your run instead of the workspace. This could be useful if you have a dataset that you want to log once and then reference in future runs.
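For example, a minimal sketch (the artifact and table names are illustrative):

```python
import wandb

with wandb.init(project="table-demo") as run:
    table = wandb.Table(columns=["a", "b"], data=[["1a", "1b"], ["2a", "2b"]])

    # Create an artifact and add the table to it under a name of your choosing.
    artifact = wandb.Artifact("my_dataset", type="dataset")
    artifact.add(table, "my_table")
    run.log_artifact(artifact)
```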
Refer to this Colab for a detailed example of artifact.add() with image data and this Report for an example of how to use Artifacts and Tables to version control and deduplicate tabular data.
Join Artifact tables
You can join tables you have locally constructed or tables you have retrieved from other artifacts using `wandb.JoinedTable(table_1, table_2, join_key)`.
| Args | Description |
| --- | --- |
| `table_1` | (str, `wandb.Table`, ArtifactEntry) the path to a `wandb.Table` in an artifact, the table object, or ArtifactEntry |
| `table_2` | (str, `wandb.Table`, ArtifactEntry) the path to a `wandb.Table` in an artifact, the table object, or ArtifactEntry |
| `join_key` | (str, [str, str]) key or keys on which to perform the join |
To join two Tables you have logged previously in an artifact context, fetch them from the artifact and join the result into a new Table.
For example, the following demonstrates how to read one Table of original songs called `'original_songs'` and another Table of synthesized versions of the same songs called `'synth_songs'`, join the two tables on `"song_id"`, and upload the resulting table as a new W&B Table:
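A sketch of that flow; the artifact names, table keys, and aliases below are assumptions:

```python
import wandb

with wandb.init(project="songs-demo") as run:
    # Fetch the two previously logged tables from their artifacts.
    orig_artifact = run.use_artifact("original_songs:latest")
    synth_artifact = run.use_artifact("synth_songs:latest")
    orig_table = orig_artifact.get("original_songs")
    synth_table = synth_artifact.get("synth_songs")

    # Join on the shared key and log the result inside a new artifact.
    joined = wandb.JoinedTable(orig_table, synth_table, "song_id")
    join_artifact = wandb.Artifact("song_comparison", type="analysis")
    join_artifact.add(joined, "joined_songs")
    run.log_artifact(join_artifact)
```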
Read this tutorial for an example of how to combine two tables previously stored in different Artifact objects.