Using a GPU, we can train a model in a few minutes and evaluate several hundred parameter combinations in a matter of hours, whereas this would take many days with the sklearn implementation. For the LightGBM model, we explore both a factor version that uses the library's built-in ability to handle categorical variables and a dummy version that relies on one-hot encoding.
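The following sketch illustrates the difference between the two variants; the column names, hyperparameters, and number of boosting rounds are placeholders rather than the actual tuning grid:

```python
import lightgbm as lgb
import pandas as pd

def train_factor_version(X: pd.DataFrame, y: pd.Series, cat_cols: list):
    """Factor version: pass categoricals directly via LightGBM's native support."""
    X = X.copy()
    for col in cat_cols:
        X[col] = X[col].astype('category')  # LightGBM recognizes pandas category dtype
    train_set = lgb.Dataset(X, label=y, categorical_feature=cat_cols)
    params = {'objective': 'regression', 'num_leaves': 31, 'learning_rate': 0.05}
    return lgb.train(params, train_set, num_boost_round=100)

def train_dummy_version(X: pd.DataFrame, y: pd.Series, cat_cols: list):
    """Dummy version: one-hot encode categoricals before training."""
    X_dummies = pd.get_dummies(X, columns=cat_cols)
    train_set = lgb.Dataset(X_dummies, label=y)
    params = {'objective': 'regression', 'num_leaves': 31, 'learning_rate': 0.05}
    return lgb.train(params, train_set, num_boost_round=100)
```

The factor version avoids the column blow-up of one-hot encoding and lets LightGBM find optimal splits over category groups directly.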
The results are available in the model_tuning.h5 HDF5 store. The model evaluation code samples are in the eval_results.ipynb notebook.
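To inspect the stored results, you can open the HDF5 store with pandas; the key names and columns below are assumptions, so check `store.keys()` for the actual tables written by the tuning run:

```python
import pandas as pd

with pd.HDFStore('model_tuning.h5') as store:
    print(store.keys())                # list the result tables in the store
    results = store[store.keys()[0]]   # load the first table as a DataFrame

# Example: rank parameter combinations by a validation metric, assuming the
# stored table contains a column such as 'ic' (information coefficient):
# print(results.sort_values('ic', ascending=False).head())
```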