We use the torchtext BucketIterator to create batches; each batch is a tensor of shape [sequence length, batch size]. In our case, the shape will be [200, 32], where 200 is the sequence length and 32 is the batch size.
The following is the code used for batching:
train_iter, test_iter = data.BucketIterator.splits((train, test), batch_size=32, device=-1)  # device=-1 keeps tensors on the CPU (legacy API)
train_iter.repeat = False
test_iter.repeat = False
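To see why a bucketing iterator helps, the following is a minimal, self-contained sketch of the underlying idea (illustrative only, not torchtext's actual implementation): sort examples by length, then batch neighbors together so that each batch needs as little padding as possible.

```python
# Illustrative sketch of bucketing (hypothetical helper, not torchtext code):
# sorting by length before batching keeps similarly sized sequences together,
# so padding each batch to its own max length wastes few positions.
def bucket_batches(examples, batch_size, pad_token="<pad>"):
    ordered = sorted(examples, key=len)          # group similar lengths
    batches = []
    for i in range(0, len(ordered), batch_size):
        batch = ordered[i:i + batch_size]
        max_len = max(len(seq) for seq in batch)  # pad only within the batch
        batches.append([seq + [pad_token] * (max_len - len(seq))
                        for seq in batch])
    return batches

examples = [["a"], ["b", "c"], ["d", "e", "f"], ["g", "h"]]
batches = bucket_batches(examples, batch_size=2)
```

Within each resulting batch, every sequence is padded to that batch's own maximum length rather than the global maximum, which is the saving BucketIterator provides.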