Benchmark T5

This benchmark measures the performance of the T5 model on text generation tasks. We evaluate it with four frameworks: Tensorflow-Transformers (default), HuggingFace PyTorch, HuggingFace Tensorflow, and HuggingFace JAX (pending). Running these scripts is fairly straightforward; users are expected to install the necessary libraries before executing the benchmark script.

All configuration is managed using Hydra.
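
For reference, here is a minimal sketch of what a Hydra-driven entry point such as run.py typically looks like; the config_path/config_name values and the dispatch comment are assumptions for illustration, not the repository's actual code:

```python
import hydra
from omegaconf import DictConfig, OmegaConf


# Assumes a conf/config.yaml exists; both names are placeholders here.
@hydra.main(config_path="conf", config_name="config")
def run_benchmark(cfg: DictConfig) -> None:
    # Hydra composes the YAML files plus any command-line overrides
    # (e.g. benchmark=tft or +benchmark.text_generation.num_beams=3) into cfg.
    print(OmegaConf.to_yaml(cfg))
    # ...dispatch to the selected benchmark implementation here...


if __name__ == "__main__":
    run_benchmark()
```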

-> Machine - Tesla V100-SXM2 32GB

-> TensorFlow version - 2.4.1

-> HuggingFace Transformers version - 4.12.5

-> PyTorch version - 1.9.0

Tensorflow-Transformers. (tft)

The default benchmark mode is tft.

  1. To execute Greedy Decoding (default): python run.py benchmark=tft

  2. To execute Beam Decoding: python run.py benchmark=tft benchmark.text_generation.mode=beam +benchmark.text_generation.num_beams=3

     The available model types (benchmark.model.type, illustrated by the sketch after this list) are:

     * a. textdecoder_keras_model    -  Uses tf.keras.Model.
     * b. textdecoder_saved_model    -  Uses tf.saved_model; a `for` loop drives the decoding.
     * c. textdecoder_serializable   -  Uses tf.saved_model; the `model + text generation` loop is serialized together.

  3. To execute Greedy Decoding with, say, textdecoder_keras_model: python run.py benchmark=tft benchmark.model.type=textdecoder_keras_model
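
To make the distinction between the model types concrete, here is a toy, hypothetical sketch (not the tf-transformers implementation) of case c: the generation loop is traced into a tf.function and exported with the model, so the whole decode runs inside the SavedModel graph. Cases a and b instead drive a per-step model from Python, either in memory (tf.keras.Model) or via a reloaded tf.saved_model.

```python
import tensorflow as tf


class SerializableDecoderSketch(tf.Module):
    """Toy stand-in for textdecoder_serializable: loop + model exported together."""

    def __init__(self):
        super().__init__()
        self.proj = tf.keras.layers.Dense(8)
        self.proj.build((None, 4))  # create weights up front

    @tf.function(input_signature=[tf.TensorSpec([None, 4], tf.float32)])
    def decode(self, x):
        # The (toy) generation loop is part of the traced graph, so the exported
        # SavedModel runs every decoding step without returning to Python.
        for _ in range(3):
            x = tf.nn.relu(x)
        return self.proj(x)


module = SerializableDecoderSketch()
tf.saved_model.save(module, "/tmp/serializable_decoder_sketch")

# textdecoder_saved_model would instead export only the per-step model and drive
# it from a Python `for` loop; textdecoder_keras_model skips export entirely.
reloaded = tf.saved_model.load("/tmp/serializable_decoder_sketch")
print(reloaded.decode(tf.random.uniform((2, 4))).shape)  # (2, 8)
```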

[Note] - Any argument accepted by tf_transformers.text.TextDecoder.decode can be added to the Hydra configuration using the + prefix.
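
For example, to override one of the decode arguments from the command line (max_iterations is used purely as a placeholder name here; check the TextDecoder.decode signature for the exact argument names it accepts): python run.py benchmark=tft +benchmark.text_generation.max_iterations=32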

HuggingFace-Tensorflow. (hf-tf)

  1. To execute Greedy Decoding (default): python run.py benchmark=hf benchmark.model.type=tf

  2. To execute Beam Decoding: python run.py benchmark=hf benchmark.model.type=tf +benchmark.text_generation.num_beams=3

[Note] - Any argument accepted by model.generate can be added to the Hydra configuration using the + prefix.
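
For example, to benchmark sampling instead of greedy or beam search, standard model.generate arguments such as do_sample and top_k can be forwarded the same way as num_beams above (the benchmark.text_generation prefix is assumed from that example): python run.py benchmark=hf benchmark.model.type=tf +benchmark.text_generation.do_sample=true +benchmark.text_generation.top_k=50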

HuggingFace-PyTorch. (hf-pt)

  1. To execute Greedy Decoding (default): python run.py benchmark=hf benchmark.model.type=pt

  2. To execute Beam Decoding: python run.py benchmark=hf benchmark.model.type=pt +benchmark.text_generation.num_beams=3

HuggingFace-JAX. (hf-jax) (Not Available)

  1. To execute Greedy Decoding (default): python run.py benchmark=hf benchmark.model.type=jax

  2. To execute Beam Decoding: python run.py benchmark=hf benchmark.model.type=jax +benchmark.text_generation.num_beams=3

Official Benchmarks on XSUM

Greedy Decode:

| framework                  |   batch_size | mode   |   max_length |   time (s) |   samples/second |
|:---------------------------|-------------:|:-------|-------------:|-----------:|-----------------:|
| tf_transformers_serialized |           32 | greedy |           64 |        514 |               22 |
| tf_transformers + tf-text  |           32 | greedy |           64 |        517 |               22 |
| hf_tf                      |           32 | greedy |           64 |       1800 |                6 |
| hf_pt                      |           32 | greedy |           64 |        278 |               41 |
| hf_jax (model.generate)    |           32 | greedy |           64 |        N/A |              N/A |
| hf_jax (pmap)              |           32 | greedy |           64 |        N/A |              N/A |

Beam Decode:

| framework                  |   batch_size | mode               |   max_length |   time (s) |   samples/second |
|:---------------------------|-------------:|:-------------------|-------------:|-----------:|-----------------:|
| tf_transformers_serialized |           32 | beam - num_beams=3 |           64 |        503 |               23 |
| tf_transformers + tf-text  |           32 | beam - num_beams=3 |           64 |        509 |               23 |
| hf_tf                      |           32 | beam - num_beams=3 |           64 |       3240 |                3 |
| hf_pt                      |           32 | beam - num_beams=3 |           64 |        660 |               17 |
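
The samples/second column is the number of evaluation examples divided by wall-clock time. As a quick sanity check using only the numbers in the greedy table:

```python
# ~22 samples/s sustained over 514 s implies roughly 11.3k examples processed,
# i.e. roughly one pass over the XSUM test split.
elapsed_s = 514
samples_per_s = 22
print(elapsed_s * samples_per_s)  # 11308
```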