RoBERTa

Overview

The RoBERTa model was proposed in RoBERTa: A Robustly Optimized BERT Pretraining Approach by Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov. It is based on Google’s BERT model released in 2018.

It builds on BERT and modifies key hyperparameters, removing the next-sentence pretraining objective and training with much larger mini-batches and learning rates.

The abstract from the paper is the following:

Language model pretraining has led to significant performance gains but careful comparison between different approaches is challenging. Training is computationally expensive, often done on private datasets of different sizes, and, as we will show, hyperparameter choices have significant impact on the final results. We present a replication study of BERT pretraining (Devlin et al., 2019) that carefully measures the impact of many key hyperparameters and training data size. We find that BERT was significantly undertrained, and can match or exceed the performance of every model published after it. Our best model achieves state-of-the-art results on GLUE, RACE and SQuAD. These results highlight the importance of previously overlooked design choices, and raise questions about the source of recently reported improvements. We release our models and code.

Tips:

  • This implementation is the same as BertModel, with a minor tweak to the embeddings and a setup for RoBERTa pretrained models.

  • RoBERTa has the same architecture as BERT, but uses a byte-level BPE tokenizer (the same as GPT-2) and a different pretraining scheme; see the tokenizer sketch after these tips.

  • CamemBERT is a wrapper around RoBERTa. Refer to this page for usage examples.
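
To make the byte-level BPE point concrete, here is a minimal sketch. It uses the Hugging Face transformers RobertaTokenizer as a stand-in (an assumption; it is not part of tf_transformers), since it exposes the same roberta-base byte-level BPE vocabulary:

>>> # Hedged sketch: the Hugging Face RobertaTokenizer stands in for the
>>> # byte-level BPE described above; it is not part of tf_transformers.
>>> from transformers import RobertaTokenizer
>>> tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
>>> # The leading space is folded into the token itself ("Ġworld"), unlike
>>> # BERT's WordPiece, which splits on whitespace before subword tokenization.
>>> tokenizer.tokenize("Hello world")
['Hello', 'Ġworld']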


RobertaConfig

class tf_transformers.models.roberta.RobertaConfig(vocab_size=50265, embedding_size=768, num_hidden_layers=12, num_attention_heads=12, attention_head_size=64, intermediate_size=3072, hidden_act='gelu', intermediate_act='gelu', hidden_dropout_prob=0, attention_probs_dropout_prob=0, max_position_embeddings=512, type_vocab_size=1, initializer_range=0.02, layer_norm_epsilon=1e-05, position_embedding_type='absolute')[source]

This is the configuration class to store the configuration of a RobertaModel. It is used to instantiate a RoBERTa model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a configuration similar to that of the RoBERTa base architecture.

Configuration objects inherit from TransformerConfig and can be used to control the model outputs. Read the documentation from TransformerConfig for more information.

Parameters
  • vocab_size (int, optional, defaults to 50265) – Vocabulary size of the RoBERTa model. Defines the number of different tokens that can be represented by the inputs_ids passed when calling RobertaModel or RobertaEncoder.

  • embedding_size (int, optional, defaults to 768) – Dimensionality of the vocabulary embeddings.

  • embedding_projection_size (int) – Dimensionality of the encoder layers and the pooler layer. Useful for Roberta.

  • num_hidden_layers (int, optional, defaults to 12) – Number of hidden layers in the Transformer encoder.

  • num_attention_heads (int, optional, defaults to 12) – Number of attention heads for each attention layer in the Transformer encoder.

  • attention_head_size (int) – Size of each attention head in every layer. Normally embedding_size // num_attention_heads (see the last example below).

  • intermediate_size (int, optional, defaults to 3072) – The dimensionality of the “intermediate” (often named feed-forward) layer in the Transformer encoder.

  • hidden_act (str or Callable, optional, defaults to "gelu") – The non-linear activation function (function or string) in the encoder and pooler. If a string, "gelu", "relu", "silu" and several others are supported.

  • hidden_dropout_prob (float, optional, defaults to 0) – The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.

  • max_position_embeddings (int, optional, defaults to 512) – The maximum sequence length that this model might ever be used with. Typically set this to something large (e.g., 512 or 1024 or 2048).

  • type_vocab_size (int, optional, defaults to 1) – The vocabulary size of the token_type_ids passed when calling RobertaModel or RobertaEncoder.

  • initializer_range (float, optional, defaults to 0.02) – The standard deviation of the truncated_normal_initializer for initializing all weight matrices.

  • layer_norm_epsilon (float, optional, defaults to 1e-05) – The epsilon used by the layer normalization layers.

  • classifier_dropout_prob (float, optional, defaults to 0.1) – The dropout ratio for attached classifiers.

  • position_embedding_type (str, optional, defaults to "absolute") – Type of position embedding. Choose one of "absolute", "relative_key", "relative_key_query". For positional embeddings use "absolute". For more information on "relative_key", please refer to Self-Attention with Relative Position Representations (Shaw et al.). For more information on "relative_key_query", please refer to Method 4 in Improve Transformer Models with Better Relative Position Embeddings (Huang et al.).

  • num_hidden_groups (int, optional, defaults to 1) – Number of groups for the hidden layers, parameters in the same group are shared.

Examples:

>>> from tf_transformers.models import RobertaConfig, RobertaModel
>>> # Initializing a roberta-base style configuration
>>> configuration = RobertaConfig()

>>> # Initializing a RoBERTa configuration with non-default values
>>> configuration_new = RobertaConfig(
...      embedding_size=768,
...      num_attention_heads=12,
...      intermediate_size=3072,
...  )

>>> # Initializing a model from the original configuration
>>> model = RobertaModel.from_config(configuration)

>>> # Accessing the model configuration
>>> configuration = model._config_dict  # This has more details than the original configuration
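
As noted for attention_head_size, the head size normally equals embedding_size // num_attention_heads. A minimal sketch of that arithmetic with the constructor arguments shown above (768 // 12 == 64):

>>> # Hedged sketch: attention_head_size is normally embedding_size // num_attention_heads.
>>> custom_config = RobertaConfig(
...      embedding_size=768,
...      num_attention_heads=12,
...      attention_head_size=768 // 12,  # == 64
...  )
>>> model = RobertaModel.from_config(custom_config)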

RobertaModel
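
As a hedged sketch only: a RobertaModel can be built from a configuration with from_config (as in the RobertaConfig examples above) and called like a regular Keras model. The input dictionary keys below ("input_ids", "input_mask", "input_type_ids") and the dictionary-of-tensors output are assumptions about the tf_transformers calling convention, not a documented contract.

>>> # Hedged sketch: input names and output structure are assumptions.
>>> import tensorflow as tf
>>> from tf_transformers.models import RobertaConfig, RobertaModel
>>> model = RobertaModel.from_config(RobertaConfig())
>>> inputs = {
...      "input_ids": tf.constant([[0, 31414, 232, 2]]),      # example token ids
...      "input_mask": tf.constant([[1, 1, 1, 1]]),
...      "input_type_ids": tf.constant([[0, 0, 0, 0]]),
...  }
>>> outputs = model(inputs)  # expected to be a dict of tensors (assumption)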

RobertaEncoder