BERT

Overview

The BERT model was proposed in BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding by Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova. It’s a bidirectional transformer pretrained with a combination of masked language modeling and next sentence prediction objectives on a large corpus comprising the Toronto Book Corpus and Wikipedia.

The abstract from the paper is the following:

We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications.

BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5% (7.7% point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement).

Tips:

  • BERT is a model with absolute position embeddings, so it is usually advised to pad the inputs on the right rather than the left (a short padding sketch follows this list).

  • BERT was trained with the masked language modeling (MLM) and next sentence prediction (NSP) objectives. It is efficient at predicting masked tokens and at NLU in general, but is not optimal for text generation.
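
As a minimal illustration of the first tip, the snippet below right-pads already-tokenized sequences using plain TensorFlow. The token ids, the pad id 0, and the fixed max_length are assumptions for the sketch; in practice a tokenizer would produce these inputs.

>>> import tensorflow as tf
>>> # Hypothetical tokenized sequences of unequal length; 0 is assumed to be
>>> # the [PAD] id, as in the standard BERT vocabulary.
>>> sequences = [[101, 7592, 2088, 102], [101, 7592, 102]]
>>> max_length = 8  # assumed fixed length for the batch
>>> input_ids = tf.constant(
...     [seq + [0] * (max_length - len(seq)) for seq in sequences], dtype=tf.int32
... )
>>> # Mask with 1 for real tokens and 0 for the padding appended on the right.
>>> input_mask = tf.cast(tf.not_equal(input_ids, 0), tf.int32)
>>> input_ids.shape
TensorShape([2, 8])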


BertConfig

class tf_transformers.models.bert.BertConfig(vocab_size=30522, embedding_size=768, num_hidden_layers=12, num_attention_heads=12, attention_head_size=64, intermediate_size=3072, hidden_act='gelu', intermediate_act='gelu', hidden_dropout_prob=0, attention_probs_dropout_prob=0, max_position_embeddings=512, type_vocab_size=2, initializer_range=0.02, layer_norm_epsilon=1e-12, position_embedding_type='absolute')[source]

This is the configuration class to store the configuration of a BertModel. It is used to instantiate a BERT model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the BERT base architecture.

Configuration objects inherit from TransformerConfig and can be used to control the model outputs. Read the documentation from TransformerConfig for more information.

Parameters
  • vocab_size (int, optional, defaults to 30522) – Vocabulary size of the BERT model. Defines the number of different tokens that can be represented by the inputs_ids passed when calling BertModel or BertEncoder.

  • embedding_size (int, optional, defaults to 768) – Dimensionality of the vocabulary embeddings.

  • embedding_projection_size (int) – Dimensionality of the encoder layers and the pooler layer. Useful for BERT.

  • num_hidden_layers (int, optional, defaults to 12) – Number of hidden layers in the Transformer encoder.

  • num_attention_heads (int, optional, defaults to 12) – Number of attention heads for each attention layer in the Transformer encoder.

  • attention_head_size (int) – Size of each attention head. Normally embedding_size // num_attention_heads (see the worked example after this list).

  • intermediate_size (int, optional, defaults to 3072) – The dimensionality of the “intermediate” (often named feed-forward) layer in the Transformer encoder.

  • hidden_act (str or Callable, optional, defaults to "gelu") – The non-linear activation function (function or string) in the encoder and pooler. If a string, "gelu", "relu", and "silu" are supported.

  • hidden_dropout_prob (float, optional, defaults to 0) – The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.

  • max_position_embeddings (int, optional, defaults to 512) – The maximum sequence length that this model might ever be used with. Typically set this to something large (e.g., 512 or 1024 or 2048).

  • type_vocab_size (int, optional, defaults to 2) – The vocabulary size of the token_type_ids passed when calling BertModel or BertEncoder.

  • initializer_range (float, optional, defaults to 0.02) – The standard deviation of the truncated_normal_initializer for initializing all weight matrices.

  • layer_norm_epsilon (float, optional, defaults to 1e-12) – The epsilon used by the layer normalization layers.

  • classifier_dropout_prob (float, optional, defaults to 0.1) – The dropout ratio for attached classifiers.

  • position_embedding_type (str, optional, defaults to "absolute") – Type of position embedding. Choose one of "absolute", "relative_key", "relative_key_query". For positional embeddings use "absolute". For more information on "relative_key", please refer to Self-Attention with Relative Position Representations (Shaw et al.). For more information on "relative_key_query", please refer to Method 4 in Improve Transformer Models with Better Relative Position Embeddings (Huang et al.).

  • num_hidden_groups (int, optional, defaults to 1) – Number of groups for the hidden layers; parameters within the same group are shared.
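
A quick consistency check of the defaults above (arithmetic only, no library call): with embedding_size=768 and num_attention_heads=12, the per-head size is 768 // 12 = 64, which matches the attention_head_size default in the signature.

>>> embedding_size, num_attention_heads = 768, 12
>>> embedding_size // num_attention_heads
64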

Examples:

>>> from tf_transformers.models import BertConfig, BertModel
>>> # Initializing a bert-base-uncased style configuration
>>> configuration = BertConfig()

>>> # Initializing a BERT configuration with non-default values
>>> configuration_new = BertConfig(
...      embedding_size=768,
...      num_attention_heads=12,
...      intermediate_size=3072,
...  )

>>> # Initializing a model from the original configuration
>>> model = BertModel.from_config(configuration)

>>> # Accessing the model configuration
>>> configuration = model._config_dict  # This contains more details than the original configuration
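
The position_embedding_type parameter documented above can be exercised the same way. The sketch below is illustrative only: it picks "relative_key" arbitrarily and assumes, as in the example above, that BertModel.from_config accepts any valid configuration.

>>> # Initializing a configuration that uses relative position embeddings
>>> configuration_relative = BertConfig(position_embedding_type="relative_key")

>>> # Building a model from it, as before
>>> model_relative = BertModel.from_config(configuration_relative)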

BertModel

BertEncoder