CLIP

Overview

The CLIP model was proposed in [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/abs/2103.00020) by Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger and Ilya Sutskever. CLIP (Contrastive Language-Image Pre-Training) is a neural network trained on a variety of (image, text) pairs. It can be instructed in natural language to predict the most relevant text snippet given an image, without directly optimizing for the task, similar to the zero-shot capabilities of GPT-2 and GPT-3.

The abstract from the paper is the following:

State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories. This restricted form of supervision limits their generality and usability since additional labeled data is needed to specify any other visual concept. Learning directly from raw text about images is a promising alternative which leverages a much broader source of supervision. We demonstrate that the simple pre-training task of predicting which caption goes with which image is an efficient and scalable way to learn SOTA image representations from scratch on a dataset of 400 million (image, text) pairs collected from the internet. After pre-training, natural language is used to reference learned visual concepts (or describe new ones) enabling zero-shot transfer of the model to downstream tasks. We study the performance of this approach by benchmarking on over 30 different existing computer vision datasets, spanning tasks such as OCR, action recognition in videos, geo-localization, and many types of fine-grained object classification. The model transfers non-trivially to most tasks and is often competitive with a fully supervised baseline without the need for any dataset specific training. For instance, we match the accuracy of the original ResNet-50 on ImageNet zero-shot without needing to use any of the 1.28 million training examples it was trained on. We release our code and pre-trained model weights at this https URL.

CLIP is a multi-modal vision and language model. It can be used for image-text similarity and for zero-shot image classification. CLIP uses a ViT-like Transformer to extract visual features and a causal language model to extract text features. Both the text and visual features are then projected into a latent space of identical dimension. The dot product between the projected image and text features is then used as a similarity score.
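
The zero-shot scoring step itself is easy to reproduce. The following is a minimal sketch of that dot-product similarity in plain TensorFlow, with random tensors standing in for the projected outputs of the image and text encoders (the 512-dimensional projection is an assumption taken from the default projection_dim of the configurations documented below):

>>> import tensorflow as tf
>>> # Stand-ins for the projected image and text features produced by the two encoders
>>> image_features = tf.random.normal((2, 512))  # 2 images, projection_dim = 512
>>> text_features = tf.random.normal((3, 512))   # 3 candidate captions
>>> # L2-normalize so the dot product becomes a cosine similarity
>>> image_features = tf.math.l2_normalize(image_features, axis=-1)
>>> text_features = tf.math.l2_normalize(text_features, axis=-1)
>>> # Pairwise similarity scores: one row per image, one column per caption
>>> logits_per_image = tf.matmul(image_features, text_features, transpose_b=True)  # shape (2, 3)
>>> # Softmax over the captions gives zero-shot classification probabilities
>>> probs = tf.nn.softmax(logits_per_image, axis=-1)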

[Paper](https://arxiv.org/abs/2103.00020) · [Official Code](https://github.com/openai/CLIP)

CLIPTextConfig

class tf_transformers.models.clip.CLIPTextConfig(vocab_size=49408, embedding_size=512, num_hidden_layers=12, num_attention_heads=8, attention_head_size=64, intermediate_size=2048, hidden_act='quick_gelu', intermediate_act='quick_gelu', hidden_dropout_prob=0, attention_probs_dropout_prob=0, max_position_embeddings=77, type_vocab_size=-1, initializer_range=0.02, layer_norm_epsilon=1e-05, position_embedding_type='absolute', image_size=None, patch_size=None, num_channels=None, num_labels=None, projection_dim=512)[source]

This is the configuration class to store the configuration of a CLIPTextEncoder. It is used to instantiate a CLIP text encoder according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a configuration similar to that of the text encoder of the openai/clip-vit-base-patch32 architecture.

Configuration objects inherit from TransformerConfig and can be used to control the model outputs. Read the documentation from TransformerConfig for more information.

Parameters
  • vocab_size (int, optional, defaults to 49408) – Vocabulary size of the CLIP text model. Defines the number of different tokens that can be represented by the input_ids passed when calling the text encoder.

  • embedding_size (int, optional, defaults to 512) – Dimensionality of the vocabulary embeddings and the encoder layers.

  • projection_dim (int, optional, defaults to 512) – Dimensionality of the shared space into which the text and image features are projected.

  • num_hidden_layers (int, optional, defaults to 12) – Number of hidden layers in the Transformer encoder.

  • num_attention_heads (int, optional, defaults to 8) – Number of attention heads for each attention layer in the Transformer encoder.

  • attention_head_size (int, optional, defaults to 64) – Size of the attention heads in each layer. Normally embedding_size // num_attention_heads.

  • intermediate_size (int, optional, defaults to 2048) – The dimensionality of the “intermediate” (often named feed-forward) layer in the Transformer encoder.

  • hidden_act (str or Callable, optional, defaults to "quick_gelu") – The non-linear activation function (function or string) in the encoder and pooler. If a string, "gelu", "relu", "silu" and "quick_gelu" are supported.

  • hidden_dropout_prob (float, optional, defaults to 0) – The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.

  • max_position_embeddings (int, optional, defaults to 77) – The maximum sequence length that this model might ever be used with.

  • type_vocab_size (int, optional, defaults to -1) – The vocabulary size of the token_type_ids. CLIP does not use token type embeddings, so this defaults to -1.

  • initializer_range (float, optional, defaults to 0.02) – The standard deviation of the truncated_normal_initializer for initializing all weight matrices.

  • layer_norm_epsilon (float, optional, defaults to 1e-05) – The epsilon used by the layer normalization layers.

  • position_embedding_type (str, optional, defaults to "absolute") – Type of position embedding. Choose one of "absolute", "relative_key", "relative_key_query". For positional embeddings use "absolute". For more information on "relative_key", please refer to Self-Attention with Relative Position Representations (Shaw et al.). For more information on "relative_key_query", please refer to Method 4 in Improve Transformer Models with Better Relative Position Embeddings (Huang et al.).

  • image_size (int, optional, defaults to None) – The size (resolution) of each image. Not used by the text configuration.

  • patch_size (int, optional, defaults to None) – The size (resolution) of each patch. Not used by the text configuration.

  • num_channels (int, optional, defaults to None) – The number of input channels. Not used by the text configuration.

  • num_labels (int, optional, defaults to None) – Total number of labels the model was pre-trained with.

Examples:

>>> from tf_transformers.models import CLIPImageConfig, CLIPImageEncoder, CLIPTextEncoder, CLIPEncoder, CLIPModel
>>> # Initializing an 'openai/clip-vit-base-patch32' style configuration
>>> configuration = CLIPImageConfig()

>>> vision_config = configuration['vision_config']
>>> text_config   = configuration['text_config']
>>> # Initializing a model from the original configuration
>>> vision_encoder = CLIPImageEncoder.from_config(vision_config)
>>> text_encoder = CLIPTextEncoder.from_config(text_config)
>>> model = CLIPEncoder(vision_encoder, text_encoder, is_training=False, use_dropout=False)
>>> configuration = model._config_dict # This has more details than the original configuration

>>> # To get a model config
>>> model_name = 'openai/clip-vit-base-patch32'
>>> config = CLIPModel.get_config(model_name)
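
A text configuration can also be constructed directly, mirroring the CLIPImageConfig example further below. This is only a sketch: it assumes CLIPTextConfig and CLIPTextEncoder are exported from tf_transformers.models alongside CLIPImageConfig, and the values passed simply restate the defaults from the signature above:

>>> from tf_transformers.models import CLIPTextConfig, CLIPTextEncoder
>>> # A custom text configuration; these values mirror the signature defaults
>>> text_configuration = CLIPTextConfig(
...      embedding_size=512,
...      num_attention_heads=8,
...      intermediate_size=2048,
...      max_position_embeddings=77,
...  )

>>> # Build a text encoder from it, as in the image example below
>>> text_encoder = CLIPTextEncoder.from_config(text_configuration)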

CLIPImageConfig

class tf_transformers.models.clip.CLIPImageConfig(vocab_size=None, embedding_size=768, num_hidden_layers=12, num_attention_heads=12, attention_head_size=64, intermediate_size=3072, hidden_act='quick_gelu', intermediate_act='quick_gelu', hidden_dropout_prob=0, attention_probs_dropout_prob=0, max_position_embeddings=None, type_vocab_size=-1, initializer_range=0.02, layer_norm_epsilon=1e-05, position_embedding_type='absolute', image_size=224, patch_size=16, num_channels=3, num_labels=None, projection_dim=512)[source]

This is the configuration class to store the configuration of a CLIPImageEncoder. It is used to instantiate a CLIP image (vision) encoder according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a ViT-B/16 style image encoder configuration (image_size 224, patch_size 16).

Configuration objects inherit from TransformerConfig and can be used to control the model outputs. Read the documentation from TransformerConfig for more information.

Parameters
  • vocab_size (int, optional, defaults to None) – Vocabulary size. Not used by the image configuration.

  • embedding_size (int, optional, defaults to 768) – Dimensionality of the patch embeddings and the encoder layers.

  • projection_dim (int, optional, defaults to 512) – Dimensionality of the shared space into which the text and image features are projected.

  • num_hidden_layers (int, optional, defaults to 12) – Number of hidden layers in the Transformer encoder.

  • num_attention_heads (int, optional, defaults to 12) – Number of attention heads for each attention layer in the Transformer encoder.

  • attention_head_size (int, optional, defaults to 64) – Size of the attention heads in each layer. Normally embedding_size // num_attention_heads.

  • intermediate_size (int, optional, defaults to 3072) – The dimensionality of the “intermediate” (often named feed-forward) layer in the Transformer encoder.

  • hidden_act (str or Callable, optional, defaults to "quick_gelu") – The non-linear activation function (function or string) in the encoder and pooler. If a string, "gelu", "relu", "silu" and "quick_gelu" are supported.

  • hidden_dropout_prob (float, optional, defaults to 0) – The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.

  • max_position_embeddings (int, optional, defaults to None) – Not used directly by the image encoder; the number of positions is determined by image_size and patch_size.

  • type_vocab_size (int, optional, defaults to -1) – The vocabulary size of the token_type_ids. Not used by the image encoder, so this defaults to -1.

  • initializer_range (float, optional, defaults to 0.02) – The standard deviation of the truncated_normal_initializer for initializing all weight matrices.

  • layer_norm_epsilon (float, optional, defaults to 1e-05) – The epsilon used by the layer normalization layers.

  • position_embedding_type (str, optional, defaults to "absolute") – Type of position embedding. Choose one of "absolute", "relative_key", "relative_key_query". For positional embeddings use "absolute". For more information on "relative_key", please refer to Self-Attention with Relative Position Representations (Shaw et al.). For more information on "relative_key_query", please refer to Method 4 in Improve Transformer Models with Better Relative Position Embeddings (Huang et al.).

  • image_size (int, optional, defaults to 224) – The size (resolution) of each image.

  • patch_size (int, optional, defaults to 16) – The size (resolution) of each patch.

  • num_channels (int, optional, defaults to 3) – The number of input channels.

  • num_labels (int, optional, defaults to None) – Total number of labels the model was pre-trained with.

Examples:

>>> from tf_transformers.models import CLIPImageConfig, CLIPImageEncoder, CLIPModel
>>> # Initializing a default CLIP image configuration (ViT-B/16 style)
>>> configuration = CLIPImageConfig()

>>> # Initializing a configuration with different parameters
>>> configuration_new = CLIPImageConfig(
...      embedding_size=768,
...      num_attention_heads=12,
...      intermediate_size=3072,
...  )

>>> # Initializing a model from the original configuration
>>> model = CLIPImageEncoder.from_config(configuration)

>>> # Accessing the model configuration
>>> configuration = model._config_dict # This has more details than the original configuration

>>> # To get a config
>>> model_name = 'openai/clip-vit-base-patch32'
>>> config = CLIPModel.get_config(model_name)

CLIPModel

CLIPEncoder

CLIPImageEncoder

CLIPTextEncoder

CLIPFeatureExtractorTF