
Get started

  • Quick tour
  • Installation
  • Philosophy

Models

  • ALBERT
  • BERT
  • OpenAI GPT2
  • T5
  • MT5
  • RoBERTa
  • Vision Transformer (ViT)
  • CLIP
  • Sentence Transformer
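
Each page above documents a pretrained architecture that can be instantiated from a checkpoint. A rough sketch of the loading pattern follows; the exact call is an assumption based on the library's quick tour, and the per-model pages are authoritative:

    # Hedged sketch: loading a pretrained GPT2 model in tf-transformers.
    # Treat the argument names as assumptions and see the GPT2 page for options.
    from tf_transformers.models import GPT2Model

    model = GPT2Model.from_pretrained("gpt2")  # fetches the checkpoint and builds the model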

Tutorials

  • Writing and Reading TFRecords
  • Classify text (MRPC) with Albert
  • Train a Masked Language Model with tf-transformers on TPU
  • Classify Flowers (Image Classification) with ViT using multi-GPU
  • Create a Sentence Embedding RoBERTa Model + Zero-shot from Scratch
  • Prompt Engineering using CLIP
  • GPT2 for QA using SQuAD v1 (Causal LM)
  • Code Java to C# using T5
  • Read and Write Images as TFRecords
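
The two TFRecord tutorials above cover serializing training data. As a minimal, library-agnostic sketch using only core TensorFlow (the feature names below are invented for illustration):

    import tensorflow as tf

    # Write: serialize each example as a tf.train.Example protobuf.
    with tf.io.TFRecordWriter("data.tfrecord") as writer:
        for text, label in [("good movie", 1), ("bad movie", 0)]:
            example = tf.train.Example(features=tf.train.Features(feature={
                "text": tf.train.Feature(bytes_list=tf.train.BytesList(value=[text.encode()])),
                "label": tf.train.Feature(int64_list=tf.train.Int64List(value=[label])),
            }))
            writer.write(example.SerializeToString())

    # Read: parse the records back into tensors with tf.data.
    feature_spec = {
        "text": tf.io.FixedLenFeature([], tf.string),
        "label": tf.io.FixedLenFeature([], tf.int64),
    }
    dataset = tf.data.TFRecordDataset("data.tfrecord").map(
        lambda record: tf.io.parse_single_example(record, feature_spec))
    for parsed in dataset:
        print(parsed["text"].numpy(), parsed["label"].numpy())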

TFLite

  • Albert TFLite
  • Bert TFLite
  • Roberta TFLite
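
The pages above cover exporting specific models to TFLite. The generic conversion step uses TensorFlow's built-in converter; a minimal sketch with a toy Keras model standing in for an exported tf-transformers model:

    import tensorflow as tf

    # Toy Keras model used only as a placeholder for this sketch.
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(128,)),
        tf.keras.layers.Dense(2),
    ])

    # Convert to a TFLite flatbuffer and write it to disk.
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    tflite_bytes = converter.convert()
    with open("model.tflite", "wb") as f:
        f.write(tflite_bytes)

    # Sanity-check the exported flatbuffer with the TFLite interpreter.
    interpreter = tf.lite.Interpreter(model_content=tflite_bytes)
    interpreter.allocate_tensors()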

Model Usage

  • Text Generation using GPT2
  • Text Generation using T5
  • Sentence Transformer in tf-transformers
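
The text-generation pages above use the library's own decoding utilities (greedy, beam, top-k/top-p). To illustrate only the underlying idea, here is a greedy-decoding loop written against Hugging Face's TF GPT-2 as a stand-in; it is not the tf-transformers decoder API:

    import tensorflow as tf
    from transformers import GPT2Tokenizer, TFGPT2LMHeadModel

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = TFGPT2LMHeadModel.from_pretrained("gpt2")

    # Greedy decoding: repeatedly append the most likely next token.
    input_ids = tokenizer("Machine learning is", return_tensors="tf")["input_ids"]
    for _ in range(20):
        logits = model(input_ids).logits                # [batch, seq_len, vocab]
        next_id = tf.argmax(logits[:, -1, :], axis=-1)  # most likely next token
        next_id = tf.cast(next_id, input_ids.dtype)
        input_ids = tf.concat([input_ids, next_id[:, None]], axis=-1)

    print(tokenizer.decode(input_ids[0].numpy()))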

Tokenizers

  • ALBERT Tokenizer
  • BigBird Roberta Tokenizer
  • T5 Tokenizer
  • CLIP Feature Extractor
  • ViT Feature Extractor
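
The tokenizer and feature-extractor pages above describe the library's own preprocessing. For orientation, a minimal sketch of ALBERT subword tokenization using the Hugging Face tokenizer as a stand-in (not the tf-transformers tokenizer API):

    from transformers import AlbertTokenizer

    tokenizer = AlbertTokenizer.from_pretrained("albert-base-v2")

    # SentencePiece splits text into subword pieces and maps them to ids.
    print(tokenizer.tokenize("Transformers are fun"))
    print(tokenizer("Transformers are fun")["input_ids"])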

Research

  • Glue Model Evaluation
  • Long Block Sequencer

Benchmarks

  • Benchmark GPT2
  • Benchmark T5
  • Benchmark Albert
  • Benchmark ViT
  • Benchmark CLIP

All modules for which code is available

  • tf_transformers.models.bart.configuration_bart
  • tf_transformers.models.bert.configuration_bert
  • tf_transformers.models.clip.configuration_clip
  • tf_transformers.models.gpt2.configuration_gpt2
  • tf_transformers.models.mt5.configuration_mt5
  • tf_transformers.models.roberta.configuration_roberta

© Copyright 2019, The TFT Team, Licensed under the Apache License, Version 2.0.
