Interactive online version: Binder badge Google Colab badge

Thai Wav2vec2 model to ONNX model

This notebook shows how to convert the Thai wav2vec2 model from Hugging Face to an ONNX model.

Thai wav2vec2 model: airesearch/wav2vec2-large-xlsr-53-th

Install

For Google Colab

[ ]:
# !pip install torch==1.10.0+cu113 torchvision==0.11.1+cu113 torchaudio==0.10.0+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html
Looking in links: https://download.pytorch.org/whl/cu113/torch_stable.html
Collecting torch==1.10.0+cu113
  Downloading https://download.pytorch.org/whl/cu113/torch-1.10.0%2Bcu113-cp37-cp37m-linux_x86_64.whl (1821.5 MB)
     |████████████████████████████████| 1821.5 MB 1.4 kB/s
Collecting torchvision==0.11.1+cu113
  Downloading https://download.pytorch.org/whl/cu113/torchvision-0.11.1%2Bcu113-cp37-cp37m-linux_x86_64.whl (24.6 MB)
     |████████████████████████████████| 24.6 MB 11 kB/s
Collecting torchaudio==0.10.0+cu113
  Downloading https://download.pytorch.org/whl/cu113/torchaudio-0.10.0%2Bcu113-cp37-cp37m-linux_x86_64.whl (2.9 MB)
     |████████████████████████████████| 2.9 MB 31.9 MB/s
Requirement already satisfied: typing-extensions in /usr/local/lib/python3.7/dist-packages (from torch==1.10.0+cu113) (3.10.0.2)
Requirement already satisfied: pillow!=8.3.0,>=5.3.0 in /usr/local/lib/python3.7/dist-packages (from torchvision==0.11.1+cu113) (7.1.2)
Requirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from torchvision==0.11.1+cu113) (1.19.5)
Installing collected packages: torch, torchvision, torchaudio
  Attempting uninstall: torch
    Found existing installation: torch 1.10.0+cu111
    Uninstalling torch-1.10.0+cu111:
      Successfully uninstalled torch-1.10.0+cu111
  Attempting uninstall: torchvision
    Found existing installation: torchvision 0.11.1+cu111
    Uninstalling torchvision-0.11.1+cu111:
      Successfully uninstalled torchvision-0.11.1+cu111
Successfully installed torch-1.10.0+cu113 torchaudio-0.10.0+cu113 torchvision-0.11.1+cu113

Install dependencies

[ ]:
!pip install transformers onnxruntime onnx pythainlp soundfile
Collecting transformers
  Downloading transformers-4.12.5-py3-none-any.whl (3.1 MB)
     |████████████████████████████████| 3.1 MB 5.3 MB/s
Collecting onnxruntime
  Downloading onnxruntime-1.9.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (4.8 MB)
     |████████████████████████████████| 4.8 MB 37.3 MB/s
Collecting onnx
  Downloading onnx-1.10.2-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (12.7 MB)
     |████████████████████████████████| 12.7 MB 91 kB/s
Collecting pythainlp
  Downloading pythainlp-2.3.2-py3-none-any.whl (11.0 MB)
     |████████████████████████████████| 11.0 MB 2.0 MB/s
Requirement already satisfied: soundfile in /usr/local/lib/python3.7/dist-packages (0.10.3.post1)
Collecting sacremoses
  Downloading sacremoses-0.0.46-py3-none-any.whl (895 kB)
     |████████████████████████████████| 895 kB 42.4 MB/s
Collecting huggingface-hub<1.0,>=0.1.0
  Downloading huggingface_hub-0.1.2-py3-none-any.whl (59 kB)
     |████████████████████████████████| 59 kB 5.9 MB/s
Requirement already satisfied: packaging>=20.0 in /usr/local/lib/python3.7/dist-packages (from transformers) (21.2)
Requirement already satisfied: numpy>=1.17 in /usr/local/lib/python3.7/dist-packages (from transformers) (1.19.5)
Collecting tokenizers<0.11,>=0.10.1
  Downloading tokenizers-0.10.3-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (3.3 MB)
     |████████████████████████████████| 3.3 MB 36.0 MB/s
Requirement already satisfied: tqdm>=4.27 in /usr/local/lib/python3.7/dist-packages (from transformers) (4.62.3)
Collecting pyyaml>=5.1
  Downloading PyYAML-6.0-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (596 kB)
     |████████████████████████████████| 596 kB 40.1 MB/s
Requirement already satisfied: filelock in /usr/local/lib/python3.7/dist-packages (from transformers) (3.3.2)
Requirement already satisfied: regex!=2019.12.17 in /usr/local/lib/python3.7/dist-packages (from transformers) (2019.12.20)
Requirement already satisfied: requests in /usr/local/lib/python3.7/dist-packages (from transformers) (2.23.0)
Requirement already satisfied: importlib-metadata in /usr/local/lib/python3.7/dist-packages (from transformers) (4.8.2)
Requirement already satisfied: typing-extensions>=3.7.4.3 in /usr/local/lib/python3.7/dist-packages (from huggingface-hub<1.0,>=0.1.0->transformers) (3.10.0.2)
Requirement already satisfied: pyparsing<3,>=2.0.2 in /usr/local/lib/python3.7/dist-packages (from packaging>=20.0->transformers) (2.4.7)
Requirement already satisfied: protobuf in /usr/local/lib/python3.7/dist-packages (from onnxruntime) (3.17.3)
Requirement already satisfied: flatbuffers in /usr/local/lib/python3.7/dist-packages (from onnxruntime) (2.0)
Requirement already satisfied: six in /usr/local/lib/python3.7/dist-packages (from onnx) (1.15.0)
Collecting tinydb>=3.0
  Downloading tinydb-4.5.2-py3-none-any.whl (23 kB)
Collecting python-crfsuite>=0.9.6
  Downloading python_crfsuite-0.9.7-cp37-cp37m-manylinux1_x86_64.whl (743 kB)
     |████████████████████████████████| 743 kB 48.6 MB/s
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests->transformers) (2021.10.8)
Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests->transformers) (3.0.4)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests->transformers) (1.24.3)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests->transformers) (2.10)
Requirement already satisfied: cffi>=1.0 in /usr/local/lib/python3.7/dist-packages (from soundfile) (1.15.0)
Requirement already satisfied: pycparser in /usr/local/lib/python3.7/dist-packages (from cffi>=1.0->soundfile) (2.21)
Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.7/dist-packages (from importlib-metadata->transformers) (3.6.0)
Requirement already satisfied: joblib in /usr/local/lib/python3.7/dist-packages (from sacremoses->transformers) (1.1.0)
Requirement already satisfied: click in /usr/local/lib/python3.7/dist-packages (from sacremoses->transformers) (7.1.2)
Installing collected packages: pyyaml, tokenizers, tinydb, sacremoses, python-crfsuite, huggingface-hub, transformers, pythainlp, onnxruntime, onnx
  Attempting uninstall: pyyaml
    Found existing installation: PyYAML 3.13
    Uninstalling PyYAML-3.13:
      Successfully uninstalled PyYAML-3.13
Successfully installed huggingface-hub-0.1.2 onnx-1.10.2 onnxruntime-1.9.0 pythainlp-2.3.2 python-crfsuite-0.9.7 pyyaml-6.0 sacremoses-0.0.46 tinydb-4.5.2 tokenizers-0.10.3 transformers-4.12.5

Build ONNX Model

We will build the ONNX model by importing the Hugging Face checkpoint into torchaudio and then exporting it with torch.onnx.

Resource

[ ]:
import transformers
from transformers import AutoTokenizer, Wav2Vec2ForCTC
from torchaudio.models.wav2vec2.utils import import_huggingface_model
[ ]:
original = Wav2Vec2ForCTC.from_pretrained("airesearch/wav2vec2-large-xlsr-53-th")
imported = import_huggingface_model(original) # Build Wav2Vec2Model from the corresponding model object of Hugging Face https://pytorch.org/audio/stable/models.html#wav2vec2-0-hubert
/usr/local/lib/python3.7/dist-packages/transformers/configuration_utils.py:341: UserWarning: Passing `gradient_checkpointing` to a config initialization is deprecated and will be removed in v5 Transformers. Using `model.gradient_checkpointing_enable()` instead, or if you are using the `Trainer` API, pass `gradient_checkpointing=True` in your `TrainingArguments`.
  "Passing `gradient_checkpointing` to a config initialization is deprecated and will be removed in v5 "
[ ]:
imported.eval() # set the model to inference mode
Wav2Vec2Model(
  (feature_extractor): FeatureExtractor(
    (conv_layers): ModuleList(
      (0): ConvLayerBlock(
        (layer_norm): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
        (conv): Conv1d(1, 512, kernel_size=(10,), stride=(5,))
      )
      (1): ConvLayerBlock(
        (layer_norm): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
        (conv): Conv1d(512, 512, kernel_size=(3,), stride=(2,))
      )
      (2): ConvLayerBlock(
        (layer_norm): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
        (conv): Conv1d(512, 512, kernel_size=(3,), stride=(2,))
      )
      (3): ConvLayerBlock(
        (layer_norm): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
        (conv): Conv1d(512, 512, kernel_size=(3,), stride=(2,))
      )
      (4): ConvLayerBlock(
        (layer_norm): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
        (conv): Conv1d(512, 512, kernel_size=(3,), stride=(2,))
      )
      (5): ConvLayerBlock(
        (layer_norm): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
        (conv): Conv1d(512, 512, kernel_size=(2,), stride=(2,))
      )
      (6): ConvLayerBlock(
        (layer_norm): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
        (conv): Conv1d(512, 512, kernel_size=(2,), stride=(2,))
      )
    )
  )
  (encoder): Encoder(
    (feature_projection): FeatureProjection(
      (layer_norm): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
      (projection): Linear(in_features=512, out_features=1024, bias=True)
      (dropout): Dropout(p=0.0, inplace=False)
    )
    (transformer): Transformer(
      (pos_conv_embed): ConvolutionalPositionalEmbedding(
        (conv): Conv1d(1024, 1024, kernel_size=(128,), stride=(1,), padding=(64,), groups=16)
      )
      (layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
      (dropout): Dropout(p=0.1, inplace=False)
      (layers): ModuleList(
        (0): EncoderLayer(
          (attention): SelfAttention(
            (k_proj): Linear(in_features=1024, out_features=1024, bias=True)
            (v_proj): Linear(in_features=1024, out_features=1024, bias=True)
            (q_proj): Linear(in_features=1024, out_features=1024, bias=True)
            (out_proj): Linear(in_features=1024, out_features=1024, bias=True)
          )
          (dropout): Dropout(p=0.1, inplace=False)
          (layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
          (feed_forward): FeedForward(
            (intermediate_dense): Linear(in_features=1024, out_features=4096, bias=True)
            (intermediate_dropout): Dropout(p=0.0, inplace=False)
            (output_dense): Linear(in_features=4096, out_features=1024, bias=True)
            (output_dropout): Dropout(p=0.1, inplace=False)
          )
          (final_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
        )
        (1): EncoderLayer(
          (attention): SelfAttention(
            (k_proj): Linear(in_features=1024, out_features=1024, bias=True)
            (v_proj): Linear(in_features=1024, out_features=1024, bias=True)
            (q_proj): Linear(in_features=1024, out_features=1024, bias=True)
            (out_proj): Linear(in_features=1024, out_features=1024, bias=True)
          )
          (dropout): Dropout(p=0.1, inplace=False)
          (layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
          (feed_forward): FeedForward(
            (intermediate_dense): Linear(in_features=1024, out_features=4096, bias=True)
            (intermediate_dropout): Dropout(p=0.0, inplace=False)
            (output_dense): Linear(in_features=4096, out_features=1024, bias=True)
            (output_dropout): Dropout(p=0.1, inplace=False)
          )
          (final_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
        )
        (2): EncoderLayer(
          (attention): SelfAttention(
            (k_proj): Linear(in_features=1024, out_features=1024, bias=True)
            (v_proj): Linear(in_features=1024, out_features=1024, bias=True)
            (q_proj): Linear(in_features=1024, out_features=1024, bias=True)
            (out_proj): Linear(in_features=1024, out_features=1024, bias=True)
          )
          (dropout): Dropout(p=0.1, inplace=False)
          (layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
          (feed_forward): FeedForward(
            (intermediate_dense): Linear(in_features=1024, out_features=4096, bias=True)
            (intermediate_dropout): Dropout(p=0.0, inplace=False)
            (output_dense): Linear(in_features=4096, out_features=1024, bias=True)
            (output_dropout): Dropout(p=0.1, inplace=False)
          )
          (final_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
        )
        (3): EncoderLayer(
          (attention): SelfAttention(
            (k_proj): Linear(in_features=1024, out_features=1024, bias=True)
            (v_proj): Linear(in_features=1024, out_features=1024, bias=True)
            (q_proj): Linear(in_features=1024, out_features=1024, bias=True)
            (out_proj): Linear(in_features=1024, out_features=1024, bias=True)
          )
          (dropout): Dropout(p=0.1, inplace=False)
          (layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
          (feed_forward): FeedForward(
            (intermediate_dense): Linear(in_features=1024, out_features=4096, bias=True)
            (intermediate_dropout): Dropout(p=0.0, inplace=False)
            (output_dense): Linear(in_features=4096, out_features=1024, bias=True)
            (output_dropout): Dropout(p=0.1, inplace=False)
          )
          (final_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
        )
        (4): EncoderLayer(
          (attention): SelfAttention(
            (k_proj): Linear(in_features=1024, out_features=1024, bias=True)
            (v_proj): Linear(in_features=1024, out_features=1024, bias=True)
            (q_proj): Linear(in_features=1024, out_features=1024, bias=True)
            (out_proj): Linear(in_features=1024, out_features=1024, bias=True)
          )
          (dropout): Dropout(p=0.1, inplace=False)
          (layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
          (feed_forward): FeedForward(
            (intermediate_dense): Linear(in_features=1024, out_features=4096, bias=True)
            (intermediate_dropout): Dropout(p=0.0, inplace=False)
            (output_dense): Linear(in_features=4096, out_features=1024, bias=True)
            (output_dropout): Dropout(p=0.1, inplace=False)
          )
          (final_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
        )
        (5): EncoderLayer(
          (attention): SelfAttention(
            (k_proj): Linear(in_features=1024, out_features=1024, bias=True)
            (v_proj): Linear(in_features=1024, out_features=1024, bias=True)
            (q_proj): Linear(in_features=1024, out_features=1024, bias=True)
            (out_proj): Linear(in_features=1024, out_features=1024, bias=True)
          )
          (dropout): Dropout(p=0.1, inplace=False)
          (layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
          (feed_forward): FeedForward(
            (intermediate_dense): Linear(in_features=1024, out_features=4096, bias=True)
            (intermediate_dropout): Dropout(p=0.0, inplace=False)
            (output_dense): Linear(in_features=4096, out_features=1024, bias=True)
            (output_dropout): Dropout(p=0.1, inplace=False)
          )
          (final_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
        )
        (6): EncoderLayer(
          (attention): SelfAttention(
            (k_proj): Linear(in_features=1024, out_features=1024, bias=True)
            (v_proj): Linear(in_features=1024, out_features=1024, bias=True)
            (q_proj): Linear(in_features=1024, out_features=1024, bias=True)
            (out_proj): Linear(in_features=1024, out_features=1024, bias=True)
          )
          (dropout): Dropout(p=0.1, inplace=False)
          (layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
          (feed_forward): FeedForward(
            (intermediate_dense): Linear(in_features=1024, out_features=4096, bias=True)
            (intermediate_dropout): Dropout(p=0.0, inplace=False)
            (output_dense): Linear(in_features=4096, out_features=1024, bias=True)
            (output_dropout): Dropout(p=0.1, inplace=False)
          )
          (final_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
        )
        (7): EncoderLayer(
          (attention): SelfAttention(
            (k_proj): Linear(in_features=1024, out_features=1024, bias=True)
            (v_proj): Linear(in_features=1024, out_features=1024, bias=True)
            (q_proj): Linear(in_features=1024, out_features=1024, bias=True)
            (out_proj): Linear(in_features=1024, out_features=1024, bias=True)
          )
          (dropout): Dropout(p=0.1, inplace=False)
          (layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
          (feed_forward): FeedForward(
            (intermediate_dense): Linear(in_features=1024, out_features=4096, bias=True)
            (intermediate_dropout): Dropout(p=0.0, inplace=False)
            (output_dense): Linear(in_features=4096, out_features=1024, bias=True)
            (output_dropout): Dropout(p=0.1, inplace=False)
          )
          (final_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
        )
        (8): EncoderLayer(
          (attention): SelfAttention(
            (k_proj): Linear(in_features=1024, out_features=1024, bias=True)
            (v_proj): Linear(in_features=1024, out_features=1024, bias=True)
            (q_proj): Linear(in_features=1024, out_features=1024, bias=True)
            (out_proj): Linear(in_features=1024, out_features=1024, bias=True)
          )
          (dropout): Dropout(p=0.1, inplace=False)
          (layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
          (feed_forward): FeedForward(
            (intermediate_dense): Linear(in_features=1024, out_features=4096, bias=True)
            (intermediate_dropout): Dropout(p=0.0, inplace=False)
            (output_dense): Linear(in_features=4096, out_features=1024, bias=True)
            (output_dropout): Dropout(p=0.1, inplace=False)
          )
          (final_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
        )
        (9): EncoderLayer(
          (attention): SelfAttention(
            (k_proj): Linear(in_features=1024, out_features=1024, bias=True)
            (v_proj): Linear(in_features=1024, out_features=1024, bias=True)
            (q_proj): Linear(in_features=1024, out_features=1024, bias=True)
            (out_proj): Linear(in_features=1024, out_features=1024, bias=True)
          )
          (dropout): Dropout(p=0.1, inplace=False)
          (layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
          (feed_forward): FeedForward(
            (intermediate_dense): Linear(in_features=1024, out_features=4096, bias=True)
            (intermediate_dropout): Dropout(p=0.0, inplace=False)
            (output_dense): Linear(in_features=4096, out_features=1024, bias=True)
            (output_dropout): Dropout(p=0.1, inplace=False)
          )
          (final_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
        )
        (10): EncoderLayer(
          (attention): SelfAttention(
            (k_proj): Linear(in_features=1024, out_features=1024, bias=True)
            (v_proj): Linear(in_features=1024, out_features=1024, bias=True)
            (q_proj): Linear(in_features=1024, out_features=1024, bias=True)
            (out_proj): Linear(in_features=1024, out_features=1024, bias=True)
          )
          (dropout): Dropout(p=0.1, inplace=False)
          (layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
          (feed_forward): FeedForward(
            (intermediate_dense): Linear(in_features=1024, out_features=4096, bias=True)
            (intermediate_dropout): Dropout(p=0.0, inplace=False)
            (output_dense): Linear(in_features=4096, out_features=1024, bias=True)
            (output_dropout): Dropout(p=0.1, inplace=False)
          )
          (final_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
        )
        (11): EncoderLayer(
          (attention): SelfAttention(
            (k_proj): Linear(in_features=1024, out_features=1024, bias=True)
            (v_proj): Linear(in_features=1024, out_features=1024, bias=True)
            (q_proj): Linear(in_features=1024, out_features=1024, bias=True)
            (out_proj): Linear(in_features=1024, out_features=1024, bias=True)
          )
          (dropout): Dropout(p=0.1, inplace=False)
          (layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
          (feed_forward): FeedForward(
            (intermediate_dense): Linear(in_features=1024, out_features=4096, bias=True)
            (intermediate_dropout): Dropout(p=0.0, inplace=False)
            (output_dense): Linear(in_features=4096, out_features=1024, bias=True)
            (output_dropout): Dropout(p=0.1, inplace=False)
          )
          (final_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
        )
        (12): EncoderLayer(
          (attention): SelfAttention(
            (k_proj): Linear(in_features=1024, out_features=1024, bias=True)
            (v_proj): Linear(in_features=1024, out_features=1024, bias=True)
            (q_proj): Linear(in_features=1024, out_features=1024, bias=True)
            (out_proj): Linear(in_features=1024, out_features=1024, bias=True)
          )
          (dropout): Dropout(p=0.1, inplace=False)
          (layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
          (feed_forward): FeedForward(
            (intermediate_dense): Linear(in_features=1024, out_features=4096, bias=True)
            (intermediate_dropout): Dropout(p=0.0, inplace=False)
            (output_dense): Linear(in_features=4096, out_features=1024, bias=True)
            (output_dropout): Dropout(p=0.1, inplace=False)
          )
          (final_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
        )
        (13): EncoderLayer(
          (attention): SelfAttention(
            (k_proj): Linear(in_features=1024, out_features=1024, bias=True)
            (v_proj): Linear(in_features=1024, out_features=1024, bias=True)
            (q_proj): Linear(in_features=1024, out_features=1024, bias=True)
            (out_proj): Linear(in_features=1024, out_features=1024, bias=True)
          )
          (dropout): Dropout(p=0.1, inplace=False)
          (layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
          (feed_forward): FeedForward(
            (intermediate_dense): Linear(in_features=1024, out_features=4096, bias=True)
            (intermediate_dropout): Dropout(p=0.0, inplace=False)
            (output_dense): Linear(in_features=4096, out_features=1024, bias=True)
            (output_dropout): Dropout(p=0.1, inplace=False)
          )
          (final_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
        )
        (14): EncoderLayer(
          (attention): SelfAttention(
            (k_proj): Linear(in_features=1024, out_features=1024, bias=True)
            (v_proj): Linear(in_features=1024, out_features=1024, bias=True)
            (q_proj): Linear(in_features=1024, out_features=1024, bias=True)
            (out_proj): Linear(in_features=1024, out_features=1024, bias=True)
          )
          (dropout): Dropout(p=0.1, inplace=False)
          (layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
          (feed_forward): FeedForward(
            (intermediate_dense): Linear(in_features=1024, out_features=4096, bias=True)
            (intermediate_dropout): Dropout(p=0.0, inplace=False)
            (output_dense): Linear(in_features=4096, out_features=1024, bias=True)
            (output_dropout): Dropout(p=0.1, inplace=False)
          )
          (final_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
        )
        (15): EncoderLayer(
          (attention): SelfAttention(
            (k_proj): Linear(in_features=1024, out_features=1024, bias=True)
            (v_proj): Linear(in_features=1024, out_features=1024, bias=True)
            (q_proj): Linear(in_features=1024, out_features=1024, bias=True)
            (out_proj): Linear(in_features=1024, out_features=1024, bias=True)
          )
          (dropout): Dropout(p=0.1, inplace=False)
          (layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
          (feed_forward): FeedForward(
            (intermediate_dense): Linear(in_features=1024, out_features=4096, bias=True)
            (intermediate_dropout): Dropout(p=0.0, inplace=False)
            (output_dense): Linear(in_features=4096, out_features=1024, bias=True)
            (output_dropout): Dropout(p=0.1, inplace=False)
          )
          (final_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
        )
        (16): EncoderLayer(
          (attention): SelfAttention(
            (k_proj): Linear(in_features=1024, out_features=1024, bias=True)
            (v_proj): Linear(in_features=1024, out_features=1024, bias=True)
            (q_proj): Linear(in_features=1024, out_features=1024, bias=True)
            (out_proj): Linear(in_features=1024, out_features=1024, bias=True)
          )
          (dropout): Dropout(p=0.1, inplace=False)
          (layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
          (feed_forward): FeedForward(
            (intermediate_dense): Linear(in_features=1024, out_features=4096, bias=True)
            (intermediate_dropout): Dropout(p=0.0, inplace=False)
            (output_dense): Linear(in_features=4096, out_features=1024, bias=True)
            (output_dropout): Dropout(p=0.1, inplace=False)
          )
          (final_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
        )
        (17): EncoderLayer(
          (attention): SelfAttention(
            (k_proj): Linear(in_features=1024, out_features=1024, bias=True)
            (v_proj): Linear(in_features=1024, out_features=1024, bias=True)
            (q_proj): Linear(in_features=1024, out_features=1024, bias=True)
            (out_proj): Linear(in_features=1024, out_features=1024, bias=True)
          )
          (dropout): Dropout(p=0.1, inplace=False)
          (layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
          (feed_forward): FeedForward(
            (intermediate_dense): Linear(in_features=1024, out_features=4096, bias=True)
            (intermediate_dropout): Dropout(p=0.0, inplace=False)
            (output_dense): Linear(in_features=4096, out_features=1024, bias=True)
            (output_dropout): Dropout(p=0.1, inplace=False)
          )
          (final_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
        )
        (18): EncoderLayer(
          (attention): SelfAttention(
            (k_proj): Linear(in_features=1024, out_features=1024, bias=True)
            (v_proj): Linear(in_features=1024, out_features=1024, bias=True)
            (q_proj): Linear(in_features=1024, out_features=1024, bias=True)
            (out_proj): Linear(in_features=1024, out_features=1024, bias=True)
          )
          (dropout): Dropout(p=0.1, inplace=False)
          (layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
          (feed_forward): FeedForward(
            (intermediate_dense): Linear(in_features=1024, out_features=4096, bias=True)
            (intermediate_dropout): Dropout(p=0.0, inplace=False)
            (output_dense): Linear(in_features=4096, out_features=1024, bias=True)
            (output_dropout): Dropout(p=0.1, inplace=False)
          )
          (final_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
        )
        (19): EncoderLayer(
          (attention): SelfAttention(
            (k_proj): Linear(in_features=1024, out_features=1024, bias=True)
            (v_proj): Linear(in_features=1024, out_features=1024, bias=True)
            (q_proj): Linear(in_features=1024, out_features=1024, bias=True)
            (out_proj): Linear(in_features=1024, out_features=1024, bias=True)
          )
          (dropout): Dropout(p=0.1, inplace=False)
          (layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
          (feed_forward): FeedForward(
            (intermediate_dense): Linear(in_features=1024, out_features=4096, bias=True)
            (intermediate_dropout): Dropout(p=0.0, inplace=False)
            (output_dense): Linear(in_features=4096, out_features=1024, bias=True)
            (output_dropout): Dropout(p=0.1, inplace=False)
          )
          (final_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
        )
        (20): EncoderLayer(
          (attention): SelfAttention(
            (k_proj): Linear(in_features=1024, out_features=1024, bias=True)
            (v_proj): Linear(in_features=1024, out_features=1024, bias=True)
            (q_proj): Linear(in_features=1024, out_features=1024, bias=True)
            (out_proj): Linear(in_features=1024, out_features=1024, bias=True)
          )
          (dropout): Dropout(p=0.1, inplace=False)
          (layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
          (feed_forward): FeedForward(
            (intermediate_dense): Linear(in_features=1024, out_features=4096, bias=True)
            (intermediate_dropout): Dropout(p=0.0, inplace=False)
            (output_dense): Linear(in_features=4096, out_features=1024, bias=True)
            (output_dropout): Dropout(p=0.1, inplace=False)
          )
          (final_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
        )
        (21): EncoderLayer(
          (attention): SelfAttention(
            (k_proj): Linear(in_features=1024, out_features=1024, bias=True)
            (v_proj): Linear(in_features=1024, out_features=1024, bias=True)
            (q_proj): Linear(in_features=1024, out_features=1024, bias=True)
            (out_proj): Linear(in_features=1024, out_features=1024, bias=True)
          )
          (dropout): Dropout(p=0.1, inplace=False)
          (layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
          (feed_forward): FeedForward(
            (intermediate_dense): Linear(in_features=1024, out_features=4096, bias=True)
            (intermediate_dropout): Dropout(p=0.0, inplace=False)
            (output_dense): Linear(in_features=4096, out_features=1024, bias=True)
            (output_dropout): Dropout(p=0.1, inplace=False)
          )
          (final_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
        )
        (22): EncoderLayer(
          (attention): SelfAttention(
            (k_proj): Linear(in_features=1024, out_features=1024, bias=True)
            (v_proj): Linear(in_features=1024, out_features=1024, bias=True)
            (q_proj): Linear(in_features=1024, out_features=1024, bias=True)
            (out_proj): Linear(in_features=1024, out_features=1024, bias=True)
          )
          (dropout): Dropout(p=0.1, inplace=False)
          (layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
          (feed_forward): FeedForward(
            (intermediate_dense): Linear(in_features=1024, out_features=4096, bias=True)
            (intermediate_dropout): Dropout(p=0.0, inplace=False)
            (output_dense): Linear(in_features=4096, out_features=1024, bias=True)
            (output_dropout): Dropout(p=0.1, inplace=False)
          )
          (final_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
        )
        (23): EncoderLayer(
          (attention): SelfAttention(
            (k_proj): Linear(in_features=1024, out_features=1024, bias=True)
            (v_proj): Linear(in_features=1024, out_features=1024, bias=True)
            (q_proj): Linear(in_features=1024, out_features=1024, bias=True)
            (out_proj): Linear(in_features=1024, out_features=1024, bias=True)
          )
          (dropout): Dropout(p=0.1, inplace=False)
          (layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
          (feed_forward): FeedForward(
            (intermediate_dense): Linear(in_features=1024, out_features=4096, bias=True)
            (intermediate_dropout): Dropout(p=0.0, inplace=False)
            (output_dense): Linear(in_features=4096, out_features=1024, bias=True)
            (output_dropout): Dropout(p=0.1, inplace=False)
          )
          (final_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
        )
      )
    )
  )
  (aux): Linear(in_features=1024, out_features=70, bias=True)
)
[ ]:
import torch.onnx # https://docs.microsoft.com/en-us/windows/ai/windows-ml/tutorials/pytorch-convert-model
[ ]:
input_size = 100000   # fixed input length in samples (6.25 s at 16 kHz)
AUDIO_MAXLEN = input_size
[ ]:
dummy_input = torch.randn(1, input_size, requires_grad=True)  # dummy waveform of shape (batch, samples) used to trace the model
[ ]:
torch.onnx.export(imported,                 # model being run
         dummy_input,                       # model input (or a tuple for multiple inputs)
         "asr3.onnx",                       # where to save the model
         export_params=True,                # store the trained parameter weights inside the model file
         opset_version=10,                  # the ONNX version to export the model to
         do_constant_folding=True,          # whether to execute constant folding for optimization
         input_names=['modelInput'],        # the model's input names
         output_names=['modelOutput'],      # the model's output names
         dynamic_axes={'modelInput': {0: 'batch_size'},     # only the batch axis is dynamic;
                       'modelOutput': {0: 'batch_size'}})   # the time axis stays fixed at input_size, so inputs must be padded to that length
/usr/local/lib/python3.7/dist-packages/torch/onnx/symbolic_helper.py:325: UserWarning: Type cannot be inferred, which might cause exported graph to produce incorrect results.
  warnings.warn("Type cannot be inferred, which might cause exported graph to produce incorrect results.")

Inference

This section runs inference on the ONNX model with ONNX Runtime.

onnxruntime: https://onnxruntime.ai/

Load audio file

[ ]:
!wget https://www.dropbox.com/s/9kpeh8eodshcqhj/common_voice_th_23646850.wav?dl=1
[ ]:
!mv common_voice_th_23646850.wav?dl=1 sound.wav
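
As a quick optional check, we can read the file back with soundfile (installed earlier) to confirm its sample rate and duration:

[ ]:
import soundfile as sf

data, sr = sf.read("sound.wav")
print(sr, len(data) / sr)  # sample rate (Hz) and duration (seconds)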

Load vocab.json from the Hugging Face model

[ ]:
!wget https://huggingface.co/airesearch/wav2vec2-large-xlsr-53-th/raw/main/vocab.json
[ ]:
with open("vocab.json","r",encoding="utf-8-sig") as f:
  d = eval(f.read())

Run inference

[ ]:
import onnx
import onnxruntime
[ ]:
import numpy as np
[ ]:
import soundfile as sf
from scipy.io import wavfile
import scipy.signal as sps
import os
from pythainlp.util import normalize
[ ]:
input_size = 100000   # must match the fixed length used at export time
new_rate = 16000      # the model expects 16 kHz audio
AUDIO_MAXLEN = input_size
[ ]:
ort_session = onnxruntime.InferenceSession('asr3.onnx') # load onnx model
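
We can inspect the session's input metadata to confirm the input name and the fixed time axis baked in at export:

[ ]:
inp = ort_session.get_inputs()[0]
print(inp.name, inp.shape)  # expected: modelInput ['batch_size', 100000]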
[ ]:
res = dict((v,k) for k,v in d.items())  # invert the vocab: id -> token
res[69]="[PAD]"  # id 69 decodes to [PAD]; it is treated as the CTC blank in the decoding below
res[68]="[UNK]"  # id 68 decodes to [UNK]
[ ]:
def _normalize(x):
  """Zero-mean, unit-variance normalization. You must call this before padding.
  Code from https://github.com/vasudevgupta7/gsoc-wav2vec2/blob/main/src/wav2vec2/processor.py#L101
  (ported from TensorFlow to NumPy)
  """
  # x: (1, seqlen)
  mean = np.mean(x, axis=-1, keepdims=True)
  var = np.var(x, axis=-1, keepdims=True)
  return np.squeeze((x - mean) / np.sqrt(var + 1e-5))
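
A quick sanity check on a random (hypothetical) waveform: after _normalize the signal should have roughly zero mean and unit variance:

[ ]:
x = np.random.randn(1, 16000).astype(np.float32)  # 1 second of random noise at 16 kHz
y = _normalize(x)
print(y.mean(), y.std())  # expected: close to 0 and 1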
[ ]:
def remove_adjacent(item): # code from https://stackoverflow.com/a/3460423
  """Collapse runs of identical characters (the CTC collapse step)."""
  nums = list(item)
  a = nums[:1]
  for item in nums[1:]:
    if item != a[-1]:
      a.append(item)
  return ''.join(a)
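
For example, collapsing adjacent duplicates is the standard CTC step that turns repeated per-frame predictions into text:

[ ]:
remove_adjacent("aabbbc")  # -> 'abc'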
[ ]:
def asr(path):
    """
    Transcribe a wav file with the exported ONNX model.
    Code from https://github.com/vasudevgupta7/gsoc-wav2vec2/blob/main/notebooks/wav2vec2_onnx.ipynb
    (ported from TensorFlow to NumPy)
    """
    sampling_rate, data = wavfile.read(path)
    samples = round(len(data) * float(new_rate) / sampling_rate)
    new_data = sps.resample(data, samples)  # resample to 16 kHz
    speech = np.array(new_data, dtype=np.float32)
    speech = _normalize(speech)[None]  # normalize, then add a batch axis
    padding = np.zeros((speech.shape[0], AUDIO_MAXLEN - speech.shape[1]))
    speech = np.concatenate([speech, padding], axis=-1).astype(np.float32)  # pad to the fixed export length
    ort_inputs = {"modelInput": speech}
    ort_outs = ort_session.run(None, ort_inputs)
    prediction = np.argmax(ort_outs, axis=-1)  # greedy decoding: argmax over the vocab axis
    # Text post-processing: map ids to characters, collapse repeats, drop [PAD]
    _t1 = ''.join([res[i] for i in list(prediction[0][0])])
    return normalize(''.join([remove_adjacent(j) for j in _t1.split("[PAD]")]))
[ ]:
FILENAME = "sound.wav"
[ ]:
asr(FILENAME)
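
Optionally, we can check that the export preserved the model's behaviour by comparing the ONNX logits against the original torchaudio model on the dummy input from the export step. This is a sketch, assuming imported and dummy_input are still in scope from earlier cells:

[ ]:
import torch

with torch.no_grad():
    torch_logits, _ = imported(dummy_input)  # the torchaudio model returns (logits, lengths)
ort_logits = ort_session.run(None, {"modelInput": dummy_input.detach().numpy()})[0]
print(np.abs(torch_logits.numpy() - ort_logits).max())  # expected: a small difference, e.g. below 1e-3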