
Improving Whisper Transcripts with a BERT-Based Punctuation Model

Published on 2023/04/21

Overview

Whisper is a multilingual speech recognition system. The transcripts it produces sometimes lack punctuation, which can hurt the quality of machine translation. In this article, I build a model that restores punctuation in Whisper transcripts. Unlike ordinary written text, Whisper transcripts have segments that cut off mid-sentence and contain a lot of spoken language, so restoring punctuation calls for a new approach. Here, I automatically extract the transcripts that Whisper generated with punctuation and use them as training data for the model. The result shows that, compared with existing methods, the model restores punctuation in Whisper transcripts with higher accuracy.

Problems with Machine-Translating Whisper Transcripts

In addition to transcription, Whisper also has a built-in translation feature, but it currently only supports translation into English. To translate a transcript into Japanese, a separate machine translation system is needed. Whisper transcripts also come with timestamps, so they can be used as subtitles. Here is an actual transcript it produced:

Sam Altman - How to Succeed with a Startup

start (ms)	end (ms)	text
0	4440	 Okay, today I'm going to talk about how to succeed with a startup.
4440	9120	 Obviously, more than can be said here in 20 minutes, but I will do the best I can.
9120	14440	 The most important thing, the number one lesson we try to teach startups is that the degree
14440	18880	 to which you're successful approximates the degree to which you build a product that is
18880	23120	 so good people spontaneously tell their friends about it.
23120	25240	 Startups always ask us for the secret to success.
25240	28600	 They always want to believe it's something other than this because this is really hard
28600	31040	 to do, but this is it.
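
For reference, timestamped segments like the ones above can be obtained with the openai-whisper Python package. This is a minimal sketch: the audio file name and model size are hypothetical, and the segment times come back in seconds, so they are converted to milliseconds here to match the tables in this article.

import whisper

# Minimal sketch with the openai-whisper package (pip install openai-whisper).
# The file name is hypothetical; segment "start"/"end" are in seconds.
model = whisper.load_model("small")
result = model.transcribe("startup_talk.mp3")

for seg in result["segments"]:
    print(int(seg["start"] * 1000), int(seg["end"] * 1000), seg["text"], sep="\t")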

A transcript like the one above is hard to machine-translate line by line, but because punctuation is present, the segments can be merged into full sentences, which allows a more accurate machine translation. However, Whisper sometimes produces transcripts like the following:

Let's build GPT: from scratch, in code, spelled out.

1482880	1487280	 saw a lot of this in a lot more depth in the make more series and here if i just run this
1488000	1494400	 then we currently get the predictions the scores the logits for every one of the four by eight
1494400	1498720	 positions now that we've made predictions about what comes next we'd like to evaluate the loss
1498720	1504240	 function and so in make more series we saw that a good way to measure a loss or like a quality of
1504240	1508960	 the predictions is to use the negative log likelihood loss which is also implemented in

As shown here, Whisper sometimes produces transcripts without any punctuation. Such transcripts are hard to read even as English text, and when translating into Japanese there is no punctuation to use for merging the segments into sentences.
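
As noted above, when punctuation is present the segments can be merged into sentence-level chunks before translation. The following is a minimal sketch of such a merging step with a hypothetical helper; it is not part of Whisper and simply joins segments until one ends with sentence-final punctuation.

# Hypothetical helper: merge timestamped segments into sentences, closing a
# sentence whenever a segment ends with '.', '?', or '!'.
def merge_into_sentences(segments):
    # `segments` is assumed to be a list of (start_ms, end_ms, text) tuples.
    sentences, buffer, start, end = [], [], None, None
    for seg_start, seg_end, text in segments:
        if start is None:
            start = seg_start
        buffer.append(text.strip())
        end = seg_end
        if text.rstrip().endswith(('.', '?', '!')):
            sentences.append((start, end, ' '.join(buffer)))
            buffer, start = [], None
    if buffer:
        sentences.append((start, end, ' '.join(buffer)))
    return sentences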

Existing Punctuation Models

A punctuation model is a model that automatically inserts punctuation into text. Using machine learning, it inserts appropriate punctuation based on the grammar and context of the text and on general language patterns.

Existing punctuation models available on Hugging Face include the following:

  • felflare/bert-restore-punctuation
  • oliverguhr/fullstop-punctuation-multilang-large
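
For reference, one of these models can be tried with the transformers token-classification pipeline. This is a minimal sketch: the labels each model emits are model-specific, so only the raw predictions are printed here.

from transformers import pipeline

# Minimal sketch: run an existing punctuation model as token classification.
# The label names are model-specific, so this only prints the raw predictions.
restorer = pipeline("token-classification",
                    model="oliverguhr/fullstop-punctuation-multilang-large")

text = "saw a lot of this in a lot more depth in the make more series"
for token in restorer(text):
    print(token["word"], token["entity"], round(token["score"], 3))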

Because these models are trained on complete, well-formed sentences, they struggle with the characteristics of Whisper transcripts, namely segments that cut off mid-sentence and a large amount of spoken language. Below is the transcript of Let's build GPT: from scratch, in code, spelled out. with punctuation inserted automatically by each model:

  • Original
1482880	1487280	 saw a lot of this in a lot more depth in the make more series and here if i just run this
1488000	1494400	 then we currently get the predictions the scores the logits for every one of the four by eight
1494400	1498720	 positions now that we've made predictions about what comes next we'd like to evaluate the loss
1498720	1504240	 function and so in make more series we saw that a good way to measure a loss or like a quality of
1504240	1508960	 the predictions is to use the negative log likelihood loss which is also implemented in
  • felflare/bert-restore-punctuation
1482880	1487280	 Saw a lot of this in a lot more depth in the make More series and here if I just run this.
1488000	1494400	 Then we currently get the predictions, the scores, the logits for every one of the four by eight.
1494400	1498720	 Positions: Now that we've made predictions about what comes next, we'd like to evaluate the loss.
1498720	1504240	 Function and so in make more series. We saw that a good way to measure a loss or like a quality of.
1504240	1508960	 The predictions is to use the negative log likelihood loss which is also implemented in.
  • oliverguhr/fullstop-punctuation-multilang-large
1482880	1487280	 saw a lot of this in a lot more depth in the make more series and here, if i just run this,
1488000	1494400	 then we currently get the predictions, the scores, the logits for every one of the four by eight.
1494400	1498720	 positions. now that we've made predictions about what comes next, we'd like to evaluate the loss.
1498720	1504240	 function, and so in make more series, we saw that a good way to measure a loss or like a quality of.
1504240	1508960	 the predictions is to use the negative log likelihood loss, which is also implemented in.

Proposing a BERT-Based Punctuation Model

BERT is a deep learning model for natural language processing released by Google in 2018. BERT stands for "Bidirectional Encoder Representations from Transformers" and is based on the Transformer architecture.

BERT learns context-aware word representations through pre-training on large amounts of text data. The pre-trained model is then fine-tuned to solve a variety of natural language processing tasks.

BERT is a general-purpose model applicable to many NLP tasks, and one of them, token classification, can be used to build a punctuation model.

Token classification is the task of predicting a tag for each word or token position in a given text, for example part-of-speech tags such as noun, verb, or adjective.

In a punctuation model, the task is to predict what punctuation should be inserted after each word. For token classification, we therefore prepare tags representing punctuation marks at each word position, for example a tag for ',' and a tag for '.'.

We then build a token classification model on top of BERT and train it. Pre-trained BERT models are publicly available, and using one makes higher-accuracy punctuation prediction possible.

Concretely, we import BertForTokenClassification, BERT's token classification model, and fine-tune it on our dataset. The fine-tuned model is then run on unseen text to predict which punctuation mark, if any, should be inserted after each word.
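
As a minimal sketch of that prediction step (the fine-tuning setup itself is shown later), inference could look roughly like this. The model path is hypothetical, and id2label is the mapping defined in the next section.

import torch
from transformers import BertTokenizerFast, BertForTokenClassification

# Minimal inference sketch for a fine-tuned BertForTokenClassification model.
# The model path is hypothetical; id2label is defined in the dataset section.
tokenizer = BertTokenizerFast.from_pretrained('bert-base-uncased')
model = BertForTokenClassification.from_pretrained('path/to/fine-tuned-model')
model.eval()

def restore_punctuation(words, id2label):
    enc = tokenizer(words, is_split_into_words=True,
                    return_tensors='pt', truncation=True)
    with torch.no_grad():
        preds = model(**enc).logits.argmax(dim=-1)[0].tolist()
    # Keep the prediction of the last sub-token of each word, matching how
    # the labels are assigned during dataset creation (see below).
    last_pred = {}
    for idx, word_id in enumerate(enc.word_ids()):
        if word_id is not None:
            last_pred[word_id] = preds[idx]
    restored = []
    for i, word in enumerate(words):
        label = id2label[last_pred[i]]
        restored.append(word + (label if label != 'O' else ''))
    return ' '.join(restored)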

Creating the Dataset

Looking into what punctuation appears in the transcripts Whisper generates, I found that they contain ',', '.', and '?'. I therefore decided to prepare four labels for each token: ',', '.', '?', and 'O'.

Whisper transcripts also contain unwanted artifacts such as '...', '—', and '�', so these need to be excluded from the dataset.
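
A minimal sketch of this filtering step; the exact set of unwanted strings is an assumption based on the artifacts mentioned above.

# Skip transcripts containing artifacts we do not want in the training data.
# The blacklist is an assumption based on what was observed in the transcripts.
UNWANTED = ('...', '—', '\ufffd')   # '\ufffd' is the replacement character '�'

def is_clean(text):
    return not any(mark in text for mark in UNWANTED)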

Finally, from the transcripts in which Whisper inserted punctuation correctly, I build the dataset by producing a version of the text with ',', '.', and '?' removed, together with a label for each word indicating which punctuation mark followed it.

An example from the dataset looks like this:

id2label = {1: ',', 2: '.', 3: '?', 0: 'O'}
label2id = {',': 1, '.': 2, '?': 3, 'O': 0}
text = "Hello, my name is Andrej and I've been training deep neural networks for a bit more than a decade"
words = ['Hello', 'my', 'name', 'is', 'Andrej', 'and', "I've", 'been', 'training', 'deep', 'neural', 'networks', 'for', 'a', 'bit', 'more', 'than', 'a', 'decade']
labels = [',', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O']
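
A minimal sketch of how words and labels can be derived from a punctuated segment like text above; it only handles the three punctuation marks used as labels here.

# Minimal sketch: strip ',', '.', '?' from a punctuated transcript and record,
# for each word, which punctuation mark (if any) followed it.
def make_example(text):
    words, labels = [], []
    for token in text.split():
        if token and token[-1] in (',', '.', '?'):
            words.append(token[:-1])
            labels.append(token[-1])
        else:
            words.append(token)
            labels.append('O')
    return words, labels

words, labels = make_example(text)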

When the words are tokenized, a single word can be split into multiple tokens, and each token needs a label. When a word carrying a punctuation label is split into multiple tokens, I assign the punctuation label to the last token and 'O' to the rest.
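
A minimal sketch of this label alignment with a fast tokenizer; using -100 for special tokens and padding is an assumption so that those positions are ignored by the loss.

from transformers import BertTokenizerFast

# Minimal sketch: align word-level labels to sub-word tokens. The punctuation
# label goes on the LAST sub-token of a word; the other sub-tokens get 'O'.
# Special tokens and padding get -100 so the loss ignores them (an assumption).
tokenizer = BertTokenizerFast.from_pretrained('bert-base-uncased')

def align_labels(words, labels, max_len=128):
    enc = tokenizer(words, is_split_into_words=True,
                    truncation=True, padding='max_length', max_length=max_len)
    word_ids = enc.word_ids()
    token_labels = []
    for i, word_id in enumerate(word_ids):
        if word_id is None:
            token_labels.append(-100)                       # [CLS], [SEP], padding
        elif i + 1 < len(word_ids) and word_ids[i + 1] == word_id:
            token_labels.append(label2id['O'])              # not the last sub-token
        else:
            token_labels.append(label2id[labels[word_id]])  # last sub-token of the word
    enc['labels'] = token_labels
    return enc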

The dataset ends up as follows:

FULL Dataset: 173771
TRAIN Dataset: 139016
TEST Dataset: 34755
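
A minimal sketch of the 80/20 split, assuming the examples are held in a list called examples; the random seed is an arbitrary assumption.

from sklearn.model_selection import train_test_split

# Minimal sketch of an 80/20 train/test split over the prepared examples.
train_examples, test_examples = train_test_split(examples, test_size=0.2,
                                                 random_state=42)

print("FULL Dataset:", len(examples))
print("TRAIN Dataset:", len(train_examples))
print("TEST Dataset:", len(test_examples))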

Training and Evaluating the Model

The hyperparameters are as follows:

MAX_LEN = 128
TRAIN_BATCH_SIZE = 4
VALID_BATCH_SIZE = 2
EPOCHS = 1
LEARNING_RATE = 1e-05
MAX_GRAD_NORM = 10

The model is BertForTokenClassification:

model = BertForTokenClassification.from_pretrained('bert-base-uncased', 
                                                   num_labels=len(id2label),
                                                   id2label=id2label,
                                                   label2id=label2id)

The loss function is CrossEntropyLoss, and the optimizer is Adam.
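
A minimal training-step sketch under these settings; BertForTokenClassification applies CrossEntropyLoss internally when labels are passed, and the optimizer and gradient clipping follow the hyperparameters above. The DataLoader name is an assumption.

import torch

# Minimal training-step sketch. BertForTokenClassification computes the
# CrossEntropyLoss internally when `labels` are provided (positions labelled
# -100 are ignored). `training_loader` is an assumed DataLoader over TRAIN.
optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)

model.train()
for batch in training_loader:
    optimizer.zero_grad()
    outputs = model(input_ids=batch['input_ids'],
                    attention_mask=batch['attention_mask'],
                    labels=batch['labels'])
    loss = outputs.loss
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=MAX_GRAD_NORM)
    optimizer.step()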

The training results were as follows:

Training loss epoch: 0.012938089804618604
Training accuracy epoch: 0.9565718858035926

The validation results were as follows:

Validation Loss: 0.01050175565781492
Validation Accuracy: 0.9633911385449255

The per-class precision, recall, F1 score, and support were as follows:

              precision    recall  f1-score   support

           ,       0.78      0.73      0.75     29186
           .       0.77      0.88      0.82     24984
           ?       0.84      0.73      0.78      3297
           O       0.99      0.99      0.99    406983

    accuracy                           0.96    464450
   macro avg       0.85      0.83      0.84    464450
weighted avg       0.96      0.96      0.96    464450
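
A per-class report like the one above can be produced with scikit-learn; this is a minimal sketch assuming flat_labels and flat_preds hold the label ids of the valid (non -100) token positions in the TEST dataset.

from sklearn.metrics import classification_report

# Minimal sketch: per-class metrics with scikit-learn. `flat_labels` and
# `flat_preds` are assumed to be flattened lists of label ids for the valid
# (non -100) token positions of the TEST dataset.
label_ids = sorted(id2label)                              # [0, 1, 2, 3]
target_names = [id2label[i] for i in label_ids]           # ['O', ',', '.', '?']
print(classification_report(flat_labels, flat_preds,
                            labels=label_ids, target_names=target_names))
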
Log
Training epoch: 1
Training loss per 100 training steps: 1.1701152324676514
Training loss per 100 training steps: 0.16895334699218817
Training loss per 100 training steps: 0.11326425027713846
Training loss per 100 training steps: 0.08952468673256149
Training loss per 100 training steps: 0.07357716281547622
Training loss per 100 training steps: 0.06360617962418619
Training loss per 100 training steps: 0.056555041566044836
Training loss per 100 training steps: 0.05172862880099326
Training loss per 100 training steps: 0.047991394655390665
Training loss per 100 training steps: 0.04486278150660646
Training loss per 100 training steps: 0.04233817079920437
Training loss per 100 training steps: 0.04023732665586171
Training loss per 100 training steps: 0.03835905554353862
Training loss per 100 training steps: 0.0368012853826165
Training loss per 100 training steps: 0.035444459166121184
Training loss per 100 training steps: 0.034323949755531186
Training loss per 100 training steps: 0.03319607699376593
Training loss per 100 training steps: 0.03224794011013358
Training loss per 100 training steps: 0.03132716695518739
Training loss per 100 training steps: 0.030495218864347157
Training loss per 100 training steps: 0.02978593244057694
Training loss per 100 training steps: 0.029186420569525254
Training loss per 100 training steps: 0.02854902423659711
Training loss per 100 training steps: 0.027931886124786885
Training loss per 100 training steps: 0.02742186473205031
Training loss per 100 training steps: 0.026962804972653652
Training loss per 100 training steps: 0.02651965914578605
Training loss per 100 training steps: 0.02605166082595941
Training loss per 100 training steps: 0.025608586892635243
Training loss per 100 training steps: 0.02519920069805543
Training loss per 100 training steps: 0.02484535390557861
Training loss per 100 training steps: 0.02450339294460526
Training loss per 100 training steps: 0.024209330674420586
Training loss per 100 training steps: 0.02389717624198715
Training loss per 100 training steps: 0.023644952383323665
Training loss per 100 training steps: 0.02340223836446922
Training loss per 100 training steps: 0.02312298427457215
Training loss per 100 training steps: 0.022862207457382697
Training loss per 100 training steps: 0.02265974151202
Training loss per 100 training steps: 0.022436324335493613
Training loss per 100 training steps: 0.02220790626474658
Training loss per 100 training steps: 0.022024968805506244
Training loss per 100 training steps: 0.021808076839131108
Training loss per 100 training steps: 0.021642209248590984
Training loss per 100 training steps: 0.021435221318690176
Training loss per 100 training steps: 0.021226909391407065
Training loss per 100 training steps: 0.02103239220537039
Training loss per 100 training steps: 0.020890917575872473
Training loss per 100 training steps: 0.020741586668001827
Training loss per 100 training steps: 0.02061343732926988
Training loss per 100 training steps: 0.020481294608010393
Training loss per 100 training steps: 0.020369479388301748
Training loss per 100 training steps: 0.02023561978323433
Training loss per 100 training steps: 0.020131265181980707
Training loss per 100 training steps: 0.019999137096217968
Training loss per 100 training steps: 0.019904242667345928
Training loss per 100 training steps: 0.01979809494703365
Training loss per 100 training steps: 0.019687248991744424
Training loss per 100 training steps: 0.019565219832079747
Training loss per 100 training steps: 0.019480411895280356
Training loss per 100 training steps: 0.019388911385610847
Training loss per 100 training steps: 0.019297163866950257
Training loss per 100 training steps: 0.019208525789836957
Training loss per 100 training steps: 0.019120419744526402
Training loss per 100 training steps: 0.01900445185299988
Training loss per 100 training steps: 0.018896521641757858
Training loss per 100 training steps: 0.018781777211155114
Training loss per 100 training steps: 0.018698016249715718
Training loss per 100 training steps: 0.018605399598632694
Training loss per 100 training steps: 0.018515144145607648
Training loss per 100 training steps: 0.018421604797596792
Training loss per 100 training steps: 0.01836682228028731
Training loss per 100 training steps: 0.01830235037853011
Training loss per 100 training steps: 0.018236395587760447
Training loss per 100 training steps: 0.018166214017252668
Training loss per 100 training steps: 0.018084740730445092
Training loss per 100 training steps: 0.018023048294532003
Training loss per 100 training steps: 0.017952994672187486
Training loss per 100 training steps: 0.017879417788445602
Training loss per 100 training steps: 0.017816140681165785
Training loss per 100 training steps: 0.01775094675476329
Training loss per 100 training steps: 0.017692976594098
Training loss per 100 training steps: 0.01763924752106586
Training loss per 100 training steps: 0.017566714911553524
Training loss per 100 training steps: 0.01749640848950478
Training loss per 100 training steps: 0.01743388318472129
Training loss per 100 training steps: 0.01739394115003503
Training loss per 100 training steps: 0.017330222367147142
Training loss per 100 training steps: 0.017265246219702293
Training loss per 100 training steps: 0.01719835788485295
Training loss per 100 training steps: 0.01716042514534629
Training loss per 100 training steps: 0.017115974741166516
Training loss per 100 training steps: 0.01706673364381246
Training loss per 100 training steps: 0.017021704443403535
Training loss per 100 training steps: 0.0169813040176746
Training loss per 100 training steps: 0.016933035359452214
Training loss per 100 training steps: 0.01687797638446151
Training loss per 100 training steps: 0.016815293138099038
Training loss per 100 training steps: 0.0167708993073819
Training loss per 100 training steps: 0.016734951187183895
Training loss per 100 training steps: 0.0166787522644849
Training loss per 100 training steps: 0.016641168845115337
Training loss per 100 training steps: 0.01658352721552038
Training loss per 100 training steps: 0.016536628387366352
Training loss per 100 training steps: 0.016492966250020412
Training loss per 100 training steps: 0.016454286114745288
Training loss per 100 training steps: 0.016417065674785694
Training loss per 100 training steps: 0.016378117454319045
Training loss per 100 training steps: 0.016336321229719442
Training loss per 100 training steps: 0.016309263371795106
Training loss per 100 training steps: 0.016266129964878324
Training loss per 100 training steps: 0.016219933999311237
Training loss per 100 training steps: 0.016189254874446543
Training loss per 100 training steps: 0.016150568171169784
Training loss per 100 training steps: 0.016116126608775497
Training loss per 100 training steps: 0.016072067905371124
Training loss per 100 training steps: 0.01604638785189223
Training loss per 100 training steps: 0.016007818860486207
Training loss per 100 training steps: 0.01597575084646095
Training loss per 100 training steps: 0.01594349217826879
Training loss per 100 training steps: 0.015900072207145496
Training loss per 100 training steps: 0.015867211311609418
Training loss per 100 training steps: 0.01583904687030041
Training loss per 100 training steps: 0.01581568263820713
Training loss per 100 training steps: 0.01578050508160368
Training loss per 100 training steps: 0.015747402935591893
Training loss per 100 training steps: 0.015722790678867112
Training loss per 100 training steps: 0.015690184394686557
Training loss per 100 training steps: 0.015656394066757184
Training loss per 100 training steps: 0.015628011205475415
Training loss per 100 training steps: 0.015602824132016444
Training loss per 100 training steps: 0.015567717599758236
Training loss per 100 training steps: 0.01553105072501434
Training loss per 100 training steps: 0.015506087663163711
Training loss per 100 training steps: 0.015489563791479402
Training loss per 100 training steps: 0.01545813539039837
Training loss per 100 training steps: 0.015433429503363868
Training loss per 100 training steps: 0.015414721533103162
Training loss per 100 training steps: 0.015391916689043238
Training loss per 100 training steps: 0.015369300866515305
Training loss per 100 training steps: 0.01533617324635829
Training loss per 100 training steps: 0.015320142567343155
Training loss per 100 training steps: 0.015296636516324215
Training loss per 100 training steps: 0.015277663757583772
Training loss per 100 training steps: 0.015258138014070843
Training loss per 100 training steps: 0.015233965227098368
Training loss per 100 training steps: 0.015209418915532562
Training loss per 100 training steps: 0.015176030205325286
Training loss per 100 training steps: 0.015148572735632273
Training loss per 100 training steps: 0.015126282916315072
Training loss per 100 training steps: 0.01511361258274965
Training loss per 100 training steps: 0.015095798372062978
Training loss per 100 training steps: 0.015071697579871769
Training loss per 100 training steps: 0.015054529767730243
Training loss per 100 training steps: 0.015028967353189522
Training loss per 100 training steps: 0.01500700683485534
Training loss per 100 training steps: 0.014985136463157439
Training loss per 100 training steps: 0.014964225997398758
Training loss per 100 training steps: 0.014946269228734535
Training loss per 100 training steps: 0.014920539546401875
Training loss per 100 training steps: 0.014897340826096908
Training loss per 100 training steps: 0.014881199358689213
Training loss per 100 training steps: 0.014862289551497593
Training loss per 100 training steps: 0.014840770206189074
Training loss per 100 training steps: 0.014821614667305527
Training loss per 100 training steps: 0.014807545061221134
Training loss per 100 training steps: 0.014791045090465365
Training loss per 100 training steps: 0.014764229512991172
Training loss per 100 training steps: 0.014748281830286914
Training loss per 100 training steps: 0.014731201819405675
Training loss per 100 training steps: 0.014713396602493824
Training loss per 100 training steps: 0.014701021165455637
Training loss per 100 training steps: 0.014685989108178196
Training loss per 100 training steps: 0.014668770913900422
Training loss per 100 training steps: 0.014651455915526036
Training loss per 100 training steps: 0.014628498559072271
Training loss per 100 training steps: 0.01461220688542684
Training loss per 100 training steps: 0.014598069785934432
Training loss per 100 training steps: 0.014583762982312315
Training loss per 100 training steps: 0.014569004321280688
Training loss per 100 training steps: 0.014545309806571725
Training loss per 100 training steps: 0.014524431346535527
Training loss per 100 training steps: 0.014510908736267408
Training loss per 100 training steps: 0.014502243935152228
Training loss per 100 training steps: 0.014482071856512856
Training loss per 100 training steps: 0.014466079478276245
Training loss per 100 training steps: 0.014445522565362509
Training loss per 100 training steps: 0.014428171112372762
Training loss per 100 training steps: 0.01441625750837087
Training loss per 100 training steps: 0.014401436434159726
Training loss per 100 training steps: 0.014389100961118522
Training loss per 100 training steps: 0.014384124945611499
Training loss per 100 training steps: 0.014365503948465336
Training loss per 100 training steps: 0.014355635618662259
Training loss per 100 training steps: 0.014346234504321986
Training loss per 100 training steps: 0.014332177587899505
Training loss per 100 training steps: 0.014314667897423097
Training loss per 100 training steps: 0.014300208978114277
Training loss per 100 training steps: 0.014280281431755413
Training loss per 100 training steps: 0.014259003398422886
Training loss per 100 training steps: 0.014242828906153103
Training loss per 100 training steps: 0.014232275047845086
Training loss per 100 training steps: 0.01421305900866676
Training loss per 100 training steps: 0.01419768206370582
Training loss per 100 training steps: 0.014186026337672894
Training loss per 100 training steps: 0.014170943061876483
Training loss per 100 training steps: 0.014156408314860256
Training loss per 100 training steps: 0.014142017583636686
Training loss per 100 training steps: 0.014134534296976201
Training loss per 100 training steps: 0.014115462617762206
Training loss per 100 training steps: 0.014106527290262672
Training loss per 100 training steps: 0.014088862552842425
Training loss per 100 training steps: 0.014076764832549344
Training loss per 100 training steps: 0.014070562584811411
Training loss per 100 training steps: 0.014057690860374714
Training loss per 100 training steps: 0.0140459960809575
Training loss per 100 training steps: 0.014039336483882132
Training loss per 100 training steps: 0.014026264613095425
Training loss per 100 training steps: 0.01401569390634932
Training loss per 100 training steps: 0.013996546458856165
Training loss per 100 training steps: 0.01398647513888604
Training loss per 100 training steps: 0.01397728664212243
Training loss per 100 training steps: 0.013968943715596387
Training loss per 100 training steps: 0.013957230781035102
Training loss per 100 training steps: 0.013946265466394787
Training loss per 100 training steps: 0.01393386438610839
Training loss per 100 training steps: 0.01392255732758447
Training loss per 100 training steps: 0.013912452205464233
Training loss per 100 training steps: 0.013907067513482642
Training loss per 100 training steps: 0.013892710131984542
Training loss per 100 training steps: 0.013877233603677047
Training loss per 100 training steps: 0.013866037927656663
Training loss per 100 training steps: 0.01385984340870503
Training loss per 100 training steps: 0.013852615739334911
Training loss per 100 training steps: 0.013845351353546496
Training loss per 100 training steps: 0.013832223055388572
Training loss per 100 training steps: 0.013821147146707138
Training loss per 100 training steps: 0.013815101883932581
Training loss per 100 training steps: 0.013807382173980472
Training loss per 100 training steps: 0.013801123657726233
Training loss per 100 training steps: 0.013788183447940245
Training loss per 100 training steps: 0.013779815327833548
Training loss per 100 training steps: 0.013765225422695737
Training loss per 100 training steps: 0.013753744332053435
Training loss per 100 training steps: 0.013742137351091215
Training loss per 100 training steps: 0.01373421309790273
Training loss per 100 training steps: 0.013725994435780403
Training loss per 100 training steps: 0.013719437595307749
Training loss per 100 training steps: 0.013709284285890017
Training loss per 100 training steps: 0.013695072952468381
Training loss per 100 training steps: 0.013689417830151443
Training loss per 100 training steps: 0.01368000142791235
Training loss per 100 training steps: 0.013668070289986171
Training loss per 100 training steps: 0.013654549109948334
Training loss per 100 training steps: 0.013643539839719006
Training loss per 100 training steps: 0.013638427211154637
Training loss per 100 training steps: 0.013632537237465644
Training loss per 100 training steps: 0.01362399803153057
Training loss per 100 training steps: 0.013612747458599219
Training loss per 100 training steps: 0.013602597389846606
Training loss per 100 training steps: 0.013591473322493351
Training loss per 100 training steps: 0.01357948255575918
Training loss per 100 training steps: 0.01356707303863987
Training loss per 100 training steps: 0.013558803014604222
Training loss per 100 training steps: 0.013551448253217668
Training loss per 100 training steps: 0.013543624264907144
Training loss per 100 training steps: 0.013536308985419914
Training loss per 100 training steps: 0.013530054132054839
Training loss per 100 training steps: 0.013519523262803878
Training loss per 100 training steps: 0.013510862260096384
Training loss per 100 training steps: 0.013501888471241254
Training loss per 100 training steps: 0.013489157790359498
Training loss per 100 training steps: 0.013476266325262993
Training loss per 100 training steps: 0.013465322274189792
Training loss per 100 training steps: 0.013454530398037269
Training loss per 100 training steps: 0.013445347275192678
Training loss per 100 training steps: 0.01344073341360979
Training loss per 100 training steps: 0.013434808989285032
Training loss per 100 training steps: 0.013428242127291474
Training loss per 100 training steps: 0.013420445480915201
Training loss per 100 training steps: 0.01341515206743996
Training loss per 100 training steps: 0.01340485869886361
Training loss per 100 training steps: 0.01339439840988464
Training loss per 100 training steps: 0.013388730359663316
Training loss per 100 training steps: 0.013379415347800327
Training loss per 100 training steps: 0.013370402847416992
Training loss per 100 training steps: 0.013362147735737448
Training loss per 100 training steps: 0.013359725678206584
Training loss per 100 training steps: 0.013352416112873047
Training loss per 100 training steps: 0.013345798218339122
Training loss per 100 training steps: 0.013334597012183069
Training loss per 100 training steps: 0.01332698416253474
Training loss per 100 training steps: 0.013316246173233364
Training loss per 100 training steps: 0.013308072595824927
Training loss per 100 training steps: 0.013298704222807744
Training loss per 100 training steps: 0.013290648266230939
Training loss per 100 training steps: 0.013285202157923863
Training loss per 100 training steps: 0.013279819804158837
Training loss per 100 training steps: 0.01327259806905168
Training loss per 100 training steps: 0.013262805661925201
Training loss per 100 training steps: 0.013252828208684139
Training loss per 100 training steps: 0.01324516321346429
Training loss per 100 training steps: 0.0132367697169233
Training loss per 100 training steps: 0.013230645378692472
Training loss per 100 training steps: 0.013222152415283649
Training loss per 100 training steps: 0.013215100005051793
Training loss per 100 training steps: 0.013207716505334473
Training loss per 100 training steps: 0.013195308128473101
Training loss per 100 training steps: 0.013191715713829684
Training loss per 100 training steps: 0.013179031707403319
Training loss per 100 training steps: 0.013172716375534937
Training loss per 100 training steps: 0.01316890046407603
Training loss per 100 training steps: 0.01315953991300543
Training loss per 100 training steps: 0.013155110267531385
Training loss per 100 training steps: 0.013147007915111192
Training loss per 100 training steps: 0.013136227464822778
Training loss per 100 training steps: 0.013126584753994612
Training loss per 100 training steps: 0.013121007904197627
Training loss per 100 training steps: 0.01311408180339782
Training loss per 100 training steps: 0.013107657751663719
Training loss per 100 training steps: 0.013103870143738507
Training loss per 100 training steps: 0.013094958658456671
Training loss per 100 training steps: 0.013087649845470862
Training loss per 100 training steps: 0.013082131056159556
Training loss per 100 training steps: 0.01307949383018297
Training loss per 100 training steps: 0.013074234592806253
Training loss per 100 training steps: 0.013068199906587887
Training loss per 100 training steps: 0.013058789726817184
Training loss per 100 training steps: 0.013050794953950716
Training loss per 100 training steps: 0.013044924239800951
Training loss per 100 training steps: 0.013038932339617443
Training loss per 100 training steps: 0.01303022196898285
Training loss per 100 training steps: 0.013026768948216024
Training loss per 100 training steps: 0.013019551307574413
Training loss per 100 training steps: 0.013015335379261276
Training loss per 100 training steps: 0.013009730962988526
Training loss per 100 training steps: 0.01299993139345828
Training loss per 100 training steps: 0.012996370525922186
Training loss per 100 training steps: 0.012986185602213216
Training loss per 100 training steps: 0.012978728016598733
Training loss per 100 training steps: 0.012974297417596734
Training loss per 100 training steps: 0.012968579755834143
Training loss per 100 training steps: 0.012962411443168792
Training loss per 100 training steps: 0.012959617032179746
Training loss per 100 training steps: 0.012953593669965473
Training loss per 100 training steps: 0.012947324074313173
Training loss per 100 training steps: 0.012946335298171501
Training loss per 100 training steps: 0.012940867656289384
Training loss epoch: 0.012938089804618604
Training accuracy epoch: 0.9565718858035926

Comparison with Existing Methods

Below is the transcript of Let's build GPT: from scratch, in code, spelled out. with punctuation inserted automatically by the model trained on the Whisper-transcript dataset:

  • Original
1482880	1487280	 saw a lot of this in a lot more depth in the make more series and here if i just run this
1488000	1494400	 then we currently get the predictions the scores the logits for every one of the four by eight
1494400	1498720	 positions now that we've made predictions about what comes next we'd like to evaluate the loss
1498720	1504240	 function and so in make more series we saw that a good way to measure a loss or like a quality of
1504240	1508960	 the predictions is to use the negative log likelihood loss which is also implemented in
  • Our model
1482880	1487280	 saw a lot of this in a lot more depth in the make more series. and here, if i just run this
1488000	1494400	 then we currently get the predictions, the scores, the logits for every one of the four by eight
1494400	1498720	 positions. now that we've made predictions about what comes next, we'd like to evaluate the loss
1498720	1504240	 function. and so in make more series, we saw that a good way to measure a loss or like a quality of
1504240	1508960	 the predictions is to use the negative log likelihood loss, which is also implemented in

The existing models produce output such as "saw a lot of this in a lot more depth in the make more series and here, if i just run this," or "Saw a lot of this in a lot more depth in the make More series and here if I just run this.", showing the influence of having been trained on complete sentences. In contrast, the model built here produces "saw a lot of this in a lot more depth in the make more series. and here, if i just run this", which seems to cope better with the characteristics of Whisper transcripts, such as segments that cut off mid-sentence and a large amount of spoken language.

Future Work

  • Adding more data
    Accuracy is currently improving as more data is added, so I plan to keep growing the dataset.
  • Model tuning
    Once the dataset is in good shape, I plan to move on to tuning the model.
  • Publishing the code
    For reproducibility, I plan to publish the code once the experiments have reached a certain level.

Conclusion

Whisper transcripts sometimes lack punctuation, which can hurt the quality of machine translation. To address this, I built a model that restores punctuation in Whisper transcripts. Unlike ordinary written text, Whisper transcripts have segments that cut off mid-sentence and contain a lot of spoken language, so restoring punctuation calls for a new approach. Here, I automatically extracted the transcripts that Whisper generated with punctuation and used them as training data for the model. As a result, the model was shown to restore punctuation in Whisper transcripts with higher accuracy than existing methods.
