
Why does my torch tensor change size after some batches and contain NaNs?
Stack Overflow user
Asked on 2021-06-15 08:52:11
1 answer · 1.9K views · 0 followers · 1 vote

I am training a PyTorch model. After a while, even with the data shuffled, the model starts producing tensors that contain NaN values:

tensor([[[    nan,     nan,     nan,  ...,     nan,     nan,     nan],
         [    nan,     nan,     nan,  ...,     nan,     nan,     nan],
         [    nan,     nan,     nan,  ...,     nan,     nan,     nan],
         ...,
         [ 1.4641,  0.0360, -1.1528,  ..., -2.3592, -2.6310,  6.3893],
         [    nan,     nan,     nan,  ...,     nan,     nan,     nan],
         [    nan,     nan,     nan,  ...,     nan,     nan,     nan]]],
       device='cuda:0', grad_fn=<AddBackward0>)

The detect_anomaly function returns:

  File "TestDownload.py", line 701, in <module>
    main(learning_rate, batch_size, epochs, experiment)
  File "TestDownload.py", line 635, in main
    train(model, device, train_loader, criterion, optimizer, scheduler, epoch, iter_meter, experiment)
  File "TestDownload.py", line 486, in train
    output = F.log_softmax(output, dim=2)
  File "\lib\site-packages\torch\nn\functional.py", line 1672, in log_softmax
    ret = input.log_softmax(dim)
 (function _print_stack) Traceback (most recent call last):
  File "TestDownload.py", line 701, in <module>
    main(learning_rate, batch_size, epochs, experiment)
  File "TestDownload.py", line 635, in main
    train(model, device, train_loader, criterion, optimizer, scheduler, epoch, iter_meter, experiment)
  File "TestDownload.py", line 490, in train
    loss.backward()
  File "\lib\site-packages\comet_ml\monkey_patching.py", line 317, in wrapper
    return_value = original(*args, **kwargs)
  File "\lib\site-packages\torch\tensor.py", line 245, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
  File "\lib\site-packages\torch\autograd\__init__.py", line 145, in backward
    Variable._execution_engine.run_backward(
RuntimeError: Function 'LogSoftmaxBackward' returned nan values in its 0th output.

It points to the line output = F.log_softmax(output, dim=2).
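Why a single bad value poisons an entire row of the output can be seen from the definition of log-softmax. A minimal pure-Python sketch (for illustration only, not the PyTorch implementation):

```python
import math

def log_softmax(row):
    # log_softmax(x)_i = x_i - log(sum_j exp(x_j)); a NaN or inf anywhere
    # in the row corrupts the normalizer, so every entry becomes NaN/-inf.
    lse = math.log(sum(math.exp(x) for x in row))
    return [x - lse for x in row]

print(log_softmax([1.0, 2.0, float('nan')]))  # every entry is nan
print(log_softmax([0.0, float('inf')]))       # [-inf, nan]
```

So once a NaN or inf enters the model's output, log_softmax spreads it across the row, and backward() through it then reports NaN gradients exactly as in the traceback above.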

If I instead wrap it in a try-except, another error appears (when the loss function runs on a tensor that contains NaNs):

[W ..\torch\csrc\autograd\python_anomaly_mode.cpp:104] Warning: Error detected in CtcLossBackward. Traceback of forward call that caused the error:
  File "TestDownload.py", line 734, in <module>
    # In[ ]:
  File "TestDownload.py", line 667, in main
    test(model, device, test_loader, criterion, epoch, iter_meter, experiment)
  File "TestDownload.py", line 517, in train
    loss.backward()
  File "\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "\lib\site-packages\torch\nn\modules\loss.py", line 1590, in forward
    return F.ctc_loss(log_probs, targets, input_lengths, target_lengths, self.blank, self.reduction,
  File "\lib\site-packages\torch\nn\functional.py", line 2307, in ctc_loss
    return torch.ctc_loss(
 (function _print_stack)
Traceback (most recent call last):
  File "TestDownload.py", line 518, in train
  File "\lib\site-packages\comet_ml\monkey_patching.py", line 317, in wrapper
    return_value = original(*args, **kwargs)
  File "\lib\site-packages\torch\tensor.py", line 245, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
  File "\lib\site-packages\torch\autograd\__init__.py", line 145, in backward
    Variable._execution_engine.run_backward(
RuntimeError: Function 'CtcLossBackward' returned nan values in its 0th output.

A normal tensor looks like this:

tensor([[[-3.3904, -3.4340, -3.3703,  ..., -3.3613, -3.5098, -3.4344]],

        [[-3.3760, -3.2948, -3.2673,  ..., -3.4039, -3.3827, -3.3919]],

        [[-3.3857, -3.3358, -3.3901,  ..., -3.4686, -3.4749, -3.3826]],

        ...,

        [[-3.3568, -3.3502, -3.4416,  ..., -3.4463, -3.4921, -3.3769]],

        [[-3.4379, -3.3508, -3.3610,  ..., -3.3707, -3.4030, -3.4244]],

        [[-3.3919, -3.4513, -3.3565,  ..., -3.2714, -3.3984, -3.3643]]],
       device='cuda:0', grad_fn=<TransposeBackward0>)

Note the double brackets, in case they are important.

Code:

for batch_idx, _data in enumerate(train_loader):
    spectrograms, labels, input_lengths, label_lengths = _data
    spectrograms, labels = spectrograms.to(device), labels.to(device)
    optimizer.zero_grad()

    output = model(spectrograms)
    output = F.log_softmax(output, dim=2)
    output = output.transpose(0, 1)  # (time, batch, n_class) # X, 1, 29
    loss = criterion(output, labels, input_lengths, label_lengths)
    loss.backward()
    optimizer.step()
    scheduler.step()
    iter_meter.step()
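Since criterion here is a CTC loss, one common source of inf/NaN (an assumption about this code, not something confirmed by the question) is a batch whose target is longer than the model's output sequence, so no valid alignment exists. A pure-Python sketch of the necessary length check:

```python
def ctc_lengths_ok(input_lengths, label_lengths):
    # CTC has no valid alignment when a target is longer than its input
    # sequence, which drives the loss to inf and backward() to NaN.
    # (Necessary condition only; repeated labels need extra blank frames.)
    return all(i >= l for i, l in zip(input_lengths, label_lengths))

# Hypothetical usage inside the loop above, before calling criterion(...):
# assert ctc_lengths_ok(input_lengths, label_lengths)
```

PyTorch's nn.CTCLoss also accepts zero_infinity=True, which zeroes such infinite losses instead of propagating them.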

Also, I tried running it with a larger batch size (current batch size: 1, larger batch size: 6); it ran without this error until about 40% of the first epoch, and then failed with:

CUDA out of memory

I also tried normalizing the data with torchaudio.transforms.MelSpectrogram(sample_rate=16000, n_mels=128, normalized=True).

Lowering the learning rate from 5e-4 to 5e-5 did not help either.

Additional information: my dataset consists of nearly 300,000 .wav files, and the error appears at 3-10% of the first epoch.

I would appreciate any hints and am happy to provide more information.

1 answer

Stack Overflow user

Accepted answer

Answered on 2021-06-16 21:33:42

The source of the error can be a corrupted input or label that contains a NaN or an inf value. You can check that a tensor contains no NaN values:

torch.isnan(tensor).any()

or that all values in the tensor are finite (neither inf nor NaN):

torch.isfinite(tensor).all()
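For intuition, what torch.isfinite(tensor).all() computes can be mimicked in plain Python (illustration only, not the torch implementation):

```python
import math

def all_finite(values):
    # mirrors torch.isfinite(tensor).all() for a flat list of floats:
    # False as soon as any value is NaN, +inf, or -inf
    return all(math.isfinite(v) for v in values)

print(all_finite([1.0, 2.0]))           # True
print(all_finite([1.0, float('nan')]))  # False
```

A batch that fails such a check can be skipped (e.g. with continue) before the forward pass, so the corrupted sample never reaches the loss.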
3 votes
The original content of this page was provided by Stack Overflow; translation supported by Tencent Cloud's translation engine.
Original link:
https://stackoverflow.com/questions/67983039
