
Lower batch size in the last iteration of the first training epoch than in the other iterations

I'm trying to train a deep neural network model. The output dimensions of each iteration in one epoch are [64, 1600, 8] (64 is the batch size), but in the last iteration of the first epoch the output changed to [54, 1600, 8] and I got a dimension error. Why did the batch size change in the last iteration? Additionally, if I change the batch size to 32, the last iteration's output is [22, 1600, 8].

I think the output of the last iteration should be the same as in the other iterations.


The last iteration's batch size changed because you did not have enough data to completely fill the batch. If you have a batch size of 10, for example, and 101 entries total in your data, then you will get 10 batches of 10 and 1 batch of 1.
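To make the arithmetic concrete, here is a quick sketch. The dataset size `n = 4150` is hypothetical, chosen only because it reproduces the last-batch sizes in the question (54 at batch size 64, 22 at batch size 32); any size with the same remainders behaves identically:

```python
# Hypothetical dataset size; any n with n % 64 == 54 (and hence n % 32 == 22)
# reproduces the shapes reported in the question.
n = 4150

for batch_size in (64, 32):
    full_batches, last_batch = divmod(n, batch_size)
    print(f"batch_size={batch_size}: {full_batches} full batches, "
          f"then one final batch of {last_batch}")
# batch_size=64: 64 full batches, then one final batch of 54
# batch_size=32: 129 full batches, then one final batch of 22
```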

The solution is either to drop the batch if it is not the correct size, or to adapt your model so that it detects the size of each incoming batch and changes accordingly, instead of having the batch size hard-coded into your model parameters.
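For instance, the forward pass can read the batch dimension off the input tensor with `x.size(0)` instead of using a stored hyperparameter. This is only a minimal sketch with a made-up layer, not your actual model, but the pattern carries over:

```python
import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        # Hypothetical layer sized to the [*, 1600, 8] inputs in the question
        self.fc = nn.Linear(1600 * 8, 8)

    def forward(self, x):
        batch_size = x.size(0)       # read the batch size from the tensor itself,
        x = x.view(batch_size, -1)   # so batches of 64, 54, or 22 all work
        return self.fc(x)            # output shape: [batch_size, 8]
```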


Seeing that you are using PyTorch, I'll add to Richard's answer by saying that PyTorch DataLoaders have built-in functionality to drop the last (incomplete) batch. Checking the documentation, you can specify drop_last=True when instantiating the DataLoader.
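A minimal sketch of what that looks like (the dataset here is a hypothetical stand-in; `drop_last=True` is the only part that matters):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical stand-in dataset of 4150 samples shaped like the question's data
dataset = TensorDataset(torch.randn(4150, 1600, 8))

# drop_last=True discards the final incomplete batch of 54
loader = DataLoader(dataset, batch_size=64, shuffle=True, drop_last=True)

for (batch,) in loader:
    assert batch.size(0) == 64  # every batch now has exactly 64 samples
```

The trade-off is that those leftover samples are skipped for that epoch; with shuffle=True a different subset is dropped each epoch, so nothing is permanently excluded.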

