Every time I run model.predict(), if the image is too big (which is fine), it throws an error saying tensorflow/core/framework/allocator.cc:101] Allocation of 3717120800 exceeds 10% of system memory. I have 32 GB of system memory, so why can't it use, say, 20% or 30%? (By the way, CUDA is disabled, because my GPU only has 6 GB.) BTW: I know this is a warning rather than an error, but the program crashes after a while without giving me any other output ;(
Here is the model:
from tensorflow import keras
from tensorflow.keras import layers

def build_dce_net():
    input_img = keras.Input(shape=[None, None, 3])
    conv1 = layers.Conv2D(
        32, (3, 3), strides=(1, 1), activation="relu", padding="same"
    )(input_img)
    conv2 = layers.Conv2D(
        64, (3, 3), strides=(1, 1), activation="relu", padding="same"
    )(conv1)
    conv3 = layers.Conv2D(
        96, (3, 3), strides=(1, 1), activation="relu", padding="same"
    )(conv2)
    conv4 = layers.Conv2D(
        96, (3, 3), strides=(1, 1), activation="relu", padding="same"
    )(conv3)
    int_con1 = layers.Concatenate(axis=-1)([conv4, conv3])
    conv5 = layers.Conv2D(
        64, (3, 3), strides=(1, 1), activation="relu", padding="same"
    )(int_con1)
    int_con2 = layers.Concatenate(axis=-1)([conv5, conv2])
    conv6 = layers.Conv2D(
        32, (3, 3), strides=(1, 1), activation="relu", padding="same"
    )(int_con2)
    int_con3 = layers.Concatenate(axis=-1)([conv6, conv1])
    x_r = layers.Conv2D(
        24, (3, 3), strides=(1, 1), activation="tanh", padding="same"
    )(int_con3)
    # return keras.models.load_model('./high-res-trained')
    return keras.Model(inputs=input_img, outputs=x_r)

Yes, everything is normally indented; I just couldn't get the indentation to survive on Stack Overflow.
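Worth noting about this architecture: every Conv2D uses stride 1 with "same" padding, so every activation tensor keeps the full H x W resolution and memory grows linearly with pixel count. A rough back-of-the-envelope sketch (my own estimate, not from the question; channel widths are taken from the layers above, and TensorFlow may free earlier activations during inference, but the individual tensors are still huge):

```python
def activation_bytes(height, width):
    """Rough float32 activation footprint of build_dce_net() at H x W.

    Ignores weights, gradients, and TF's internal scratch buffers,
    so real peak usage can differ.
    """
    channels = [
        32,        # conv1
        64,        # conv2
        96,        # conv3
        96,        # conv4
        96 + 96,   # int_con1
        64,        # conv5
        64 + 64,   # int_con2
        32,        # conv6
        32 + 32,   # int_con3
        24,        # x_r
    ]
    # 4 bytes per float32 element
    return sum(height * width * c * 4 for c in channels)

# A 4000 x 6000 photo would need ~76 GB for activations alone,
# while 1000 x 1500 stays under 5 GB.
print(activation_bytes(4000, 6000))  # 76032000000
print(activation_bytes(1000, 1500))  # 4752000000
```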
EDIT: after running the model on Ubuntu, I got a more useful log:
2022-05-31 13:41:27.744568: W tensorflow/core/framework/cpu_allocator_impl.cc:82] Allocation of 9663676416 exceeds 10% of free system memory.
2022-05-31 13:41:29.461537: W tensorflow/core/framework/cpu_allocator_impl.cc:82] Allocation of 14495514624 exceeds 10% of free system memory.
terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc
Aborted

Posted on 2022-11-23 10:28:20:
I had the same problem. I set up swap memory in Linux, and then the problem was solved.
https://stackoverflow.com/questions/72448084
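Swap lets the allocation succeed, but paging gets slow for very large photos. Since the network is fully convolutional (shape=[None, None, 3]), another workaround is to shrink the input before predicting. A minimal sketch, assuming a float32 HxWx3 image array; the helper names and the max_side cap are my own, not from the question:

```python
import numpy as np

def target_size(h, w, max_side=1024):
    """Largest (new_h, new_w) <= (h, w) whose longer side is at most max_side."""
    longest = max(h, w)
    if longest <= max_side:
        return h, w
    # integer math avoids float rounding surprises
    return (h * max_side) // longest, (w * max_side) // longest

def predict_downscaled(model, img, max_side=1024):
    import tensorflow as tf  # imported here so target_size is usable without TF

    h, w = img.shape[:2]
    new_h, new_w = target_size(h, w, max_side)
    if (new_h, new_w) != (h, w):
        img = tf.image.resize(img, (new_h, new_w)).numpy()
    return model.predict(img[np.newaxis, ...])  # add the batch dimension
```

Usage would be e.g. `out = predict_downscaled(build_dce_net(), img)`; raising max_side trades memory for output resolution.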