Calculating Parameters and FLOPs

The author collected the code available online for computing FLOPs with TensorFlow (shown below), but almost none of it gives accurate counts, so we try computing by hand instead. Related reading:
  • How are the computation (FLOPs) and number of parameters (parameters) of a CNN model calculated?
  • Computing the FLOPs and parameter count of convolutional layers
  • MACs and FLOPs

import tensorflow as tf  # TensorFlow 1.x API

def get_flops_params():
    # Profile the default graph: count float ops and trainable parameters
    sess = tf.Session()
    graph = sess.graph
    flops = tf.profiler.profile(graph, options=tf.profiler.ProfileOptionBuilder.float_operation())
    params = tf.profiler.profile(graph, options=tf.profiler.ProfileOptionBuilder.trainable_variables_parameter())
    print('FLOPs: {};    Trainable params: {}'.format(flops.total_float_ops, params.total_parameters))
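
For reference, a minimal usage sketch (assumes TensorFlow 1.x, where Keras layers register their ops in the default graph; SCNN is the model defined in section 1 below):

model = SCNN()        # build the model first so its ops are in the default graph
get_flops_params()    # then profile: prints total FLOPs and trainable params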

1.SCNN

1.1.Parameters Calculation

from keras.layers import (Input, Reshape, Conv1D, SeparableConv1D,
                          BatchNormalization, Dropout, Flatten, Dense, Activation)
from keras.models import Model

def SCNN():
    # build the CNN model
    in_shp = [2, 128]
    L = 128  # number of sample points
    xm_input = Input(in_shp)
    xm = Reshape([128, 2], input_shape=in_shp)(xm_input)
    x1 = Conv1D(128, 16, activation='relu', padding='same', input_shape=[L, 2])(xm)
    x2 = BatchNormalization()(x1)
    x3 = Dropout(0.5)(x2)

    x4 = SeparableConv1D(64, 8, activation='relu', padding='same')(x3)
    x5 = BatchNormalization()(x4)
    x6 = Dropout(0.5)(x5)

    x7 = Flatten()(x6)
    x8 = Dense(11)(x7)  # 11 classes, matching the summary below (the original snippet said 10)
    predicts = Activation('softmax')(x8)
    model = Model(xm_input, predicts)
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    model.summary()
    return model

The per-layer details printed by model.summary():

_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_1 (InputLayer)         (None, 2, 128)            0         
_________________________________________________________________
reshape_1 (Reshape)          (None, 128, 2)            0         
_________________________________________________________________
conv1d_1 (Conv1D)            (None, 128, 128)          4224      
_________________________________________________________________
batch_normalization_1 (Batch (None, 128, 128)          512       
_________________________________________________________________
dropout_1 (Dropout)          (None, 128, 128)          0         
_________________________________________________________________
separable_conv1d_1 (Separabl (None, 128, 64)           9280      
_________________________________________________________________
batch_normalization_2 (Batch (None, 128, 64)           256       
_________________________________________________________________
dropout_2 (Dropout)          (None, 128, 64)           0         
_________________________________________________________________
flatten_1 (Flatten)          (None, 8192)              0         
_________________________________________________________________
dense_1 (Dense)              (None, 11)                90123     
_________________________________________________________________
activation_1 (Activation)    (None, 11)                0         
=================================================================
Total params: 104,395
Trainable params: 104,011
Non-trainable params: 384
_________________________________________________________________

  • Convolutional layer:
    Taking the first layer, conv1d_1 (Conv1D), as an example: parameters = kernel_size × input_channels × filters + filters (bias) = 16 × 2 × 128 + 128 = 4224, which matches the summary.
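
As a cross-check, every count in the summary can be reproduced by hand (a minimal sketch; the dimensions are read off the summary above):

# Hand-computed SCNN parameter counts
conv1d_1 = 16 * 2 * 128 + 128       # kernel 16 × 2 in-channels × 128 filters + bias = 4224
bn_1     = 4 * 128                  # gamma/beta (trainable) + moving mean/var (non-trainable) = 512
sep_conv = 8 * 128 + 128 * 64 + 64  # depthwise + pointwise + bias = 9280
bn_2     = 4 * 64                   # = 256
dense_1  = 8192 * 11 + 11           # Flatten gives 128 × 64 = 8192 features = 90123
print(conv1d_1 + bn_1 + sep_conv + bn_2 + dense_1)  # 104395 = Total params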

1.2.FLOPs Calculation
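
Applying the rule of thumb from the notes at the end of this post (FLOPs ≈ params × H × W × 2, with H × W the number of output positions), a minimal sketch for SCNN (our arithmetic; BatchNormalization and Dropout are ignored, and the separable convolution is only approximated by this rule):

# Approximate SCNN FLOPs via FLOPs ≈ params × output_length × 2
conv1d_1 = 4224 * 128 * 2     # = 1,081,344
sep_conv = 9280 * 128 * 2     # = 2,375,680
dense_1  = 90123 * 1 * 2      # =   180,246
print(conv1d_1 + sep_conv + dense_1)  # ≈ 3.64M FLOPs in total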

2.PET-CGDNN

import os
import tensorflow as tf
import keras
from keras.layers import (Input, Dense, Flatten, Activation, Lambda, Multiply,
                          Add, Subtract, Reshape, Conv2D, concatenate)
from keras.models import Model

def cal1(x):
    y = tf.keras.backend.cos(x)
    return y

def cal2(x):
    y = tf.keras.backend.sin(x)
    return y

# Named MCLDNN in the original source, but this is the PET-CGDNN architecture.
def MCLDNN(weights=None,
           input_shape1=[2, 128],
           input_shape2=[128, 1],
           classes=11,
           **kwargs):
    if weights is not None and not (os.path.exists(weights)):
        raise ValueError('The `weights` argument should be either '
                         '`None` (random initialization), '
                         'or the path to the weights file to be loaded.')

    dr = 0.5  # dropout rate (%)
    input = Input(input_shape1 + [1], name='input1')
    input1 = Input(input_shape2, name='input2')
    input2 = Input(input_shape2, name='input3')

    # phase parameter estimation: a single scalar phase from the raw frame
    x1 = Flatten()(input)
    x1 = Dense(1, name='fc2')(x1)
    x1 = Activation('linear')(x1)

    # rotate the two 128-sample branches by the estimated phase
    cos1 = Lambda(cal1)(x1)
    sin1 = Lambda(cal2)(x1)
    x11 = Multiply()([input1, cos1])
    x12 = Multiply()([input2, sin1])
    x21 = Multiply()([input2, cos1])
    x22 = Multiply()([input1, sin1])
    y1 = Add()([x11, x12])
    y2 = Subtract()([x21, x22])
    y1 = Reshape(target_shape=(128, 1), name='reshape1')(y1)
    y2 = Reshape(target_shape=(128, 1), name='reshape2')(y2)
    x11 = concatenate([y1, y2])
    x3 = Reshape(target_shape=(128, 2, 1), name='reshape3')(x11)

    # spatial feature
    x3 = Conv2D(75, (8, 2), padding='valid', activation="relu", name="conv1_1",
                kernel_initializer='glorot_uniform')(x3)
    x3 = Conv2D(25, (5, 1), padding='valid', activation="relu", name="conv1_2",
                kernel_initializer='glorot_uniform')(x3)
    # temporal feature
    x4 = Reshape(target_shape=(117, 25), name='reshape4')(x3)
    x4 = keras.layers.GRU(units=128)(x4)

    x = Dense(classes, activation='softmax', name='softmax')(x4)

    model = Model(inputs=[input, input1, input2], outputs=x)
    return model
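
The Lambda/Multiply/Add branch is the phase-parameter estimation and transformation ("PET") part that gives PET-CGDNN its name: fc2 estimates a single phase θ from the flattened frame, and the element-wise products rotate each of the 128 samples by that angle (writing the two 128-sample inputs as I and Q is our reading of input2/input3):

    y1 = I·cos(θ) + Q·sin(θ)
    y2 = Q·cos(θ) − I·sin(θ)

Because this is a pointwise 2-D rotation, the whole branch contributes only the 257 parameters of fc2.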

The per-layer details printed by model.summary():

__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input1 (InputLayer)             (None, 2, 128, 1)    0                                            
__________________________________________________________________________________________________
flatten_1 (Flatten)             (None, 256)          0           input1[0][0]                     
__________________________________________________________________________________________________
fc2 (Dense)                     (None, 1)            257         flatten_1[0][0]                  
__________________________________________________________________________________________________
activation_1 (Activation)       (None, 1)            0           fc2[0][0]                        
__________________________________________________________________________________________________
input2 (InputLayer)             (None, 128, 1)       0                                            
__________________________________________________________________________________________________
lambda_1 (Lambda)               (None, 1)            0           activation_1[0][0]               
__________________________________________________________________________________________________
input3 (InputLayer)             (None, 128, 1)       0                                            
__________________________________________________________________________________________________
lambda_2 (Lambda)               (None, 1)            0           activation_1[0][0]               
__________________________________________________________________________________________________
multiply_1 (Multiply)           (None, 128, 1)       0           input2[0][0]                     
                                                                 lambda_1[0][0]                   
__________________________________________________________________________________________________
multiply_2 (Multiply)           (None, 128, 1)       0           input3[0][0]                     
                                                                 lambda_2[0][0]                   
__________________________________________________________________________________________________
multiply_3 (Multiply)           (None, 128, 1)       0           input3[0][0]                     
                                                                 lambda_1[0][0]                   
__________________________________________________________________________________________________
multiply_4 (Multiply)           (None, 128, 1)       0           input2[0][0]                     
                                                                 lambda_2[0][0]                   
__________________________________________________________________________________________________
add_1 (Add)                     (None, 128, 1)       0           multiply_1[0][0]                 
                                                                 multiply_2[0][0]                 
__________________________________________________________________________________________________
subtract_1 (Subtract)           (None, 128, 1)       0           multiply_3[0][0]                 
                                                                 multiply_4[0][0]                 
__________________________________________________________________________________________________
reshape1 (Reshape)              (None, 128, 1)       0           add_1[0][0]                      
__________________________________________________________________________________________________
reshape2 (Reshape)              (None, 128, 1)       0           subtract_1[0][0]                 
__________________________________________________________________________________________________
concatenate_1 (Concatenate)     (None, 128, 2)       0           reshape1[0][0]                   
                                                                 reshape2[0][0]                   
__________________________________________________________________________________________________
reshape3 (Reshape)              (None, 128, 2, 1)    0           concatenate_1[0][0]              
__________________________________________________________________________________________________
conv1_1 (Conv2D)                (None, 121, 1, 75)   1275        reshape3[0][0]                   
__________________________________________________________________________________________________
conv1_2 (Conv2D)                (None, 117, 1, 25)   9400        conv1_1[0][0]                    
__________________________________________________________________________________________________
reshape4 (Reshape)              (None, 117, 25)      0           conv1_2[0][0]                    
__________________________________________________________________________________________________
gru_1 (GRU)                     (None, 128)          59136       reshape4[0][0]                   
__________________________________________________________________________________________________
softmax (Dense)                 (None, 11)           1419        gru_1[0][0]                      
==================================================================================================
Total params: 71,487
Trainable params: 71,487
Non-trainable params: 0

The per-layer parameter counts can be verified by hand:
  • fc2 (Dense): 256 × 1 + 1 = 257
  • conv1_1 (Conv2D): (8 × 2 × 1) × 75 + 75 = 1275
  • conv1_2 (Conv2D): (5 × 1 × 75) × 25 + 25 = 9400
  • gru_1 (GRU): 3 × (25 + 128 + 1) × 128 = 59,136
  • softmax (Dense): 128 × 11 + 11 = 1419
Summing these gives 71,487, matching Total params above.
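
In particular, the GRU count follows the standard Keras 2.x formula (a sketch; it assumes the default reset_after=False, consistent with the 59,136 reported above — TF2's default reset_after=True would give 59,520 instead):

# 3 gates (update, reset, candidate), each with input weights,
# recurrent weights and one bias vector (reset_after=False)
input_dim, units = 25, 128
print(3 * (input_dim * units + units * units + units))  # 59136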
Notes:
1. When TensorFlow is used as the backend, the data format defaults to channels_last.
2. We directly reuse the conclusion from the article "How to compute the parameter count (parameters) and computation (FLOPs) of GRU and CNN networks": FLOPs = params × H × W × 2.
Taking the results computed above as examples:
  • fc2: the formula gives 257 × 1 × 1 × 2 = 514, while an exact count gives 512 — not much difference.
  • conv1_1: 1275 × 121 × 1 × 2 = 308,550 versus an exact 281,325 — not much difference.
  • conv1_2: 9400 × 117 × 1 × 2 = 2,199,600 versus an exact 2,190,825 — not much difference.
Therefore, although we did not find a dedicated FLOPs formula for the GRU, we try applying the same formula to it as well, and summing over all layers then gives the model's total FLOPs, as sketched below.
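
A minimal sketch of that total (our arithmetic, not the original figures; the per-layer H and W are read off the summary, treating the GRU's 117 time steps as H is our assumption, and parameter-free layers are ignored):

# Approximate per-layer FLOPs via FLOPs ≈ params × H × W × 2
layers = {
    'fc2':     (257,   1,   1),
    'conv1_1': (1275,  121, 1),
    'conv1_2': (9400,  117, 1),
    'gru_1':   (59136, 117, 1),   # assumption: H = 117 time steps
    'softmax': (1419,  1,   1),
}
total = 0
for name, (params, h, w) in layers.items():
    flops = params * h * w * 2
    total += flops
    print('{:8s} {:>12,}'.format(name, flops))
print('total    {:>12,}'.format(total))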
