Building a custom Lasagne layer whose output is the elementwise product (input × weight), not the dot product
I have an input sequence of shape (seq_length (19) × features (21)) that I feed to a neural network.
I need a layer that performs an elementwise multiplication of the input with the weights (not a dot product), so the output shape should be (#units, input_shape). In my case the input shape is (19 × 21), so the output of the operation for a single unit is also (19 × 21), and with 8 units the layer output should be (8, 19, 21).
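For clarity, here is the operation I want, sketched in NumPy for a single sample (the variable names are just illustrative, not the Lasagne code):

```python
import numpy as np

x = np.random.rand(19, 21)       # one input sample: seq_length x features
W = np.random.rand(8, 19, 21)    # one (19, 21) weight matrix per unit (8 units)

# Broadcast the sample across the units axis and multiply elementwise.
out = x[np.newaxis, :, :] * W
print(out.shape)                 # (8, 19, 21)
```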
How can I do this using Lasagne layers? I checked the Lasagne documentation on how to build custom layers (link). Following it, my custom layer is as follows:
class ElementwiseMulLayer(lasagne.layers.Layer):
    def __init__(self, incoming, num_units, W=lasagne.init.Normal(0.01), **kwargs):
        super(ElementwiseMulLayer, self).__init__(incoming, **kwargs)
        self.num_inputs = self.input_shape[1]
        self.num_units = num_units
        self.W = self.add_param(W, (self.num_inputs, num_units), name='W')

    def get_output_for(self, input, **kwargs):
        # return T.dot(input, self.W)
        result = input * self.W
        return result

    def get_output_shape_for(self, input_shape):
        return (input_shape[0], self.num_units, self.num_inputs)
Here's the NN:
l_in_2 = lasagne.layers.InputLayer(shape=(None, 9*19*21))
l_reshape_l_in_2 = lasagne.layers.ReshapeLayer(l_in_2, (-1, 9,19,21))
l_reshape_l_in_2_EL = lasagne.layers.ExpressionLayer(l_reshape_l_in_2, lambda X: X[:,0,:,:], output_shape='auto')
l_reshape_l_in_2_EL = lasagne.layers.ReshapeLayer(l_reshape_l_in_2_EL, (-1, 19*21))
l_out1 = ElementwiseMulLayer(l_reshape_l_in_2_EL, num_units=8, name='my_EW_layer')
l_out1 = lasagne.layers.ReshapeLayer(l_out1, (-1, 8*399))
l_out = lasagne.layers.DenseLayer(l_out1,
                                  num_units=19*21,
                                  W=lasagne.init.Normal(),
                                  nonlinearity=lasagne.nonlinearities.rectify)
It's worth noting that the batch size is 64. The NN summary:
| Layer | Layer_name          | output_shape      | # parameters |
|-------|---------------------|-------------------|--------------|
| 0     | InputLayer          | (None, 3591)      | 0            |
| 1     | ReshapeLayer        | (None, 9, 19, 21) | 0            |
| 2     | ExpressionLayer     | (None, 19, 21)    | 0            |
| 3     | ReshapeLayer        | (None, 399)       | 0            |
| 4     | ElementwiseMulLayer | (None, 8, 399)    | 3192         |
| 5     | ReshapeLayer        | (None, 3192)      | 3192         |
| 6     | DenseLayer          | (None, 399)       | 1277199      |
Now, when I try to build the NN, I receive the following error:
ValueError: GpuElemwise. Input dimension mis-match. Input 1 (indices start at 0) has shape[0] == 399, but the output's size on that axis is 64.
Apply node that caused the error: GpuElemwise{mul,no_inplace}(GpuReshape{2}.0, my_dot_layer.W)
Toposort index: 23
Inputs types: [GpuArrayType<None>(float32, matrix), GpuArrayType<None>(float32, matrix)]
Inputs shapes: [(64, 399), (399, 8)]
Inputs strides: [(14364, 4), (32, 4)]
Inputs values: ['not shown', 'not shown']
Outputs clients: [[GpuReshape{2}(GpuElemwise{mul,no_inplace}.0, TensorConstant{[ -1 3192]})]]
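If I reproduce just the shapes from the traceback in NumPy (which follows the same elementwise broadcasting rules as Theano here), the mismatch is easy to see:

```python
import numpy as np

x = np.random.rand(64, 399)   # the flattened batch as it reaches the layer
W = np.random.rand(399, 8)    # the weight shape created by add_param

try:
    x * W                     # axes (64 vs 399, 399 vs 8) cannot broadcast
except ValueError as e:
    print(e)
```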
I tried to set W as follows:
self.W = self.add_param(W, (self.num_inputs,num_units, self.num_inputs), name='W')
but then received a similar error again:
ValueError: GpuElemwise. Input dimension mis-match. Input 1 (indices start at 0) has shape[1] == 8, but the output's size on that axis is 64.
Apply node that caused the error: GpuElemwise{mul,no_inplace}(InplaceGpuDimShuffle{x,0,1}.0, my_EW_layer.W)
Toposort index: 26
Inputs types: [GpuArrayType<None>(float32, (True, False, False)), GpuArrayType<None>(float32, 3D)]
Inputs shapes: [(1, 64, 399), (399, 8, 399)]
Inputs strides: [(919296, 14364, 4), (12768, 1596, 4)]
Inputs values: ['not shown', 'not shown']
Outputs clients: [[GpuReshape{2}(GpuElemwise{mul,no_inplace}.0, TensorConstant{[ -1 3192]})]]
I don't have a clear idea of how to overcome this issue. How can I fix it?