First I read pictures from folders and sub-folders. Then I convert the images to grayscale and resize them to 100×200. I want to classify my images into 6 classes. When I try to create my model, I can't use Conv2D because I get a dimension error, but when I use Conv1D there is no error and the network works. I want to use Conv2D because my data is images. What is my problem? , You need to add the channels dimension to your input training data:
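Assuming the grayscale images are stacked into a NumPy array of shape (num_images, 100, 200), a minimal sketch of adding the missing channels axis (the array name `images` and the batch size are illustrative):

```python
import numpy as np

# Grayscale images stacked as (num_images, height, width)
images = np.zeros((8, 100, 200))

# Conv2D expects (batch, height, width, channels); grayscale has 1 channel
images = np.expand_dims(images, axis=-1)
print(images.shape)  # (8, 100, 200, 1)
```

With the extra axis in place, `Conv2D` layers accept the data directly and the `input_shape` of the first layer becomes `(100, 200, 1)`.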
I'm trying to convert a network from tf-slim's conv2d to tf.layers.conv2d, since tf.layers looks like the more supported and future-proof option. The function signatures are fairly similar, but is there something algorithmically different between the two? I'm getting different output tensor dimensions than expected.
x = tf.layers.conv2d(x, 256, 3, padding='same')
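The dimension mismatch typically comes from the padding defaults: slim's conv2d defaults to padding='SAME' (and a ReLU activation), while tf.layers.conv2d defaults to padding='valid' with no activation, hence the explicit `padding='same'` above. A quick sketch of the output-size arithmetic (pure Python, no TensorFlow required):

```python
import math

def conv_out_size(in_size, kernel, stride=1, padding="valid"):
    """Spatial output size of a convolution for 'same' vs 'valid' padding."""
    if padding == "same":
        # Input is zero-padded so the kernel can slide over every position
        return math.ceil(in_size / stride)
    # 'valid': no padding, the kernel must fit entirely inside the input
    return (in_size - kernel) // stride + 1

print(conv_out_size(32, 3, padding="same"))   # 32
print(conv_out_size(32, 3, padding="valid"))  # 30
```

So a 3×3 convolution over a 32×32 feature map keeps the size at 32 under 'same' padding but shrinks it to 30 under 'valid', which matches the kind of off-by-kernel difference seen when porting between the two APIs.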
Tensorflow: why does tf.nn.conv2d run faster than tf.layers.conv2d?
If you follow the chain of function calls, you will find that tf.layers.conv2d() eventually calls tf.nn.conv2d(), so no matter which you use, tf.nn.conv2d() ends up being called; it is simply faster to call it directly. You can use traceback.print_stack() to verify this for yourself. Note that this does not mean the two are one and the same: choose the function based on your needs, as there are various other tasks undertaken by tf.layers.conv2d().
Negative dimension size caused by subtracting 6 from 1 for 'conv1d_2/convolution/Conv2D' (op: 'Conv2D') with input shape
The error originates at the second Conv1D, because by that point the kernel has become larger than the tensor's length dimension. To fix this, either use padding='same' or set kernel_size=1 for the Conv1D layers after the first.
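A sketch of the shape arithmetic behind the error message: with 'valid' padding each Conv1D shrinks the length by kernel_size − 1, and once the length drops to 1 any larger kernel produces a negative output dimension (the kernel size of 6 below matches the "subtracting 6 from 1" in the error; the stride is illustrative):

```python
def conv1d_valid_out(length, kernel_size, stride=1):
    # With 'valid' padding the output length is (length - kernel_size)//stride + 1
    return (length - kernel_size) // stride + 1

# After earlier convolutions shrink the length to 1, a size-6 kernel
# yields a negative dimension -> the "Negative dimension size" error
print(conv1d_valid_out(1, 6))  # -4
print(conv1d_valid_out(1, 1))  # 1: kernel_size=1 keeps the length valid
```

`padding='same'` avoids the shrinkage entirely, since the output length then stays at ceil(length / stride) regardless of kernel size.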
Assuming the order Conv2d->ReLU->BN, should the Conv2d layer have a bias parameter?
Yes, if the order is conv2d -> ReLU -> BatchNorm, then having a bias parameter in the convolution can help. To show that, assume there is a bias in the convolution layer, and compare what happens with each of the orders you mention in the question. The idea is to see whether the bias is useful in each case. Consider a single pixel from one of the convolution's output channels, and let x_1, ..., x_k be the corresponding inputs (in vectorised form) from the batch (batch size == k). We can write the convolution as
Wx + b  # with W the convolution weights, b the bias
For the order conv2d -> BatchNorm (no ReLU in between), the batch-norm mean absorbs the bias: (Wx_i - mu)/sigma becomes (Wx_i + b - (mu + b))/sigma, i.e. no change, so the bias is redundant there. With a ReLU between the convolution and the batch norm, however, the bias shifts which pre-activations get clipped to zero, so it can no longer be folded into mu and does affect the output.
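A numerical check of this cancellation (random data; `normalize` stands in for batch norm's (x − mean)/std step over the batch, and the bias value is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
Wx = rng.normal(size=100)  # pre-activation values W x_i across a batch
b = 3.7                    # an arbitrary constant bias

def normalize(v):
    # The mean/std step of batch norm over the batch dimension
    return (v - v.mean()) / v.std()

# conv -> BN: adding a constant bias before normalisation changes nothing
assert np.allclose(normalize(Wx), normalize(Wx + b))

# conv -> ReLU -> BN: the clipping breaks the cancellation
relu = lambda v: np.maximum(v, 0.0)
print(np.allclose(normalize(relu(Wx)), normalize(relu(Wx + b))))  # False
```

This mirrors the algebra above: the bias only survives normalisation when a nonlinearity sits between the convolution and the batch norm.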