wish helps you First I read pictures from folders and subfolders. Then I convert the images to grayscale and resize them to 100x200. I want to classify my images into 6 classes. When I create my model I can't use Conv2D because I get a dimension error, but when I use Conv1D there is no error and the network trains. I want to use Conv2D because my data is images. What is my problem? , You need to add the channels dimension to your input data:
import numpy as np
X_train = np.expand_dims(X_train, axis=3)
X_test = np.expand_dims(X_test, axis=3)
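For illustration, a minimal sketch with made-up array sizes matching the question (10 grayscale images of 100x200; the array contents are placeholders):

```python
import numpy as np

# Hypothetical data: 10 grayscale images, height 100, width 200.
X_train = np.zeros((10, 100, 200))
print(X_train.shape)   # (10, 100, 200) -- no channels axis, so Conv2D rejects it

# Add the trailing channels axis that Conv2D expects.
X_train = np.expand_dims(X_train, axis=3)
print(X_train.shape)   # (10, 100, 200, 1) -- matches input_shape=(100, 200, 1)
```

Conv1D "worked" only because it happened to interpret the width axis as channels, which is not what you want for image data.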

Difference between tf.layers.conv2d and tf.contrib.slim.conv2d
Date : March 29 2020, 07:55 AM
this will help I'm trying to convert a network from tfslim's conv2d to tf.layers.conv2d, since tf.layers looks like the more supported and future-proof option. The function signatures are fairly similar, but is there something algorithmically different between the two? I'm getting different output tensor dimensions than expected. , The likely cause is the padding default: slim.conv2d defaults to padding='SAME', while tf.layers.conv2d defaults to padding='valid', so pass it explicitly: x = tf.layers.conv2d(x, 256, 3, padding='same')
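To see how the padding default alone changes the output dimensions, here is a sketch of TensorFlow's output-size arithmetic (the helper function is illustrative, not a TensorFlow API):

```python
import math

def conv2d_out_size(n, kernel, stride=1, padding="valid"):
    # Spatial output size along one axis, following the TensorFlow convention:
    #   'same'  -> ceil(n / stride)
    #   'valid' -> ceil((n - kernel + 1) / stride)
    if padding == "same":
        return math.ceil(n / stride)
    return math.ceil((n - kernel + 1) / stride)

# slim.conv2d defaults to 'SAME'; tf.layers.conv2d defaults to 'valid'.
print(conv2d_out_size(32, 3, padding="same"))   # 32
print(conv2d_out_size(32, 3, padding="valid"))  # 30
```

With a 3x3 kernel, each 'valid' convolution shaves 2 off every spatial dimension, which compounds across layers and explains the unexpected tensor shapes.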

Tensorflow: why tf.nn.conv2d runs faster than tf.layers.conv2d?
Tag : python , By : quasarkitten
Date : March 29 2020, 07:55 AM
I hope this helps . If you follow the chain of function calls, you will find that tf.layers.conv2d() ends up calling tf.nn.conv2d(), so no matter which one you use, tf.nn.conv2d() is invoked; it is simply faster when you call it directly. You can use traceback.print_stack() to verify this for yourself. NOTE: this does not mean they are one and the same; select the function based on your needs, since tf.layers.conv2d() also performs various other tasks.
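A minimal sketch of that verification technique, using traceback.format_stack() (which returns the same frames that traceback.print_stack() writes to stderr) with stand-in functions rather than real TensorFlow calls:

```python
import traceback

def high_level_layer():
    # Stands in for tf.layers.conv2d(), which delegates to the low-level op.
    return low_level_op()

def low_level_op():
    # Stands in for tf.nn.conv2d(); capture the call chain at this point.
    return traceback.format_stack()

stack = high_level_layer()
# The captured stack shows the high-level wrapper in the call chain.
print(any("high_level_layer" in frame for frame in stack))  # True
```

Placing the same capture inside a monkey-patched tf.nn.conv2d would show tf.layers.conv2d in the chain in the same way.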

Negative dimension size caused by subtracting 6 from 1 for 'conv1d_2/convolution/Conv2D' (op: 'Conv2D') with input shape
Date : March 29 2020, 07:55 AM
Does that help The error originates at the second Conv1D because the kernel has become larger than the tensor's temporal dimension. To fix this, either use padding='same' or set kernel_size=1 on the layers after the first Conv1D.
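The arithmetic behind the error can be sketched as follows (the layer sizes are assumed from the error message, which subtracts 6 from 1):

```python
def conv1d_valid_out(length, kernel_size):
    # With padding='valid', each Conv1D shrinks the length by kernel_size - 1.
    return length - kernel_size + 1

# Hypothetical stack: the first Conv1D leaves a length of 6, and both layers
# use kernel_size=6.
after_first = conv1d_valid_out(6, 6)             # 1
after_second = conv1d_valid_out(after_first, 6)  # -4 -> "negative dimension"
print(after_first, after_second)
```

With padding='same' the length is preserved at every layer, so the subtraction never goes negative.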

Assuming the order Conv2d>ReLU>BN, should the Conv2d layer have a bias parameter?
Date : March 29 2020, 07:55 AM
may help you . Yes, if the order is conv2d > ReLU > BatchNorm, then having a bias parameter in the convolution can help. To show that, assume there is a bias in the convolution layer and compare what happens with both of the orders mentioned in the question; the idea is to see whether the bias is useful in each case. Consider a single pixel from one of the convolution's output maps, and let x_1, ..., x_k be the corresponding inputs (in vectorised form) from the batch (batch size == k). We can write the convolution output as
Wx + b  # with W the convolution weights, b the bias
Order conv2d > BatchNorm > ReLU: batch normalisation subtracts the batch mean mu, so the bias cancels:
(Wx_i - mu)/sigma ==> becomes (Wx_i + b - mu - b)/sigma, i.e. no change.
Order conv2d > ReLU > BatchNorm: we now normalise BN(ReLU(Wx + b)). Suppose ReLU zeroes some of the samples and lets the others (indexed by s) through. The batch mean becomes
(1/k)(0 + ... + 0 + SUM_s (Wx_s + b)) = some_term + b/k
and the variance contains terms of the form
const * ((0 - some_term - b/k)^2 + ... + (Wx_i + b - some_term - b/k)^2 + ...)
Expanding the squares,
(Wx_i + b - some_term - b/k)^2 = some_other_term + some_factor * W * (b/k) * x_i
so b no longer cancels: it enters through cross terms with the inputs, and the bias genuinely changes the normalised output.
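A quick numerical check of the argument above, using a hypothetical scalar "pixel" (W, b, and the batch values are made up for illustration):

```python
import numpy as np

# Toy setup: x_i are scalar inputs for one output pixel across a batch of 4.
x = np.array([1.0, -1.0, 2.0, -0.5])
W, b = 1.5, 0.7                       # hypothetical weight and bias

def batchnorm(z, eps=1e-5):
    return (z - z.mean()) / np.sqrt(z.var() + eps)

relu = lambda z: np.maximum(z, 0)

# conv > BN: the batch mean absorbs b, so the bias cancels exactly.
print(np.allclose(batchnorm(W * x), batchnorm(W * x + b)))              # True

# conv > ReLU > BN: ReLU zeroes some samples, so b no longer cancels.
print(np.allclose(batchnorm(relu(W * x)), batchnorm(relu(W * x + b))))  # False
```

The same cancellation argument is why bias=False is commonly used for convolutions that are immediately followed by batch normalisation.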

ValueError: Dimensions must be equal, but are 1 and 3 for 'Conv2D' (op: 'Conv2D') with input shapes: [1,400,400,1], [1,3
Date : March 29 2020, 07:55 AM

