- What does 1x1 convolution mean in a neural network?
A 1x1 conv creates channel-wise dependencies at a negligible cost. This is especially exploited in depthwise-separable convolutions. Nobody said anything about this, so I'm writing it as a comment since I don't have enough reputation here.
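To make that concrete, here is a minimal PyTorch sketch (the channel counts and sizes are illustrative, not from any particular paper) showing that a 1x1 conv mixes channels at every pixel while leaving the spatial dimensions alone, and how cheap it is compared to a 3x3 conv:

```python
import torch
import torch.nn as nn

# A 1x1 conv maps 256 input channels to 64 output channels at every
# spatial position independently: it is a per-pixel linear layer across
# channels, so the spatial dimensions are untouched.
pointwise = nn.Conv2d(in_channels=256, out_channels=64, kernel_size=1)

x = torch.randn(1, 256, 32, 32)
print(pointwise(x).shape)  # torch.Size([1, 64, 32, 32])

# Parameter cost is tiny compared to a 3x3 conv over the same channels:
params_1x1 = sum(p.numel() for p in pointwise.parameters())               # 256*64 + 64  = 16448
params_3x3 = sum(p.numel() for p in nn.Conv2d(256, 64, 3).parameters())   # 256*64*9 + 64 = 147520
print(params_1x1, params_3x3)
```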
- What is the difference between Conv1D and Conv2D?
I will be using a PyTorch perspective; however, the logic remains the same. When using Conv1d(), we have to keep in mind that we are most likely going to work with 2-dimensional inputs, such as one-hot-encoded DNA sequences or black-and-white pictures. The only difference between the more conventional Conv2d() and Conv1d() is that the latter uses a 1-dimensional kernel, as shown in the picture.
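A short sketch of that difference (the shapes are made up for illustration): Conv1d slides its kernel along one axis, Conv2d along two, and you can read this directly off the weight shapes:

```python
import torch
import torch.nn as nn

# Conv1d: the kernel slides along ONE axis (e.g. sequence position).
# Input shape: (batch, channels, length); the 4 channels here stand in
# for the one-hot rows of a DNA sequence.
conv1d = nn.Conv1d(in_channels=4, out_channels=8, kernel_size=5)
seq = torch.randn(1, 4, 100)
print(conv1d(seq).shape)    # torch.Size([1, 8, 96])
print(conv1d.weight.shape)  # torch.Size([8, 4, 5])   -- 1-D kernel

# Conv2d: the kernel slides along TWO axes (height and width).
# Input shape: (batch, channels, H, W); one channel = black-and-white.
conv2d = nn.Conv2d(in_channels=1, out_channels=8, kernel_size=5)
img = torch.randn(1, 1, 28, 28)
print(conv2d(img).shape)    # torch.Size([1, 8, 24, 24])
print(conv2d.weight.shape)  # torch.Size([8, 1, 5, 5]) -- 2-D kernel
```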
- How do bottleneck architectures work in neural networks?
We define a bottleneck architecture as the type found in the ResNet paper, where [two 3x3 conv layers] are replaced by [one 1x1 conv, one 3x3 conv, and another 1x1 conv layer]. I understand that t…
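For reference, here is a sketch of such a bottleneck block in PyTorch, using the 256 → 64 → 256 channel sizes from the ResNet paper's example; normalization and activation placement are simplified:

```python
import torch
import torch.nn as nn

class Bottleneck(nn.Module):
    """1x1 reduce -> 3x3 -> 1x1 expand, with an identity shortcut."""
    def __init__(self, channels=256, mid=64):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(channels, mid, kernel_size=1, bias=False),        # 1x1: shrink channels (cheap)
            nn.BatchNorm2d(mid), nn.ReLU(inplace=True),
            nn.Conv2d(mid, mid, kernel_size=3, padding=1, bias=False),  # 3x3 on the thin representation
            nn.BatchNorm2d(mid), nn.ReLU(inplace=True),
            nn.Conv2d(mid, channels, kernel_size=1, bias=False),        # 1x1: expand back
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(x + self.block(x))  # residual (identity) connection

x = torch.randn(1, 256, 14, 14)
print(Bottleneck()(x).shape)  # torch.Size([1, 256, 14, 14])
```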
- Pooling vs. stride for downsampling
Pooling and stride can both be used to downsample an image. Let's say we have a 4x4 image, like below, and a 2x2 filter. How do we decide whether to use 2x2 pooling vs. a stride of 2?
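Here is a quick PyTorch comparison of the two options on that 4x4 example; both halve the spatial size, but the key practical difference is that pooling is parameter-free while the strided conv's filter is learned:

```python
import torch
import torch.nn as nn

x = torch.randn(1, 1, 4, 4)   # the 4x4 "image" from the question

# Option 1: 2x2 max pooling -- fixed, parameter-free downsampling.
pool = nn.MaxPool2d(kernel_size=2)
print(pool(x).shape)          # torch.Size([1, 1, 2, 2])

# Option 2: 2x2 conv with stride 2 -- learned downsampling; the network
# trains the filter weights, so it decides what information to keep.
conv = nn.Conv2d(1, 1, kernel_size=2, stride=2)
print(conv(x).shape)          # torch.Size([1, 1, 2, 2])
```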
- Difference between strided and non-strided convolutions?
conv = conv_2d(strides=...) I want to know in what sense a non-strided convolution differs from a strided convolution. I know how convolutions with strides work, but I am not familiar with the non-strided ones.
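The question's snippet uses tflearn's conv_2d; here is the same idea sketched in plain PyTorch instead (shapes purely illustrative). A non-strided conv is just stride=1, so the kernel visits every position; a strided conv computes the same kind of dot products at fewer positions:

```python
import torch
import torch.nn as nn

x = torch.randn(1, 3, 8, 8)

# Non-strided (stride=1, the default): the kernel moves one pixel at a
# time, so with padding the spatial size is preserved -- 8x8 stays 8x8.
plain = nn.Conv2d(3, 16, kernel_size=3, stride=1, padding=1)
print(plain(x).shape)    # torch.Size([1, 16, 8, 8])

# Strided (stride=2): the kernel jumps two pixels per step, so the
# output is spatially smaller -- 8x8 becomes 4x4.
strided = nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1)
print(strided(x).shape)  # torch.Size([1, 16, 4, 4])
```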
- Difference between Conv and FC layers?
What is the difference between conv layers and FC layers? Why can't I use conv layers instead of FC layers?
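One way to see the relationship: an FC layer is equivalent to a conv whose kernel covers the entire input, which is exactly why conv layers *can* replace FC layers in fully convolutional networks. A PyTorch sketch (the sizes are illustrative):

```python
import torch
import torch.nn as nn

# An FC layer flattens its input and connects every input to every output:
fc = nn.Linear(256 * 7 * 7, 10)

# A conv whose kernel covers the whole 7x7 feature map computes exactly
# the same function.  Copy the FC weights into the conv to show it:
conv = nn.Conv2d(256, 10, kernel_size=7)
conv.weight.data = fc.weight.data.view(10, 256, 7, 7)
conv.bias.data = fc.bias.data

x = torch.randn(1, 256, 7, 7)
out_fc = fc(x.flatten(1))        # shape (1, 10)
out_conv = conv(x).flatten(1)    # shape (1, 10)
print(torch.allclose(out_fc, out_conv, atol=1e-5))  # True
```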
- Where should I place dropout layers in a neural network?
I've updated the answer to clarify that in the work by Park et al., the dropout was applied after the ReLU on each CONV layer. I do not believe they investigated the effect of adding dropout following max-pooling layers.
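As a sketch of that placement (the dropout rates and channel sizes are my own illustrative choices, not taken from Park et al.):

```python
import torch.nn as nn

# Dropout placed after the ReLU of each conv layer, the placement the
# answer above attributes to Park et al.; whether it also helps after
# max pooling is left open there.
model = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1),
    nn.ReLU(inplace=True),
    nn.Dropout2d(p=0.1),          # CONV -> ReLU -> Dropout
    nn.MaxPool2d(2),
    nn.Conv2d(32, 64, kernel_size=3, padding=1),
    nn.ReLU(inplace=True),
    nn.Dropout2d(p=0.1),
    nn.MaxPool2d(2),
)
```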
- What is the definition of a feature map (aka activation map)?
Typical-looking activations on the first CONV layer (left) and the 5th CONV layer (right) of a trained AlexNet looking at a picture of a cat. Every box shows an activation map corresponding to some filter. Notice that the activations are sparse (most values are zero; in this visualization they are shown in black) and mostly local.
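If you want to grab that kind of activation map yourself, a forward hook on a pretrained AlexNet works; this sketch assumes a recent torchvision and uses random input as a stand-in for the cat picture:

```python
import torch
from torchvision import models

# Capture the activation (feature) maps of AlexNet's first conv layer
# with a forward hook; each of the 64 channels in the hooked output is
# one activation map for one filter.
model = models.alexnet(weights=models.AlexNet_Weights.DEFAULT).eval()

feature_maps = {}
def hook(module, inputs, output):
    feature_maps["conv1"] = output.detach()

model.features[0].register_forward_hook(hook)  # features[0] is the first Conv2d

x = torch.randn(1, 3, 224, 224)  # stand-in for the cat picture
with torch.no_grad():
    model(x)
print(feature_maps["conv1"].shape)  # torch.Size([1, 64, 55, 55])
```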