All convolutions inside a dense block are ReLU-activated and use batch normalization. Channel-wise concatenation is only feasible if the height and width of the feature maps remain unchanged, so all convolutions inside a dense block use a stride of 1. Pooling layers are inserted between dense blocks for further dimensionality reduction.
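The structure above can be sketched in PyTorch; this is a minimal illustration, not a full DenseNet, and the layer sizes (growth rate, channel counts) are illustrative assumptions. Each convolution uses stride 1 with padding 1 so height and width are preserved, allowing channel-wise concatenation, while an average-pooling transition between blocks halves the spatial dimensions.

```python
import torch
from torch import nn

def conv_block(in_channels, growth_rate):
    # BN -> ReLU -> 3x3 convolution; stride 1 and padding 1
    # keep the height and width unchanged
    return nn.Sequential(
        nn.BatchNorm2d(in_channels), nn.ReLU(),
        nn.Conv2d(in_channels, growth_rate,
                  kernel_size=3, stride=1, padding=1))

class DenseBlock(nn.Module):
    def __init__(self, num_convs, in_channels, growth_rate):
        super().__init__()
        # each conv sees all previously concatenated channels
        self.blocks = nn.ModuleList(
            conv_block(in_channels + i * growth_rate, growth_rate)
            for i in range(num_convs))

    def forward(self, x):
        for blk in self.blocks:
            y = blk(x)
            # channel-wise concatenation: valid because H and W match
            x = torch.cat((x, y), dim=1)
        return x

def transition(in_channels, out_channels):
    # between dense blocks: 1x1 conv shrinks channels,
    # average pooling halves height and width
    return nn.Sequential(
        nn.BatchNorm2d(in_channels), nn.ReLU(),
        nn.Conv2d(in_channels, out_channels, kernel_size=1),
        nn.AvgPool2d(kernel_size=2, stride=2))

x = torch.randn(1, 3, 8, 8)
blk = DenseBlock(2, 3, 10)
y = blk(x)           # 3 + 2*10 = 23 channels, spatial size unchanged
z = transition(23, 10)(y)  # 10 channels, spatial size halved
```

Running the example, `y` has shape `(1, 23, 8, 8)` and `z` has shape `(1, 10, 4, 4)`, showing how dense blocks grow the channel dimension while transition layers reduce both channels and spatial size.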