We develop a Wilsonian renormalization group approach to understanding practical neural networks of finite width and depth. This approach becomes tractable in the limit of a large number of neurons per layer. Keeping the leading finite-width corrections, we obtain recursion equations for the two-point and four-point functions, encoding how these observables evolve with increasing network depth. These recursions are solvable in the large-depth limit for general activation functions. We explain this approach in detail using the simplest realistic class of deep neural networks, called multilayer perceptrons (MLPs). Time permitting, we will also comment on how this RG flow makes concrete the heuristic picture of representation coarse-graining in deep learning. Based on upcoming work with Boris Hanin and Sho Yaida.
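As a rough illustration of the kind of depth recursion described above, the sketch below iterates the leading-order (infinite-width) recursion for the diagonal two-point function of an MLP, K^(l+1) = C_b + C_W E_{z~N(0,K^(l))}[σ(z)²], estimated by Monte Carlo. The specific choices here (ReLU activation with C_W = 2 and C_b = 0, which sit at a fixed point of this recursion) are illustrative assumptions, not taken from the talk; the finite-width corrections and the four-point function are omitted.

```python
import numpy as np

def kernel_recursion(K, C_W=2.0, C_b=0.0, activation=np.tanh,
                     n_samples=200_000, rng=None):
    """One depth step of the infinite-width two-point-function recursion:
    K -> C_b + C_W * E_{z ~ N(0, K)}[activation(z)^2],
    with the Gaussian expectation estimated by Monte Carlo."""
    if rng is None:
        rng = np.random.default_rng(0)
    z = rng.normal(0.0, np.sqrt(K), size=n_samples)
    return C_b + C_W * np.mean(activation(z) ** 2)

# Iterate over depth for a single input (diagonal kernel entry).
# With ReLU and C_W = 2, E[relu(z)^2] = K/2, so K is preserved layer by layer.
relu = lambda z: np.maximum(z, 0.0)
rng = np.random.default_rng(0)
K = 1.0
for layer in range(10):
    K = kernel_recursion(K, C_W=2.0, C_b=0.0, activation=relu, rng=rng)
```

For other activations or couplings, the same loop shows the kernel flowing toward a fixed point (or to zero/infinity) with depth, which is the infinite-width piece of the RG picture.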