Recently, sinusoidal neural networks have exhibited promising results in representing low-dimensional objects in computer graphics, such as images, implicit surfaces, and radiance fields. There is, however, a lack of understanding of the capacity and architecture a neural network needs to represent a given signal. In this work, we study sinusoidal neural networks from a Fourier series perspective and link the initialization and training schemes with the frequencies generated by the model, in order to obtain an appropriate capacity for the model to learn the signal. We explore the relationship between sinusoidal networks and Fourier series to propose a training procedure that bounds the network's frequencies during training. We also present a pruning scheme that reduces the number of parameters based on their influence on the reconstruction. Additionally, we propose an algorithm that learns the appropriate input frequencies to accurately and compactly represent the underlying signal.