One of the most classic problems in computer vision, 3D reconstruction aims to build a 3D model of an object or scene from a series of views. In recent years, reconstruction techniques based on Neural Radiance Fields (NeRFs) have introduced new ways to model objects beyond traditional mesh- and point-cloud-based representations, enabling more photorealistic rendering. However, these techniques were too slow for practical settings, taking on the order of hours even on high-end GPUs. To overcome this limitation, new techniques for fast scene reconstruction, such as DirectVoxGO, have been developed. Another issue is that NeRFs were initially unable to separate the foreground from the background and struggled with unbounded 360° scenes until the emergence of techniques such as NeRF++. Our method extends DirectVoxGO, which is limited to bounded scenes, with ideas from NeRF++ and incorporates elements of the neural hashing approach employed by other works. Our technique improved photorealism compared with DirectVoxGO and Plenoxels on a subset of the LF dataset by at least 2%, 8%, and 8% on average in the PSNR, SSIM, and LPIPS metrics, respectively, while also being an order of magnitude faster than NeRF++.