While recent work on text-conditional 3D object generation has shown promising results, the state-of-the-art methods typically require multiple GPU-hours to produce a single sample. This is in stark contrast to state-of-the-art generative image models, which produce samples in a number of seconds or minutes. In this paper, we explore an alternative method for 3D object generation which produces 3D models in only 1-2 minutes on a single GPU. Our method first generates a single synthetic view using a text-to-image diffusion model, and then produces a 3D point cloud using a second diffusion model which conditions on the generated image. While our method still falls short of the state-of-the-art in terms of sample quality, it is one to two orders of magnitude faster to sample from, offering a practical trade-off for some use cases. We release our pre-trained point cloud diffusion models, as well as evaluation code and models, at this https URL.
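
The two-stage pipeline described above can be summarized by the following minimal Python sketch. The objects `text_to_image_model` and `pointcloud_model`, and their `.sample(...)` methods, are hypothetical stand-ins for the two diffusion models, not the released API.

```python
def text_to_point_cloud(prompt, text_to_image_model, pointcloud_model):
    """Generate a 3D point cloud from a text prompt via an intermediate image.

    Both model arguments are hypothetical wrappers around diffusion models;
    any objects exposing a compatible .sample(...) method would work here.
    """
    # Stage 1: sample a single synthetic view of the object from the prompt
    # using a text-to-image diffusion model.
    synthetic_view = text_to_image_model.sample(prompt)

    # Stage 2: sample a 3D point cloud from a second diffusion model that
    # conditions on the generated image.
    return pointcloud_model.sample(image=synthetic_view)
```

Because each stage is a single diffusion sampling pass on one GPU, the end-to-end latency stays in the 1-2 minute range reported above, rather than the multiple GPU-hours required by optimization-based approaches.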