Models trained on synthetic data can, in some cases, be more accurate than models trained on real data, which could sidestep some of the privacy, copyright, and ethical concerns that come with using real data.
MIT researchers have developed a machine-learning technique that accurately captures and models the underlying acoustics of a scene from only a limited number of sound recordings. In this image, a sound emitter is marked by a red dot. The colors show the sound volume if a listener were to stand at different locations — yellow is louder and blue is quieter.
Yilun Du, a PhD student and MIT CSAIL affiliate, discusses potential applications of generative art beyond the explosion of AI-generated images that sent the web into a creative frenzy.