This SIGGRAPH 2018 presentation introduces a system for mass-scale material synthesis that uses learning algorithms to enable rapid, intuitive creation and fine-tuning of photorealistic materials without requiring domain expertise.
The video opens with the challenge inherent in creating high-quality photorealistic materials: traditionally, a user engages in a laborious trial-and-error process with a principled shader, adjusting numerous properties and waiting for a new render after each tweak. This process demands both expertise and time, and is ripe for improvement.
The presented system marks a substantial advance in material synthesis, using machine learning to drastically streamline the workflow. The user starts by exploring a gallery of materials and assigning high scores to appealing samples, which the system then uses to recommend a diverse array of new materials matching the user's taste.
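The scoring-and-recommendation loop can be sketched with Gaussian Process Regression, which the video names as the underlying learner. The sketch below is a minimal, hypothetical illustration, not the paper's implementation: the function names, toy 5-dimensional "shader parameter" vectors, and the use of the plain GP posterior mean for ranking are all assumptions made here for clarity.

```python
import numpy as np


def rbf_kernel(A, B, length_scale=1.0):
    """Squared-exponential kernel between rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale ** 2)


def gpr_recommend(X_scored, scores, candidates, top_k=3, noise=1e-4):
    """Fit a GP to the user's scores and rank candidates by predicted score.

    X_scored:   parameter vectors of the gallery samples the user scored
    scores:     the user's scores for those samples
    candidates: parameter vectors of unseen materials to rank
    """
    K = rbf_kernel(X_scored, X_scored) + noise * np.eye(len(X_scored))
    alpha = np.linalg.solve(K, scores)
    pred = rbf_kernel(candidates, X_scored) @ alpha  # GP posterior mean
    order = np.argsort(pred)[::-1]
    return order[:top_k], pred


# Toy demo (hypothetical data): 5-D shader parameter vectors with scores.
rng = np.random.default_rng(0)
X_scored = rng.random((20, 5))
user_scores = rng.random(20)
candidates = rng.random((100, 5))
best, pred = gpr_recommend(X_scored, user_scores, candidates)
```

In this framing, the gallery interaction only ever asks the user for scores, and the regressor generalizes those preferences to the full parameter space.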
The system uses a convolutional neural network (CNN) to predict the appearance of materials almost instantaneously, sidestepping the long render times of classical global-illumination methods. It also offers an intuitive 2D latent space in which users fine-tune materials in real time, guided by a color-coded preference map generated via Gaussian Process Regression; the CNN additionally provides immediate visual feedback on material similarity.
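The color-coded preference map can be pictured as a smooth scalar field over the 2D latent space, evaluated on a grid and fed to a colormap. As a minimal stand-in for the Gaussian Process Regression the video describes, the sketch below uses Nadaraya-Watson kernel regression, which shares the same kernel-weighted-average intuition; the function name, grid layout, and anchor data are assumptions for illustration.

```python
import numpy as np


def preference_map(anchors, scores, grid_res=16, bandwidth=0.2):
    """Smooth preference field over the unit-square latent space.

    anchors: 2D latent coordinates of materials the user has scored
    scores:  the user's scores at those coordinates
    Returns a (grid_res, grid_res) array, e.g. to feed into a colormap.
    """
    xs = np.linspace(0.0, 1.0, grid_res)
    gx, gy = np.meshgrid(xs, xs)
    grid = np.stack([gx.ravel(), gy.ravel()], axis=1)
    # Kernel weight of every anchor at every grid cell.
    d2 = ((grid[:, None, :] - anchors[None, :, :]) ** 2).sum(-1)
    w = np.exp(-0.5 * d2 / bandwidth ** 2)
    # Weighted average of scores (Nadaraya-Watson estimator).
    field = (w * scores).sum(axis=1) / w.sum(axis=1)
    return field.reshape(grid_res, grid_res)


# Hypothetical anchors: a liked material bottom-left, a disliked one top-right.
anchors = np.array([[0.2, 0.2], [0.8, 0.8]])
field = preference_map(anchors, np.array([1.0, 0.0]))
```

Because each grid cell is a convex combination of the anchor scores, the field stays within the score range and shades smoothly between liked and disliked regions, which is exactly the guidance the color-coded map provides during fine-tuning.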
The video demonstrates the system's practicality with a case study on fine-tuning grape materials, highlighting the ease and speed of the adjustments. It also touches on an extended shader supporting more complex features such as procedural textures, showcasing scenes created with the learning, recommendation, and latent space embedding phases.
In closing, the system is heralded as a tool that empowers both novices and experts, suggesting that the combination of multiple learning algorithms can pave the way for further advancements in rapid, real-time material visualization and customization.