Abstract: Comparison of algorithms and methods is the cornerstone of any research: it allows one to demonstrate that the proposed approach improves upon previous work. In areas such as computer vision and machine learning, a huge number of open data sets have been created on which researchers can test their algorithms and compete with each other. In computer graphics and photorealistic rendering, the situation is currently different: open sets of scenes on which different researchers, using different rendering systems, could obtain matching (or at least close) images do not exist. As a result, so-called “cherry-picking” is practiced in scientific papers: a careful selection of scenes and lighting conditions designed to demonstrate the advantages of the developed algorithm. Such an approach greatly reduces the practical significance of research: even if a new method works well on a certain class of scenes, this does not mean it will work in other cases. For this reason, many rendering systems still do not go beyond the basic algorithms, not trusting the results of published research. In this paper, we begin to fill this gap. We have created a special set of scenes (a benchmark) that allows us to evaluate the performance of light transport integration in various situations and thus to reveal the strengths and weaknesses of rendering systems, and of the algorithms they use, under various conditions. We recreated many scenes from well-known computer graphics papers and added several scenes based on our own experience. Our goal is to achieve the most complete coverage possible while using as few scenes and as little renderer functionality as possible, so that the comparison can be easily reproduced in any existing system. To validate our approach, we conducted a pilot comparison of the speed of light transport integration among four popular products for 3D Studio Max (VRay, Corona, Octane, and Hydra Renderer) on various scenarios.
Although three of the four systems are closed commercial products, we managed to obtain identical or similar images for all scenes, which confirms the viability of our proposed approach.
https://lppm3.ru/files/journal/XLV/MathMontXLV-Frolov.pdf