The current era of artificial intelligence began in 2012, when scientists found that feeding an algorithm thousands of images could train it to recognise objects, loosely mimicking the way neurons work in the brain. That discovery produced a dramatic leap in accuracy.
Now scientists are attempting something new: training the same kind of algorithms to turn 2D images into 3D scenes. This approach has the potential to transform many industries, including video games and robotics. Many experts believe the technique will allow machines to perceive the world much the way humans do.
The technique, known as "neural rendering", uses a neural network to take in 2D images and generate 3D scenery from them. The idea grew out of a merging of concepts circulating in computer graphics and artificial intelligence, but interest exploded in 2020, when scientists at UC Berkeley and Google showed that a neural network could capture a scene photorealistically in 3D just by viewing 2D images of it.
The technique models the way light travels through the air, computing the density and colour of points in 3D space. This makes it possible to convert 2D images into a 3D scene that can be viewed from any angle. At its core is the same kind of neural network as the 2012 image-recognition algorithms, which analyse the pixels of a 2D image. The new algorithms convert 2D pixels into their 3D counterparts, known as voxels.
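The density-and-colour computation described above can be sketched with classic volume rendering: sample points along a camera ray, ask a function for the density and colour at each point, and blend the samples front to back. In this minimal sketch, `field` is a hypothetical stand-in (a fuzzy orange sphere) for the trained neural network, which in the real method learns that function from the 2D input images:

```python
import numpy as np

def field(points):
    """Stand-in for the trained network: maps 3D points to (density, colour).
    Here a hypothetical fuzzy orange sphere centred at the origin."""
    dist = np.linalg.norm(points, axis=-1)
    density = 10.0 * np.exp(-4.0 * dist**2)              # dense near the centre
    colour = np.tile([1.0, 0.4, 0.1], (len(points), 1))  # constant orange
    return density, colour

def render_ray(origin, direction, near=0.0, far=4.0, n_samples=64):
    """Alpha-composite density/colour samples along one camera ray."""
    t = np.linspace(near, far, n_samples)
    points = origin + t[:, None] * direction
    density, colour = field(points)
    delta = t[1] - t[0]                        # spacing between samples
    alpha = 1.0 - np.exp(-density * delta)     # opacity of each segment
    # Transmittance: how much light survives to reach each sample.
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = trans * alpha
    return (weights[:, None] * colour).sum(axis=0)  # final pixel colour

# One ray looking along +z, straight through the sphere.
pixel = render_ray(np.array([0.0, 0.0, -2.0]), np.array([0.0, 0.0, 1.0]))
```

Rendering one such ray per pixel, from any chosen camera position, is what lets the learned scene be viewed from every angle.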
For anyone who works in computer graphics, this technique is a major improvement. Creating a detailed, realistic 3D scene usually requires a great deal of manual work, and therefore a great deal of time. The new technique can generate 3D scenes from ordinary images in minutes. It also offers a new way to create and manipulate synthetic scenes.