NVIDIA creates an AI that makes the drivers for its graphics cards
To this day we still do not know whether NVIDIA is a company that makes artificial intelligence hardware capable of displaying graphics, or the other way around. This computing discipline has become popular in recent years and has grown exponentially, thanks to the implementation of units capable of executing matrix calculations in few clock cycles. The latest idea from Jensen Huang's company? Creating graphics drivers with AI. Are we facing a problem in the making or the definitive solution to a common one?
Drivers, or graphics controllers, are one of the biggest headaches for graphics card manufacturers and especially for GPU designers. A poor-quality driver can mean much worse results in benchmarks and games, which is fatal for business. After all, no one can sell a piece of hardware at a price its performance does not justify: you would be offering less value for more money.
Graphics drivers and AI?
One of the myths we keep reading is that artificial intelligence will end up eliminating many jobs, such as computer programming. What it will actually do in the field of software is eliminate repetitive, mechanical tasks. Talking about graphics drivers with AI does not mean that an AI is in charge of writing these programs; rather, it means that the management of processes and GPU resources is handled by a program of this type, run by the driver.
A GPU is made up of dozens of cores that run thousands of execution threads at once. The problem is that its control unit is not like that of a central processor: management has to be carried out by the driver, which is in charge of organizing the whole process. Think of it as a huge parcel company that has to assign the different shipments to their corresponding destinations as efficiently as possible.
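The parcel-company idea can be sketched in a few lines. This is only an illustrative greedy heuristic (always hand the next work item to the least loaded core), not NVIDIA's actual scheduling algorithm; the function name and job format are invented for the example.

```python
import heapq

def schedule(jobs, n_cores):
    """Greedily assign each job (its cost in arbitrary units) to the
    currently least loaded core, keeping loads balanced."""
    # Min-heap of (accumulated_load, core_id)
    heap = [(0, core) for core in range(n_cores)]
    heapq.heapify(heap)
    assignment = {core: [] for core in range(n_cores)}
    for job in jobs:
        load, core = heapq.heappop(heap)   # least loaded core so far
        assignment[core].append(job)
        heapq.heappush(heap, (load + job, core))
    return assignment

print(schedule([5, 3, 8, 2, 7, 4], n_cores=2))
# → {0: [5, 2, 7], 1: [3, 8, 4]}  (loads 14 and 15, nearly even)
```

A real driver juggles far more constraints (memory residency, dependencies, latency hiding), but the core decision, which unit gets the next batch of work, is the same kind of problem.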
In any contemporary graphics chip, keeping the registers occupied and using as many cores as possible at the same time is crucial for performance. Poor organization can mean that not all available resources are used, forcing, for example, increases in clock speed and with them higher power consumption.
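The register pressure point can be made concrete with a toy occupancy estimate: each core has a fixed register file, so the more registers each thread needs, the fewer threads can be resident at once. The numbers below are hypothetical, not those of any real GPU.

```python
# Hypothetical per-core limits for illustration only
REGISTERS_PER_CORE = 65536
MAX_THREADS_PER_CORE = 2048

def occupancy(regs_per_thread: int) -> float:
    """Fraction of the maximum resident threads actually achievable
    when each thread consumes regs_per_thread registers."""
    resident = min(MAX_THREADS_PER_CORE, REGISTERS_PER_CORE // regs_per_thread)
    return resident / MAX_THREADS_PER_CORE

print(occupancy(32))  # → 1.0  (register file covers all 2048 threads)
print(occupancy(64))  # → 0.5  (only 1024 threads fit; half the slots idle)
```

Halving occupancy means half the threads available to hide memory latency, which is exactly the kind of shortfall that then has to be papered over with higher clocks.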
NVIDIA would already be using it in its RTX 40
One of the behaviors we have seen in the new generation of NVIDIA graphics cards is the way they handle games that require few resources. What the RTX 30 and earlier did was power on a few cores, and when the work piled up they ended up running at Boost speeds and spinning up the fans. The new GPUs, thanks to better driver management, instead have these titles use more of the chip's resources at a lower clock speed, and thus with a smaller impact on the electricity bill for the performance they provide.
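Why "more cores at a lower clock" wins on the electricity bill follows from the standard dynamic-power relation, roughly P ∝ f · V², where voltage has to rise along with frequency. The figures below are illustrative only, not measurements of any RTX card.

```python
def dynamic_power(cores: int, freq_ghz: float, volts: float) -> float:
    """Toy dynamic-power model: power grows with core count,
    frequency, and the square of voltage (constant factor omitted)."""
    return cores * freq_ghz * volts ** 2

# Few cores pushed to Boost clocks and voltage (hypothetical values)
few_fast = dynamic_power(cores=16, freq_ghz=2.6, volts=1.05)
# Many cores at a relaxed clock and lower voltage (hypothetical values)
many_slow = dynamic_power(cores=64, freq_ghz=0.9, volts=0.75)

print(few_fast)   # ≈ 45.9 arbitrary units
print(many_slow)  # ≈ 32.4 arbitrary units, for more cores × GHz of work
```

In this sketch the wide-and-slow configuration delivers more raw work (64 × 0.9 vs 16 × 2.6 core-GHz) while drawing noticeably less power, which matches the behavior described above.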
In any case, one of the keys could also be the use of AI-assisted compilers to generate better binary code for the GPU on the graphics card. It often happens that the code produced at compile time is not the most efficient, and certain combinations of high-level instructions do not translate into the best instruction sequences for the processor, although this affects all programs, not just drivers.
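A classic example of a compiler choosing a better instruction sequence is collapsing a multiply followed by a dependent add into a single fused multiply-add (FMA), an operation GPUs execute natively. The sketch below is a minimal peephole pass over an invented instruction format; the mnemonics and tuples are illustrative, not any real GPU ISA.

```python
def peephole_fma(code):
    """Rewrite ('MUL', dst, a, b) followed by ('ADD', dst2, dst, c)
    into a single ('FMA', dst2, a, b, c) instruction."""
    out, i = [], 0
    while i < len(code):
        if (i + 1 < len(code)
                and code[i][0] == "MUL" and code[i + 1][0] == "ADD"
                and code[i + 1][2] == code[i][1]):  # ADD reads MUL's result
            _, dst, a, b = code[i]
            _, dst2, _, c = code[i + 1]
            out.append(("FMA", dst2, a, b, c))
            i += 2  # consumed both instructions
        else:
            out.append(code[i])
            i += 1
    return out

prog = [("MUL", "r0", "a", "b"), ("ADD", "r1", "r0", "c")]
print(peephole_fma(prog))  # → [('FMA', 'r1', 'a', 'b', 'c')]
```

Two instructions become one, with one fewer intermediate rounding step; spotting these patterns across a whole shader is the sort of mechanical search where machine learning could plausibly help a compiler.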
In summary, NVIDIA's commitment to AI has been paying off for gaming graphics cards for some time, and given AMD's complacency on this front and its driver problems, it is possible that the green team will pull even further ahead in terms of performance.