Nvidia: Nvidia and IBM, in partnership with university researchers, recently published a study on a solution that integrates GPUs directly with SSDs. Called Big Accelerator Memory (BaM), the technology may play a key role in speeding up machine learning and artificial intelligence tasks in the future.
In short, BaM will allow GPUs to read and write small amounts of data directly on an SSD, on demand, without routing that traffic through the CPU, where it would otherwise lose efficiency. In the full study, the researchers explain: “The purpose of BaM is to extend the memory capacity of the GPU and improve the effective storage access bandwidth.”
To accomplish this, BaM would use part of the GPU’s memory as a software-managed cache, backed by a software library that fetches data from the SSD. The GPU itself would orchestrate the entire streaming process, resulting in faster data processing and accelerating machine learning training and other demanding workloads.
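The idea of a software-managed cache that fetches storage blocks on demand can be sketched in a few lines. This is a conceptual illustration only, not Nvidia’s implementation: BaM runs this logic on the GPU against a real SSD, whereas here an ordinary Python dictionary stands in for the GPU-resident cache and a local file stands in for the SSD; the class name, block size, and miss counter are all illustrative choices.

```python
BLOCK_SIZE = 4096  # bytes per cached block (illustrative value)

class SoftwareManagedCache:
    """Toy model of on-demand, cache-mediated storage access."""

    def __init__(self, storage_path):
        self.storage_path = storage_path  # stands in for the SSD
        self.cache = {}                   # block index -> bytes ("GPU memory")
        self.misses = 0

    def read_block(self, block_idx):
        # On a hit, data is served straight from the cache; on a miss,
        # the block is fetched on demand from storage, analogous to BaM
        # pulling data from the SSD without involving the CPU.
        if block_idx not in self.cache:
            self.misses += 1
            with open(self.storage_path, "rb") as f:
                f.seek(block_idx * BLOCK_SIZE)
                self.cache[block_idx] = f.read(BLOCK_SIZE)
        return self.cache[block_idx]
```

A second read of the same block is then served from the cache without touching storage again, which is the efficiency gain the researchers describe.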
The promising technology may be the first alternative to Microsoft’s recently released proprietary DirectStorage solution, which promises better performance for games and applications through a similar process. For now, we must wait for more information about BaM’s development to gauge its efficiency.