Patent attributes
Data transformation caching in an artificial intelligence infrastructure that includes one or more storage systems and one or more graphical processing unit (‘GPU’) servers, including:
    identifying, in dependence upon one or more machine learning models to be executed on the GPU servers, one or more transformations to apply to a dataset;
    generating, in dependence upon the one or more transformations, a transformed dataset;
    storing, within one or more of the storage systems, the transformed dataset;
    receiving a plurality of requests to transmit the transformed dataset to one or more of the GPU servers; and
    responsive to each request, transmitting, from the one or more storage systems to the one or more GPU servers without re-performing the one or more transformations on the dataset, the transformed dataset.
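The claim describes a caching pattern: transformations are chosen based on the model to be run, applied once, persisted, and then served to GPU servers on every subsequent request without being re-computed. The following is a minimal sketch of that idea, assuming an in-memory store in place of the storage systems and toy transformation functions; the names TransformCache, MODEL_TRANSFORMS, normalize, and tokenize are illustrative assumptions, not terms from the patent.

```python
"""Sketch of data transformation caching keyed by (dataset, transformations)."""

import hashlib
import json


def normalize(records):
    # Example transformation: scale numeric values into [0, 1].
    lo, hi = min(records), max(records)
    span = (hi - lo) or 1.0
    return [(x - lo) / span for x in records]


def tokenize(records):
    # Example transformation: split text records into tokens.
    return [str(x).split() for x in records]


# Hypothetical mapping from a model type to the transformations it requires.
MODEL_TRANSFORMS = {
    "regression": [normalize],
    "language": [tokenize],
}


class TransformCache:
    """Stores transformed datasets so repeated requests skip re-transformation."""

    def __init__(self):
        self._store = {}  # stands in for the one or more storage systems

    def _key(self, dataset, transforms):
        payload = json.dumps(dataset) + "|" + ",".join(t.__name__ for t in transforms)
        return hashlib.sha256(payload.encode()).hexdigest()

    def get_transformed(self, dataset, model_type):
        # Identify transformations in dependence upon the model to be executed.
        transforms = MODEL_TRANSFORMS[model_type]
        key = self._key(dataset, transforms)
        if key not in self._store:
            # Generate and store the transformed dataset exactly once.
            result = dataset
            for t in transforms:
                result = t(result)
            self._store[key] = result
        # Serve each request without re-performing the transformations.
        return self._store[key]


if __name__ == "__main__":
    cache = TransformCache()
    raw = [3.0, 7.0, 11.0]
    # Repeated requests for the same transformed dataset hit the cached copy.
    first = cache.get_transformed(raw, "regression")
    second = cache.get_transformed(raw, "regression")
    print(first, second)
```

In a real deployment the cached result would live on the storage systems and be transmitted to the GPU servers on request; the key point the sketch shows is that the transformation step runs once per (dataset, transformation) pair rather than once per request.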