Core ML is Apple's foundational machine learning framework, built on top of Accelerate, BNNS, and Metal Performance Shaders. It provides machine learning models that can be integrated into iOS applications and supports image analysis, natural language processing, speech-to-text conversion, and sound analysis. Because Core ML performs all computation on-device, applications can use it without a network connection or calls to external APIs.
It is used across Apple's products, including Siri, Camera, and QuickType. Core ML is designed to optimize on-device performance by leveraging the CPU, GPU, and Neural Engine while keeping memory footprint and power consumption minimal. Applications use the Core ML application programming interfaces (APIs) to make predictions and fine-tune models, all on the user's device. Model formats that can be converted to Core ML include Caffe, Keras, XGBoost, Scikit-learn, MXNet, LibSVM, and Torch7.
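As an illustration, a minimal Swift sketch of the prediction path might look as follows; the output feature name "label" and the caller-supplied model URL are assumptions, since real feature names come from the model's own description:

```swift
import CoreML

// Minimal sketch of the Core ML prediction path. The output feature
// name "label" is hypothetical; real input/output names come from
// model.modelDescription.
func classify(modelURL: URL, features: [String: Any]) throws -> MLFeatureValue? {
    let config = MLModelConfiguration()
    config.computeUnits = .all  // allow CPU, GPU, and Neural Engine

    // Load a compiled (.mlmodelc) model and run a single prediction.
    let model = try MLModel(contentsOf: modelURL, configuration: config)
    let input = try MLDictionaryFeatureProvider(dictionary: features)
    let output = try model.prediction(from: input)
    return output.featureValue(for: "label")
}
```

In practice, Xcode also generates a typed wrapper class for each bundled model, so most applications call a strongly typed prediction method rather than the dictionary-based API shown here.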
Apple released Core ML at WWDC 2017. Core ML was designed to preserve user privacy by supporting on-device machine learning. The initial Core ML framework supported deep, recurrent, and convolutional neural networks, as well as linear models and tree ensembles.
On December 5, 2017, Google released its own tool for converting AI models built for mobile devices with TensorFlow Lite into file formats supported by Core ML.
At WWDC 2018, Apple released Core ML 2. Core ML 2 improved on the original framework by reducing model size, improving inference speed, and allowing developers to create custom Core ML models.
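Custom behavior is commonly added by conforming a class to the MLCustomLayer protocol and referencing it by name from the converted model. A minimal sketch, assuming a hypothetical elementwise scaling layer (the class name ScaleLayer and the "scale" parameter key are illustrative):

```swift
import CoreML

// Hypothetical custom layer that multiplies every element by a
// constant read from the layer's parameters in the model spec.
@objc(ScaleLayer)
class ScaleLayer: NSObject, MLCustomLayer {
    private var scale = 1.0

    required init(parameters: [String: Any]) throws {
        // "scale" is an assumed parameter key for this sketch.
        if let s = parameters["scale"] as? Double { scale = s }
        super.init()
    }

    func setWeightData(_ weights: [Data]) throws {
        // This layer carries no learned weights.
    }

    func outputShapes(forInputShapes inputShapes: [[NSNumber]]) throws -> [[NSNumber]] {
        // Elementwise op: output shapes match input shapes.
        return inputShapes
    }

    func evaluate(inputs: [MLMultiArray], outputs: [MLMultiArray]) throws {
        for (input, output) in zip(inputs, outputs) {
            for i in 0..<input.count {
                output[i] = NSNumber(value: input[i].doubleValue * scale)
            }
        }
    }
}
```

The required methods cover initialization from the model's parameters, weight loading, shape propagation, and CPU evaluation; an optional Metal encode method can be implemented for GPU execution.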
In June 2019, Apple released Core ML 3. Core ML 3 added support for neural networks with more than 100 layer types, as well as on-device training of machine learning models. Core ML 3 also added support for the following model specifications: NearestNeighbors.proto, ItemSimilarityRecommender.proto, SoundAnalysisPreprocessing.proto, and LinkedModel.proto.
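On-device training is exposed through the MLUpdateTask API that shipped with Core ML 3. A minimal sketch, assuming a compiled, updatable model and a prepared batch of labeled examples (the URLs and training data are hypothetical):

```swift
import CoreML

// Sketch: fine-tune an updatable model on-device (Core ML 3 and later).
// `modelURL` must point at a compiled, updatable .mlmodelc bundle and
// `trainingData` at labeled examples; both are assumed to exist.
func update(modelAt modelURL: URL,
            with trainingData: MLBatchProvider,
            saveTo updatedURL: URL) throws {
    let task = try MLUpdateTask(
        forModelAt: modelURL,
        trainingData: trainingData,
        configuration: MLModelConfiguration(),
        completionHandler: { context in
            // context.model holds the updated parameters; persist them
            // so the next launch loads the personalized model.
            try? context.model.write(to: updatedURL)
        })
    task.resume()
}
```

Because the update runs entirely on the user's device, training data such as personal usage examples never has to leave the device, consistent with Core ML's privacy-focused design.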