Apple Releases Open Source MLX Framework for Efficient Machine Learning on Apple Silicon
Apple recently released MLX, or ML Explore, the company’s machine learning (ML) framework for Apple Silicon computers. The framework is specifically designed to simplify the process of training and running ML models on computers powered by Apple’s M1, M2, and M3 series chips. Apple says that MLX features a unified memory model. The framework is open source, and the company has shared examples showing how machine learning enthusiasts can run it on their own laptops and desktops.
According to details shared by Apple on the code hosting platform GitHub, the MLX framework offers a C++ API alongside a Python API that is closely modelled on NumPy, the Python library for scientific computing. Developers can also use higher-level packages to build and run more complex models on their computers.
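As a rough illustration of how closely the Python API tracks NumPy, the snippet below creates arrays and runs a couple of operations. It is a minimal sketch based on the MLX documentation; the specific calls (mx.array, mx.random.normal, mx.eval) and the lazy-evaluation behaviour are assumptions drawn from that documentation rather than details given in this article.

```python
# Minimal sketch of MLX's NumPy-like Python API (assumes MLX is installed, e.g. via pip).
import mlx.core as mx

# Arrays are created much like numpy.array / numpy.random.normal.
a = mx.array([1.0, 2.0, 3.0])
b = mx.random.normal((3,))

# Operations are lazy; mx.eval() forces the computation to run.
c = a + b
d = mx.exp(c).sum()
mx.eval(c, d)

print(c, d)
```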
MLX simplifies the process of training and running ML models on a computer. Previously, developers had to rely on a translator to convert and optimise their models for Apple hardware (using CoreML); with MLX, users of Apple Silicon computers can train and run their models directly on their own devices.
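To give a sense of what training a model directly on an Apple Silicon machine looks like, here is a hypothetical training loop using MLX's neural-network and optimiser packages (mlx.nn and mlx.optimizers). The layer, gradient, and update calls follow the patterns in Apple's published MLX examples, but the tiny model and the random stand-in data are invented purely for illustration.

```python
# Hypothetical training sketch; the model and data are illustrative only.
import mlx.core as mx
import mlx.nn as nn
import mlx.optimizers as optim

class TinyMLP(nn.Module):
    """A two-layer perceptron used only to demonstrate the training loop."""
    def __init__(self):
        super().__init__()
        self.l1 = nn.Linear(16, 32)
        self.l2 = nn.Linear(32, 1)

    def __call__(self, x):
        return self.l2(nn.relu(self.l1(x)))

def loss_fn(model, x, y):
    # Simple mean squared error.
    return mx.mean(mx.square(model(x) - y))

model = TinyMLP()
optimizer = optim.SGD(learning_rate=1e-2)
loss_and_grad = nn.value_and_grad(model, loss_fn)

# Random stand-in data; a real workload would load an actual dataset.
x = mx.random.normal((64, 16))
y = mx.random.normal((64, 1))

for step in range(100):
    loss, grads = loss_and_grad(model, x, y)
    optimizer.update(model, grads)                 # apply the gradients in place
    mx.eval(model.parameters(), optimizer.state)   # force the lazy computation
```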
Apple says that MLX’s design follows other popular frameworks used today, including ArrayFire, Jax, NumPy, and PyTorch. The firm has touted the framework’s unified memory model: MLX arrays live in shared memory, and operations on them can be performed on any supported device type (currently the CPU and GPU) without creating copies of the data.
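In practice, unified memory means the device is a property of the operation rather than of the array. The short sketch below, modelled on the unified-memory example in the MLX documentation, runs the same operation on the CPU and the GPU over the same arrays; the stream argument shown is taken from that documentation, not from this article.

```python
import mlx.core as mx

a = mx.random.normal((4096, 4096))
b = mx.random.normal((4096, 4096))

# Both arrays live in unified memory, so no copies are made when switching devices.
c_cpu = mx.add(a, b, stream=mx.cpu)  # run the operation on the CPU
c_gpu = mx.add(a, b, stream=mx.gpu)  # run the same operation on the GPU
mx.eval(c_cpu, c_gpu)
```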
The company has also shared examples of MLX in action, performing tasks like image generation using Stable Diffusion on Apple Silicon hardware. When generating a batch of images, Apple says that MLX is faster than PyTorch for batch sizes of 6, 8, 12, and 16, delivering up to 40 percent higher throughput.
The tests were conducted on a Mac powered by an M2 Ultra chip, the company’s fastest processor to date — MLX is capable of generating 16 images in 90 seconds, while PyTorch would take around 120 seconds to perform the same task, according to the company.
The video is a Llama v1 7B model implemented in MLX and running on an M2 Ultra.
More here: https://t.co/gXIjEZiJws
* Train a Transformer LM or fine-tune with LoRA
* Text generation with Mistral
* Image generation with Stable Diffusion
* Speech recognition with Whisper
— Awni Hannun (@awnihannun) December 5, 2023
Other examples of MLX in action include generating text using Meta’s open source LLaMA language model, as well as the Mistral large language model. AI and ML researchers can also use OpenAI’s open source Whisper tool to run speech recognition models on their computers using MLX.
The release of Apple’s MLX framework could make ML research and development easier on the company’s hardware, eventually enabling developers to build better tools for apps and services that offer on-device ML features running efficiently on a user’s computer.