Apple’s machine learning (ML) teams have quietly flexed their muscle with the release of a new ML framework built for Apple Silicon. MLX, or ML Explore, arrives after being tested over the summer and is now available through GitHub.

Machine learning for Apple Silicon

In a post on X, Awni Hannun of Apple’s ML team calls the software “…an efficient machine learning framework specifically designed for Apple silicon (i.e. your laptop!)”

The idea is that it streamlines training and deployment of ML models for researchers who use Apple hardware. MLX is a NumPy-like array framework designed for efficient and flexible machine learning on Apple’s processors. This isn’t a consumer-facing tool; it equips developers with what appears to be a powerful environment within which to build ML models. The company also seems to have worked to embrace the languages developers want to use, rather than force a language on them – and it apparently built powerful LLM tools in the process.

Familiar to developers

MLX’s design is inspired by existing frameworks such as PyTorch, Jax, and ArrayFire. However, MLX adds support for a unified memory model, which means arrays live in shared memory and operations can be performed on any of the supported device types without copying data. The team explains: “The Python API closely follows NumPy with a few exceptions. MLX also has a fully featured C++ API, which closely follows the Python API.”

Notes accompanying the release also say: “The framework is intended to be user-friendly, but still efficient to train and deploy models…
We intend to make it easy for researchers to extend and improve MLX with the goal of quickly exploring new ideas.”

Pretty good at first glance

At first glance, MLX seems relatively good and (as explained on GitHub) is equipped with several features that set it apart, including the use of familiar APIs, and also:

Composable function transformations: MLX has composable function transformations for automatic differentiation, automatic vectorization, and computation graph optimization.

Lazy computation: Computations in MLX are lazy. Arrays are only materialized when needed.

Dynamic graph construction: Computation graphs in MLX are built dynamically. Changing the shapes of function arguments does not trigger slow compilations, and debugging is simple and intuitive.

Multi-device: Operations can run on any of the supported devices (currently, the CPU and GPU).

Unified memory: Under the unified memory model, arrays in MLX live in shared memory. Operations on MLX arrays can be performed on any of the supported device types without moving data.

What it can already achieve

Apple has provided a collection of examples of what MLX can do. These appear to confirm the company now has a highly efficient language model, powerful tools for image generation using Stable Diffusion, and highly accurate speech recognition. This tallies with claims made earlier this year, and with some speculation concerning infinite virtual world creation for future Vision Pro experiences. Examples include:

Training a Transformer language model, or fine-tuning one with LoRA.

Text generation with Mistral.

Image generation with Stable Diffusion.

Speech recognition with Whisper.

Developers, developers…

Ultimately, Apple seems to want to democratize machine learning. “MLX is designed by machine learning researchers for machine learning researchers,” the team explains. In other words, Apple has recognized the need to build open, easy-to-use development environments for machine learning in order to nurture further work in that space.
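To give a flavor of what “composable function transformations” means in practice, here is a minimal pure-Python sketch. This is not MLX code: it approximates derivatives with finite differences purely for illustration, whereas MLX (like JAX, one of its stated inspirations) performs true automatic differentiation. The point is the pattern: a transform takes a function and returns a new function, so transforms can be stacked.

```python
# Illustrative sketch of a composable "grad" function transformation.
# NOT MLX code: finite differences stand in for MLX's automatic
# differentiation, purely to show the compose-a-function pattern.

def grad(f, h=1e-5):
    """Return a new function approximating the derivative of f."""
    def df(x):
        return (f(x + h) - f(x - h)) / (2 * h)
    return df

def cube(x):
    return x ** 3

dcube = grad(cube)         # first derivative, ~3x^2
ddcube = grad(grad(cube))  # transforms compose: second derivative, ~6x

print(dcube(2.0))   # ~12.0
print(ddcube(2.0))  # ~12.0
```

Because each transform returns an ordinary function, differentiation, vectorization, and graph optimization can be layered in any order, which is the design MLX’s release notes describe.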
That MLX lives on Apple Silicon is also important, given that Apple’s processors now run across all its products, including the Mac, iPhone, and iPad. The use of the GPU, CPU, and – conceivably, at some point – the Neural Engine on those chips could translate into on-device execution of ML models (a win for privacy) with performance other processors cannot match, at least among edge devices.

Is it too little, too late?

Given the big buzz that emerged around OpenAI’s ChatGPT when it appeared around this time last year, is Apple really late to the party? I don’t think so. The company has clearly decided to focus on equipping ML researchers with the best tools it can make, including powerful M3 Macs on which to build models. Now it wants to translate that attention into viable, human-focused AI tools for the rest of us to enjoy. It is much too early to declare Apple defeated in an AI industry war that has really only just begun.

Please follow me on Mastodon, or join me in the AppleHolic’s bar & grill and Apple Discussions groups on MeWe.