MLX models are optimized for macOS on Apple silicon (M1, M2, M3, and later). Experience blazing-fast inference on your Mac!
nexa infer NexaAI/Qwen3-0.6B-bf16-MLX
nexa infer NexaAI/gemma-3-4b-it-8bit-MLX
You can also run MLX models published by the mlx-community organization on Hugging Face.
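Community-converted models follow the same `nexa infer <org>/<repo>` pattern shown above. A sketch (the specific model name below is illustrative, not taken from this document):

```shell
# Hypothetical example: run a quantized MLX model from the
# mlx-community organization on Hugging Face.
# The model name is an assumption for illustration only.
nexa infer mlx-community/Llama-3.2-1B-Instruct-4bit
```

The model is pulled from Hugging Face on first use, so substitute any mlx-community repository that fits your hardware.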