mlx-community (MLX Community) - Hugging Face
A community org for MLX model weights that run on Apple Silicon. This organization hosts ready-to-use models compatible with: mlx-lm, a Python package for LLM text generation and fine-tuning with MLX; mlx-swift-examples, a Swift package to run MLX models; and mlx-vlm, a package for inference and fine-tuning of Vision Language Models (VLMs) using MLX. These are pre-converted weights.
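The mlx-community weights described above can be used directly with mlx-lm. A minimal sketch, assuming mlx-lm is installed (`pip install mlx-lm`) and the machine is Apple Silicon; the model name is one of the org's repos, and the prompt and token budget are placeholders:

```shell
# Sketch: run inference with an mlx-community model via the mlx-lm CLI.
# Apple Silicon only; downloads the weights from the Hugging Face Hub
# on first use.
mlx_lm.generate \
  --model mlx-community/Llama-3.3-70B-Instruct-4bit \
  --prompt "Explain LoRA in one sentence." \
  --max-tokens 64
```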
Llama-3.3-70B-Instruct-4bit LoRA Fine-Tuning: No Change (or ...) - GitHub
I'm struggling to fine-tune mlx-community/Llama-3.3-70B-Instruct-4bit using LoRA (mlx_lm lora v0.21.0). The model either doesn't change its output at all (scale=1.0) or becomes completely unstable, with gibberish output and NaN/inf values in the logits, when I increase the scale even slightly.
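The report above refers to mlx-lm's LoRA entry point. A hedged sketch of that kind of invocation; the data path and hyperparameter values are illustrative placeholders, not taken from the report:

```shell
# Sketch: LoRA fine-tuning with mlx-lm (Apple Silicon only).
# ./data is assumed to contain train.jsonl / valid.jsonl in the
# format mlx-lm expects; batch size, iterations, and learning rate
# are example values.
mlx_lm.lora \
  --model mlx-community/Llama-3.3-70B-Instruct-4bit \
  --train \
  --data ./data \
  --batch-size 1 \
  --iters 600 \
  --learning-rate 1e-5
```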
FineTuning with MLX | Matt Williams - technovangelist.com
Rough notes about MLX in preparation for a video. MLX is Apple Silicon only. I asked in the last video which fine-tune framework to tackle first. Results: 24 for mlx, 6 for unsloth, 4 for axolotl. You have to use Hugging Face format models; fine-tuning starting with a GGUF model won't work. You can use them for inference, but not for fine-tuning. mlx-lm seems to be a sample app first but it's the
mlx-community dbrx-instruct-4bit - Hugging Face
This model was converted to MLX format from databricks/dbrx-instruct using mlx-lm version b80adbc, after DBRX support was added by Awni Hannun. Refer to the original model card for more details on the model.
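Conversions like the one above are done with mlx-lm's convert command. A minimal sketch, assuming a recent mlx-lm install on Apple Silicon; `-q` enables quantization (4-bit by default):

```shell
# Sketch: convert a Hugging Face model to quantized MLX format.
# This is the kind of command behind the pre-converted mlx-community
# weights; the resulting directory can be pushed to the Hub.
mlx_lm.convert \
  --hf-path databricks/dbrx-instruct \
  -q
```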