Claude Skill

vllm-mlx

by waybarrios

OpenAI- and Anthropic-compatible server for Apple Silicon. Run LLMs and vision-language models (Llama, Qwen-VL, LLaVA) with continuous batching, MCP tool calling, and multimodal support on a native MLX backend at 400+ tok/s. Works with Claude Code.
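Because the server exposes an OpenAI-compatible endpoint, any OpenAI-style client can target it with the standard chat-completions schema. A minimal sketch in Python, assuming a default local address and an illustrative MLX model id (both are assumptions, not taken from the project docs):

```python
import json

# Assumed local endpoint and a hypothetical model id -- adjust to your setup.
BASE_URL = "http://localhost:8000/v1/chat/completions"
MODEL = "mlx-community/Llama-3.2-3B-Instruct-4bit"

# Standard OpenAI chat-completions request body; any OpenAI SDK or plain
# HTTP client can POST this JSON to BASE_URL.
payload = {
    "model": MODEL,
    "messages": [{"role": "user", "content": "Hello"}],
    "stream": False,
}
body = json.dumps(payload)
print(body)
```

With the official `openai` Python SDK, the equivalent is pointing the client at the local server via `base_url`; the payload above is what the SDK serializes under the hood.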

412 stars · 48 forks · Active · Python

Score: 73 (Good)

📊 Score Breakdown

🛡️ Security (30%): 3.0/5
Utility (30%): 3.0/5
🔄 Maintenance (25%): 5.0/5
💎 Uniqueness (15%): 4.0/5

Overall = Security (30%) + Utility (30%) + Maintenance (25%) + Uniqueness (15%). Full methodology →

ℹ️ Details

Category: 💻 Code Execution & Dev Tools
Ecosystem: Claude Skill
Language: Python
Pricing: Free
License: —
Status: Active
Platforms: claude

📈 GitHub Signals

Stars: 412
Forks: 48
Commits (30d): 0
Open Issues: 29
Last commit: 1 month ago

Tags: anthropic, apple-silicon, audio-processing, claude-code, computer-vision, image-understanding, inference, llm, machine-learning, macos, mllm, mlx, multimodal-ai, speech-to-text, stt, text-to-speech, tts, video-understanding, vision-language-model, vllm


Data last verified: 1 month ago.