Ryujin 3.5 (480p - 720p)
| Benchmark | Ryujin 3.5 (6B active) | LLaMA 3 (8B dense) | GPT-3.5 Turbo | | :--- | :--- | :--- | :--- | | | 72.4% | 66.5% | 69.8% | | HumanEval (Code) | 68.2% | 62.1% | 64.5% | | Inference Speed (t/s) | 110 t/s | 85 t/s | 90 t/s | | VRAM (4-bit) | 18 GB | 6 GB | N/A (Closed) |
Works best with vLLM for production (supports MoE expert parallelism) or llama.cpp (with MoE kernels) for CPU inference. Ryujin 3.5 vs. The Competition | Feature | Ryujin 3.5 | Mixtral 8x7B | DeepSeek-V2 | | :--- | :--- | :--- | :--- | | Active Params | 6B | 12B | 21B | | Total Params | 35B | 47B | 236B | | Expert Count | 16 | 8 | 160 | | Context Window | 256k | 32k | 128k | | License | Apache 2.0 | Apache 2.0 | MIT | ryujin 3.5
Note: The MMLU score is impressive for its active parameter count, rivaling models twice its size. 1. Local Code Generation Because it activates coding-specific experts only when parsing Python or Rust, Ryujin 3.5 avoids "cross-talk" contamination (where math logic interferes with string parsing). This leads to fewer hallucinations in git diff suggestions. 2. Multilingual Routing Ryujin 3.5 dedicates two experts to non-English Latin scripts (Spanish, French, German) and one expert to CJK (Chinese, Japanese, Korean). For a Japanese prompt ("Ryujin" means Dragon God), the router correctly sends tokens to the CJK expert + the general syntax expert. 3. Retrieval-Augmented Generation (RAG) The 256k context window allows you to load a vector database result set directly into the prompt. Ryujin 3.5's sparse attention mechanism pays computational "attention" only to relevant chunks, ignoring filler text. How to Run Ryujin 3.5 (Practical Guide) Assuming this model follows open-source weights (Hugging Face Transformers compatible), here is the optimal setup: | Benchmark | Ryujin 3
prompt = "Explain the significance of the Dragon God in Shinto mythology." inputs = tokenizer(prompt, return_tensors="pt").to("cuda") outputs = model.generate(**inputs, max_new_tokens=512) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ignoring filler text.
from transformers import AutoModelForCausalLM, AutoTokenizer import torch model_name = "ryujin-3.5-35b-moe" tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained( model_name, device_map="auto", torch_dtype=torch.float16, load_in_4bit=True # Critical for MoE memory savings )