A large LLM created by combining two fine-tuned Llama 2 70B models, Xwin and Euryale, into a single 120B model.
Credits to
– [@chargoddard](https://huggingface.co/chargoddard) for developing [mergekit](https://github.com/cg123/mergekit), the framework used to perform the merge (see the illustrative config sketch after this list).
– [@Undi95](https://huggingface.co/Undi95) for helping with the merge ratios.
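
For reference, a mergekit merge of this kind is driven by a YAML config that stacks layer ranges taken from each source model (a "passthrough" merge). The sketch below is illustrative only: the repo IDs and layer ranges are assumptions for demonstration, not the actual recipe or ratios used for this model.

```yaml
# Illustrative passthrough (layer-stacking) merge config for mergekit.
# Model IDs and layer ranges are placeholders, NOT the actual recipe.
slices:
  - sources:
      - model: Xwin-LM/Xwin-LM-70B-V0.1   # assumed repo ID for Xwin
        layer_range: [0, 40]
  - sources:
      - model: Sao10K/Euryale-1.3-L2-70B  # assumed repo ID for Euryale
        layer_range: [20, 60]
  - sources:
      - model: Xwin-LM/Xwin-LM-70B-V0.1
        layer_range: [40, 80]
merge_method: passthrough
dtype: float16
```

A config like this is run with `mergekit-yaml config.yml ./merged-model`, which writes the combined checkpoint to the output directory. Stacking overlapping layer ranges from two 70B models is how the parameter count grows to roughly 120B.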
#merge