# llamalora
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the SLERP (spherical linear interpolation) merge method.
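SLERP interpolates between corresponding weight tensors along the arc of a hypersphere rather than along a straight line, which tends to preserve the scale and geometry of the weights better than plain averaging. Below is a minimal NumPy sketch of the core operation on a single tensor pair; the function name and the fallback threshold are illustrative, and mergekit's actual implementation handles normalization and edge cases in its own way.

```python
import numpy as np

def slerp(t: float, v0: np.ndarray, v1: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two flattened weight tensors.

    A minimal sketch, not mergekit's implementation: t = 0 returns v0,
    t = 1 returns v1, intermediate t moves along the great-circle arc.
    """
    v0_flat, v1_flat = v0.ravel(), v1.ravel()
    # Angle between the two weight vectors (on unit directions).
    v0_unit = v0_flat / (np.linalg.norm(v0_flat) + eps)
    v1_unit = v1_flat / (np.linalg.norm(v1_flat) + eps)
    dot = np.clip(np.dot(v0_unit, v1_unit), -1.0, 1.0)
    omega = np.arccos(dot)
    sin_omega = np.sin(omega)
    # Nearly parallel vectors: fall back to plain linear interpolation.
    if abs(sin_omega) < eps:
        return ((1.0 - t) * v0_flat + t * v1_flat).reshape(v0.shape)
    out = (np.sin((1.0 - t) * omega) / sin_omega) * v0_flat \
        + (np.sin(t * omega) / sin_omega) * v1_flat
    return out.reshape(v0.shape)
```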
### Models Merged
The following models were included in the merge:
- /home/xiaoyuanhang/llamalora/Llama-3-8B-Instruct-262k
- /home/xiaoyuanhang/llamalora/Llama3-8B-Chinese-Chat
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
  - sources:
      - model: /home/xiaoyuanhang/llamalora/Llama3-8B-Chinese-Chat
        layer_range: [0, 30]
      - model: /home/xiaoyuanhang/llamalora/Llama-3-8B-Instruct-262k
        layer_range: [0, 30]
# or, the equivalent models: syntax:
# models:
#   - model: psmathur/orca_mini_v3_13b
#   - model: garage-bAInd/Platypus2-13B
merge_method: slerp
base_model: /home/xiaoyuanhang/llamalora/Llama3-8B-Chinese-Chat
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5 # fallback for rest of tensors
dtype: float16
```
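In the `parameters.t` block, `t` controls how far each tensor moves from the base model (`t = 0`) toward the secondary model (`t = 1`). A list of values defines a gradient that mergekit spreads across the merged layers; the sketch below assumes simple linear interpolation between evenly spaced anchors (`layer_gradient` is a hypothetical helper, not a mergekit API) and shows the resulting per-layer `t` for the 30-layer slice.

```python
import numpy as np

def layer_gradient(anchors: list[float], num_layers: int) -> np.ndarray:
    """Spread a short anchor list into one interpolation factor per layer.

    Assumption: anchors sit at evenly spaced positions along the layer
    stack and are linearly interpolated in between.
    """
    anchor_pos = np.linspace(0.0, 1.0, num=len(anchors))  # where each anchor sits
    layer_pos = np.linspace(0.0, 1.0, num=num_layers)     # one position per layer
    return np.interp(layer_pos, anchor_pos, anchors)

# self_attn: early layers track the base model (Llama3-8B-Chinese-Chat),
# late layers track Llama-3-8B-Instruct-262k.
print(layer_gradient([0, 0.5, 0.3, 0.7, 1], 30).round(2))
# mlp uses the mirrored ramp [1, 0.5, 0.7, 0.3, 0]; all other tensors
# fall back to a constant t = 0.5.
```

With a configuration like this saved as `config.yml`, the merge can be reproduced with mergekit's CLI, e.g. `mergekit-yaml config.yml ./llamalora`.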