Add Qwen3.5 Moe lite awq #4561
Status: Open. 43758726 wants to merge 3 commits into InternLM:main from 43758726:InternS2_preview_lite_awq.
Changes from all commits (3 commits).
File: lmdeploy/lite/apis/calibrate.py

The import of `get_task` from `lmdeploy.archs` is dropped, and the new Qwen3 MoE and Qwen3.5 architectures are registered in the layer-type, norm-type, and head-name maps:

```diff
@@ -6,7 +6,7 @@
 from torch import nn
 from transformers import AutoTokenizer

-from lmdeploy.archs import get_model_arch, get_task
+from lmdeploy.archs import get_model_arch
 from lmdeploy.lite.quantization import CalibrationContext, CalibrationContextV2
 from lmdeploy.lite.utils import collect_target_modules, get_calib_loaders, load_hf_from_pretrained
 from lmdeploy.vl.model.builder import load_vl_model
@@ -18,6 +18,9 @@
     'QWenLMHeadModel': 'QWenBlock',
     'Qwen2ForCausalLM': 'Qwen2DecoderLayer',
     'Qwen3ForCausalLM': 'Qwen3DecoderLayer',
+    'Qwen3MoeForCausalLM': 'Qwen3MoeDecoderLayer',
+    'Qwen3_5ForConditionalGeneration': 'Qwen3_5DecoderLayer',
+    'Qwen3_5MoeForConditionalGeneration': 'Qwen3_5MoeDecoderLayer',
     'BaiChuanForCausalLM': 'DecoderLayer',  # Baichuan 7B
     'BaichuanForCausalLM': 'DecoderLayer',  # Baichuan2 7B
     'LlamaForCausalLM': 'LlamaDecoderLayer',
@@ -39,6 +42,9 @@
     'QWenLMHeadModel': 'RMSNorm',
     'Qwen2ForCausalLM': 'Qwen2RMSNorm',
     'Qwen3ForCausalLM': 'Qwen3RMSNorm',
+    'Qwen3MoeForCausalLM': 'Qwen3MoeRMSNorm',
+    'Qwen3_5ForConditionalGeneration': 'Qwen3_5RMSNorm',
+    'Qwen3_5MoeForConditionalGeneration': 'Qwen3_5MoeRMSNorm',
     'BaiChuanForCausalLM': 'RMSNorm',  # Baichuan 7B
     'BaichuanForCausalLM': 'RMSNorm',  # Baichuan2 7B
     'LlamaForCausalLM': 'LlamaRMSNorm',
@@ -60,6 +66,9 @@
     'QWenLMHeadModel': 'lm_head',
     'Qwen2ForCausalLM': 'lm_head',
     'Qwen3ForCausalLM': 'lm_head',
+    'Qwen3MoeForCausalLM': 'lm_head',
+    'Qwen3_5ForConditionalGeneration': 'lm_head',
+    'Qwen3_5MoeForConditionalGeneration': 'lm_head',
     'BaiChuanForCausalLM': 'lm_head',  # Baichuan 7B
     'BaichuanForCausalLM': 'lm_head',  # Baichuan2 7B
     'LlamaForCausalLM': 'lm_head',
```
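These hunks extend calibration's class-name lookup tables. For context, a rough sketch of how `LAYER_TYPE_MAP` and `NORM_TYPE_MAP` are consumed downstream; `model` is assumed to be an already-loaded `Qwen3_5MoeForConditionalGeneration`, and this is not code from the PR itself:

```python
# Sketch: the maps translate a model's class name into the module class names
# that calibration should hook. Assumes `model` was loaded elsewhere.
model_type = type(model).__name__                   # 'Qwen3_5MoeForConditionalGeneration'
layer_type = LAYER_TYPE_MAP[model_type]             # 'Qwen3_5MoeDecoderLayer'
norm_type = NORM_TYPE_MAP[model_type]               # 'Qwen3_5MoeRMSNorm'
layers = collect_target_modules(model, layer_type)  # dict: module name -> decoder layer
```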
|
|
A module-level `MOE_MODEL_LIST` is added, and `check_vl_llm`/`get_task` are defined locally instead of being imported from `lmdeploy.archs`:

```diff
@@ -74,6 +83,60 @@
     'MistralForCausalLM': 'lm_head',
 }

+MOE_MODEL_LIST = [
+    'Qwen3MoeForCausalLM',
+    'Qwen3_5MoeForConditionalGeneration',
+    'MixtralForCausalLM'
+]
+
+
+def check_vl_llm(backend: str, config: dict) -> bool:
+    """Check if the model is a vl model from model config."""
+    if 'auto_map' in config:
+        for _, v in config['auto_map'].items():
+            if 'InternLMXComposer2ForCausalLM' in v:
+                return True
+
+    if 'language_config' in config and 'vision_config' in config and config['language_config'].get(
+            'architectures', [None])[0] == 'DeepseekV2ForCausalLM':
+        return True
+
+    arch = config['architectures'][0]
+    supported_archs = set([
+        'LlavaLlamaForCausalLM', 'LlavaMistralForCausalLM', 'CogVLMForCausalLM', 'InternLMXComposer2ForCausalLM',
+        'InternVLChatModel', 'MiniCPMV', 'LlavaForConditionalGeneration', 'LlavaNextForConditionalGeneration',
+        'Phi3VForCausalLM', 'Qwen2VLForConditionalGeneration', 'Qwen2_5_VLForConditionalGeneration',
+        'Qwen3VLForConditionalGeneration', 'Qwen3VLMoeForConditionalGeneration', 'Qwen3_5ForConditionalGeneration',
+        'Qwen3_5MoeForConditionalGeneration', 'MllamaForConditionalGeneration', 'MolmoForCausalLM',
+        'Gemma3ForConditionalGeneration', 'Llama4ForConditionalGeneration', 'InternVLForConditionalGeneration',
+        'InternS1ForConditionalGeneration', 'InternS1ProForConditionalGeneration',
+        'InternS1_1_ForConditionalGeneration', 'Glm4vForConditionalGeneration'
+    ])
+    if arch == 'QWenLMHeadModel' and 'visual' in config:
+        return True
+    elif arch == 'MultiModalityCausalLM' and 'language_config' in config:
+        return True
+    elif arch in ['ChatGLMModel', 'ChatGLMForConditionalGeneration'] and 'vision_config' in config:
+        return True
+    elif arch in supported_archs:
+        return True
+    return False
+
+
+def get_task(backend: str, model_path: str):
+    """Get pipeline type and pipeline class from model config."""
+    import os
+
+    if os.path.exists(os.path.join(model_path, 'triton_models', 'weights')):
+        # workspace model
+        return 'llm'
+
+    _, config = get_model_arch(model_path)
+    if check_vl_llm(backend, config.to_dict()):
+        return 'vlm'
+
+    # default task
+    return 'llm'
+
+
 def _prepare_for_calibrate(model: nn.Module,
                            layer_type: str | type,
```

Collaborator comment on lines +128 to +132 (the `triton_models` workspace check in `get_task`): This block can be safely removed.
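Since `get_task` now lives in this file rather than in `lmdeploy.archs`, a caller only needs the checkpoint path. A hedged usage sketch; the import path follows this file's location and the model path is a placeholder:

```python
# Hypothetical usage of the relocated get_task helper; the model path is a placeholder.
from lmdeploy.lite.apis.calibrate import get_task

task = get_task(backend='turbomind', model_path='Qwen/Qwen3.5-MoE-demo')
print(task)  # 'vlm' for vision-language checkpoints, otherwise 'llm'
```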
|
|
The model/tokenizer loading logic is factored out of `calibrate()` into a new `load_model_and_tokenizer()` helper:

```diff
@@ -195,6 +258,55 @@ def update_moe_mapping(model, model_type):
     NORM_FCS_MAP[LAYER_TYPE_MAP[model_type]] = updated_norm2fcs


+def load_model_and_tokenizer(model: str,
+                             dtype: Literal['float16', 'bfloat16', 'auto'] = 'auto',
+                             work_dir: str = './work_dir'):
+    """Load model and tokenizer."""
+    model_type = get_task(backend='turbomind', model_path=model)
+    make_compatible_internvl_config(model)
+
+    # Load tokenizer
+    tokenizer = AutoTokenizer.from_pretrained(model, trust_remote_code=True)
+
+    # get model arch and config
+    arch, original_config = get_model_arch(model)
+
+    if model_type == 'llm':
+        model = load_hf_from_pretrained(model, dtype=dtype, trust_remote_code=True)
+        vl_model = None
+    elif model_type == 'vlm':
+        vl_model = load_vl_model(model, backend=None, with_llm=True).vl_model
+        model = vl_model
+        if hasattr(vl_model, 'language_model'):  # deepseek-vl, ...
+            model = vl_model.language_model
+        if hasattr(vl_model, 'llm'):  # MiniCPMV, ...
+            model = vl_model.llm
+        model.config.use_cache = False
+        if hasattr(model.config, 'text_config'):
+            model.config.text_config.use_cache = False
+        elif hasattr(model.config, 'llm_config'):
+            model.config.llm_config.use_cache = False
+    if dtype == 'float16' or (dtype == 'auto' and original_config.torch_dtype == torch.float16):
+        model.half()
+    elif dtype == 'bfloat16' or (dtype == 'auto' and original_config.torch_dtype == torch.bfloat16):
+        assert torch.cuda.is_bf16_supported(
+        ), 'your device does not support bfloat16 please set --dtype float16'  # noqa
+        model.to(torch.bfloat16)
+    model.eval()
+
+    model_type = type(model).__name__
+    if model_type not in LAYER_TYPE_MAP or model_type not in NORM_TYPE_MAP:
+        raise RuntimeError(f'Currently, quantification and calibration of {model_type} are '
+                           f'not supported. The supported model types are '
+                           f"{', '.join(LAYER_TYPE_MAP.keys())}.")
+
+    # Create work directory if not exists
+    work_dir = Path(work_dir)
+    work_dir.mkdir(parents=True, exist_ok=True)
+
+    return arch, vl_model, model, tokenizer, model_type, work_dir
+
+
 def calibrate(model: str,
               calib_dataset: str = 'wikitext2',
               calib_samples: int = 128,
```

Collaborator comment on the `load_hf_from_pretrained` call: Please don't hard code "trust_remote_code".
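A minimal sketch of one way to address that note, threading a `trust_remote_code` flag through instead of pinning it to `True`; the parameter and its default are assumptions, not part of this PR:

```python
# Sketch only: expose trust_remote_code to the caller rather than hard-coding True.
from transformers import AutoTokenizer


def load_tokenizer(model: str, trust_remote_code: bool = False):
    """Hypothetical helper mirroring the AutoTokenizer call in load_model_and_tokenizer."""
    return AutoTokenizer.from_pretrained(model, trust_remote_code=trust_remote_code)
```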
|
|
Inside `calibrate()`, the inline loading code is replaced by a call to the new helper, the work-directory creation moves into the helper, and the return value changes:

```diff
@@ -241,41 +353,8 @@ def calibrate(model: str,
         'Support only `wikitext2`, `c4`, `pileval`, `gsm8k`, ' \
         '`neuralmagic_calibration`, `open-platypus`, `openwebtext`.'

-    model_type, _ = get_task(backend='turbomind', model_path=model)
-    make_compatible_internvl_config(model)
-
-    # Load tokenizer and configuration
-    tokenizer = AutoTokenizer.from_pretrained(model, trust_remote_code=True)
-
-    if model_type == 'llm':
-        model = load_hf_from_pretrained(model, dtype=dtype, trust_remote_code=True)
-        vl_model = None
-    elif model_type == 'vlm':
-        _, original_config = get_model_arch(model)
-        vl_model = load_vl_model(model, backend=None, with_llm=True).vl_model
-        model = vl_model
-        if hasattr(vl_model, 'language_model'):  # deepseek-vl, ...
-            model = vl_model.language_model
-        if hasattr(vl_model, 'llm'):  # MiniCPMV, ...
-            model = vl_model.llm
-        model.config.use_cache = False
-        if hasattr(model.config, 'text_config'):
-            model.config.text_config.use_cache = False
-        elif hasattr(model.config, 'llm_config'):
-            model.config.llm_config.use_cache = False
-    if dtype == 'float16' or (dtype == 'auto' and original_config.torch_dtype == torch.float16):
-        model.half()
-    elif dtype == 'bfloat16' or (dtype == 'auto' and original_config.torch_dtype == torch.bfloat16):
-        assert torch.cuda.is_bf16_supported(
-        ), 'your device does not support bfloat16 please set --dtype float16'  # noqa
-        model.to(torch.bfloat16)
-    model.eval()
-
-    model_type = type(model).__name__
-    if model_type not in LAYER_TYPE_MAP or model_type not in NORM_TYPE_MAP:
-        raise RuntimeError(f'Currently, quantification and calibration of {model_type} are '
-                           f'not supported. The supported model types are '
-                           f"{', '.join(LAYER_TYPE_MAP.keys())}.")
+    arch, vl_model, model, tokenizer, model_type, work_dir = load_model_and_tokenizer(
+        model, dtype=dtype, work_dir=work_dir)

     if model_type in ['MixtralForCausalLM']:
         update_moe_mapping(model, model_type)
@@ -319,12 +398,9 @@ def calibrate(model: str,
     all_data = torch.cat(calib_loader).to(device)
     calib_ctx.calibrate(all_data)

-    # Create work directory if not exists
-    work_dir = Path(work_dir)
-    work_dir.mkdir(parents=True, exist_ok=True)
     calib_ctx.export(work_dir)

-    return vl_model, model, tokenizer, work_dir
+    return arch, vl_model, model, tokenizer


 if __name__ == '__main__':
```
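Note that `calibrate()` now returns `arch` and no longer returns `work_dir`, so existing call sites need updating. A hedged sketch of the new call shape; the model path is a placeholder and the keyword arguments are assumed from the signature shown above:

```python
# Hypothetical call site reflecting calibrate()'s new return signature.
arch, vl_model, model, tokenizer = calibrate('Qwen/Qwen3.5-MoE-demo',
                                             calib_dataset='wikitext2',
                                             calib_samples=128,
                                             work_dir='./work_dir')
```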
New file: lmdeploy/lite/moe_mlp_modules/__init__.py

```python
# Copyright (c) OpenMMLab. All rights reserved.

from .base import CONVERT_MOE_MODELS
from .mixtral import MixtralMoeMLP
from .qwen import QwenMoeMLP

__all__ = ['CONVERT_MOE_MODELS', 'MixtralMoeMLP', 'QwenMoeMLP']
```
New file: lmdeploy/lite/moe_mlp_modules/base.py

```python
# Copyright (c) OpenMMLab. All rights reserved.

from mmengine import Registry

CONVERT_MOE_MODELS = Registry('moe_mlp_module', locations=['lmdeploy.lite.moe_mlp_modules'])
```
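`base.py` only defines the registry; the concrete wrappers (`MixtralMoeMLP`, `QwenMoeMLP`) are expected to register themselves with it. A sketch of that contract using mmengine's `Registry` API, with a deliberately hypothetical class, since the real implementations in `mixtral.py`/`qwen.py` are not shown in this diff:

```python
# Sketch of the mmengine registry contract; DemoMoeMLP is hypothetical,
# not one of the PR's classes.
from torch import nn

from lmdeploy.lite.moe_mlp_modules.base import CONVERT_MOE_MODELS


@CONVERT_MOE_MODELS.register_module()
class DemoMoeMLP(nn.Module):
    """Stand-in module showing how an MoE MLP wrapper would register itself."""

    def __init__(self, hidden_size: int = 16, intermediate_size: int = 32):
        super().__init__()
        self.gate_proj = nn.Linear(hidden_size, intermediate_size, bias=False)
        self.up_proj = nn.Linear(hidden_size, intermediate_size, bias=False)
        self.down_proj = nn.Linear(intermediate_size, hidden_size, bias=False)

    def forward(self, x):
        return self.down_proj(nn.functional.silu(self.gate_proj(x)) * self.up_proj(x))


# Registered classes can then be looked up by name:
mlp_cls = CONVERT_MOE_MODELS.get('DemoMoeMLP')
```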
Review comment: When `calib_ds_req=False`, this code bypasses `calibrate()`'s supported-model validation and then does `layer_type = LAYER_TYPE_MAP[type(model).__name__]`, which will raise a raw `KeyError` for unsupported or renamed model classes. Consider adding the same explicit check and user-facing `RuntimeError` message that `calibrate()` uses (or reusing `calibrate()`'s model-type validation) so failures are actionable.
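A minimal sketch of the suggested guard, reusing the map names and error message that already appear in this PR's `load_model_and_tokenizer()`; where exactly it belongs is the reviewer's suggestion, not code from the PR:

```python
# Sketch: validate the model class before indexing LAYER_TYPE_MAP,
# mirroring the RuntimeError raised in load_model_and_tokenizer().
model_type = type(model).__name__
if model_type not in LAYER_TYPE_MAP or model_type not in NORM_TYPE_MAP:
    raise RuntimeError(f'Currently, quantification and calibration of {model_type} are '
                       f'not supported. The supported model types are '
                       f"{', '.join(LAYER_TYPE_MAP.keys())}.")
layer_type = LAYER_TYPE_MAP[model_type]
```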