RTX 2060 6GB CUDA memory problem

#1 opened by cleverhack

Traceback (most recent call last):
  File "/home/jack/miniconda3/envs/python3/lib/python3.7/site-packages/gradio/routes.py", line 400, in run_predict
    event_data=event_data,
  File "/home/jack/miniconda3/envs/python3/lib/python3.7/site-packages/gradio/blocks.py", line 1070, in process_api
    fn_index, inputs, iterator, request, event_id, event_data
  File "/home/jack/miniconda3/envs/python3/lib/python3.7/site-packages/gradio/blocks.py", line 893, in call_function
    utils.async_iteration, iterator, limiter=self.limiter
  File "/home/jack/miniconda3/envs/python3/lib/python3.7/site-packages/anyio/to_thread.py", line 32, in run_sync
    func, *args, cancellable=cancellable, limiter=limiter
  File "/home/jack/miniconda3/envs/python3/lib/python3.7/site-packages/anyio/_backends/_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "/home/jack/miniconda3/envs/python3/lib/python3.7/site-packages/anyio/_backends/_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "/home/jack/miniconda3/envs/python3/lib/python3.7/site-packages/gradio/utils.py", line 549, in async_iteration
    return next(iterator)
  File "web_my_gpu_qe.py", line 17, in predict
    temperature=temperature):
  File "/home/jack/miniconda3/envs/python3/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 43, in generator_context
    response = gen.send(None)
  File "/home/jack/.cache/huggingface/modules/transformers_modules/THUDM/chatglm-6b-int4-qe/437fc94474689e27adfcb29ef768bfaef9be5c45/modeling_chatglm.py", line 1163, in stream_chat
    for outputs in self.stream_generate(**input_ids, **gen_kwargs):
  File "/home/jack/miniconda3/envs/python3/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 43, in generator_context
    response = gen.send(None)
  File "/home/jack/.cache/huggingface/modules/transformers_modules/THUDM/chatglm-6b-int4-qe/437fc94474689e27adfcb29ef768bfaef9be5c45/modeling_chatglm.py", line 1244, in stream_generate
    output_hidden_states=False,
  File "/home/jack/miniconda3/envs/python3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/jack/.cache/huggingface/modules/transformers_modules/THUDM/chatglm-6b-int4-qe/437fc94474689e27adfcb29ef768bfaef9be5c45/modeling_chatglm.py", line 1051, in forward
    return_dict=return_dict,
  File "/home/jack/miniconda3/envs/python3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/jack/.cache/huggingface/modules/transformers_modules/THUDM/chatglm-6b-int4-qe/437fc94474689e27adfcb29ef768bfaef9be5c45/modeling_chatglm.py", line 855, in forward
    inputs_embeds = self.word_embeddings(input_ids)
  File "/home/jack/miniconda3/envs/python3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/jack/.cache/huggingface/modules/transformers_modules/THUDM/chatglm-6b-int4-qe/437fc94474689e27adfcb29ef768bfaef9be5c45/quantization.py", line 380, in forward
    original_weight = extract_weight_to_half(weight=self.weight, scale_list=self.weight_scale, source_bit_width=self.weight_bit_width)
  File "/home/jack/.cache/huggingface/modules/transformers_modules/THUDM/chatglm-6b-int4-qe/437fc94474689e27adfcb29ef768bfaef9be5c45/quantization.py", line 229, in extract_weight_to_half
    out = torch.empty(n, m * (8 // source_bit_width), dtype=torch.half, device="cuda")
RuntimeError: CUDA out of memory. Tried to allocate 1.15 GiB (GPU 0; 5.77 GiB total capacity; 3.29 GiB already allocated; 1.13 GiB free; 3.35 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
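
The error message itself suggests one mitigation: when reserved memory is much larger than allocated memory, fragmentation may be the culprit, and max_split_size_mb can be tuned via PYTORCH_CUDA_ALLOC_CONF. A minimal sketch; the value 128 is an arbitrary starting point rather than a recommendation, and the variable must be set before PyTorch makes its first CUDA allocation:

import os

# Hypothetical value; must be set before the first CUDA allocation,
# i.e. before importing/using torch in this process.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

import torch  # noqa: E402

Equivalently, export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128 in the shell before launching web_my_gpu_qe.py.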

The quantized model weights file is only 3 GB, so in theory 6 GB of GPU memory (or 6 GB of RAM for CPU inference) should be enough for inference, which even opens up the possibility of running it on embedded devices such as a Raspberry Pi.
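
To check that claim against what the GPU actually holds, PyTorch's allocator counters can be printed after loading the model (a small sketch; the gap between allocated and reserved is exactly what the error above complains about):

import torch

# Memory held by live tensors vs. memory the caching allocator has reserved
print(f"allocated: {torch.cuda.memory_allocated() / 1024**3:.2f} GiB")
print(f"reserved:  {torch.cuda.memory_reserved() / 1024**3:.2f} GiB")
# High-water mark since process start
print(f"peak:      {torch.cuda.max_memory_allocated() / 1024**3:.2f} GiB")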

In practice it uses about 6.5 GB, which means you need at least 8 GB.
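
If 6 GB of VRAM is genuinely not enough, one fallback is to run the int4 model on the CPU, following the loading pattern documented for ChatGLM-6B (.float() instead of .half().cuda(); the chat helper comes from the model's remote code, as the traceback above shows):

from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b-int4-qe", trust_remote_code=True)
# .float() keeps the quantized model on the CPU, trading speed for
# using ~6 GB of system RAM instead of VRAM
model = AutoModel.from_pretrained("THUDM/chatglm-6b-int4-qe", trust_remote_code=True).float()
model = model.eval()

response, history = model.chat(tokenizer, "Hello", history=[])
print(response)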
