MuseTalk One-Click Bundle
Official installation steps:
pushd D:\Software\AI\MuseTalk
git clone https://github.com/TMElyralab/MuseTalk.git
cd MuseTalk
conda create -n musetalk python=3.11.7
conda activate musetalk
conda install ffmpeg # also download ffmpeg-static separately and export it as FFMPEG_PATH: https://github.com/BtbN/FFmpeg-Builds/releases
pip install ffmpeg-python
# Please follow the instructions from https://pytorch.org/get-started/previous-versions/
# This installation command only works on CUDA 11.1
pip install torch==1.9.0+cu111 torchvision==0.10.0+cu111 -f https://download.pytorch.org/whl/torch_stable.html
pip install -r requirements.txt
## After installing the packages below, you may need to reinstall the GPU build of torch.
# mmlab packages
pip install --no-cache-dir -U openmim
mim install mmengine
mim install "mmcv>=2.0.1"
mim install "mmdet>=3.1.0"
mim install "mmpose>=1.1.0"
Download ffmpeg-static
Upgrade installation:
Upgrade to Python 3.11.7, CUDA 12.1, torch 2.2.0+cu121, torchvision 0.17.0+cu121.
# Please follow the instructions from https://pytorch.org/get-started/previous-versions/
# This installation command only works on CUDA 11.1
## pip install torch==1.9.0+cu111 torchvision==0.10.0+cu111 -f https://download.pytorch.org/whl/torch_stable.html
# The current system uses CUDA 12.1 (the environment variable points at CUDA 12.1; the GPU driver ships with 12.3)
# Install torch 2.2.0+cu121, torchaudio 2.2.0+cu121, torchvision 0.17.0+cu121
pip install torch==2.2.0 torchvision==0.17.0 torchaudio==2.2.0 --extra-index-url https://download.pytorch.org/whl/cu121
pip install -r requirements.txt
pip list | findstr torch
torch 2.2.0+cu121
torchaudio 2.2.0+cu121
torchvision 0.17.0+cu121
Issue 1: omegaconf error
python scripts/inference.py --inference_config configs/inference/test.yaml
Traceback (most recent call last):
File "D:\Software\AI\MuseTalk\scripts\inference.py", line 3, in <module>
from omegaconf import OmegaConf
ModuleNotFoundError: No module named 'omegaconf'
Fix: install the missing package with pip install omegaconf.
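To catch this class of error up front rather than one traceback at a time, a minimal pre-flight check can be run before inference. This is a sketch, not part of MuseTalk; the module list below is inferred from the traceback above and should be extended to match your requirements.txt (note that a module name and its pip package name do not always match, e.g. cv2 vs. opencv-python):

```python
import importlib.util

def missing_modules(names):
    """Return the names in `names` that are not importable in this environment."""
    return [n for n in names if importlib.util.find_spec(n) is None]

# Modules that scripts/inference.py imports at startup (adjust as needed).
for name in missing_modules(["omegaconf"]):
    print(f"missing: {name} -> pip install {name}")
```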
Issue 2: ffmpeg-static not found
python scripts/inference.py --inference_config configs/inference/test.yaml
please download ffmpeg-static and export to FFMPEG_PATH.
For example: export FFMPEG_PATH=/musetalk/ffmpeg-4.4-amd64-static
set FFMPEG_PATH=D:\Software\AI\MuseTalk\ffmpeg-n6.1\bin
Problem solved.
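As a quick sanity check that FFMPEG_PATH points at a directory actually containing the ffmpeg executable, the lookup can be reproduced in Python. This is a sketch; the directory argument is the example path used in this guide and should be adjusted to your install:

```python
import os
import shutil

def ffmpeg_on_path(ffmpeg_dir=None):
    """Return the resolved ffmpeg executable path, or None if not found.

    If ffmpeg_dir is given (e.g. the FFMPEG_PATH value set above), it is
    prepended to PATH for the lookup, mirroring what the script does.
    """
    path = os.environ.get("PATH", "")
    if ffmpeg_dir:
        path = ffmpeg_dir + os.pathsep + path
    return shutil.which("ffmpeg", path=path)

# Example (Windows path from this guide):
# print(ffmpeg_on_path(r"D:\Software\AI\MuseTalk\ffmpeg-n6.1\bin"))
```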
Issue 3: musetalk module not found
python scripts/inference.py --inference_config configs/inference/test.yaml
Traceback (most recent call last):
File "D:\Software\AI\MuseTalk\scripts\inference.py", line 12, in <module>
from musetalk.utils.utils import get_file_type,get_video_fps,datagen
ModuleNotFoundError: No module named 'musetalk'
After installation, create an empty __init__.py in the repo root and add the repo to PYTHONPATH:
type nul > __init__.py
set PYTHONPATH=%PYTHONPATH%;D:\Software\AI\MuseTalk
Solved.
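An alternative that avoids setting PYTHONPATH in every new shell is to put the repo root on sys.path at the top of scripts/inference.py itself, before the musetalk imports. A minimal sketch (the helper name and the call site are illustrative, not part of MuseTalk):

```python
import sys
from pathlib import Path

def add_repo_root(script_path):
    """Insert the repository root (the parent of scripts/) into sys.path."""
    root = str(Path(script_path).resolve().parents[1])
    if root not in sys.path:
        sys.path.insert(0, root)
    return root

# At the top of scripts/inference.py, before `from musetalk... import ...`:
# add_repo_root(__file__)
```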
Issue 4: torch GPU error
set FFMPEG_PATH=D:\Software\AI\MuseTalk\ffmpeg-n6.1\bin
(musetalk) D:\Software\AI\MuseTalk>python scripts/inference.py --inference_config configs/inference/test.yaml
add ffmpeg to path
Loads checkpoint by local backend from path: ./models/dwpose/dw-ll_ucoco_384.pth
Traceback (most recent call last):
File "D:\Software\AI\MuseTalk\scripts\inference.py", line 14, in <module>
from musetalk.utils.blending import get_image
File "D:\Software\AI\MuseTalk\musetalk\utils\blending.py", line 6, in <module>
fp = FaceParsing()
^^^^^^^^^^^^^
File "D:\Software\AI\MuseTalk\musetalk/utils\face_parsing\__init__.py", line 12, in __init__
self.net = self.model_init()
^^^^^^^^^^^^^^^^^
File "D:\Software\AI\MuseTalk\musetalk/utils\face_parsing\__init__.py", line 19, in model_init
net.cuda()
File "D:\Software\miniconda3\envs\musetalk\Lib\site-packages\torch\nn\modules\module.py", line 905, in cuda
return self._apply(lambda t: t.cuda(device))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Software\miniconda3\envs\musetalk\Lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
module._apply(fn)
File "D:\Software\miniconda3\envs\musetalk\Lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
module._apply(fn)
File "D:\Software\miniconda3\envs\musetalk\Lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
module._apply(fn)
File "D:\Software\miniconda3\envs\musetalk\Lib\site-packages\torch\nn\modules\module.py", line 820, in _apply
param_applied = fn(param)
^^^^^^^^^
File "D:\Software\miniconda3\envs\musetalk\Lib\site-packages\torch\nn\modules\module.py", line 905, in <lambda>
return self._apply(lambda t: t.cuda(device))
^^^^^^^^^^^^^^
File "D:\Software\miniconda3\envs\musetalk\Lib\site-packages\torch\cuda\__init__.py", line 239, in _lazy_init
raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
(musetalk) D:\Software\AI\MuseTalk>pip list |grep torch
'grep' is not recognized as an internal or external command, operable program or batch file.
(musetalk) D:\Software\AI\MuseTalk>pip list | findstr torch
torch 2.0.1
torchaudio 2.0.2
torchvision 0.15.2
(musetalk) D:\Software\AI\MuseTalk>nvidia-smi
Sat Apr 6 19:54:21 2024
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 546.65 Driver Version: 546.65 CUDA Version: 12.3 |
|
Uninstall torch and reinstall the CUDA build.
First uninstall the current torch, torchvision, and torchaudio:
pip uninstall torch torchvision torchaudio
Then reinstall torch with CUDA support:
pip install torch==2.0.1 -f https://download.pytorch.org/whl/cu118/torch_stable.html
pip install torchvision==0.15.2 -f https://download.pytorch.org/whl/cu118/torch_stable.html
pip install torchaudio==2.0.2 -f https://download.pytorch.org/whl/cu118/torch_stable.html
Finally, run the following Python code to verify that torch is installed correctly:
import torch
print(torch.__version__)
print(torch.cuda.is_available())
Problem solved.
Cause: after pip install -r requirements.txt, installing the mmlab packages replaced the original GPU build of torch with a CPU-only build, which triggered the CUDA error.
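This root cause can be checked without even running CUDA code: a CUDA wheel carries a local version suffix like +cu121, while the CPU wheel that got pulled in has a bare version such as 2.0.1. A small sketch of that check (the helper name is illustrative):

```python
def is_cuda_build(version: str) -> bool:
    """True if a torch version string looks like a CUDA wheel (e.g. 2.2.0+cu121)."""
    return "+cu" in version

# With torch installed you could run:
# import torch
# print(torch.__version__, is_cuda_build(torch.__version__))
```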
Run inference:
python scripts/inference.py --inference_config configs/inference/test.yaml
Bundle download: MuseTalk one-click bundle download.
High-definition CodeFormer enhancement is included.