AI相关.md

Learning

Setup currently in use

openclaw + ollama::qwen3.5:2b=openai-completions

ollama

[Not recommended] ollama is an open-source AI model service for running models locally. It has since turned out to have irregularities in its open-source dependencies and may go closed-source in the future ([background video](https://www.bilibili.com/video/BV1rwdzBSEMy/?vd_source=7a8f72fd13d7216bed258ce61fd4cb6d))

llama.cpp is the engine that drives ollama; on its own it is a command-line tool that provides everything needed to run large models locally
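As a minimal sketch of that (the model path and prompt here are placeholders, not from these notes), the `llama-cli` binary shipped with the prebuilt llama.cpp releases can run a prompt in one shot:

```shell
# One-shot generation with llama.cpp's CLI.
# -m: path to a local GGUF model file (placeholder path below)
# -c: context window size; -p: the prompt to complete
./llama-cli -m ./model.gguf -c 4096 -p "Hello, introduce yourself briefly."
```
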

[Deprecated] This local run used the model qwen3.5:2b, with an attempt to integrate it into openclaw

The local setup will use qwenPaw3.5:4B, with clawclaw as the agent backend and LobeHub as the frontend

ollama run qwen3.5:2b

shimmy is reportedly a replacement for ollama

...

Download a model from the ModelScope community and launch it via llama.cpp

  • Using QwenPaw:4B as an example
  1. Download the model

    > python -m pip install modelscope
  2. The modelscope downloaded here is an executable located in python-path/Scripts, or in the path reported by the install-time message WARNING: The scripts modelscope.exe and ms.exe are installed in ...
> modelscope download --model AgentScope/QwenPaw-Flash-4B-Q8_0 --local_dir /your/local/path

  • Download the llama.cpp prebuilt release and extract it

  • Download the lobehub client and deploy the local docker service

    docker run -it -d -p 3210:3210 --network pg --env-file lobehub.env --name lobehub lobehub/lobehub
  • At this point, create the docker container my-postgres
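The creation command itself is not recorded in these notes; a sketch that matches the DATABASE_URL used below (container name my-postgres, password mysecretpassword, network pg are all taken from that URL; the pgvector/pgvector image is an assumption, since LobeHub's server database relies on the pgvector extension):

```shell
# Create the shared docker network first -- both containers join "pg".
docker network create pg

# Postgres matching DATABASE_URL=postgres://postgres:mysecretpassword@my-postgres:5432/postgres
# Image choice is an assumption: pgvector/pgvector bundles the pgvector
# extension that LobeHub's server-side database expects.
docker run -d --name my-postgres \
  --network pg \
  -e POSTGRES_PASSWORD=mysecretpassword \
  -p 5432:5432 \
  pgvector/pgvector:pg16
```
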

    docker run -it --name lobehub -p 3210:3210 \
    --network pg \
    -e DATABASE_URL=postgres://postgres:mysecretpassword@my-postgres:5432/postgres \
    -e KEY_VAULTS_SECRET=KgA2gFRQHM4Dnr0vaYCXzyjhluq9iStnZerR7sqkQj8= \
    -e AUTH_SECRET=aJrYhLwHQLSdPaSYRZiJc2lwK2iC/LZkpbl4/L0h8Do= \
    -e S3_ACCESS_KEY_ID=admin \
    -e S3_SECRET_ACCESS_KEY=minio1234 \
    -e S3_ENDPOINT=http://localhost:9000 \
    -e S3_BUCKET=LobeHub \
    -e JWKS_KEY='{"keys":[{"d":"IklGNZFwmvF1367eDa5p5xcs4dWPNPEGmAoMFpN98nrDwt2C0HT-LcwhXfh4fiitzFs_eAt_UCIYzc1In7CDoXJqaxPpIDOWcrRWH1FLBpRdMDhgGIreAN32N2eRSGOnJzDdawPWUlLnABtNUd6u5sdaZQ7lqav4ndFvy_sWwAOXbcNjIjh0-FMDTJ5R6CVLKxrFm45NnroOV7K8mVh_sSrC_GT_zt1y3Ejjm37yyjD6aCaYRszuZaWOE3fbgYcxbkwvruxaPsdBgpth1Rjm6IO0C7feZh0KdouViwdL_NrPf7KLkLhGtHSploeq4erlf9HBgfAjPDuxHbpJeyKh2Q","dp":"u4h1Y5SncBRPrEtwd7ktClfC8HYWMg09PvdAlMLd6TfgOzkqQoQsp_VCl1Z2Br5K_ZrSHX4Y-Xuz-kBcJ9jURgYOhxge0KAj0Hh8dbUsau00g7LkCLDf11ld6Q5x5x39cLu8M1V8EZu-9hNQ87hNsQhREz6aJaQn9AjXbo_pjU0","dq":"G7-T67VWcujs6fl9iBCwLvTBJm8sUeVObDUnJT3sut2LDosGowbI-ap0IRxxCB8hNlMe5NSkz4dyzqgOJQGTgtzUC_Ro0b-hftZUCW5Yv0SpWEK6_DbLdIbnrndFwCbRLcXSljlTFh4FjFF22g83AGuZ8BWTCmy5zyDk096Z7pk","e":"AQAB","kty":"RSA","n":"oxidlec5mGtg1bMKWJhlhPJIJG17bAp9ZcI_SldCLB23T-jXeASet9JixWViQ-ikdtjjPKKObbPhW4KaMmeM8Yo4NV8sP7qOEbacFhUWQOJRDa6Eb-VwVVMvhiOywam6QIFpcTwhgVNk_pa7CKnilcLcfBd7TXHniKJk1ujr7e7UAjPUZevjt6osX_KIiksw79OWd3bK5X7a1DP2o1hK_zAZ7Q1_KnuI_FrVLbKBle2Nikxm9oUHYUVw7EWQByZimy6_q1ygnMtg97qaN9uNOeWedecLOnAtGiaxXy0Tdssh7JmUkypD8DDkD19Trh9xtRVDQB70Oi-Hja2aNJihLQ","p":"zY4ju_dC6VHDqHJJhz2i-e-6F1-OCsLv92M5RCXGZV6q7Oflkp20RsqSj6JpFW5A_ELpTf-brBV7j9u4OXaJPKQ1pQK36goh0YvPeTBGfgxR60h9hFxDBCmfUt8b1SqmtfJPvqc11kVbARWdEM-CMhyVlFWAJPxP05yqQA7mf_U","q":"yx8DyFXXzeLFqnWBmlOsJyShtAHLXGAy02kBpCFW60b_AG5n2qPXo3RI9XcY3TS8iB2occo7xBMq3XRtmgbR8q9c7G0izZognDs2zq_MsRfHaSQ07OvyQW7Oixt8G-x62w6Qg6Sm23XTSgBDH-JK9mGSehTZMr9MipXjJGIvcVk","qi":"EPq53mytw_gWxz7ho_kbXljpDIwcVU82jbYuAhYoVxLQMGOObEWZH6bjWRN4jonHLIdou0Jy_cI33UKMSOWFLxOX7zFWMyPTmpuRXgA6SSZciHidDM6LhvIJISrsKDRYDE4mCmnAbiktsNtOYENuOCxT0yOncQJD4SfeP4r_V9o","use":"sig","kid":"17b26bd6bbb0409d","alg":"RS256"}]}' \
    -e APP_URL=http://localhost:3210 \
    lobehub/lobehub
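The KEY_VAULTS_SECRET and AUTH_SECRET values above look like sample values; for a real deployment it is safer to generate fresh ones, e.g. with openssl:

```shell
# Generate random base64 secrets to replace the sample values above.
openssl rand -base64 32   # use as KEY_VAULTS_SECRET
openssl rand -base64 32   # use as AUTH_SECRET
```
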
    
    docker run -d \
    --name minio \
    -p 9000:9000 \
    -p 9001:9001 \
    -e "MINIO_ROOT_USER=admin" \
    -e "MINIO_ROOT_PASSWORD=minio1234" \
    minio/minio server /data --console-address ":9001"
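MinIO does not create buckets automatically, so the bucket named by S3_BUCKET above needs to be created once, either in the web console on port 9001 or with the mc client (the alias name `local` is arbitrary). Note that S3 bucket names must be lowercase, so the mixed-case value `LobeHub` above may need adjusting:

```shell
# Register the local MinIO server under an alias, then create the bucket.
mc alias set local http://localhost:9000 admin minio1234
mc mb local/lobehub   # S3 bucket names are lowercase-only
```
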
    
    ./shimmy.exe serve --model-path F:/mirlink/models/qwenPaw3.5-4b/QwenPaw-Flash-4B-Q8_0.gguf --bind 127.0.0.1:11434 --gpu-backend vulkan
    # ERRORLOG::: llama_model_load: error loading model: error loading model architecture: unknown model architecture: 'qwen35'
    # shimmy's bundled llama.cpp apparently doesn't recognize this architecture; fall back to running llama-server directly:
    
    .\llama-server.exe -m F:\mirlink\models\qwenPaw3.5-4b\QwenPaw-Flash-4B-Q8_0.gguf --host 127.0.0.1 --port 11434 -c 8192
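Once llama-server is up, its built-in HTTP API can be smoke-tested; it exposes a /health route and an OpenAI-compatible /v1/chat/completions endpoint on the bound host and port:

```shell
# Liveness check against the running llama-server instance
curl http://127.0.0.1:11434/health

# OpenAI-compatible chat completion against the loaded local model
curl http://127.0.0.1:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages":[{"role":"user","content":"hello"}]}'
```
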