Stop going on about AI-this or Gen-AI-that... get something running first, and then I'll believe you're serious about doing AI ~ XD
- Bloom ≈ 4 * 176 GB = 704 GB of memory
- Llama-2-70b ≈ 4 * 70 GB = 280 GB of memory
- Falcon-40b ≈ 4 * 40 GB = 160 GB of memory
- MPT-30b ≈ 4 * 30 GB = 120 GB of memory
- bigcode/starcoder ≈ 4 * 15.5 GB = 62 GB of memory
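These figures follow the rough rule of about 4 bytes per parameter (fp32 weights); a minimal sketch of the arithmetic, with the caveat that you'd halve it for fp16/bf16 and quarter it for int8 quantization:

# Rough weight-memory estimate: parameters (in billions) * bytes per parameter ≈ GB.
def weight_memory_gb(params_in_billions: float, bytes_per_param: float = 4.0) -> float:
    return params_in_billions * bytes_per_param

for name, billions in [("Bloom", 176), ("Llama-2-70b", 70), ("Falcon-40b", 40),
                       ("MPT-30b", 30), ("bigcode/starcoder", 15.5)]:
    print(f"{name}: ≈ {weight_memory_gb(billions):.0f} GB")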
sudo vi /etc/systemd/system/ollama.service
Environment="OLLAMA_MODELS=/path/to/ollama/models" (add this line, with the path you want, under the [Service] section at the bottom)
sudo systemctl daemon-reload
sudo systemctl restart ollama.service
sudo systemctl status ollama
OLLAMA_HOST=0.0.0.0 ollama serve (or set it via a variable: launchctl setenv OLLAMA_HOST "0.0.0.0")
If you want to expose the service through ngrok, you need this:
ngrok http 11434 --host-header="localhost:11434"
- LLM models
- 14 GB: codeqwen:7b-code-v1.5-fp16
- 69 GB: llava:34b-v1.6-fp16
- 73 GB: codellama:70b-code-q8_0
- 74 GB: dolphin-llama3:70b-v2.9-q8_0
- 75 GB: llama3-chatqa:70b-v1.5-q8_0
- 74 GB: llama3:70b-instruct-q8_0
- 115 GB: mixtral:8x22b-instruct-v0.1-q6_K
- 118 GB: qwen:110b-chat-v1.5-q8_0
- 250 GB: deepseek-coder-v2:236b-instruct-q8_0
- 250 GB: deepseek-v2:236b-chat-q8_0
- 141 GB: llama3.1:70b-instruct-fp16
- 231 GB: llama3.1:405b (ollama link)
- 245 GB: mistral-large:123b-instruct-2407-fp16 (ollama link)
- Text Embedding models
- bge-reranker-large
- bge-large-zh-v1.5
- mxbai-embed-large:v1
- nomic-embed-text:v1.5
- OLLAMA_FLASH_ATTENTION: an optimization; both compute efficiency and memory usage improve
- OLLAMA_ORIGINS: comma-separated list restricting allowed connection origins
- OLLAMA_KEEP_ALIVE: how long a model stays loaded; anyone who uses it knows requests sometimes take very long precisely because the whole model gets reloaded; with enough GPU you can of course let it live longer
- OLLAMA_PORT: specifies the port, self-explanatory
- OLLAMA_NUM_PARALLEL: how many models to run at once; if your GPU really has the headroom, you can run several
- OLLAMA_MAX_QUEUE: defaults to 512; as the name implies, how long the request queue can be
- OLLAMA_MAX_LOADED_MODELS: as the name implies, allocates sensibly; defaults to 1, i.e. only one model stays in VRAM
- OLLAMA_DEBUG: outputs DEBUG logs
- The default context window is 2048 tokens; the command /set parameter num_ctx 4096 raises the context window to 4096 tokens (the same knob can also be passed per request, as in the sketch after this list)
- ollama ps shows whether a model is running on the CPU or the GPU
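A minimal sketch of hitting the local Ollama REST API with these knobs set per request, assuming a server on localhost:11434; the model name is just an example picked from the list above:

import json
import urllib.request

# Per-request counterparts of the settings above: keep_alive mirrors
# OLLAMA_KEEP_ALIVE, and options.num_ctx mirrors /set parameter num_ctx.
payload = {
    "model": "llama3:70b-instruct-q8_0",  # example model name, an assumption
    "prompt": "Why is the sky blue?",
    "stream": False,
    "keep_alive": "10m",
    "options": {"num_ctx": 4096},
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])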
from huggingface_hub import snapshot_download

model_id = "google/paligemma-3b-pt-896"  # Hugging Face model name
snapshot_download(
    repo_id=model_id,
    local_dir="paligemma-3b-pt-896",   # download into this local directory
    local_dir_use_symlinks=False,      # store real files rather than symlinks
    revision="main",
    use_auth_token="<YOUR_HF_ACCESS_TOKEN>",
)
In the directory holding the GGUF file, add a file named Modelfile containing FROM ./your-model-filename.gguf
You also need to add TEMPLATE and PARAMETER entries, as sketched below.
The details are all in the official docs: https://github.com/ollama/ollama/blob/main/docs/modelfile.md
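For example, a minimal Python sketch that writes such a Modelfile and registers the model with ollama create; the GGUF file name, template, and parameters below are assumptions, so adapt them to the model you actually downloaded:

from pathlib import Path
import subprocess

# Hypothetical Modelfile: FROM points at the local GGUF; TEMPLATE and
# PARAMETER follow the format documented at the link above.
modelfile = '''FROM ./my-model.Q8_0.gguf
TEMPLATE """{{ .System }}
USER: {{ .Prompt }}
ASSISTANT: """
PARAMETER num_ctx 4096
PARAMETER stop "USER:"
'''

Path("Modelfile").write_text(modelfile)
subprocess.run(["ollama", "create", "my-model", "-f", "Modelfile"], check=True)

After that, ollama run my-model should serve the local GGUF.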
docker pull mintplexlabs/anythingllm
export STORAGE_LOCATION=$HOME/anythingllm && \
mkdir -p $STORAGE_LOCATION && \
touch "$STORAGE_LOCATION/.env" && \
docker run -d -p 3001:3001 \
--cap-add SYS_ADMIN \
-v ${STORAGE_LOCATION}:/app/server/storage \
-v ${STORAGE_LOCATION}/.env:/app/server/.env \
-e STORAGE_DIR="/app/server/storage" \
mintplexlabs/anythingllm
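Once the container is up, the AnythingLLM UI should be reachable at http://localhost:3001 (per the -p 3001:3001 mapping above).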
sudo apt update
sudo apt install nodejs npm
https://github.com/microsoft/autogen/tree/main/samples/apps/autogen-studio
npm install -g gatsby-cli
npm install --global yarn
cd frontend
yarn install
yarn build
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! lmdb@2.5.3 install: `node-gyp-build-optional-packages`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the lmdb@2.5.3 install script.
npm ERR! This is probably not a problem with npm.
npm ERR! There is likely additional logging output above.
In the end I simply gave up and built its Docker image to use instead
sudo docker build -f .devcontainer/full/Dockerfile -t autogen_full_img https://github.com/microsoft/autogen.git
sudo docker run -it --name autogen -v /opt/data:/opt/data -p 9999:9999 -p 8889:8889 --shm-size=32g autogen_full_img:latest /bin/bash
sudo docker exec -i -t autogen /bin/bash
./Anaconda3-2023.09-0-Linux-x86_64.sh
conda create -n autogen python=3.10
git clone https://github.com/microsoft/autogen.git
cd autogen
pip install -e .
cd samples/apps/autogen-studio/
pip install -e .
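Once installed, a minimal two-agent sketch, assuming pyautogen and an OpenAI-compatible local endpoint such as Ollama's; the model name, base_url, and api_key below are placeholders:

from autogen import AssistantAgent, UserProxyAgent

# Assumption: an OpenAI-compatible server (e.g. Ollama) is listening locally.
config_list = [{
    "model": "llama3",                        # placeholder model name
    "base_url": "http://localhost:11434/v1",  # Ollama's OpenAI-compatible endpoint
    "api_key": "ollama",                      # any non-empty string for a local server
}]

assistant = AssistantAgent("assistant", llm_config={"config_list": config_list})
user_proxy = UserProxyAgent("user_proxy", human_input_mode="NEVER",
                            code_execution_config=False)

# One round-trip: the proxy sends a task, the assistant answers, then we stop.
user_proxy.initiate_chat(assistant, message="Write a haiku about local LLMs.",
                         max_turns=2)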
jupyter notebook --generate-config
(first, generate the config file)
vi ~/.jupyter/jupyter_notebook_config.py
(then edit the relevant settings in it)
# Default: False
c.NotebookApp.allow_remote_access = True
(this is for remote access: remove the leading # comment and change the False after = to True)
jupyter notebook password
(this sets the login password/token for remote access)
Enter password: ****
Verify password: ****
[NotebookPasswordApp] Wrote hashed password to ~/.jupyter/jupyter_notebook_config.json
c.NotebookApp.port = xxxx
(this is the port number)
ipython kernel install --name "deepspeed" --user
(I have a conda env named deepspeed; this command registers it as a Jupyter kernel)
Then just launch it directly:
jupyter notebook
As for autogen, the launch command is:
autogenstudio ui --port 9872
2024/05/20: remembered that "TryCloudflare" is also a pretty handy way to do tunneling
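If cloudflared is installed, a quick tunnel can be opened with something like cloudflared tunnel --url http://localhost:11434 (the port here is an assumption; point it at whichever local service you are exposing).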
- Autogen: While Autogen excels in creating conversational agents capable of working together, it lacks an inherent concept of process. In Autogen, orchestrating agents' interactions requires additional programming, which can become complex and cumbersome as the scale of tasks grows.
- Autogen agents can only interact in the ways they were programmed; if agents need to interact differently, extra programming is required. Complex to program: programming the interactions of Autogen agents can get very involved, especially for complex tasks and logic. Poor scalability: the way Autogen agents are programmed does not lend itself to scaling; as tasks grow, programming the agents' interactions becomes harder.
- ChatDev: ChatDev introduced the idea of processes into the realm of AI agents, but its implementation is quite rigid. Customizations in ChatDev are limited and not geared towards production environments, which can hinder scalability and flexibility in real-world applications.
- Customization in ChatDev is limited, which means it may struggle to meet the needs of complex applications. Not production-ready: ChatDev is not suited to production environments, so it may fall short of real-world application requirements. Poor scalability: ChatDev's implementation does not scale well; as tasks grow, it may struggle to keep up with an application's demands.
Overall, compared with Autogen and ChatDev as tools for building conversational agents, CrewAI is built with production in mind: it offers the flexibility of Autogen's conversational agents while adopting ChatDev's structured process approach, without sacrificing that flexibility. CrewAI is built around a dynamic process design philosophy, is highly adaptable, and fits seamlessly into development and production workflows, so workflows can be adjusted and optimized for different scenarios and business needs.
https://github.com/Deep-Learning-101/Natural-Language-Processing-Paper#llm
Inference acceleration / Introduction to DeepSpeed / Microsoft DeepSpeed
https://huggingface.co/docs/transformers/main_classes/deepspeed
DeepSpeed's ZeRO series: pushing VRAM optimization all the way
ZeRO-Infinity: Breaking the GPU Memory Wall for Extreme Scale Deep Learning
https://github.com/intel/intel-extension-for-transformers