Mirror of https://github.com/langgenius/dify.git, synced 2024-11-16 03:32:23 +08:00

Chore/improve deployment flow (#4299)
Co-authored-by: 天魂 <365125264@qq.com>

This commit is contained in:
parent dd5f3873da
commit 488e3c3d56

13  .github/workflows/api-tests.yml  (vendored)
@@ -55,6 +55,11 @@ jobs:
      - name: Run Tool
        run: poetry run -C api bash dev/pytest/pytest_tools.sh

      - name: Set up dotenvs
        run: |
          cp docker/.env.example docker/.env
          cp docker/middleware.env.example docker/middleware.env

      - name: Set up Sandbox
        uses: hoverkraft-tech/compose-action@v2.0.0
        with:
@@ -71,12 +76,7 @@ jobs:
        uses: hoverkraft-tech/compose-action@v2.0.0
        with:
          compose-file: |
            docker/docker-compose.middleware.yaml
            docker/docker-compose.qdrant.yaml
            docker/docker-compose.milvus.yaml
            docker/docker-compose.pgvecto-rs.yaml
            docker/docker-compose.pgvector.yaml
            docker/docker-compose.chroma.yaml
            docker/docker-compose.yaml
          services: |
            weaviate
            qdrant
@@ -86,6 +86,5 @@ jobs:
            pgvecto-rs
            pgvector
            chroma

      - name: Test Vector Stores
        run: poetry run -C api bash dev/pytest/pytest_vdb.sh
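For local debugging, the vector-store job above can be approximated outside CI. This is a sketch only, since the workflow drives Docker through the compose-action wrapper rather than calling `docker compose` directly, and it only starts a subset of the stores listed in the workflow:

```bash
# Approximate local equivalent of the CI steps above (run from the repo root).
cp docker/.env.example docker/.env
cp docker/middleware.env.example docker/middleware.env

# Bring up some of the stores the job uses; add the other compose files from
# the workflow as needed.
docker compose -f docker/docker-compose.middleware.yaml \
               -f docker/docker-compose.qdrant.yaml \
               -f docker/docker-compose.pgvector.yaml \
               up -d weaviate qdrant pgvector

# Run the vector-store test suite.
poetry run -C api bash dev/pytest/pytest_vdb.sh
```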
@@ -174,6 +174,7 @@ The easiest way to start the Dify server is to run our [docker-compose.yml](dock

```bash
cd docker
cp .env.example .env
docker compose up -d
```

@@ -183,7 +184,7 @@ After running, you can access the Dify dashboard in your browser at [http://loca

## Next steps

If you need to customize the configuration, please refer to the comments in our [docker-compose.yml](docker/docker-compose.yaml) file and manually set the environment configuration. After making the changes, please run `docker-compose up -d` again. You can see the full list of environment variables [here](https://docs.dify.ai/getting-started/install-self-hosted/environments).
If you need to customize the configuration, please refer to the comments in our [.env.example](docker/.env.example) file and update the corresponding values in your `.env` file. Additionally, you might need to make adjustments to the `docker-compose.yaml` file itself, such as changing image versions, port mappings, or volume mounts, based on your specific deployment environment and requirements. After making any changes, please re-run `docker-compose up -d`. You can find the full list of available environment variables [here](https://docs.dify.ai/getting-started/install-self-hosted/environments).

If you'd like to configure a highly-available setup, there are community-contributed [Helm Charts](https://helm.sh/) and YAML files which allow Dify to be deployed on Kubernetes.
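A minimal sketch of that customize-and-reapply loop, assuming Docker Compose v2 and using `LOG_LEVEL` (one of the variables documented in `.env.example`) as the value being changed:

```bash
cd docker
cp .env.example .env
# Edit .env here, e.g. set LOG_LEVEL=DEBUG; see .env.example for the full list.
docker compose up -d
```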
@@ -157,15 +157,17 @@

```bash
cd docker
cp .env.example .env
docker compose up -d
```

After running, you can access the Dify dashboard in your browser at [http://localhost/install](http://localhost/install) and start the initialization process.

> If you would like to contribute to Dify or do further development, see our [guide to deploying from source code](https://docs.dify.ai/getting-started/install-self-hosted/local-source-code)

## Next steps

If you need to customize the configuration, please refer to the comments in our [docker-compose.yml](docker/docker-compose.yaml) file and set the environment configuration manually. After making the changes, please run `docker-compose up -d` again. You can see the full list of environment variables [here](https://docs.dify.ai/getting-started/install-self-hosted/environments).
If you need to customize the configuration, please refer to the comments in the [.env.example](docker/.env.example) file and update the corresponding values in your `.env` file. In addition, you may need to adjust the `docker-compose.yaml` file itself, such as image versions, port mappings, or volume mounts, depending on your deployment environment and requirements. After making any changes, please re-run `docker-compose up -d`. You can find the full list of available environment variables [here](https://docs.dify.ai/getting-started/install-self-hosted/environments).

There are community-contributed [Helm Charts](https://helm.sh/) and YAML files that allow Dify to be deployed on Kubernetes for a highly available setup.
@@ -179,11 +179,16 @@ Dify is an open-source LLM application development platform. Its intuitive interface combines AI

```bash
cd docker
cp .env.example .env
docker compose up -d
```

After running, you can visit [http://localhost/install](http://localhost/install) in your browser to enter the Dify console and start the initialization process.

### Custom configuration

If you need to customize the configuration, please refer to the comments in the [.env.example](docker/.env.example) file and update the corresponding values in your `.env` file. In addition, you may need to adjust the `docker-compose.yaml` file itself, such as image versions, port mappings, or volume mounts, depending on your specific deployment environment and requirements. After making any changes, please re-run `docker-compose up -d`. You can find the full list of available environment variables [here](https://docs.dify.ai/getting-started/install-self-hosted/environments).

#### Deploying with Helm Chart

Dify can be deployed on Kubernetes using the [Helm Chart](https://helm.sh/) version or YAML files.

@@ -192,10 +197,6 @@ docker compose up -d

- [Helm Chart by @BorisPolonsky](https://github.com/BorisPolonsky/dify-helm)
- [YAML files by @Winson-030](https://github.com/Winson-030/dify-kubernetes)
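For orientation, a rough sketch of the Helm route; the repository URL, chart name, and release name below are assumptions about the community project linked above, not something specified in this commit, so follow that project's README for the authoritative steps:

```bash
# Hypothetical example; repo URL, chart and release names are assumptions.
helm repo add dify https://borispolonsky.github.io/dify-helm
helm repo update
helm install my-dify dify/dify
```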
### Configuration

If you need to customize the configuration, please refer to the comments in our [docker-compose.yml](docker/docker-compose.yaml) file and set the environment configuration manually. After making the changes, please run `docker-compose up -d` again. You can see the full list of environment variables in our [documentation](https://docs.dify.ai/getting-started/install-self-hosted/environments).

## Star History

[![Star History Chart](https://api.star-history.com/svg?repos=langgenius/dify&type=Date)](https://star-history.com/#langgenius/dify&Date)
@@ -179,6 +179,7 @@ The easiest way to start the Dify server is to run our file

```bash
cd docker
cp .env.example .env
docker compose up -d
```

@@ -188,7 +189,7 @@ After running it, you can access the Dify dashboard in your browser

## Next steps

If you need to customize the configuration, check the comments in our [docker-compose.yml](docker/docker-compose.yaml) file and set the environment configuration manually. After making the changes, run `docker-compose up -d` again. You can see the full list of environment variables [here](https://docs.dify.ai/getting-started/install-self-hosted/environments).
If you need to customize the configuration, please refer to the comments in our [.env.example](docker/.env.example) file and update the corresponding values in your `.env` file. You may also need to adjust the `docker-compose.yaml` file itself, such as image versions, port mappings, or volume mounts, depending on your deployment environment and specific requirements. After making any changes, re-run `docker-compose up -d`. You can find the full list of available environment variables [here](https://docs.dify.ai/getting-started/install-self-hosted/environments).
@@ -179,6 +179,7 @@ The easiest way to start the Dify server is to run our file

```bash
cd docker
cp .env.example .env
docker compose up -d
```

@@ -188,9 +189,7 @@ After running, you can access the Dify dashboard in your browser

## Next steps

If you need to customize the configuration, please refer to the comments in our [docker-compose.yml](docker/docker-compose.yaml) file and set the environment configuration manually. After making the changes, please run `docker-compose up -d` again. You can see the full list of environment variables [here](https://docs.dify.ai/getting-started/install-self-hosted/environments).
If you need to customize the configuration, please refer to the comments in our [.env.example](docker/.env.example) file and update the corresponding values in your `.env` file. You may also need to make changes to the `docker-compose.yaml` file itself, such as changing image versions, port mappings, or volume mounts, depending on your deployment environment and specific requirements. After making changes, please re-run `docker-compose up -d`. You can find the full list of available environment variables [here](https://docs.dify.ai/getting-started/install-self-hosted/environments).

If you want to configure a highly available setup, the community provides [Helm Charts](https://helm.sh/) and YAML files with which you can deploy Dify on Kubernetes.
@@ -178,6 +178,7 @@ The easiest way to start the Dify server is to run [docker-compose.yml](d

```bash
cd docker
cp .env.example .env
docker compose up -d
```

@@ -187,7 +188,7 @@ docker compose up -d

## Next steps

If you want to customize the environment settings, refer to the comments in the [docker-compose.yml](docker/docker-compose.yaml) file and set the environment configuration manually. After making changes, run `docker-compose up -d` again. The full list of environment variables is available [here](https://docs.dify.ai/getting-started/install-self-hosted/environments).
If you need to customize the configuration, refer to the comments in the [.env.example](docker/.env.example) file and update the corresponding values in your `.env` file. You may also need to adjust the `docker-compose.yaml` file itself, such as image versions, port mappings, or volume mounts, depending on your deployment environment and requirements. After making changes, re-run `docker-compose up -d`. The full list of available environment variables can be found [here](https://docs.dify.ai/getting-started/install-self-hosted/environments).

If you need a highly available setup, the community provides [Helm Charts](https://helm.sh/) and YAML files that allow Dify to be deployed on Kubernetes.
@@ -179,6 +179,7 @@ The easiest way to start the Dify server is to run our [docker-compose.yml](dock

```bash
cd docker
cp .env.example .env
docker compose up -d
```

@@ -188,7 +189,7 @@ After running, you can access the Dify dashboard in your browser at [http://loca

## Next steps

If you need to customize the configuration, please refer to the comments in our [docker-compose.yml](docker/docker-compose.yaml) file and manually set the environment configuration. After making the changes, please run `docker-compose up -d` again. You can see the full list of environment variables [here](https://docs.dify.ai/getting-started/install-self-hosted/environments).
If you need to customize the configuration, please refer to the comments in our [.env.example](docker/.env.example) file and update the corresponding values in your `.env` file. Additionally, you might need to make adjustments to the `docker-compose.yaml` file itself, such as changing image versions, port mappings, or volume mounts, based on your specific deployment environment and requirements. After making any changes, please re-run `docker-compose up -d`. You can find the full list of available environment variables [here](https://docs.dify.ai/getting-started/install-self-hosted/environments).

If you'd like to configure a highly-available setup, there are community-contributed [Helm Charts](https://helm.sh/) and YAML files which allow Dify to be deployed on Kubernetes.
@@ -172,6 +172,7 @@ The easiest way to start the Dify server is to run [docker-compose.yml](docker/

```bash
cd docker
cp .env.example .env
docker compose up -d
```

@@ -181,8 +182,7 @@ docker compose up -d

## Next steps

If you need to customize the configuration, refer to the comments in the [docker-compose.yml](docker/docker-compose.yaml) file and set the environment configuration manually. After making changes, run `docker-compose up -d` again. The full list of environment variables can be found [here](https://docs.dify.ai/getting-started/install-self-hosted/environments).

If you need to customize the configuration, refer to the comments in the [.env.example](docker/.env.example) file and update the corresponding values in your `.env` file. You may also need to adjust the `docker-compose.yaml` file itself, for example changing image versions, port mappings, or volume mounts, depending on your specific deployment environment and requirements. After making changes, re-run `docker-compose up -d`. The full list of available environment variables can be found [here](https://docs.dify.ai/getting-started/install-self-hosted/environments).

There are community-provided [Helm Charts](https://helm.sh/) and YAML files for deploying Dify on Kubernetes and configuring a highly available setup.
109  docker-legacy/docker-compose.middleware.yaml  Normal file

@@ -0,0 +1,109 @@
version: '3'
services:
  # The postgres database.
  db:
    image: postgres:15-alpine
    restart: always
    environment:
      # The password for the default postgres user.
      POSTGRES_PASSWORD: difyai123456
      # The name of the default postgres database.
      POSTGRES_DB: dify
      # postgres data directory
      PGDATA: /var/lib/postgresql/data/pgdata
    volumes:
      - ./volumes/db/data:/var/lib/postgresql/data
    ports:
      - "5432:5432"

  # The redis cache.
  redis:
    image: redis:6-alpine
    restart: always
    volumes:
      # Mount the redis data directory to the container.
      - ./volumes/redis/data:/data
    # Set the redis password when starting the redis server.
    command: redis-server --requirepass difyai123456
    ports:
      - "6379:6379"

  # The Weaviate vector store.
  weaviate:
    image: semitechnologies/weaviate:1.19.0
    restart: always
    volumes:
      # Mount the Weaviate data directory to the container.
      - ./volumes/weaviate:/var/lib/weaviate
    environment:
      # The Weaviate configurations
      # You can refer to the [Weaviate](https://weaviate.io/developers/weaviate/config-refs/env-vars) documentation for more information.
      QUERY_DEFAULTS_LIMIT: 25
      AUTHENTICATION_ANONYMOUS_ACCESS_ENABLED: 'false'
      PERSISTENCE_DATA_PATH: '/var/lib/weaviate'
      DEFAULT_VECTORIZER_MODULE: 'none'
      CLUSTER_HOSTNAME: 'node1'
      AUTHENTICATION_APIKEY_ENABLED: 'true'
      AUTHENTICATION_APIKEY_ALLOWED_KEYS: 'WVF5YThaHlkYwhGUSmCRgsX3tD5ngdN8pkih'
      AUTHENTICATION_APIKEY_USERS: 'hello@dify.ai'
      AUTHORIZATION_ADMINLIST_ENABLED: 'true'
      AUTHORIZATION_ADMINLIST_USERS: 'hello@dify.ai'
    ports:
      - "8080:8080"

  # The DifySandbox
  sandbox:
    image: langgenius/dify-sandbox:0.2.1
    restart: always
    environment:
      # The DifySandbox configurations
      # Make sure you are changing this key for your deployment with a strong key.
      # You can generate a strong key using `openssl rand -base64 42`.
      API_KEY: dify-sandbox
      GIN_MODE: 'release'
      WORKER_TIMEOUT: 15
      ENABLE_NETWORK: 'true'
      HTTP_PROXY: 'http://ssrf_proxy:3128'
      HTTPS_PROXY: 'http://ssrf_proxy:3128'
      SANDBOX_PORT: 8194
    volumes:
      - ./volumes/sandbox/dependencies:/dependencies
    networks:
      - ssrf_proxy_network

  # ssrf_proxy server
  # for more information, please refer to
  # https://docs.dify.ai/getting-started/install-self-hosted/install-faq#id-16.-why-is-ssrf_proxy-needed
  ssrf_proxy:
    image: ubuntu/squid:latest
    restart: always
    ports:
      - "3128:3128"
      - "8194:8194"
    volumes:
      # Please modify the squid.conf file to fit your network environment.
      - ./volumes/ssrf_proxy/squid.conf:/etc/squid/squid.conf
    networks:
      - ssrf_proxy_network
      - default
  # Qdrant vector store.
  # uncomment to use qdrant as vector store.
  # (if uncommented, you need to comment out the weaviate service above,
  # and set VECTOR_STORE to qdrant in the api & worker service.)
  # qdrant:
  #   image: qdrant/qdrant:1.7.3
  #   restart: always
  #   volumes:
  #     - ./volumes/qdrant:/qdrant/storage
  #   environment:
  #     QDRANT_API_KEY: 'difyai123456'
  #   ports:
  #     - "6333:6333"
  #     - "6334:6334"


networks:
  # Create a network between sandbox, api and ssrf_proxy that cannot access the outside.
  ssrf_proxy_network:
    driver: bridge
    internal: true
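A minimal sketch of bringing this middleware stack up and spot-checking it, assuming it is run from the docker-legacy directory with the default credentials above left unchanged:

```bash
cd docker-legacy
docker compose -f docker-compose.middleware.yaml up -d
docker compose -f docker-compose.middleware.yaml ps

# Postgres and Redis use the default credentials defined above.
docker compose -f docker-compose.middleware.yaml exec db pg_isready -U postgres
docker compose -f docker-compose.middleware.yaml exec redis redis-cli -a difyai123456 ping

# Weaviate listens on 8080 with API-key authentication enabled.
curl -H "Authorization: Bearer WVF5YThaHlkYwhGUSmCRgsX3tD5ngdN8pkih" \
     http://localhost:8080/v1/.well-known/ready
```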
BIN  docker-legacy/docker-compose.png  Normal file

Binary file not shown. (After: 62 KiB)
588  docker-legacy/docker-compose.yaml  Normal file

@@ -0,0 +1,588 @@
version: '3'
|
||||
services:
|
||||
# API service
|
||||
api:
|
||||
image: langgenius/dify-api:0.6.11
|
||||
restart: always
|
||||
environment:
|
||||
# Startup mode, 'api' starts the API server.
|
||||
MODE: api
|
||||
# The log level for the application. Supported values are `DEBUG`, `INFO`, `WARNING`, `ERROR`, `CRITICAL`
|
||||
LOG_LEVEL: INFO
|
||||
# enable DEBUG mode to output more logs
|
||||
# DEBUG : true
|
||||
# A secret key that is used for securely signing the session cookie and encrypting sensitive information on the database. You can generate a strong key using `openssl rand -base64 42`.
|
||||
SECRET_KEY: sk-9f73s3ljTXVcMT3Blb3ljTqtsKiGHXVcMT3BlbkFJLK7U
|
||||
# The base URL of console application web frontend, refers to the Console base URL of WEB service if console domain is
|
||||
# different from api or web app domain.
|
||||
# example: http://cloud.dify.ai
|
||||
CONSOLE_WEB_URL: ''
|
||||
# Password for admin user initialization.
|
||||
# If left unset, admin user will not be prompted for a password when creating the initial admin account.
|
||||
INIT_PASSWORD: ''
|
||||
# The base URL of console application api server, refers to the Console base URL of WEB service if console domain is
|
||||
# different from api or web app domain.
|
||||
# example: http://cloud.dify.ai
|
||||
CONSOLE_API_URL: ''
|
||||
# The URL prefix for Service API endpoints, refers to the base URL of the current API service if api domain is
|
||||
# different from console domain.
|
||||
# example: http://api.dify.ai
|
||||
SERVICE_API_URL: ''
|
||||
# The URL prefix for Web APP frontend, refers to the Web App base URL of WEB service if web app domain is different from
|
||||
# console or api domain.
|
||||
# example: http://udify.app
|
||||
APP_WEB_URL: ''
|
||||
# File preview or download Url prefix.
|
||||
# used to display File preview or download Url to the front-end or as multi-modal inputs;
|
||||
# Url is signed and has expiration time.
|
||||
FILES_URL: ''
|
||||
# File Access Time specifies a time interval in seconds for the file to be accessed.
|
||||
# The default value is 300 seconds.
|
||||
FILES_ACCESS_TIMEOUT: 300
|
||||
# When enabled, migrations will be executed prior to application startup and the application will start after the migrations have completed.
|
||||
MIGRATION_ENABLED: 'true'
|
||||
# The configurations of postgres database connection.
|
||||
# It is consistent with the configuration in the 'db' service below.
|
||||
DB_USERNAME: postgres
|
||||
DB_PASSWORD: difyai123456
|
||||
DB_HOST: db
|
||||
DB_PORT: 5432
|
||||
DB_DATABASE: dify
|
||||
# The configurations of redis connection.
|
||||
# It is consistent with the configuration in the 'redis' service below.
|
||||
REDIS_HOST: redis
|
||||
REDIS_PORT: 6379
|
||||
REDIS_USERNAME: ''
|
||||
REDIS_PASSWORD: difyai123456
|
||||
REDIS_USE_SSL: 'false'
|
||||
# use redis db 0 for redis cache
|
||||
REDIS_DB: 0
|
||||
# The configurations of celery broker.
|
||||
# Use redis as the broker, and redis db 1 for celery broker.
|
||||
CELERY_BROKER_URL: redis://:difyai123456@redis:6379/1
|
||||
# Specifies the allowed origins for cross-origin requests to the Web API, e.g. https://dify.app or * for all origins.
|
||||
WEB_API_CORS_ALLOW_ORIGINS: '*'
|
||||
# Specifies the allowed origins for cross-origin requests to the console API, e.g. https://cloud.dify.ai or * for all origins.
|
||||
CONSOLE_CORS_ALLOW_ORIGINS: '*'
|
||||
# CSRF Cookie settings
|
||||
# Controls whether a cookie is sent with cross-site requests,
|
||||
# providing some protection against cross-site request forgery attacks
|
||||
#
|
||||
# Default: `SameSite=Lax, Secure=false, HttpOnly=true`
|
||||
# This default configuration supports same-origin requests using either HTTP or HTTPS,
|
||||
# but does not support cross-origin requests. It is suitable for local debugging purposes.
|
||||
#
|
||||
# If you want to enable cross-origin support,
|
||||
# you must use the HTTPS protocol and set the configuration to `SameSite=None, Secure=true, HttpOnly=true`.
|
||||
#
|
||||
# The type of storage to use for storing user files. Supported values are `local` and `s3` and `azure-blob` and `google-storage`, Default: `local`
|
||||
STORAGE_TYPE: local
|
||||
# The path to the local storage directory, the directory relative the root path of API service codes or absolute path. Default: `storage` or `/home/john/storage`.
|
||||
# only available when STORAGE_TYPE is `local`.
|
||||
STORAGE_LOCAL_PATH: storage
|
||||
# The S3 storage configurations, only available when STORAGE_TYPE is `s3`.
|
||||
S3_USE_AWS_MANAGED_IAM: 'false'
|
||||
S3_ENDPOINT: 'https://xxx.r2.cloudflarestorage.com'
|
||||
S3_BUCKET_NAME: 'difyai'
|
||||
S3_ACCESS_KEY: 'ak-difyai'
|
||||
S3_SECRET_KEY: 'sk-difyai'
|
||||
S3_REGION: 'us-east-1'
|
||||
# The Azure Blob storage configurations, only available when STORAGE_TYPE is `azure-blob`.
|
||||
AZURE_BLOB_ACCOUNT_NAME: 'difyai'
|
||||
AZURE_BLOB_ACCOUNT_KEY: 'difyai'
|
||||
AZURE_BLOB_CONTAINER_NAME: 'difyai-container'
|
||||
AZURE_BLOB_ACCOUNT_URL: 'https://<your_account_name>.blob.core.windows.net'
|
||||
# The Google storage configurations, only available when STORAGE_TYPE is `google-storage`.
|
||||
GOOGLE_STORAGE_BUCKET_NAME: 'your-bucket-name'
|
||||
# if you want to use Application Default Credentials, you can leave GOOGLE_STORAGE_SERVICE_ACCOUNT_JSON_BASE64 empty.
|
||||
GOOGLE_STORAGE_SERVICE_ACCOUNT_JSON_BASE64: 'your-google-service-account-json-base64-string'
|
||||
# The Alibaba Cloud OSS configurations, only available when STORAGE_TYPE is `aliyun-oss`
|
||||
ALIYUN_OSS_BUCKET_NAME: 'your-bucket-name'
|
||||
ALIYUN_OSS_ACCESS_KEY: 'your-access-key'
|
||||
ALIYUN_OSS_SECRET_KEY: 'your-secret-key'
|
||||
ALIYUN_OSS_ENDPOINT: 'https://oss-ap-southeast-1-internal.aliyuncs.com'
|
||||
ALIYUN_OSS_REGION: 'ap-southeast-1'
|
||||
ALIYUN_OSS_AUTH_VERSION: 'v4'
|
||||
# The Tencent COS storage configurations, only available when STORAGE_TYPE is `tencent-cos`.
|
||||
TENCENT_COS_BUCKET_NAME: 'your-bucket-name'
|
||||
TENCENT_COS_SECRET_KEY: 'your-secret-key'
|
||||
TENCENT_COS_SECRET_ID: 'your-secret-id'
|
||||
TENCENT_COS_REGION: 'your-region'
|
||||
TENCENT_COS_SCHEME: 'your-scheme'
|
||||
# The type of vector store to use. Supported values are `weaviate`, `qdrant`, `milvus`, `relyt`, `pgvector`, `chroma`, `opensearch`, `tidb_vector`.
|
||||
VECTOR_STORE: weaviate
|
||||
# The Weaviate endpoint URL. Only available when VECTOR_STORE is `weaviate`.
|
||||
WEAVIATE_ENDPOINT: http://weaviate:8080
|
||||
# The Weaviate API key.
|
||||
WEAVIATE_API_KEY: WVF5YThaHlkYwhGUSmCRgsX3tD5ngdN8pkih
|
||||
# The Qdrant endpoint URL. Only available when VECTOR_STORE is `qdrant`.
|
||||
QDRANT_URL: http://qdrant:6333
|
||||
# The Qdrant API key.
|
||||
QDRANT_API_KEY: difyai123456
|
||||
# The Qdrant client timeout setting.
|
||||
QDRANT_CLIENT_TIMEOUT: 20
|
||||
# The Qdrant client enable gRPC mode.
|
||||
QDRANT_GRPC_ENABLED: 'false'
|
||||
# The Qdrant server gRPC mode PORT.
|
||||
QDRANT_GRPC_PORT: 6334
|
||||
# Milvus configuration Only available when VECTOR_STORE is `milvus`.
|
||||
# The milvus host.
|
||||
MILVUS_HOST: 127.0.0.1
|
||||
# The milvus port.
|
||||
MILVUS_PORT: 19530
|
||||
# The milvus username.
|
||||
MILVUS_USER: root
|
||||
# The milvus password.
|
||||
MILVUS_PASSWORD: Milvus
|
||||
# The milvus tls switch.
|
||||
MILVUS_SECURE: 'false'
|
||||
# relyt configurations
|
||||
RELYT_HOST: db
|
||||
RELYT_PORT: 5432
|
||||
RELYT_USER: postgres
|
||||
RELYT_PASSWORD: difyai123456
|
||||
RELYT_DATABASE: postgres
|
||||
# pgvector configurations
|
||||
PGVECTOR_HOST: pgvector
|
||||
PGVECTOR_PORT: 5432
|
||||
PGVECTOR_USER: postgres
|
||||
PGVECTOR_PASSWORD: difyai123456
|
||||
PGVECTOR_DATABASE: dify
|
||||
# tidb vector configurations
|
||||
TIDB_VECTOR_HOST: tidb
|
||||
TIDB_VECTOR_PORT: 4000
|
||||
TIDB_VECTOR_USER: xxx.root
|
||||
TIDB_VECTOR_PASSWORD: xxxxxx
|
||||
TIDB_VECTOR_DATABASE: dify
|
||||
# oracle configurations
|
||||
ORACLE_HOST: oracle
|
||||
ORACLE_PORT: 1521
|
||||
ORACLE_USER: dify
|
||||
ORACLE_PASSWORD: dify
|
||||
ORACLE_DATABASE: FREEPDB1
|
||||
# Chroma configuration
|
||||
CHROMA_HOST: 127.0.0.1
|
||||
CHROMA_PORT: 8000
|
||||
CHROMA_TENANT: default_tenant
|
||||
CHROMA_DATABASE: default_database
|
||||
CHROMA_AUTH_PROVIDER: chromadb.auth.token_authn.TokenAuthClientProvider
|
||||
CHROMA_AUTH_CREDENTIALS: xxxxxx
|
||||
# Mail configuration, support: resend, smtp
|
||||
MAIL_TYPE: ''
|
||||
# default send from email address, if not specified
|
||||
MAIL_DEFAULT_SEND_FROM: 'YOUR EMAIL FROM (eg: no-reply <no-reply@dify.ai>)'
|
||||
SMTP_SERVER: ''
|
||||
SMTP_PORT: 465
|
||||
SMTP_USERNAME: ''
|
||||
SMTP_PASSWORD: ''
|
||||
SMTP_USE_TLS: 'true'
|
||||
SMTP_OPPORTUNISTIC_TLS: 'false'
|
||||
# the api-key for resend (https://resend.com)
|
||||
RESEND_API_KEY: ''
|
||||
RESEND_API_URL: https://api.resend.com
|
||||
# The DSN for Sentry error reporting. If not set, Sentry error reporting will be disabled.
|
||||
SENTRY_DSN: ''
|
||||
# The sample rate for Sentry events. Default: `1.0`
|
||||
SENTRY_TRACES_SAMPLE_RATE: 1.0
|
||||
# The sample rate for Sentry profiles. Default: `1.0`
|
||||
SENTRY_PROFILES_SAMPLE_RATE: 1.0
|
||||
# Notion import configuration, support public and internal
|
||||
NOTION_INTEGRATION_TYPE: public
|
||||
NOTION_CLIENT_SECRET: your-client-secret
NOTION_CLIENT_ID: your-client-id
NOTION_INTERNAL_SECRET: your-internal-secret
|
||||
# The sandbox service endpoint.
|
||||
CODE_EXECUTION_ENDPOINT: "http://sandbox:8194"
|
||||
CODE_EXECUTION_API_KEY: dify-sandbox
|
||||
CODE_MAX_NUMBER: 9223372036854775807
|
||||
CODE_MIN_NUMBER: -9223372036854775808
|
||||
CODE_MAX_STRING_LENGTH: 80000
|
||||
TEMPLATE_TRANSFORM_MAX_LENGTH: 80000
|
||||
CODE_MAX_STRING_ARRAY_LENGTH: 30
|
||||
CODE_MAX_OBJECT_ARRAY_LENGTH: 30
|
||||
CODE_MAX_NUMBER_ARRAY_LENGTH: 1000
|
||||
# SSRF Proxy server
|
||||
SSRF_PROXY_HTTP_URL: 'http://ssrf_proxy:3128'
|
||||
SSRF_PROXY_HTTPS_URL: 'http://ssrf_proxy:3128'
|
||||
# Indexing configuration
|
||||
INDEXING_MAX_SEGMENTATION_TOKENS_LENGTH: 1000
|
||||
depends_on:
|
||||
- db
|
||||
- redis
|
||||
volumes:
|
||||
# Mount the storage directory to the container, for storing user files.
|
||||
- ./volumes/app/storage:/app/api/storage
|
||||
# uncomment to expose dify-api port to host
|
||||
# ports:
|
||||
# - "5001:5001"
|
||||
networks:
|
||||
- ssrf_proxy_network
|
||||
- default
|
||||
|
||||
# worker service
|
||||
# The Celery worker for processing the queue.
|
||||
worker:
|
||||
image: langgenius/dify-api:0.6.11
|
||||
restart: always
|
||||
environment:
|
||||
CONSOLE_WEB_URL: ''
|
||||
# Startup mode, 'worker' starts the Celery worker for processing the queue.
|
||||
MODE: worker
|
||||
|
||||
# --- All the configurations below are the same as those in the 'api' service. ---
|
||||
|
||||
# The log level for the application. Supported values are `DEBUG`, `INFO`, `WARNING`, `ERROR`, `CRITICAL`
|
||||
LOG_LEVEL: INFO
|
||||
# A secret key that is used for securely signing the session cookie and encrypting sensitive information on the database. You can generate a strong key using `openssl rand -base64 42`.
|
||||
# same as the API service
|
||||
SECRET_KEY: sk-9f73s3ljTXVcMT3Blb3ljTqtsKiGHXVcMT3BlbkFJLK7U
|
||||
# The configurations of postgres database connection.
|
||||
# It is consistent with the configuration in the 'db' service below.
|
||||
DB_USERNAME: postgres
|
||||
DB_PASSWORD: difyai123456
|
||||
DB_HOST: db
|
||||
DB_PORT: 5432
|
||||
DB_DATABASE: dify
|
||||
# The configurations of redis cache connection.
|
||||
REDIS_HOST: redis
|
||||
REDIS_PORT: 6379
|
||||
REDIS_USERNAME: ''
|
||||
REDIS_PASSWORD: difyai123456
|
||||
REDIS_DB: 0
|
||||
REDIS_USE_SSL: 'false'
|
||||
# The configurations of celery broker.
|
||||
CELERY_BROKER_URL: redis://:difyai123456@redis:6379/1
|
||||
# The type of storage to use for storing user files. Supported values are `local` and `s3` and `azure-blob` and `google-storage`, Default: `local`
|
||||
STORAGE_TYPE: local
|
||||
STORAGE_LOCAL_PATH: storage
|
||||
# The S3 storage configurations, only available when STORAGE_TYPE is `s3`.
|
||||
S3_USE_AWS_MANAGED_IAM: 'false'
|
||||
S3_ENDPOINT: 'https://xxx.r2.cloudflarestorage.com'
|
||||
S3_BUCKET_NAME: 'difyai'
|
||||
S3_ACCESS_KEY: 'ak-difyai'
|
||||
S3_SECRET_KEY: 'sk-difyai'
|
||||
S3_REGION: 'us-east-1'
|
||||
# The Azure Blob storage configurations, only available when STORAGE_TYPE is `azure-blob`.
|
||||
AZURE_BLOB_ACCOUNT_NAME: 'difyai'
|
||||
AZURE_BLOB_ACCOUNT_KEY: 'difyai'
|
||||
AZURE_BLOB_CONTAINER_NAME: 'difyai-container'
|
||||
AZURE_BLOB_ACCOUNT_URL: 'https://<your_account_name>.blob.core.windows.net'
|
||||
# The Google storage configurations, only available when STORAGE_TYPE is `google-storage`.
|
||||
GOOGLE_STORAGE_BUCKET_NAME: 'your-bucket-name'
|
||||
# if you want to use Application Default Credentials, you can leave GOOGLE_STORAGE_SERVICE_ACCOUNT_JSON_BASE64 empty.
|
||||
GOOGLE_STORAGE_SERVICE_ACCOUNT_JSON_BASE64: 'your-google-service-account-json-base64-string'
|
||||
# The Alibaba Cloud OSS configurations, only available when STORAGE_TYPE is `aliyun-oss`
|
||||
ALIYUN_OSS_BUCKET_NAME: 'your-bucket-name'
|
||||
ALIYUN_OSS_ACCESS_KEY: 'your-access-key'
|
||||
ALIYUN_OSS_SECRET_KEY: 'your-secret-key'
|
||||
ALIYUN_OSS_ENDPOINT: 'https://oss-ap-southeast-1-internal.aliyuncs.com'
|
||||
ALIYUN_OSS_REGION: 'ap-southeast-1'
|
||||
ALIYUN_OSS_AUTH_VERSION: 'v4'
|
||||
# The Tencent COS storage configurations, only available when STORAGE_TYPE is `tencent-cos`.
|
||||
TENCENT_COS_BUCKET_NAME: 'your-bucket-name'
|
||||
TENCENT_COS_SECRET_KEY: 'your-secret-key'
|
||||
TENCENT_COS_SECRET_ID: 'your-secret-id'
|
||||
TENCENT_COS_REGION: 'your-region'
|
||||
TENCENT_COS_SCHEME: 'your-scheme'
|
||||
# The type of vector store to use. Supported values are `weaviate`, `qdrant`, `milvus`, `relyt`, `pgvector`, `chroma`, 'opensearch', 'tidb_vector'.
|
||||
VECTOR_STORE: weaviate
|
||||
# The Weaviate endpoint URL. Only available when VECTOR_STORE is `weaviate`.
|
||||
WEAVIATE_ENDPOINT: http://weaviate:8080
|
||||
# The Weaviate API key.
|
||||
WEAVIATE_API_KEY: WVF5YThaHlkYwhGUSmCRgsX3tD5ngdN8pkih
|
||||
# The Qdrant endpoint URL. Only available when VECTOR_STORE is `qdrant`.
|
||||
QDRANT_URL: http://qdrant:6333
|
||||
# The Qdrant API key.
|
||||
QDRANT_API_KEY: difyai123456
|
||||
# The Qdrant client timeout setting.
|
||||
QDRANT_CLIENT_TIMEOUT: 20
|
||||
# The Qdrant client enable gRPC mode.
|
||||
QDRANT_GRPC_ENABLED: 'false'
|
||||
# The Qdrant server gRPC mode PORT.
|
||||
QDRANT_GRPC_PORT: 6334
|
||||
# Milvus configuration Only available when VECTOR_STORE is `milvus`.
|
||||
# The milvus host.
|
||||
MILVUS_HOST: 127.0.0.1
|
||||
# The milvus port.
|
||||
MILVUS_PORT: 19530
|
||||
# The milvus username.
|
||||
MILVUS_USER: root
|
||||
# The milvus password.
|
||||
MILVUS_PASSWORD: Milvus
|
||||
# The milvus tls switch.
|
||||
MILVUS_SECURE: 'false'
|
||||
# Mail configuration, support: resend
|
||||
MAIL_TYPE: ''
|
||||
# default send from email address, if not specified
|
||||
MAIL_DEFAULT_SEND_FROM: 'YOUR EMAIL FROM (eg: no-reply <no-reply@dify.ai>)'
|
||||
SMTP_SERVER: ''
|
||||
SMTP_PORT: 465
|
||||
SMTP_USERNAME: ''
|
||||
SMTP_PASSWORD: ''
|
||||
SMTP_USE_TLS: 'true'
|
||||
SMTP_OPPORTUNISTIC_TLS: 'false'
|
||||
# the api-key for resend (https://resend.com)
|
||||
RESEND_API_KEY: ''
|
||||
RESEND_API_URL: https://api.resend.com
|
||||
# relyt configurations
|
||||
RELYT_HOST: db
|
||||
RELYT_PORT: 5432
|
||||
RELYT_USER: postgres
|
||||
RELYT_PASSWORD: difyai123456
|
||||
RELYT_DATABASE: postgres
|
||||
# tencent configurations
|
||||
TENCENT_VECTOR_DB_URL: http://127.0.0.1
|
||||
TENCENT_VECTOR_DB_API_KEY: dify
|
||||
TENCENT_VECTOR_DB_TIMEOUT: 30
|
||||
TENCENT_VECTOR_DB_USERNAME: dify
|
||||
TENCENT_VECTOR_DB_DATABASE: dify
|
||||
TENCENT_VECTOR_DB_SHARD: 1
|
||||
TENCENT_VECTOR_DB_REPLICAS: 2
|
||||
# OpenSearch configuration
|
||||
OPENSEARCH_HOST: 127.0.0.1
|
||||
OPENSEARCH_PORT: 9200
|
||||
OPENSEARCH_USER: admin
|
||||
OPENSEARCH_PASSWORD: admin
|
||||
OPENSEARCH_SECURE: 'true'
|
||||
# pgvector configurations
|
||||
PGVECTOR_HOST: pgvector
|
||||
PGVECTOR_PORT: 5432
|
||||
PGVECTOR_USER: postgres
|
||||
PGVECTOR_PASSWORD: difyai123456
|
||||
PGVECTOR_DATABASE: dify
|
||||
# tidb vector configurations
|
||||
TIDB_VECTOR_HOST: tidb
|
||||
TIDB_VECTOR_PORT: 4000
|
||||
TIDB_VECTOR_USER: xxx.root
|
||||
TIDB_VECTOR_PASSWORD: xxxxxx
|
||||
TIDB_VECTOR_DATABASE: dify
|
||||
# oracle configurations
|
||||
ORACLE_HOST: oracle
|
||||
ORACLE_PORT: 1521
|
||||
ORACLE_USER: dify
|
||||
ORACLE_PASSWORD: dify
|
||||
ORACLE_DATABASE: FREEPDB1
|
||||
# Chroma configuration
|
||||
CHROMA_HOST: 127.0.0.1
|
||||
CHROMA_PORT: 8000
|
||||
CHROMA_TENANT: default_tenant
|
||||
CHROMA_DATABASE: default_database
|
||||
CHROMA_AUTH_PROVIDER: chromadb.auth.token_authn.TokenAuthClientProvider
|
||||
CHROMA_AUTH_CREDENTIALS: xxxxxx
|
||||
# Notion import configuration, support public and internal
|
||||
NOTION_INTEGRATION_TYPE: public
|
||||
NOTION_CLIENT_SECRET: your-client-secret
NOTION_CLIENT_ID: your-client-id
NOTION_INTERNAL_SECRET: your-internal-secret
|
||||
# Indexing configuration
|
||||
INDEXING_MAX_SEGMENTATION_TOKENS_LENGTH: 1000
|
||||
depends_on:
|
||||
- db
|
||||
- redis
|
||||
volumes:
|
||||
# Mount the storage directory to the container, for storing user files.
|
||||
- ./volumes/app/storage:/app/api/storage
|
||||
networks:
|
||||
- ssrf_proxy_network
|
||||
- default
|
||||
|
||||
# Frontend web application.
|
||||
web:
|
||||
image: langgenius/dify-web:0.6.11
|
||||
restart: always
|
||||
environment:
|
||||
# The base URL of console application api server, refers to the Console base URL of WEB service if console domain is
|
||||
# different from api or web app domain.
|
||||
# example: http://cloud.dify.ai
|
||||
CONSOLE_API_URL: ''
|
||||
# The URL for Web APP api server, refers to the Web App base URL of WEB service if web app domain is different from
|
||||
# console or api domain.
|
||||
# example: http://udify.app
|
||||
APP_API_URL: ''
|
||||
# The DSN for Sentry error reporting. If not set, Sentry error reporting will be disabled.
|
||||
SENTRY_DSN: ''
|
||||
# uncomment to expose dify-web port to host
|
||||
# ports:
|
||||
# - "3000:3000"
|
||||
|
||||
# The postgres database.
|
||||
db:
|
||||
image: postgres:15-alpine
|
||||
restart: always
|
||||
environment:
|
||||
PGUSER: postgres
|
||||
# The password for the default postgres user.
|
||||
POSTGRES_PASSWORD: difyai123456
|
||||
# The name of the default postgres database.
|
||||
POSTGRES_DB: dify
|
||||
# postgres data directory
|
||||
PGDATA: /var/lib/postgresql/data/pgdata
|
||||
volumes:
|
||||
- ./volumes/db/data:/var/lib/postgresql/data
|
||||
# Notice: if you use Windows with WSL2, postgres may not work properly due to an NTFS issue. You can use a named volume for the data directory instead.
|
||||
# if you use the following config, you need to uncomment the volumes configuration below at the end of the file.
|
||||
# - postgres:/var/lib/postgresql/data
|
||||
# uncomment to expose db(postgresql) port to host
|
||||
# ports:
|
||||
# - "5432:5432"
|
||||
healthcheck:
|
||||
test: [ "CMD", "pg_isready" ]
|
||||
interval: 1s
|
||||
timeout: 3s
|
||||
retries: 30
|
||||
|
||||
# The redis cache.
|
||||
redis:
|
||||
image: redis:6-alpine
|
||||
restart: always
|
||||
volumes:
|
||||
# Mount the redis data directory to the container.
|
||||
- ./volumes/redis/data:/data
|
||||
# Set the redis password when starting the redis server.
|
||||
command: redis-server --requirepass difyai123456
|
||||
healthcheck:
|
||||
test: [ "CMD", "redis-cli", "ping" ]
|
||||
# uncomment to expose redis port to host
|
||||
# ports:
|
||||
# - "6379:6379"
|
||||
|
||||
# The Weaviate vector store.
|
||||
weaviate:
|
||||
image: semitechnologies/weaviate:1.19.0
|
||||
restart: always
|
||||
volumes:
|
||||
# Mount the Weaviate data directory to the container.
|
||||
- ./volumes/weaviate:/var/lib/weaviate
|
||||
environment:
|
||||
# The Weaviate configurations
|
||||
# You can refer to the [Weaviate](https://weaviate.io/developers/weaviate/config-refs/env-vars) documentation for more information.
|
||||
QUERY_DEFAULTS_LIMIT: 25
|
||||
AUTHENTICATION_ANONYMOUS_ACCESS_ENABLED: 'false'
|
||||
PERSISTENCE_DATA_PATH: '/var/lib/weaviate'
|
||||
DEFAULT_VECTORIZER_MODULE: 'none'
|
||||
CLUSTER_HOSTNAME: 'node1'
|
||||
AUTHENTICATION_APIKEY_ENABLED: 'true'
|
||||
AUTHENTICATION_APIKEY_ALLOWED_KEYS: 'WVF5YThaHlkYwhGUSmCRgsX3tD5ngdN8pkih'
|
||||
AUTHENTICATION_APIKEY_USERS: 'hello@dify.ai'
|
||||
AUTHORIZATION_ADMINLIST_ENABLED: 'true'
|
||||
AUTHORIZATION_ADMINLIST_USERS: 'hello@dify.ai'
|
||||
# uncomment to expose weaviate port to host
|
||||
# ports:
|
||||
# - "8080:8080"
|
||||
|
||||
# The DifySandbox
|
||||
sandbox:
|
||||
image: langgenius/dify-sandbox:0.2.1
|
||||
restart: always
|
||||
environment:
|
||||
# The DifySandbox configurations
|
||||
# Make sure you are changing this key for your deployment with a strong key.
|
||||
# You can generate a strong key using `openssl rand -base64 42`.
|
||||
API_KEY: dify-sandbox
|
||||
GIN_MODE: 'release'
|
||||
WORKER_TIMEOUT: 15
|
||||
ENABLE_NETWORK: 'true'
|
||||
HTTP_PROXY: 'http://ssrf_proxy:3128'
|
||||
HTTPS_PROXY: 'http://ssrf_proxy:3128'
|
||||
SANDBOX_PORT: 8194
|
||||
volumes:
|
||||
- ./volumes/sandbox/dependencies:/dependencies
|
||||
networks:
|
||||
- ssrf_proxy_network
|
||||
|
||||
# ssrf_proxy server
|
||||
# for more information, please refer to
|
||||
# https://docs.dify.ai/getting-started/install-self-hosted/install-faq#id-16.-why-is-ssrf_proxy-needed
|
||||
ssrf_proxy:
|
||||
image: ubuntu/squid:latest
|
||||
restart: always
|
||||
volumes:
|
||||
# Please modify the squid.conf file to fit your network environment.
|
||||
- ./volumes/ssrf_proxy/squid.conf:/etc/squid/squid.conf
|
||||
networks:
|
||||
- ssrf_proxy_network
|
||||
- default
|
||||
# Qdrant vector store.
|
||||
# uncomment to use qdrant as vector store.
|
||||
# (if uncommented, you need to comment out the weaviate service above,
|
||||
# and set VECTOR_STORE to qdrant in the api & worker service.)
|
||||
# qdrant:
|
||||
# image: langgenius/qdrant:v1.7.3
|
||||
# restart: always
|
||||
# volumes:
|
||||
# - ./volumes/qdrant:/qdrant/storage
|
||||
# environment:
|
||||
# QDRANT_API_KEY: 'difyai123456'
|
||||
# # uncomment to expose qdrant port to host
|
||||
# # ports:
|
||||
# # - "6333:6333"
|
||||
# # - "6334:6334"
|
||||
|
||||
# The pgvector vector database.
|
||||
# Uncomment to use pgvector as the vector store.
|
||||
# pgvector:
|
||||
# image: pgvector/pgvector:pg16
|
||||
# restart: always
|
||||
# environment:
|
||||
# PGUSER: postgres
|
||||
# # The password for the default postgres user.
|
||||
# POSTGRES_PASSWORD: difyai123456
|
||||
# # The name of the default postgres database.
|
||||
# POSTGRES_DB: dify
|
||||
# # postgres data directory
|
||||
# PGDATA: /var/lib/postgresql/data/pgdata
|
||||
# volumes:
|
||||
# - ./volumes/pgvector/data:/var/lib/postgresql/data
|
||||
# # uncomment to expose db(postgresql) port to host
|
||||
# # ports:
|
||||
# # - "5433:5432"
|
||||
# healthcheck:
|
||||
# test: [ "CMD", "pg_isready" ]
|
||||
# interval: 1s
|
||||
# timeout: 3s
|
||||
# retries: 30
|
||||
|
||||
# The oracle vector database.
|
||||
# Uncomment to use Oracle 23ai as the vector store. You also need to uncomment the volumes block at the end of this file.
|
||||
# oracle:
|
||||
# image: container-registry.oracle.com/database/free:latest
|
||||
# restart: always
|
||||
# ports:
|
||||
# - 1521:1521
|
||||
# volumes:
|
||||
# - type: volume
|
||||
# source: oradata
|
||||
# target: /opt/oracle/oradata
|
||||
# - ./startupscripts:/opt/oracle/scripts/startup
|
||||
# environment:
|
||||
# - ORACLE_PWD=Dify123456
|
||||
# - ORACLE_CHARACTERSET=AL32UTF8
|
||||
|
||||
|
||||
# The nginx reverse proxy.
|
||||
# used for reverse proxying the API service and Web service.
|
||||
nginx:
|
||||
image: nginx:latest
|
||||
restart: always
|
||||
volumes:
|
||||
- ./nginx/nginx.conf:/etc/nginx/nginx.conf
|
||||
- ./nginx/proxy.conf:/etc/nginx/proxy.conf
|
||||
- ./nginx/conf.d:/etc/nginx/conf.d
|
||||
#- ./nginx/ssl:/etc/ssl
|
||||
depends_on:
|
||||
- api
|
||||
- web
|
||||
ports:
|
||||
- "80:80"
|
||||
#- "443:443"
|
||||
# Notice: if you use Windows with WSL2, postgres may not work properly due to an NTFS issue. You can use a named volume for the data directory instead.
|
||||
# volumes:
|
||||
# postgres:
|
||||
networks:
|
||||
# Create a network between sandbox, api and ssrf_proxy that cannot access the outside.
|
||||
ssrf_proxy_network:
|
||||
driver: bridge
|
||||
internal: true
|
||||
|
||||
#volumes:
|
||||
# oradata:
|
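Before starting this legacy stack, the comments above suggest replacing the default secrets; a small sketch of that flow, with placeholder steps rather than exact values:

```bash
# Generate strong values as suggested in the comments above, then paste them
# into docker-compose.yaml (SECRET_KEY for api/worker, API_KEY for the sandbox,
# CODE_EXECUTION_API_KEY for the api service) before bringing the stack up.
openssl rand -base64 42

cd docker-legacy
docker compose up -d
# The nginx service proxies the api and web services on port 80.
```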
1  docker-legacy/nginx/ssl/.gitkeep  Normal file

@@ -0,0 +1 @@
5  docker-legacy/startupscripts/create_user.sql  Executable file

@@ -0,0 +1,5 @@
show pdbs;
ALTER SYSTEM SET PROCESSES=500 SCOPE=SPFILE;
alter session set container= freepdb1;
create user dify identified by dify DEFAULT TABLESPACE users quota unlimited on users;
grant DB_DEVELOPER_ROLE to dify;
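When the commented-out oracle service in the legacy compose file is enabled, this script is mounted at /opt/oracle/scripts/startup and the image runs it automatically; a hypothetical manual invocation (service name and ORACLE_PWD taken from the compose file, sqlplus assumed to be on the container's PATH) could look like:

```bash
# Hypothetical manual run; normally the startup-scripts mount applies this for you.
docker compose exec oracle sqlplus "sys/Dify123456 as sysdba" \
    @/opt/oracle/scripts/startup/create_user.sql
```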
222  docker-legacy/volumes/opensearch/opensearch_dashboards.yml  Normal file

@@ -0,0 +1,222 @@
---
|
||||
# Copyright OpenSearch Contributors
|
||||
# SPDX-License-Identifier: Apache-2.0
|
||||
|
||||
# Description:
|
||||
# Default configuration for OpenSearch Dashboards
|
||||
|
||||
# OpenSearch Dashboards is served by a back end server. This setting specifies the port to use.
|
||||
# server.port: 5601
|
||||
|
||||
# Specifies the address to which the OpenSearch Dashboards server will bind. IP addresses and host names are both valid values.
|
||||
# The default is 'localhost', which usually means remote machines will not be able to connect.
|
||||
# To allow connections from remote users, set this parameter to a non-loopback address.
|
||||
# server.host: "localhost"
|
||||
|
||||
# Enables you to specify a path to mount OpenSearch Dashboards at if you are running behind a proxy.
|
||||
# Use the `server.rewriteBasePath` setting to tell OpenSearch Dashboards if it should remove the basePath
|
||||
# from requests it receives, and to prevent a deprecation warning at startup.
|
||||
# This setting cannot end in a slash.
|
||||
# server.basePath: ""
|
||||
|
||||
# Specifies whether OpenSearch Dashboards should rewrite requests that are prefixed with
|
||||
# `server.basePath` or require that they are rewritten by your reverse proxy.
|
||||
# server.rewriteBasePath: false
|
||||
|
||||
# The maximum payload size in bytes for incoming server requests.
|
||||
# server.maxPayloadBytes: 1048576
|
||||
|
||||
# The OpenSearch Dashboards server's name. This is used for display purposes.
|
||||
# server.name: "your-hostname"
|
||||
|
||||
# The URLs of the OpenSearch instances to use for all your queries.
|
||||
# opensearch.hosts: ["http://localhost:9200"]
|
||||
|
||||
# OpenSearch Dashboards uses an index in OpenSearch to store saved searches, visualizations and
|
||||
# dashboards. OpenSearch Dashboards creates a new index if the index doesn't already exist.
|
||||
# opensearchDashboards.index: ".opensearch_dashboards"
|
||||
|
||||
# The default application to load.
|
||||
# opensearchDashboards.defaultAppId: "home"
|
||||
|
||||
# Setting for an optimized healthcheck that only uses the local OpenSearch node to do Dashboards healthcheck.
|
||||
# This settings should be used for large clusters or for clusters with ingest heavy nodes.
|
||||
# It allows Dashboards to only healthcheck using the local OpenSearch node rather than fan out requests across all nodes.
|
||||
#
|
||||
# It requires the user to create an OpenSearch node attribute with the same name as the value used in the setting
|
||||
# This node attribute should assign all nodes of the same cluster an integer value that increments with each new cluster that is spun up
|
||||
# e.g. in opensearch.yml file you would set the value to a setting using node.attr.cluster_id:
|
||||
# Should only be enabled if there is a corresponding node attribute created in your OpenSearch config that matches the value here
|
||||
# opensearch.optimizedHealthcheckId: "cluster_id"
|
||||
|
||||
# If your OpenSearch is protected with basic authentication, these settings provide
|
||||
# the username and password that the OpenSearch Dashboards server uses to perform maintenance on the OpenSearch Dashboards
|
||||
# index at startup. Your OpenSearch Dashboards users still need to authenticate with OpenSearch, which
|
||||
# is proxied through the OpenSearch Dashboards server.
|
||||
# opensearch.username: "opensearch_dashboards_system"
|
||||
# opensearch.password: "pass"
|
||||
|
||||
# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
|
||||
# These settings enable SSL for outgoing requests from the OpenSearch Dashboards server to the browser.
|
||||
# server.ssl.enabled: false
|
||||
# server.ssl.certificate: /path/to/your/server.crt
|
||||
# server.ssl.key: /path/to/your/server.key
|
||||
|
||||
# Optional settings that provide the paths to the PEM-format SSL certificate and key files.
|
||||
# These files are used to verify the identity of OpenSearch Dashboards to OpenSearch and are required when
|
||||
# xpack.security.http.ssl.client_authentication in OpenSearch is set to required.
|
||||
# opensearch.ssl.certificate: /path/to/your/client.crt
|
||||
# opensearch.ssl.key: /path/to/your/client.key
|
||||
|
||||
# Optional setting that enables you to specify a path to the PEM file for the certificate
|
||||
# authority for your OpenSearch instance.
|
||||
# opensearch.ssl.certificateAuthorities: [ "/path/to/your/CA.pem" ]
|
||||
|
||||
# To disregard the validity of SSL certificates, change this setting's value to 'none'.
|
||||
# opensearch.ssl.verificationMode: full
|
||||
|
||||
# Time in milliseconds to wait for OpenSearch to respond to pings. Defaults to the value of
|
||||
# the opensearch.requestTimeout setting.
|
||||
# opensearch.pingTimeout: 1500
|
||||
|
||||
# Time in milliseconds to wait for responses from the back end or OpenSearch. This value
|
||||
# must be a positive integer.
|
||||
# opensearch.requestTimeout: 30000
|
||||
|
||||
# List of OpenSearch Dashboards client-side headers to send to OpenSearch. To send *no* client-side
|
||||
# headers, set this value to [] (an empty list).
|
||||
# opensearch.requestHeadersWhitelist: [ authorization ]
|
||||
|
||||
# Header names and values that are sent to OpenSearch. Any custom headers cannot be overwritten
|
||||
# by client-side headers, regardless of the opensearch.requestHeadersWhitelist configuration.
|
||||
# opensearch.customHeaders: {}
|
||||
|
||||
# Time in milliseconds for OpenSearch to wait for responses from shards. Set to 0 to disable.
|
||||
# opensearch.shardTimeout: 30000
|
||||
|
||||
# Logs queries sent to OpenSearch. Requires logging.verbose set to true.
|
||||
# opensearch.logQueries: false
|
||||
|
||||
# Specifies the path where OpenSearch Dashboards creates the process ID file.
|
||||
# pid.file: /var/run/opensearchDashboards.pid
|
||||
|
||||
# Enables you to specify a file where OpenSearch Dashboards stores log output.
|
||||
# logging.dest: stdout
|
||||
|
||||
# Set the value of this setting to true to suppress all logging output.
|
||||
# logging.silent: false
|
||||
|
||||
# Set the value of this setting to true to suppress all logging output other than error messages.
|
||||
# logging.quiet: false
|
||||
|
||||
# Set the value of this setting to true to log all events, including system usage information
|
||||
# and all requests.
|
||||
# logging.verbose: false
|
||||
|
||||
# Set the interval in milliseconds to sample system and process performance
|
||||
# metrics. Minimum is 100ms. Defaults to 5000.
|
||||
# ops.interval: 5000
|
||||
|
||||
# Specifies locale to be used for all localizable strings, dates and number formats.
|
||||
# Supported languages are the following: English - en , by default , Chinese - zh-CN .
|
||||
# i18n.locale: "en"
|
||||
|
||||
# Set the allowlist to check input graphite Url. Allowlist is the default check list.
|
||||
# vis_type_timeline.graphiteAllowedUrls: ['https://www.hostedgraphite.com/UID/ACCESS_KEY/graphite']
|
||||
|
||||
# Set the blocklist to check input graphite Url. Blocklist is an IP list.
|
||||
# Below is an example for reference
|
||||
# vis_type_timeline.graphiteBlockedIPs: [
|
||||
# //Loopback
|
||||
# '127.0.0.0/8',
|
||||
# '::1/128',
|
||||
# //Link-local Address for IPv6
|
||||
# 'fe80::/10',
|
||||
# //Private IP address for IPv4
|
||||
# '10.0.0.0/8',
|
||||
# '172.16.0.0/12',
|
||||
# '192.168.0.0/16',
|
||||
# //Unique local address (ULA)
|
||||
# 'fc00::/7',
|
||||
# //Reserved IP address
|
||||
# '0.0.0.0/8',
|
||||
# '100.64.0.0/10',
|
||||
# '192.0.0.0/24',
|
||||
# '192.0.2.0/24',
|
||||
# '198.18.0.0/15',
|
||||
# '192.88.99.0/24',
|
||||
# '198.51.100.0/24',
|
||||
# '203.0.113.0/24',
|
||||
# '224.0.0.0/4',
|
||||
# '240.0.0.0/4',
|
||||
# '255.255.255.255/32',
|
||||
# '::/128',
|
||||
# '2001:db8::/32',
|
||||
# 'ff00::/8',
|
||||
# ]
|
||||
# vis_type_timeline.graphiteBlockedIPs: []
|
||||
|
||||
# opensearchDashboards.branding:
|
||||
# logo:
|
||||
# defaultUrl: ""
|
||||
# darkModeUrl: ""
|
||||
# mark:
|
||||
# defaultUrl: ""
|
||||
# darkModeUrl: ""
|
||||
# loadingLogo:
|
||||
# defaultUrl: ""
|
||||
# darkModeUrl: ""
|
||||
# faviconUrl: ""
|
||||
# applicationTitle: ""
|
||||
|
||||
# Set the value of this setting to true to capture region blocked warnings and errors
|
||||
# for your map rendering services.
|
||||
# map.showRegionBlockedWarning: false
|
||||
|
||||
# Set the value of this setting to false to suppress search usage telemetry
|
||||
# for reducing the load of OpenSearch cluster.
|
||||
# data.search.usageTelemetry.enabled: false
|
||||
|
||||
# 2.4 renames 'wizard.enabled: false' to 'vis_builder.enabled: false'
|
||||
# Set the value of this setting to false to disable VisBuilder
|
||||
# functionality in Visualization.
|
||||
# vis_builder.enabled: false
|
||||
|
||||
# 2.4 New Experimental Feature
|
||||
# Set the value of this setting to true to enable the experimental multiple data source
|
||||
# support feature. Use with caution.
|
||||
# data_source.enabled: false
|
||||
# Set the value of these settings to customize crypto materials to encryption saved credentials
|
||||
# in data sources.
|
||||
# data_source.encryption.wrappingKeyName: 'changeme'
|
||||
# data_source.encryption.wrappingKeyNamespace: 'changeme'
|
||||
# data_source.encryption.wrappingKey: [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
|
||||
|
||||
# 2.6 New ML Commons Dashboards Feature
|
||||
# Set the value of this setting to true to enable the ml commons dashboards
|
||||
# ml_commons_dashboards.enabled: false
|
||||
|
||||
# 2.12 New experimental Assistant Dashboards Feature
|
||||
# Set the value of this setting to true to enable the assistant dashboards
|
||||
# assistant.chat.enabled: false
|
||||
|
||||
# 2.13 New Query Assistant Feature
|
||||
# Set the value of this setting to false to disable the query assistant
|
||||
# observability.query_assist.enabled: false
|
||||
|
||||
# 2.14 Enable Ui Metric Collectors in Usage Collector
|
||||
# Set the value of this setting to true to enable UI Metric collections
|
||||
# usageCollection.uiMetric.enabled: false
|
||||
|
||||
opensearch.hosts: [https://localhost:9200]
opensearch.ssl.verificationMode: none
opensearch.username: admin
opensearch.password: 'Qazwsxedc!@#123'
opensearch.requestHeadersWhitelist: [authorization, securitytenant]

opensearch_security.multitenancy.enabled: true
opensearch_security.multitenancy.tenants.preferred: [Private, Global]
opensearch_security.readonly_mode.roles: [kibana_read_only]
# Use this setting if you are running opensearch-dashboards without https
opensearch_security.cookie.secure: false
server.host: '0.0.0.0'
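A quick way to sanity-check the credentials and TLS behaviour configured at the end of this file, assuming OpenSearch and Dashboards are running locally on their default ports:

```bash
# -k mirrors opensearch.ssl.verificationMode: none; credentials are the ones above.
curl -k -u admin:'Qazwsxedc!@#123' https://localhost:9200
# Dashboards binds to 0.0.0.0 (server.host) and, unless overridden, port 5601.
curl -I http://localhost:5601
```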
598  docker/.env.example  Normal file

@@ -0,0 +1,598 @@
# ------------------------------
|
||||
# Environment Variables for API service & worker
|
||||
# ------------------------------
|
||||
|
||||
# ------------------------------
|
||||
# Common Variables
|
||||
# ------------------------------
|
||||
|
||||
# The backend URL of the console API,
|
||||
# used to concatenate the authorization callback.
|
||||
# If empty, it is the same domain.
|
||||
# Example: https://api.console.dify.ai
|
||||
CONSOLE_API_URL=
|
||||
|
||||
# The front-end URL of the console web,
|
||||
# used to concatenate some front-end addresses and for CORS configuration use.
|
||||
# If empty, it is the same domain.
|
||||
# Example: https://console.dify.ai
|
||||
CONSOLE_WEB_URL=
|
||||
|
||||
# Service API Url,
|
||||
# used to display Service API Base Url to the front-end.
|
||||
# If empty, it is the same domain.
|
||||
# Example: https://api.dify.ai
|
||||
SERVICE_API_URL=
|
||||
|
||||
# WebApp API backend Url,
|
||||
# used to declare the back-end URL for the front-end API.
|
||||
# If empty, it is the same domain.
|
||||
# Example: https://api.app.dify.ai
|
||||
APP_API_URL=
|
||||
|
||||
# WebApp Url,
|
||||
# used to display WebAPP API Base Url to the front-end.
|
||||
# If empty, it is the same domain.
|
||||
# Example: https://app.dify.ai
|
||||
APP_WEB_URL=
|
||||
|
||||
# File preview or download Url prefix.
|
||||
# used to display File preview or download Url to the front-end or as multi-modal inputs;
|
||||
# Url is signed and has expiration time.
|
||||
FILES_URL=
|
||||
|
||||
# ------------------------------
|
||||
# Server Configuration
|
||||
# ------------------------------
|
||||
|
||||
# The log level for the application.
|
||||
# Supported values are `DEBUG`, `INFO`, `WARNING`, `ERROR`, `CRITICAL`
|
||||
LOG_LEVEL=INFO
|
||||
|
||||
# Debug mode, default is false.
|
||||
# It is recommended to turn on this configuration for local development
|
||||
# to prevent some problems caused by monkey patch.
|
||||
DEBUG=false
|
||||
|
||||
# Flask debug mode, it can output trace information at the interface when turned on,
|
||||
# which is convenient for debugging.
|
||||
FLASK_DEBUG=false
|
||||
|
||||
# A secret key that is used for securely signing the session cookie
|
||||
# and encrypting sensitive information on the database.
|
||||
# You can generate a strong key using `openssl rand -base64 42`.
|
||||
SECRET_KEY=sk-9f73s3ljTXVcMT3Blb3ljTqtsKiGHXVcMT3BlbkFJLK7U
|
||||
|
||||
# Password for admin user initialization.
|
||||
# If left unset, admin user will not be prompted for a password
|
||||
# when creating the initial admin account.
|
||||
INIT_PASSWORD=
|
||||
|
||||
# Deployment environment.
|
||||
# Supported values are `PRODUCTION`, `TESTING`. Default is `PRODUCTION`.
|
||||
# Testing environment. There will be a distinct color label on the front-end page,
|
||||
# indicating that this environment is a testing environment.
|
||||
DEPLOY_ENV=PRODUCTION
|
||||
|
||||
# Whether to enable the version check policy.
|
||||
# If set to false, https://updates.dify.ai will not be called for version check.
|
||||
CHECK_UPDATE_URL=false
|
||||
|
||||
# Used to change the OpenAI base address, default is https://api.openai.com/v1.
|
||||
# When OpenAI cannot be accessed in China, replace it with a domestic mirror address,
|
||||
# or when a local model provides OpenAI compatible API, it can be replaced.
|
||||
OPENAI_API_BASE=https://api.openai.com/v1
|
||||
|
||||
# When enabled, migrations will be executed prior to application startup
|
||||
# and the application will start after the migrations have completed.
|
||||
MIGRATION_ENABLED=true
|
||||
|
||||
# File access timeout: the time interval, in seconds, during which a signed file URL remains accessible.
# The default value is 300 seconds.
|
||||
FILES_ACCESS_TIMEOUT=300
|
||||
|
||||
# ------------------------------
|
||||
# Container Startup Related Configuration
|
||||
# Only effective when starting with docker image or docker-compose.
|
||||
# ------------------------------
|
||||
|
||||
# API service binding address. Default: 0.0.0.0, i.e., listen on all network interfaces.
|
||||
DIFY_BIND_ADDRESS=
|
||||
|
||||
# API service binding port number, default 5001.
|
||||
DIFY_PORT=
|
||||
|
||||
# The number of API server workers, i.e., the number of gevent workers.
|
||||
# Formula: number of cpu cores x 2 + 1
|
||||
# Reference: https://docs.gunicorn.org/en/stable/design.html#how-many-workers
|
||||
SERVER_WORKER_AMOUNT=
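For reference, the suggested value can be computed directly from the formula above; a minimal sketch assuming a Linux host with coreutils:

```bash
# number of CPU cores x 2 + 1
echo "SERVER_WORKER_AMOUNT=$(( $(nproc) * 2 + 1 ))"
```

On a 4-core machine this prints `SERVER_WORKER_AMOUNT=9`.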
|
||||
|
||||
# Defaults to gevent. On Windows, it can be switched to sync or solo.
|
||||
SERVER_WORKER_CLASS=
|
||||
|
||||
# Similar to SERVER_WORKER_CLASS. Default is gevent.
|
||||
# On Windows, it can be switched to sync or solo.
|
||||
CELERY_WORKER_CLASS=
|
||||
|
||||
# Request handling timeout. The default is 200;
# setting it to 360 is recommended to support longer SSE connection times.
|
||||
GUNICORN_TIMEOUT=360
|
||||
|
||||
# The number of Celery workers. The default is 1, and can be set as needed.
|
||||
CELERY_WORKER_AMOUNT=
|
||||
|
||||
# ------------------------------
|
||||
# Database Configuration
|
||||
# The database uses PostgreSQL. Please use the public schema.
|
||||
# It is consistent with the configuration in the 'db' service below.
|
||||
# ------------------------------
|
||||
|
||||
DB_USERNAME=postgres
|
||||
DB_PASSWORD=difyai123456
|
||||
DB_HOST=db
|
||||
DB_PORT=5432
|
||||
DB_DATABASE=dify
|
||||
# The size of the database connection pool.
|
||||
# The default is 30 connections, which can be appropriately increased.
|
||||
SQLALCHEMY_POOL_SIZE=30
|
||||
# Database connection pool recycling time, the default is 3600 seconds.
|
||||
SQLALCHEMY_POOL_RECYCLE=3600
|
||||
# Whether to print SQL, default is false.
|
||||
SQLALCHEMY_ECHO=false
|
||||
|
||||
# ------------------------------
|
||||
# Redis Configuration
|
||||
# This Redis configuration is used for caching and for pub/sub during conversation.
|
||||
# ------------------------------
|
||||
|
||||
REDIS_HOST=redis
|
||||
REDIS_PORT=6379
|
||||
REDIS_USERNAME=
|
||||
REDIS_PASSWORD=difyai123456
|
||||
REDIS_USE_SSL=false
|
||||
|
||||
# ------------------------------
|
||||
# Celery Configuration
|
||||
# ------------------------------
|
||||
|
||||
# Use Redis as the broker, and Redis db 1 as the Celery broker database.
|
||||
# Format as follows: `redis://<redis_username>:<redis_password>@<redis_host>:<redis_port>/<redis_database>`
|
||||
# Example: redis://:difyai123456@redis:6379/1
|
||||
CELERY_BROKER_URL=redis://:difyai123456@redis:6379/1
|
||||
BROKER_USE_SSL=false
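Following the format above, the broker URL is simply the Redis settings joined together with database 1; a small sketch:

```bash
# Build the Celery broker URL from the Redis settings defined earlier (db 1 is reserved for Celery)
REDIS_HOST=redis
REDIS_PORT=6379
REDIS_PASSWORD=difyai123456
echo "redis://:${REDIS_PASSWORD}@${REDIS_HOST}:${REDIS_PORT}/1"   # -> redis://:difyai123456@redis:6379/1
```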
|
||||
|
||||
# ------------------------------
|
||||
# CORS Configuration
|
||||
# Used to set the front-end cross-domain access policy.
|
||||
# ------------------------------
|
||||
|
||||
# Specifies the allowed origins for cross-origin requests to the Web API,
|
||||
# e.g. https://dify.app or * for all origins.
|
||||
WEB_API_CORS_ALLOW_ORIGINS=*
|
||||
|
||||
# Specifies the allowed origins for cross-origin requests to the console API,
|
||||
# e.g. https://cloud.dify.ai or * for all origins.
|
||||
CONSOLE_CORS_ALLOW_ORIGINS=*
|
||||
|
||||
# ------------------------------
|
||||
# File Storage Configuration
|
||||
# ------------------------------
|
||||
|
||||
# The type of storage to use for storing user files.
|
||||
# Supported values are `local`, `s3`, `azure-blob`, `google-storage`, `tencent-cos`, and `aliyun-oss`.
|
||||
# Default: `local`
|
||||
STORAGE_TYPE=local
|
||||
|
||||
# S3 Configuration
|
||||
# Whether to use AWS managed IAM roles for authenticating with the S3 service.
|
||||
# If set to false, the access key and secret key must be provided.
|
||||
S3_USE_AWS_MANAGED_IAM=false
|
||||
# The endpoint of the S3 service.
|
||||
S3_ENDPOINT=
|
||||
# The region of the S3 service.
|
||||
S3_REGION=us-east-1
|
||||
# The name of the S3 bucket to use for storing files.
|
||||
S3_BUCKET_NAME=difyai
|
||||
# The access key to use for authenticating with the S3 service.
|
||||
S3_ACCESS_KEY=
|
||||
# The secret key to use for authenticating with the S3 service.
|
||||
S3_SECRET_KEY=
|
||||
|
||||
# Azure Blob Configuration
|
||||
# The name of the Azure Blob Storage account to use for storing files.
|
||||
AZURE_BLOB_ACCOUNT_NAME=difyai
|
||||
# The access key to use for authenticating with the Azure Blob Storage account.
|
||||
AZURE_BLOB_ACCOUNT_KEY=difyai
|
||||
# The name of the Azure Blob Storage container to use for storing files.
|
||||
AZURE_BLOB_CONTAINER_NAME=difyai-container
|
||||
# The URL of the Azure Blob Storage account.
|
||||
AZURE_BLOB_ACCOUNT_URL=https://<your_account_name>.blob.core.windows.net
|
||||
|
||||
# Google Storage Configuration
|
||||
# The name of the Google Storage bucket to use for storing files.
|
||||
GOOGLE_STORAGE_BUCKET_NAME=your-bucket-name
|
||||
# The service account JSON key to use for authenticating with the Google Storage service.
|
||||
GOOGLE_STORAGE_SERVICE_ACCOUNT_JSON_BASE64=your-google-service-account-json-base64-string
|
||||
|
||||
# The Alibaba Cloud OSS configurations,
|
||||
# only available when STORAGE_TYPE is `aliyun-oss`
|
||||
ALIYUN_OSS_BUCKET_NAME=your-bucket-name
|
||||
ALIYUN_OSS_ACCESS_KEY=your-access-key
|
||||
ALIYUN_OSS_SECRET_KEY=your-secret-key
|
||||
ALIYUN_OSS_ENDPOINT=https://oss-ap-southeast-1-internal.aliyuncs.com
|
||||
ALIYUN_OSS_REGION=ap-southeast-1
|
||||
ALIYUN_OSS_AUTH_VERSION=v4
|
||||
|
||||
# Tencent COS Configuration
|
||||
# The name of the Tencent COS bucket to use for storing files.
|
||||
TENCENT_COS_BUCKET_NAME=your-bucket-name
|
||||
# The secret key to use for authenticating with the Tencent COS service.
|
||||
TENCENT_COS_SECRET_KEY=your-secret-key
|
||||
# The secret id to use for authenticating with the Tencent COS service.
|
||||
TENCENT_COS_SECRET_ID=your-secret-id
|
||||
# The region of the Tencent COS service.
|
||||
TENCENT_COS_REGION=your-region
|
||||
# The scheme of the Tencent COS service.
|
||||
TENCENT_COS_SCHEME=your-scheme
|
||||
|
||||
# ------------------------------
|
||||
# Vector Database Configuration
|
||||
# ------------------------------
|
||||
|
||||
# The type of vector store to use.
|
||||
# Supported values are `weaviate`, `qdrant`, `milvus`, `relyt`, `pgvector`, `chroma`, `opensearch`, `tidb_vector`, `oracle`, `tencent`.
|
||||
VECTOR_STORE=weaviate
|
||||
|
||||
# The Weaviate endpoint URL. Only available when VECTOR_STORE is `weaviate`.
|
||||
WEAVIATE_ENDPOINT=http://weaviate:8080
|
||||
# The Weaviate API key.
|
||||
WEAVIATE_API_KEY=WVF5YThaHlkYwhGUSmCRgsX3tD5ngdN8pkih
|
||||
|
||||
# The Qdrant endpoint URL. Only available when VECTOR_STORE is `qdrant`.
|
||||
QDRANT_URL=http://qdrant:6333
|
||||
# The Qdrant API key.
|
||||
QDRANT_API_KEY=difyai123456
|
||||
# The Qdrant client timeout setting.
|
||||
QDRANT_CLIENT_TIMEOUT=20
|
||||
# The Qdrant client enable gRPC mode.
|
||||
QDRANT_GRPC_ENABLED=false
|
||||
# The Qdrant server gRPC mode PORT.
|
||||
QDRANT_GRPC_PORT=6334
|
||||
|
||||
# Milvus configuration. Only available when VECTOR_STORE is `milvus`.
|
||||
# The milvus host.
|
||||
MILVUS_HOST=127.0.0.1
|
||||
# The milvus port.
|
||||
MILVUS_PORT=19530
|
||||
# The milvus username.
|
||||
MILVUS_USER=root
|
||||
# The milvus password.
|
||||
MILVUS_PASSWORD=Milvus
|
||||
# The milvus tls switch.
|
||||
MILVUS_SECURE=false
|
||||
|
||||
# pgvector configurations, only available when VECTOR_STORE is `pgvecto-rs` or `pgvector`
|
||||
PGVECTOR_HOST=pgvector
|
||||
PGVECTOR_PORT=5432
|
||||
PGVECTOR_USER=postgres
|
||||
PGVECTOR_PASSWORD=difyai123456
|
||||
PGVECTOR_DATABASE=dify
|
||||
|
||||
# TiDB vector configurations, only available when VECTOR_STORE is `tidb_vector`
|
||||
TIDB_VECTOR_HOST=tidb
|
||||
TIDB_VECTOR_PORT=4000
|
||||
TIDB_VECTOR_USER=xxx.root
|
||||
TIDB_VECTOR_PASSWORD=xxxxxx
|
||||
TIDB_VECTOR_DATABASE=dify
|
||||
|
||||
# Chroma configuration, only available when VECTOR_STORE is `chroma`
|
||||
CHROMA_HOST=127.0.0.1
|
||||
CHROMA_PORT=8000
|
||||
CHROMA_TENANT=default_tenant
|
||||
CHROMA_DATABASE=default_database
|
||||
CHROMA_AUTH_PROVIDER=chromadb.auth.token_authn.TokenAuthClientProvider
|
||||
CHROMA_AUTH_CREDENTIALS=xxxxxx
|
||||
|
||||
# Oracle configuration, only available when VECTOR_STORE is `oracle`
|
||||
ORACLE_HOST=oracle
|
||||
ORACLE_PORT=1521
|
||||
ORACLE_USER=dify
|
||||
ORACLE_PASSWORD=dify
|
||||
ORACLE_DATABASE=FREEPDB1
|
||||
|
||||
# relyt configurations, only available when VECTOR_STORE is `relyt`
|
||||
RELYT_HOST=db
|
||||
RELYT_PORT=5432
|
||||
RELYT_USER=postgres
|
||||
RELYT_PASSWORD=difyai123456
|
||||
RELYT_DATABASE=postgres
|
||||
|
||||
# open search configuration, only available when VECTOR_STORE is `opensearch`
|
||||
OPENSEARCH_HOST=127.0.0.1
|
||||
OPENSEARCH_PORT=9200
|
||||
OPENSEARCH_USER=admin
|
||||
OPENSEARCH_PASSWORD=admin
|
||||
OPENSEARCH_SECURE=true
|
||||
|
||||
# tencent vector configurations, only available when VECTOR_STORE is `tencent`
|
||||
TENCENT_VECTOR_DB_URL=http://127.0.0.1
|
||||
TENCENT_VECTOR_DB_API_KEY=dify
|
||||
TENCENT_VECTOR_DB_TIMEOUT=30
|
||||
TENCENT_VECTOR_DB_USERNAME=dify
|
||||
TENCENT_VECTOR_DB_DATABASE=dify
|
||||
TENCENT_VECTOR_DB_SHARD=1
|
||||
TENCENT_VECTOR_DB_REPLICAS=2
|
||||
|
||||
# ------------------------------
|
||||
# Knowledge Configuration
|
||||
# ------------------------------
|
||||
|
||||
# Upload file size limit, default 15M.
|
||||
UPLOAD_FILE_SIZE_LIMIT=15
|
||||
|
||||
# The maximum number of files that can be uploaded at a time, default 5.
|
||||
UPLOAD_FILE_BATCH_LIMIT=5
|
||||
|
||||
# ETL type. Supported values: `dify`, `Unstructured`.
# `dify`: Dify's proprietary file extraction scheme.
# `Unstructured`: Unstructured.io file extraction scheme.
|
||||
ETL_TYPE=dify
|
||||
|
||||
# Unstructured API path, needs to be configured when ETL_TYPE is Unstructured.
|
||||
# For example: http://unstructured:8000/general/v0/general
|
||||
UNSTRUCTURED_API_URL=
|
||||
|
||||
# ------------------------------
|
||||
# Multi-modal Configuration
|
||||
# ------------------------------
|
||||
|
||||
# The format used to send images to multi-modal models.
# The default is `base64`; `url` is also supported.
# Call latency is lower in `url` mode than in `base64` mode,
# but the more widely compatible `base64` mode is generally recommended.
# If set to `url`, FILES_URL must be set to an externally accessible address so that the multi-modal model can fetch the image.
|
||||
MULTIMODAL_SEND_IMAGE_FORMAT=base64
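If you do opt into `url` mode, both values need to change together. A sketch, with a hypothetical domain standing in for your real FILES_URL:

```bash
cd docker
sed -i 's/^MULTIMODAL_SEND_IMAGE_FORMAT=.*/MULTIMODAL_SEND_IMAGE_FORMAT=url/' .env
sed -i 's|^FILES_URL=.*|FILES_URL=https://dify.example.com|' .env   # hypothetical domain, must be reachable by the model provider
docker compose up -d
```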
|
||||
|
||||
# Upload image file size limit, default 10M.
|
||||
UPLOAD_IMAGE_FILE_SIZE_LIMIT=10
|
||||
|
||||
# ------------------------------
|
||||
# Sentry Configuration
|
||||
# Used for application monitoring and error log tracking.
|
||||
# ------------------------------
|
||||
|
||||
# Sentry DSN address. Default is empty.
# When empty, no monitoring information is reported to Sentry and Sentry error reporting is disabled.
|
||||
SENTRY_DSN=
|
||||
|
||||
# The sample rate for Sentry events. For example, 0.01 means 1% of events are reported.
|
||||
SENTRY_TRACES_SAMPLE_RATE=1.0
|
||||
|
||||
# The sample rate for Sentry profiles. For example, 0.01 means 1% of profiles are reported.
|
||||
SENTRY_PROFILES_SAMPLE_RATE=1.0
|
||||
|
||||
# ------------------------------
|
||||
# Notion Integration Configuration
|
||||
# Variables can be obtained by applying for Notion integration: https://www.notion.so/my-integrations
|
||||
# ------------------------------
|
||||
|
||||
# Configure as "public" or "internal".
|
||||
# Since Notion's OAuth redirect URL only supports HTTPS,
|
||||
# if deploying locally, please use Notion's internal integration.
|
||||
NOTION_INTEGRATION_TYPE=public
|
||||
# Notion OAuth client secret (used for public integration type)
|
||||
NOTION_CLIENT_SECRET=
|
||||
# Notion OAuth client id (used for public integration type)
|
||||
NOTION_CLIENT_ID=
|
||||
# Notion internal integration secret.
|
||||
# If the value of NOTION_INTEGRATION_TYPE is "internal",
|
||||
# you need to configure this variable.
|
||||
NOTION_INTERNAL_SECRET=
|
||||
|
||||
# ------------------------------
|
||||
# Mail related configuration
|
||||
# ------------------------------
|
||||
|
||||
# Mail type. Supported values: `resend`, `smtp`.
|
||||
MAIL_TYPE=resend
|
||||
|
||||
# Default sender ("from") email address, used when no sender address is otherwise specified.
|
||||
MAIL_DEFAULT_SEND_FROM=
|
||||
|
||||
# API-Key for the Resend email provider, used when MAIL_TYPE is `resend`.
|
||||
RESEND_API_KEY=your-resend-api-key
|
||||
|
||||
# SMTP server configuration, used when MAIL_TYPE is `smtp`
|
||||
SMTP_SERVER=
|
||||
SMTP_PORT=
|
||||
SMTP_USERNAME=
|
||||
SMTP_PASSWORD=
|
||||
SMTP_USE_TLS=true
|
||||
SMTP_OPPORTUNISTIC_TLS=false
|
||||
|
||||
# ------------------------------
|
||||
# Others Configuration
|
||||
# ------------------------------
|
||||
|
||||
# Maximum length of segmentation tokens for indexing
|
||||
INDEXING_MAX_SEGMENTATION_TOKENS_LENGTH=1000
|
||||
|
||||
# Member invitation link valid time (hours),
|
||||
# Default: 72.
|
||||
INVITE_EXPIRY_HOURS=72
|
||||
|
||||
# The sandbox service endpoint.
|
||||
CODE_EXECUTION_ENDPOINT=http://sandbox:8194
|
||||
CODE_EXECUTION_API_KEY=dify-sandbox
|
||||
CODE_MAX_NUMBER=9223372036854775807
|
||||
CODE_MIN_NUMBER=-9223372036854775808
|
||||
CODE_MAX_STRING_LENGTH=80000
|
||||
TEMPLATE_TRANSFORM_MAX_LENGTH=80000
|
||||
CODE_MAX_STRING_ARRAY_LENGTH=30
|
||||
CODE_MAX_OBJECT_ARRAY_LENGTH=30
|
||||
CODE_MAX_NUMBER_ARRAY_LENGTH=1000
|
||||
|
||||
# SSRF Proxy server HTTP URL
|
||||
SSRF_PROXY_HTTP_URL=http://ssrf_proxy:3128
|
||||
# SSRF Proxy server HTTPS URL
|
||||
SSRF_PROXY_HTTPS_URL=http://ssrf_proxy:3128
|
||||
|
||||
# ------------------------------
|
||||
# Environment Variables for db Service
|
||||
# ------------------------------
|
||||
|
||||
PGUSER=${DB_USERNAME}
|
||||
# The password for the default postgres user.
|
||||
POSTGRES_PASSWORD=${DB_PASSWORD}
|
||||
# The name of the default postgres database.
|
||||
POSTGRES_DB=${DB_DATABASE}
|
||||
# postgres data directory
|
||||
PGDATA=/var/lib/postgresql/data/pgdata
|
||||
|
||||
# ------------------------------
|
||||
# Environment Variables for sandbox Service
|
||||
# ------------------------------
|
||||
|
||||
# The API key for the sandbox service
|
||||
API_KEY=dify-sandbox
|
||||
# The mode in which the Gin framework runs
|
||||
GIN_MODE=release
|
||||
# The timeout for the worker in seconds
|
||||
WORKER_TIMEOUT=15
|
||||
# Enable network for the sandbox service
|
||||
ENABLE_NETWORK=true
|
||||
# HTTP proxy URL for SSRF protection
|
||||
HTTP_PROXY=http://ssrf_proxy:3128
|
||||
# HTTPS proxy URL for SSRF protection
|
||||
HTTPS_PROXY=http://ssrf_proxy:3128
|
||||
# The port on which the sandbox service runs
|
||||
SANDBOX_PORT=8194
|
||||
|
||||
# ------------------------------
|
||||
# Environment Variables for qdrant Service
|
||||
# (only used when VECTOR_STORE is qdrant)
|
||||
# ------------------------------
|
||||
QDRANT_API_KEY=difyai123456
|
||||
|
||||
# ------------------------------
|
||||
# Environment Variables for weaviate Service
|
||||
# (only used when VECTOR_STORE is weaviate)
|
||||
# ------------------------------
|
||||
PERSISTENCE_DATA_PATH='/var/lib/weaviate'
|
||||
QUERY_DEFAULTS_LIMIT=25
|
||||
AUTHENTICATION_ANONYMOUS_ACCESS_ENABLED=true
|
||||
DEFAULT_VECTORIZER_MODULE=none
|
||||
CLUSTER_HOSTNAME=node1
|
||||
AUTHENTICATION_APIKEY_ENABLED=true
|
||||
AUTHENTICATION_APIKEY_ALLOWED_KEYS=WVF5YThaHlkYwhGUSmCRgsX3tD5ngdN8pkih
|
||||
AUTHENTICATION_APIKEY_USERS=hello@dify.ai
|
||||
AUTHORIZATION_ADMINLIST_ENABLED=true
|
||||
AUTHORIZATION_ADMINLIST_USERS=hello@dify.ai
|
||||
|
||||
# ------------------------------
|
||||
# Environment Variables for Chroma
|
||||
# (only used when VECTOR_STORE is chroma)
|
||||
# ------------------------------
|
||||
|
||||
# Authentication credentials for Chroma server
|
||||
CHROMA_SERVER_AUTHN_CREDENTIALS=difyai123456
|
||||
# Authentication provider for Chroma server
|
||||
CHROMA_SERVER_AUTHN_PROVIDER=chromadb.auth.token_authn.TokenAuthenticationServerProvider
|
||||
# Persistence setting for Chroma server
|
||||
IS_PERSISTENT=TRUE
|
||||
|
||||
# ------------------------------
|
||||
# Environment Variables for Oracle Service
|
||||
# (only used when VECTOR_STORE is Oracle)
|
||||
# ------------------------------
|
||||
ORACLE_PWD=Dify123456
|
||||
ORACLE_CHARACTERSET=AL32UTF8
|
||||
|
||||
# ------------------------------
|
||||
# Environment Variables for milvus Service
|
||||
# (only used when VECTOR_STORE is milvus)
|
||||
# ------------------------------
|
||||
# ETCD configuration for auto compaction mode
|
||||
ETCD_AUTO_COMPACTION_MODE=revision
|
||||
# ETCD configuration for auto compaction retention in terms of number of revisions
|
||||
ETCD_AUTO_COMPACTION_RETENTION=1000
|
||||
# ETCD configuration for backend quota in bytes
|
||||
ETCD_QUOTA_BACKEND_BYTES=4294967296
|
||||
# ETCD configuration for the number of changes before triggering a snapshot
|
||||
ETCD_SNAPSHOT_COUNT=50000
|
||||
# MinIO access key for authentication
|
||||
MINIO_ACCESS_KEY=minioadmin
|
||||
# MinIO secret key for authentication
|
||||
MINIO_SECRET_KEY=minioadmin
|
||||
# ETCD service endpoints
|
||||
ETCD_ENDPOINTS=etcd:2379
|
||||
# MinIO service address
|
||||
MINIO_ADDRESS=minio:9000
|
||||
# Enable or disable security authorization
|
||||
MILVUS_AUTHORIZATION_ENABLED=true
|
||||
|
||||
# ------------------------------
|
||||
# Environment Variables for pgvector / pgvector-rs Service
|
||||
# (only used when VECTOR_STORE is pgvector / pgvector-rs)
|
||||
# ------------------------------
|
||||
PGVECTOR_PGUSER=postgres
|
||||
# The password for the default postgres user.
|
||||
PGVECTOR_POSTGRES_PASSWORD=difyai123456
|
||||
# The name of the default postgres database.
|
||||
PGVECTOR_POSTGRES_DB=dify
|
||||
# postgres data directory
|
||||
PGVECTOR_PGDATA=/var/lib/postgresql/data/pgdata
|
||||
|
||||
# ------------------------------
|
||||
# Environment Variables for opensearch
|
||||
# (only used when VECTOR_STORE is opensearch)
|
||||
# ------------------------------
|
||||
OPENSEARCH_DISCOVERY_TYPE=single-node
|
||||
OPENSEARCH_BOOTSTRAP_MEMORY_LOCK=true
|
||||
OPENSEARCH_JAVA_OPTS_MIN=512m
|
||||
OPENSEARCH_JAVA_OPTS_MAX=1024m
|
||||
OPENSEARCH_INITIAL_ADMIN_PASSWORD=Qazwsxedc!@#123
|
||||
OPENSEARCH_MEMLOCK_SOFT=-1
|
||||
OPENSEARCH_MEMLOCK_HARD=-1
|
||||
OPENSEARCH_NOFILE_SOFT=65536
|
||||
OPENSEARCH_NOFILE_HARD=65536
|
||||
|
||||
# ------------------------------
|
||||
# Environment Variables for Nginx reverse proxy
|
||||
# ------------------------------
|
||||
NGINX_SERVER_NAME=_
|
||||
HTTPS_ENABLED=false
|
||||
# HTTP port
|
||||
NGINX_PORT=80
|
||||
# SSL settings are only applied when HTTPS_ENABLED is true
|
||||
NGINX_SSL_PORT=443
|
||||
# If HTTPS_ENABLED is true, you must add your own SSL certificates and keys to the `./nginx/ssl` directory
|
||||
# and modify the env vars below accordingly.
|
||||
NGINX_SSL_CERT_FILENAME=dify.crt
|
||||
NGINX_SSL_CERT_KEY_FILENAME=dify.key
|
||||
NGINX_SSL_PROTOCOLS=TLSv1.1 TLSv1.2 TLSv1.3
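Putting the HTTPS pieces together, enabling TLS is roughly the following sequence. This is a sketch: the certificate paths are placeholders, and the reverse-proxy service is assumed to be named `nginx` in docker-compose.yaml:

```bash
cd docker
mkdir -p nginx/ssl
cp /path/to/fullchain.pem nginx/ssl/dify.crt      # placeholder paths for your issued certificate
cp /path/to/privkey.pem   nginx/ssl/dify.key
# in .env: set HTTPS_ENABLED=true (the default NGINX_SSL_CERT_FILENAME / NGINX_SSL_CERT_KEY_FILENAME already match the names above)
docker compose up -d --force-recreate nginx       # service name assumed to be "nginx"
```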
|
||||
|
||||
# Nginx performance tuning
|
||||
NGINX_WORKER_PROCESSES=auto
|
||||
NGINX_CLIENT_MAX_BODY_SIZE=15M
|
||||
NGINX_KEEPALIVE_TIMEOUT=65
|
||||
|
||||
# Proxy settings
|
||||
NGINX_PROXY_READ_TIMEOUT=3600s
|
||||
NGINX_PROXY_SEND_TIMEOUT=3600s
|
||||
|
||||
# ------------------------------
|
||||
# Environment Variables for SSRF Proxy
|
||||
# ------------------------------
|
||||
HTTP_PORT=3128
|
||||
COREDUMP_DIR=/var/spool/squid
|
||||
REVERSE_PROXY_PORT=8194
|
||||
SANDBOX_HOST=sandbox
|
||||
|
||||
# ------------------------------
|
||||
# docker env var for specifying vector db type at startup
|
||||
# (based on the vector db type, the corresponding docker
|
||||
# compose profile will be used)
|
||||
# ------------------------------
|
||||
COMPOSE_PROFILES=${VECTOR_STORE:-weaviate}
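Because COMPOSE_PROFILES follows VECTOR_STORE, switching to a different vector database is just an edit plus a restart; a minimal sketch using Qdrant:

```bash
cd docker
sed -i 's/^VECTOR_STORE=.*/VECTOR_STORE=qdrant/' .env   # COMPOSE_PROFILES picks up the new value
docker compose up -d                                    # starts the services in the matching profile
```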
|
1
docker/.gitignore
vendored
Normal file
|
@ -0,0 +1 @@
|
|||
nginx/conf.d/default.conf
|
|
@ -3,13 +3,12 @@ services:
|
|||
db:
|
||||
image: postgres:15-alpine
|
||||
restart: always
|
||||
env_file:
|
||||
- ./middleware.env
|
||||
environment:
|
||||
# The password for the default postgres user.
|
||||
POSTGRES_PASSWORD: difyai123456
|
||||
# The name of the default postgres database.
|
||||
POSTGRES_DB: dify
|
||||
# postgres data directory
|
||||
PGDATA: /var/lib/postgresql/data/pgdata
|
||||
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-difyai123456}
|
||||
POSTGRES_DB: ${POSTGRES_DB:-dify}
|
||||
PGDATA: ${PGDATA:-/var/lib/postgresql/data/pgdata}
|
||||
volumes:
|
||||
- ./volumes/db/data:/var/lib/postgresql/data
|
||||
ports:
|
||||
|
@ -34,19 +33,21 @@ services:
|
|||
volumes:
|
||||
# Mount the Weaviate data directory to the container.
|
||||
- ./volumes/weaviate:/var/lib/weaviate
|
||||
env_file:
|
||||
- ./middleware.env
|
||||
environment:
|
||||
# The Weaviate configurations
|
||||
# You can refer to the [Weaviate](https://weaviate.io/developers/weaviate/config-refs/env-vars) documentation for more information.
|
||||
QUERY_DEFAULTS_LIMIT: 25
|
||||
AUTHENTICATION_ANONYMOUS_ACCESS_ENABLED: 'false'
|
||||
PERSISTENCE_DATA_PATH: '/var/lib/weaviate'
|
||||
DEFAULT_VECTORIZER_MODULE: 'none'
|
||||
CLUSTER_HOSTNAME: 'node1'
|
||||
AUTHENTICATION_APIKEY_ENABLED: 'true'
|
||||
AUTHENTICATION_APIKEY_ALLOWED_KEYS: 'WVF5YThaHlkYwhGUSmCRgsX3tD5ngdN8pkih'
|
||||
AUTHENTICATION_APIKEY_USERS: 'hello@dify.ai'
|
||||
AUTHORIZATION_ADMINLIST_ENABLED: 'true'
|
||||
AUTHORIZATION_ADMINLIST_USERS: 'hello@dify.ai'
|
||||
PERSISTENCE_DATA_PATH: ${PERSISTENCE_DATA_PATH:-'/var/lib/weaviate'}
|
||||
QUERY_DEFAULTS_LIMIT: ${QUERY_DEFAULTS_LIMIT:-25}
|
||||
AUTHENTICATION_ANONYMOUS_ACCESS_ENABLED: ${AUTHENTICATION_ANONYMOUS_ACCESS_ENABLED:-false}
|
||||
DEFAULT_VECTORIZER_MODULE: ${DEFAULT_VECTORIZER_MODULE:-none}
|
||||
CLUSTER_HOSTNAME: ${CLUSTER_HOSTNAME:-node1}
|
||||
AUTHENTICATION_APIKEY_ENABLED: ${AUTHENTICATION_APIKEY_ENABLED:-true}
|
||||
AUTHENTICATION_APIKEY_ALLOWED_KEYS: ${AUTHENTICATION_APIKEY_ALLOWED_KEYS:-WVF5YThaHlkYwhGUSmCRgsX3tD5ngdN8pkih}
|
||||
AUTHENTICATION_APIKEY_USERS: ${AUTHENTICATION_APIKEY_USERS:-hello@dify.ai}
|
||||
AUTHORIZATION_ADMINLIST_ENABLED: ${AUTHORIZATION_ADMINLIST_ENABLED:-true}
|
||||
AUTHORIZATION_ADMINLIST_USERS: ${AUTHORIZATION_ADMINLIST_USERS:-hello@dify.ai}
|
||||
ports:
|
||||
- "8080:8080"
|
||||
|
||||
|
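The change above replaces hard-coded values with the `${VARIABLE:-default}` form, so each setting can be overridden from `middleware.env` while keeping a sensible fallback. The same expansion can be tried in any POSIX shell; a tiny sketch:

```bash
unset CLUSTER_HOSTNAME
echo "${CLUSTER_HOSTNAME:-node1}"   # prints the fallback: node1
CLUSTER_HOSTNAME=node2
echo "${CLUSTER_HOSTNAME:-node1}"   # prints the override: node2
```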
@ -58,13 +59,13 @@ services:
|
|||
# The DifySandbox configurations
|
||||
# Make sure to change this key to a strong one for your deployment.
|
||||
# You can generate a strong key using `openssl rand -base64 42`.
|
||||
API_KEY: dify-sandbox
|
||||
GIN_MODE: 'release'
|
||||
WORKER_TIMEOUT: 15
|
||||
ENABLE_NETWORK: 'true'
|
||||
HTTP_PROXY: 'http://ssrf_proxy:3128'
|
||||
HTTPS_PROXY: 'http://ssrf_proxy:3128'
|
||||
SANDBOX_PORT: 8194
|
||||
API_KEY: ${API_KEY:-dify-sandbox}
|
||||
GIN_MODE: ${GIN_MODE:-release}
|
||||
WORKER_TIMEOUT: ${WORKER_TIMEOUT:-15}
|
||||
ENABLE_NETWORK: ${ENABLE_NETWORK:-true}
|
||||
HTTP_PROXY: ${HTTP_PROXY:-http://ssrf_proxy:3128}
|
||||
HTTPS_PROXY: ${HTTPS_PROXY:-http://ssrf_proxy:3128}
|
||||
SANDBOX_PORT: ${SANDBOX_PORT:-8194}
|
||||
volumes:
|
||||
- ./volumes/sandbox/dependencies:/dependencies
|
||||
networks:
|
||||
|
@ -76,30 +77,23 @@ services:
|
|||
ssrf_proxy:
|
||||
image: ubuntu/squid:latest
|
||||
restart: always
|
||||
volumes:
|
||||
- ./ssrf_proxy/squid.conf.template:/etc/squid/squid.conf.template
|
||||
- ./ssrf_proxy/docker-entrypoint.sh:/docker-entrypoint.sh
|
||||
entrypoint: /docker-entrypoint.sh
|
||||
ports:
|
||||
- "3128:3128"
|
||||
- "8194:8194"
|
||||
volumes:
|
||||
# Please modify the squid.conf file to fit your network environment.
|
||||
- ./volumes/ssrf_proxy/squid.conf:/etc/squid/squid.conf
|
||||
environment:
|
||||
# Please modify the squid environment variables below to fit your network environment.
|
||||
HTTP_PORT: ${HTTP_PORT:-3128}
|
||||
COREDUMP_DIR: ${COREDUMP_DIR:-/var/spool/squid}
|
||||
REVERSE_PROXY_PORT: ${REVERSE_PROXY_PORT:-8194}
|
||||
SANDBOX_HOST: ${SANDBOX_HOST:-sandbox}
|
||||
SANDBOX_PORT: ${SANDBOX_PORT:-8194}
|
||||
networks:
|
||||
- ssrf_proxy_network
|
||||
- default
|
||||
# Qdrant vector store.
|
||||
# uncomment to use qdrant as vector store.
|
||||
# (if uncommented, you need to comment out the weaviate service above,
|
||||
# and set VECTOR_STORE to qdrant in the api & worker service.)
|
||||
# qdrant:
|
||||
# image: qdrant/qdrant:1.7.3
|
||||
# restart: always
|
||||
# volumes:
|
||||
# - ./volumes/qdrant:/qdrant/storage
|
||||
# environment:
|
||||
# QDRANT_API_KEY: 'difyai123456'
|
||||
# ports:
|
||||
# - "6333:6333"
|
||||
# - "6334:6334"
|
||||
|
||||
|
||||
networks:
|
||||
# Create a network shared by sandbox, api and ssrf_proxy; it cannot access the outside world.
|
||||
|
|
File diff suppressed because it is too large
42
docker/middleware.env.example
Normal file
|
@ -0,0 +1,42 @@
|
|||
# ------------------------------
|
||||
# Environment Variables for db Service
|
||||
# ------------------------------
|
||||
PGUSER=postgres
|
||||
# The password for the default postgres user.
|
||||
POSTGRES_PASSWORD=difyai123456
|
||||
# The name of the default postgres database.
|
||||
POSTGRES_DB=dify
|
||||
# postgres data directory
|
||||
PGDATA=/var/lib/postgresql/data/pgdata
|
||||
|
||||
|
||||
# ------------------------------
|
||||
# Environment Variables for qdrant Service
|
||||
# (only used when VECTOR_STORE is qdrant)
|
||||
# ------------------------------
|
||||
QDRANT_API_KEY=difyai123456
|
||||
|
||||
# ------------------------------
|
||||
# Environment Variables for sandbox Service
|
||||
API_KEY=dify-sandbox
|
||||
GIN_MODE=release
|
||||
WORKER_TIMEOUT=15
|
||||
ENABLE_NETWORK=true
|
||||
HTTP_PROXY=http://ssrf_proxy:3128
|
||||
HTTPS_PROXY=http://ssrf_proxy:3128
|
||||
SANDBOX_PORT=8194
|
||||
# ------------------------------
|
||||
|
||||
# ------------------------------
|
||||
# Environment Variables for weaviate Service
|
||||
# (only used when VECTOR_STORE is weaviate)
|
||||
# ------------------------------
|
||||
QUERY_DEFAULTS_LIMIT=25
|
||||
AUTHENTICATION_ANONYMOUS_ACCESS_ENABLED=true
|
||||
DEFAULT_VECTORIZER_MODULE=none
|
||||
CLUSTER_HOSTNAME=node1
|
||||
AUTHENTICATION_APIKEY_ENABLED=true
|
||||
AUTHENTICATION_APIKEY_ALLOWED_KEYS=WVF5YThaHlkYwhGUSmCRgsX3tD5ngdN8pkih
|
||||
AUTHENTICATION_APIKEY_USERS=hello@dify.ai
|
||||
AUTHORIZATION_ADMINLIST_ENABLED=true
|
||||
AUTHORIZATION_ADMINLIST_USERS=hello@dify.ai
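A typical way to consume this file, mirroring the CI setup, is to copy it next to the middleware compose file and start the services; a sketch assuming the compose file is `docker/docker-compose.middleware.yaml`:

```bash
cd docker
cp middleware.env.example middleware.env
docker compose -f docker-compose.middleware.yaml up -d
```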
|
34
docker/nginx/conf.d/default.conf.template
Normal file
|
@ -0,0 +1,34 @@
|
|||
# Please do not directly edit this file. Instead, modify the .env variables related to NGINX configuration.
|
||||
|
||||
server {
|
||||
listen 80;
|
||||
server_name ${NGINX_SERVER_NAME};
|
||||
|
||||
location /console/api {
|
||||
proxy_pass http://api:5001;
|
||||
include proxy.conf;
|
||||
}
|
||||
|
||||
location /api {
|
||||
proxy_pass http://api:5001;
|
||||
include proxy.conf;
|
||||
}
|
||||
|
||||
location /v1 {
|
||||
proxy_pass http://api:5001;
|
||||
include proxy.conf;
|
||||
}
|
||||
|
||||
location /files {
|
||||
proxy_pass http://api:5001;
|
||||
include proxy.conf;
|
||||
}
|
||||
|
||||
location / {
|
||||
proxy_pass http://web:3000;
|
||||
include proxy.conf;
|
||||
}
|
||||
|
||||
# placeholder for https config defined in https.conf.template
|
||||
${HTTPS_CONFIG}
|
||||
}
|
19
docker/nginx/docker-entrypoint.sh
Executable file
|
@ -0,0 +1,19 @@
|
|||
#!/bin/bash
|
||||
|
||||
if [ "${HTTPS_ENABLED}" = "true" ]; then
|
||||
# set the HTTPS_CONFIG environment variable to the content of the https.conf.template
|
||||
HTTPS_CONFIG=$(envsubst < /etc/nginx/https.conf.template)
|
||||
export HTTPS_CONFIG
|
||||
# Substitute the HTTPS_CONFIG in the default.conf.template with content from https.conf.template
|
||||
envsubst '${HTTPS_CONFIG}' < /etc/nginx/conf.d/default.conf.template > /etc/nginx/conf.d/default.conf
|
||||
fi
|
||||
|
||||
env_vars=$(printenv | cut -d= -f1 | sed 's/^/$/g' | paste -sd, -)
|
||||
|
||||
envsubst "$env_vars" < /etc/nginx/nginx.conf.template > /etc/nginx/nginx.conf
|
||||
envsubst "$env_vars" < /etc/nginx/proxy.conf.template > /etc/nginx/proxy.conf
|
||||
|
||||
envsubst < /etc/nginx/conf.d/default.conf.template > /etc/nginx/conf.d/default.conf
|
||||
|
||||
# Start Nginx using the default entrypoint
|
||||
exec nginx -g 'daemon off;'
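The variable-list trick above is worth unpacking: it builds a comma-separated list of every exported variable name, each prefixed with `$`, which tells `envsubst` to substitute only those names and leave nginx's own `$host`-style runtime variables untouched. A sketch of what it produces:

```bash
# Same pipeline as in the entrypoint; prints something like "$PATH,$HOSTNAME,$NGINX_PORT,..."
printenv | cut -d= -f1 | sed 's/^/$/g' | paste -sd, -
```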
|
9
docker/nginx/https.conf.template
Normal file
|
@ -0,0 +1,9 @@
|
|||
# Please do not directly edit this file. Instead, modify the .env variables related to NGINX configuration.
|
||||
|
||||
listen ${NGINX_SSL_PORT} ssl;
|
||||
ssl_certificate ./../ssl/${NGINX_SSL_CERT_FILENAME};
|
||||
ssl_certificate_key ./../ssl/${NGINX_SSL_CERT_KEY_FILENAME};
|
||||
ssl_protocols ${NGINX_SSL_PROTOCOLS};
|
||||
ssl_prefer_server_ciphers on;
|
||||
ssl_session_cache shared:SSL:10m;
|
||||
ssl_session_timeout 10m;
|
34
docker/nginx/nginx.conf.template
Normal file
|
@ -0,0 +1,34 @@
|
|||
# Please do not directly edit this file. Instead, modify the .env variables related to NGINX configuration.
|
||||
|
||||
user nginx;
|
||||
worker_processes ${NGINX_WORKER_PROCESSES};
|
||||
|
||||
error_log /var/log/nginx/error.log notice;
|
||||
pid /var/run/nginx.pid;
|
||||
|
||||
|
||||
events {
|
||||
worker_connections 1024;
|
||||
}
|
||||
|
||||
|
||||
http {
|
||||
include /etc/nginx/mime.types;
|
||||
default_type application/octet-stream;
|
||||
|
||||
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
|
||||
'$status $body_bytes_sent "$http_referer" '
|
||||
'"$http_user_agent" "$http_x_forwarded_for"';
|
||||
|
||||
access_log /var/log/nginx/access.log main;
|
||||
|
||||
sendfile on;
|
||||
#tcp_nopush on;
|
||||
|
||||
keepalive_timeout ${NGINX_KEEPALIVE_TIMEOUT};
|
||||
|
||||
#gzip on;
|
||||
client_max_body_size ${NGINX_CLIENT_MAX_BODY_SIZE};
|
||||
|
||||
include /etc/nginx/conf.d/*.conf;
|
||||
}
|
10
docker/nginx/proxy.conf.template
Normal file
|
@ -0,0 +1,10 @@
|
|||
# Please do not directly edit this file. Instead, modify the .env variables related to NGINX configuration.
|
||||
|
||||
proxy_set_header Host $host;
|
||||
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
|
||||
proxy_set_header X-Forwarded-Proto $scheme;
|
||||
proxy_http_version 1.1;
|
||||
proxy_set_header Connection "";
|
||||
proxy_buffering off;
|
||||
proxy_read_timeout ${NGINX_PROXY_READ_TIMEOUT};
|
||||
proxy_send_timeout ${NGINX_PROXY_SEND_TIMEOUT};
|
|
@ -1 +0,0 @@
|
|||
|
42
docker/ssrf_proxy/docker-entrypoint.sh
Executable file
|
@ -0,0 +1,42 @@
|
|||
#!/bin/bash
|
||||
|
||||
# Modified based on Squid OCI image entrypoint
|
||||
|
||||
# This entrypoint aims to forward the squid logs to stdout to assist users of
|
||||
# common container related tooling (e.g., kubernetes, docker-compose, etc) to
|
||||
# access the service logs.
|
||||
|
||||
# Moreover, it invokes the squid binary, leaving all the desired parameters to
|
||||
# be provided by the "command" passed to the spawned container. If no command
|
||||
# is provided by the user, the default behavior (as per the CMD statement in
|
||||
# the Dockerfile) will be to use Ubuntu's default configuration [1] and run
|
||||
# squid with the "-NYC" options to mimic the behavior of the Ubuntu provided
|
||||
# systemd unit.
|
||||
|
||||
# [1] The default configuration is changed in the Dockerfile to allow local
|
||||
# network connections. See the Dockerfile for further information.
|
||||
|
||||
echo "[ENTRYPOINT] re-create snakeoil self-signed certificate removed in the build process"
|
||||
if [ ! -f /etc/ssl/private/ssl-cert-snakeoil.key ]; then
|
||||
/usr/sbin/make-ssl-cert generate-default-snakeoil --force-overwrite > /dev/null 2>&1
|
||||
fi
|
||||
|
||||
tail -F /var/log/squid/access.log 2>/dev/null &
|
||||
tail -F /var/log/squid/error.log 2>/dev/null &
|
||||
tail -F /var/log/squid/store.log 2>/dev/null &
|
||||
tail -F /var/log/squid/cache.log 2>/dev/null &
|
||||
|
||||
# Replace environment variables in the template and output to the squid.conf
|
||||
echo "[ENTRYPOINT] replacing environment variables in the template"
|
||||
awk '{
|
||||
while(match($0, /\${[A-Za-z_][A-Za-z_0-9]*}/)) {
|
||||
var = substr($0, RSTART+2, RLENGTH-3)
|
||||
val = ENVIRON[var]
|
||||
$0 = substr($0, 1, RSTART-1) val substr($0, RSTART+RLENGTH)
|
||||
}
|
||||
print
|
||||
}' /etc/squid/squid.conf.template > /etc/squid/squid.conf
|
||||
|
||||
/usr/sbin/squid -Nz
|
||||
echo "[ENTRYPOINT] starting squid"
|
||||
/usr/sbin/squid -f /etc/squid/squid.conf -NYC 1
|
50
docker/ssrf_proxy/squid.conf.template
Normal file
|
@ -0,0 +1,50 @@
|
|||
acl localnet src 0.0.0.1-0.255.255.255 # RFC 1122 "this" network (LAN)
|
||||
acl localnet src 10.0.0.0/8 # RFC 1918 local private network (LAN)
|
||||
acl localnet src 100.64.0.0/10 # RFC 6598 shared address space (CGN)
|
||||
acl localnet src 169.254.0.0/16 # RFC 3927 link-local (directly plugged) machines
|
||||
acl localnet src 172.16.0.0/12 # RFC 1918 local private network (LAN)
|
||||
acl localnet src 192.168.0.0/16 # RFC 1918 local private network (LAN)
|
||||
acl localnet src fc00::/7 # RFC 4193 local private network range
|
||||
acl localnet src fe80::/10 # RFC 4291 link-local (directly plugged) machines
|
||||
acl SSL_ports port 443
|
||||
acl Safe_ports port 80 # http
|
||||
acl Safe_ports port 21 # ftp
|
||||
acl Safe_ports port 443 # https
|
||||
acl Safe_ports port 70 # gopher
|
||||
acl Safe_ports port 210 # wais
|
||||
acl Safe_ports port 1025-65535 # unregistered ports
|
||||
acl Safe_ports port 280 # http-mgmt
|
||||
acl Safe_ports port 488 # gss-http
|
||||
acl Safe_ports port 591 # filemaker
|
||||
acl Safe_ports port 777 # multiling http
|
||||
acl CONNECT method CONNECT
|
||||
http_access deny !Safe_ports
|
||||
http_access deny CONNECT !SSL_ports
|
||||
http_access allow localhost manager
|
||||
http_access deny manager
|
||||
http_access allow localhost
|
||||
include /etc/squid/conf.d/*.conf
|
||||
http_access deny all
|
||||
|
||||
################################## Proxy Server ################################
|
||||
http_port ${HTTP_PORT}
|
||||
coredump_dir ${COREDUMP_DIR}
|
||||
refresh_pattern ^ftp: 1440 20% 10080
|
||||
refresh_pattern ^gopher: 1440 0% 1440
|
||||
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
|
||||
refresh_pattern \/(Packages|Sources)(|\.bz2|\.gz|\.xz)$ 0 0% 0 refresh-ims
|
||||
refresh_pattern \/Release(|\.gpg)$ 0 0% 0 refresh-ims
|
||||
refresh_pattern \/InRelease$ 0 0% 0 refresh-ims
|
||||
refresh_pattern \/(Translation-.*)(|\.bz2|\.gz|\.xz)$ 0 0% 0 refresh-ims
|
||||
refresh_pattern . 0 20% 4320
|
||||
|
||||
|
||||
# cache_dir ufs /var/spool/squid 100 16 256
|
||||
# upstream proxy, set to your own upstream proxy IP to avoid SSRF attacks
|
||||
# cache_peer 172.1.1.1 parent 3128 0 no-query no-digest no-netdb-exchange default
|
||||
|
||||
################################## Reverse Proxy To Sandbox ################################
|
||||
http_port ${REVERSE_PROXY_PORT} accel vhost
|
||||
cache_peer ${SANDBOX_HOST} parent ${SANDBOX_PORT} 0 no-query originserver
|
||||
acl src_all src all
|
||||
http_access allow src_all
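As a quick sanity check that the proxy is up, you can send a request through its published port from the host (a sketch; 3128 is published in the middleware compose file, and whether the request is allowed depends on the `localnet` rules pulled in via `conf.d`):

```bash
curl -I -x http://localhost:3128 http://example.com
```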
|
7570
web/yarn.lock
File diff suppressed because it is too large