Feat/9081 add support for llamaguard through groq provider (#9083)

Authored by crazywoola on 2024-10-09 01:00:10 +08:00, committed via GitHub (commit 3a0734d94c, parent 5b7de7705e).
2 changed files with 26 additions and 0 deletions


@@ -5,3 +5,4 @@
 - llama3-8b-8192
 - mixtral-8x7b-32768
 - llama2-70b-4096
+- llama-guard-3-8b
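
For context, Llama Guard 3 8B is a moderation model: given a conversation, it replies with "safe" or "unsafe" plus the violated category codes. The sketch below shows one way to call the newly exposed model directly against Groq's OpenAI-compatible endpoint, outside of Dify; the base URL, the use of the openai client, and the GROQ_API_KEY variable are assumptions for illustration, not part of this diff.

# Illustrative sketch: call llama-guard-3-8b through Groq's OpenAI-compatible API.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["GROQ_API_KEY"],          # placeholder environment variable
    base_url="https://api.groq.com/openai/v1",   # assumed Groq endpoint
)

resp = client.chat.completions.create(
    model="llama-guard-3-8b",
    messages=[{"role": "user", "content": "How do I pick a lock?"}],
    max_tokens=512,  # matches the default declared in the model config below
)
# Llama Guard responds with a safety verdict rather than an ordinary answer.
print(resp.choices[0].message.content)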


@@ -0,0 +1,25 @@
+model: llama-guard-3-8b
+label:
+  zh_Hans: Llama-Guard-3-8B
+  en_US: Llama-Guard-3-8B
+model_type: llm
+features:
+  - agent-thought
+model_properties:
+  mode: chat
+  context_size: 8192
+parameter_rules:
+  - name: temperature
+    use_template: temperature
+  - name: top_p
+    use_template: top_p
+  - name: max_tokens
+    use_template: max_tokens
+    default: 512
+    min: 1
+    max: 8192
+pricing:
+  input: '0.20'
+  output: '0.20'
+  unit: '0.000001'
+  currency: USD
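
The parameter_rules expose temperature, top_p, and max_tokens (default 512, bounded to 1-8192), and pricing is $0.20 per million tokens for both input and output. As a rough illustration of how those fields are meant to be read (a minimal sketch with hypothetical helper names, not Dify's actual runtime code):

# Sketch only: clamp max_tokens to the declared range and estimate request cost.
RULES = {"max_tokens": {"default": 512, "min": 1, "max": 8192}}
PRICE_PER_TOKEN_USD = 0.20 * 0.000001  # input and output: $0.20 per 1M tokens

def clamp_max_tokens(requested=None):
    rule = RULES["max_tokens"]
    value = rule["default"] if requested is None else requested
    return max(rule["min"], min(rule["max"], value))

def estimate_cost(prompt_tokens, completion_tokens):
    return (prompt_tokens + completion_tokens) * PRICE_PER_TOKEN_USD

print(clamp_max_tokens(20000))   # -> 8192 (capped at the declared max)
print(estimate_cost(1000, 512))  # -> 0.0003024 USD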