dify/api/core/model_providers
Name                         Last commit message                                                      Last commit date
models/                      feat: hf inference endpoint stream support (#1028)                       2023-08-26 19:48:34 +08:00
providers/                   feat: optimize xinference request max token key and stop reason (#998)  2023-08-24 18:11:15 +08:00
rules/                       feat: add openllm support (#928)                                         2023-08-20 19:04:33 +08:00
error.py                     feat: server multi models support (#799)                                 2023-08-12 00:57:00 +08:00
model_factory.py             fix: universal chat when default model invalid (#905)                    2023-08-18 16:20:42 +08:00
model_provider_factory.py    feat: add openllm support (#928)                                         2023-08-20 19:04:33 +08:00
rules.py                     feat: server multi models support (#799)                                 2023-08-12 00:57:00 +08:00
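The layout suggests a factory-style design: model_provider_factory.py resolves a provider backend (e.g. Xinference, OpenLLM, Hugging Face inference endpoints, per the commit messages) and model_factory.py builds models from it, with per-provider configuration kept under rules/. The snippet below is a minimal, hypothetical sketch of that registry/factory pattern, not Dify's actual API; every class and function name here is an assumption for illustration.

```python
# Hypothetical sketch of a provider/model factory pattern; names are
# illustrative and do not reflect Dify's real classes or signatures.
from abc import ABC, abstractmethod


class BaseModelProvider(ABC):
    """One concrete subclass per backend (e.g. OpenAI, Xinference, OpenLLM)."""

    @abstractmethod
    def get_model(self, model_name: str, model_type: str):
        """Return a ready-to-use model client for this provider."""


class OpenLLMProvider(BaseModelProvider):
    def get_model(self, model_name: str, model_type: str):
        # A real implementation would construct an HTTP client for the
        # OpenLLM server here; a plain dict stands in for it.
        return {"provider": "openllm", "name": model_name, "type": model_type}


class ModelProviderFactory:
    """Registry that maps a provider name to its implementation class."""

    _registry: dict[str, type[BaseModelProvider]] = {
        "openllm": OpenLLMProvider,
    }

    @classmethod
    def get_provider(cls, provider_name: str) -> BaseModelProvider:
        try:
            return cls._registry[provider_name]()
        except KeyError:
            raise ValueError(f"unknown provider: {provider_name}") from None


if __name__ == "__main__":
    provider = ModelProviderFactory.get_provider("openllm")
    print(provider.get_model("facebook/opt-1.3b", "text-generation"))
```

A registry keyed by provider name keeps the call sites independent of concrete backends: adding a new provider only means registering another BaseModelProvider subclass.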