| Name | Last commit message | Last commit date |
| --- | --- | --- |
| __base | feat: backend model load balancing support (#4927) | 2024-06-05 00:13:04 +08:00 |
| anthropic | chore: skip unnecessary key checks prior to accessing a dictionary (#4497) | 2024-05-19 18:30:45 +08:00 |
| azure_openai | chore:update gpt-3.5-turbo and gpt-4-turbo parameter for azure (#4596) | 2024-05-23 11:51:38 +08:00 |
| baichuan | chore: fix indention violations by applying E111 to E117 ruff rules (#4925) | 2024-06-05 14:05:15 +08:00 |
| bedrock | chore: fix indention violations by applying E111 to E117 ruff rules (#4925) | 2024-06-05 14:05:15 +08:00 |
| chatglm | fix: miss usage of os.path.join for URL assembly and add tests on yarl (#4224) | 2024-05-10 18:14:48 +08:00 |
| cohere | feat: add proxy configuration for Cohere model (#4152) | 2024-05-07 18:12:13 +08:00 |
| deepseek | Typo on deepseek.yaml and yi.yaml (#4170) | 2024-05-08 10:52:04 +08:00 |
| google | fix: gemini timeout error (#4955) | 2024-06-06 10:19:03 +08:00 |
| groq | fix: credentials validate failed for groqcloud model provider (#3817) | 2024-04-25 12:09:44 +08:00 |
| huggingface_hub | | |
| jina | feat: update model_provider jina to support custom url and model (#4110) | 2024-05-07 17:43:24 +08:00 |
| leptonai | Leptonai integrate (#4079) | 2024-05-05 14:37:47 +08:00 |
| localai | fix: Show rerank in system for localai (#4652) | 2024-05-27 12:09:51 +08:00 |
| minimax | feat:Provide parameter config for mask_sensitive_info of MiniMax mode… (#4294) | 2024-05-20 10:15:27 +08:00 |
| mistralai | | |
| moonshot | feat: moonshot fc (#3629) | 2024-04-19 14:04:30 +08:00 |
| nvidia | add-some-new-models-hosted-on-nvidia (#4303) | 2024-05-11 21:05:31 +08:00 |
| nvidia_nim | chore: optimize nvidia nim credential schema and info (#4898) | 2024-06-04 02:26:26 +08:00 |
| ollama | add: ollama keep alive parameter added. issue #4024 (#4655) | 2024-05-31 12:22:02 +08:00 |
| openai | feat: set default memory messages limit to infinite (#5002) | 2024-06-06 17:39:44 +08:00 |
| openai_api_compatible | | |
| openllm | chore: skip unnecessary key checks prior to accessing a dictionary (#4497) | 2024-05-19 18:30:45 +08:00 |
| openrouter | | |
| replicate | feat: replicate supports default version. (#3884) | 2024-04-26 21:16:22 +08:00 |
| spark | | |
| togetherai | add together ai model setting (#3895) | 2024-04-26 20:43:17 +08:00 |
| tongyi | | |
| triton_inference_server | | |
| vertex_ai | feat: added Anthropic Claude3 models to Google Cloud Vertex AI (#4870) | 2024-06-04 02:52:46 +08:00 |
| volcengine_maas | feat: support doubao llm and embeding models (#4431) | 2024-05-16 11:41:24 +08:00 |
| wenxin | fix: update presence_penalty configuration for wenxin AI ernie-4.0-8k and ernie-3.5-8k models (#5039) | 2024-06-09 14:44:11 +08:00 |
| xinference | feat: support vision models from xinference (#4094) | 2024-05-07 17:37:36 +08:00 |
| yi | add yi models (#4335) | 2024-05-13 17:40:53 +08:00 |
| zhipuai | add glm-3-turbo max_tokens parameter setting (#4017) | 2024-04-30 17:08:04 +08:00 |
| __init__.py | | |
| _position.yaml | add-nvidia-mim (#4882) | 2024-06-03 21:10:18 +08:00 |
| model_provider_factory.py | feat: backend model load balancing support (#4927) | 2024-06-05 00:13:04 +08:00 |