| Author | Commit | Message | Date |
| --- | --- | --- | --- |
| Yash Parmar | e0da0744b5 | add: ollama keep alive parameter added. issue #4024 (#4655) (sketch after this table) | 2024-05-31 12:22:02 +08:00 |
| Weaxs | b189faca52 | feat: update ernie model (#4756) | 2024-05-29 14:57:23 +08:00 |
| xielong | e1cd9aef8f | feat: support baichuan3 turbo, baichuan3 turbo 128k, and baichuan4 (#4762) | 2024-05-29 14:46:04 +08:00 |
| crazywoola | 705a6e3a8e | Fix/4742 ollama num gpu option not consistent with allowed values (#4751) | 2024-05-29 13:33:35 +08:00 |
| xielong | 793f0c1dd6 | fix: Corrected schema link in model_runtime's README.md (#4757) | 2024-05-29 13:03:21 +08:00 |
| xielong | 88b4d69278 | fix: Correct context size for banchuan2-53b and banchuan2-turbo (#4721) | 2024-05-28 16:37:44 +08:00 |
| crazywoola | 27dae156db | fix: colon in file mistral.mistral-small-2402-v1:0 (#4673) | 2024-05-27 13:15:20 +08:00 |
| Giovanny Gutiérrez | 2deb23e00e | fix: Show rerank in system for localai (#4652) | 2024-05-27 12:09:51 +08:00 |
| longzhihun | fe9bf5fc4a | [seanguo] add support of amazon titan v2 and modify the price of amazon titan v1 (#4643) (Co-authored-by: Chenhe Gu <guchenhe@gmail.com>) | 2024-05-26 23:30:22 +08:00 |
| miendinh | f804adbff3 | feat: Support for Vertex AI - load Default Application Configuration (#4641) (Co-authored-by: miendinh <miendinh@users.noreply.github.com>, crazywoola <427733928@qq.com>) | 2024-05-25 13:40:25 +08:00 |
| Krasus.Chen | f156014daa | update lite8k/speed8k/128k max_token to newest (#4636) (Co-authored-by: Your Name <chen@krasus.red>) | 2024-05-24 19:33:42 +08:00 |
| Bowen Liang | 3fda2245a4 | improve: extract method for safe loading yaml file and avoid using PyYaml's FullLoader (#4031) (sketch after this table) | 2024-05-24 12:08:12 +08:00 |
| Patryk Garstecki | 296887754f | Support for Vertex AI (#4586) | 2024-05-24 12:01:40 +08:00 |
| QuietRocket | 9ae72cdcf4 | feat: Add Gemini Flash (#4616) | 2024-05-24 11:43:06 +08:00 |
| takatost | 11642192d1 | chore: add https://api.openai.com placeholder in OpenAI api base (#4604) | 2024-05-23 12:56:05 +08:00 |
| 呆萌闷油瓶 | e57bdd4e58 | chore:update gpt-3.5-turbo and gpt-4-turbo parameter for azure (#4596) | 2024-05-23 11:51:38 +08:00 |
| somethingwentwell | 461488e9bf | Add Azure OpenAI API version for GPT4o support (#4569) (Co-authored-by: wwwc <wwwc@outlook.com>) | 2024-05-22 17:43:16 +08:00 |
| Justin Wu | 3ab19be9ea | Fix bedrock claude wrong pricing (#4572) (Co-authored-by: Justin Wu <justin.wu@ringcentral.com>) | 2024-05-22 14:28:28 +08:00 |
| 呆萌闷油瓶 | d5a33a0323 | feat:add gpt-4o for azure (#4568) | 2024-05-22 11:02:43 +08:00 |
| Bowen Liang | e8e213ad1e | chore: apply and fix flake8-bugbear lint rules (#4496) | 2024-05-20 16:34:13 +08:00 |
| Ever | 4086f5051c | feat:Provide parameter config for mask_sensitive_info of MiniMax mode… (#4294) (Co-authored-by: 老潮 <zhangyongsheng@3vjia.com>, takatost <takatost@users.noreply.github.com>, takatost <takatost@gmail.com>) | 2024-05-20 10:15:27 +08:00 |
| fanghongtai | 1cca100a48 | fix:modify spelling errors: lanuage ->language in schema.md (#4499) (Co-authored-by: wxfanghongtai <wxfanghongtai@gf.com.cn>) | 2024-05-19 18:31:05 +08:00 |
| Bowen Liang | 04ad46dd31 | chore: skip unnecessary key checks prior to accessing a dictionary (#4497) | 2024-05-19 18:30:45 +08:00 |
| Yeuoly | 091fba74cb | enhance: claude stream tool call (#4469) | 2024-05-17 12:43:58 +08:00 |
| jiaqianjing | 0ac5d621b6 | add llm: ernie-character-8k of wenxin (#4448) | 2024-05-16 18:31:07 +08:00 |
| sino | 6e9066ebf4 | feat: support doubao llm and embeding models (#4431) | 2024-05-16 11:41:24 +08:00 |
| Yash Parmar | 332baca538 | FIX: fix the temperature value of ollama model (#4027) | 2024-05-15 08:05:54 +08:00 |
| Yeuoly | e8311357ff | feat: gpt-4o (#4346) | 2024-05-14 02:52:41 +08:00 |
| orangeclk | ece0f08a2b | add yi models (#4335) (Co-authored-by: 陈力坤 <likunchen@caixin.com>) | 2024-05-13 17:40:53 +08:00 |
| Weaxs | 8cc492721b | fix: minimax streaming function_call message (#4271) | 2024-05-11 21:07:22 +08:00 |
| Joshua | a80fe20456 | add-some-new-models-hosted-on-nvidia (#4303) | 2024-05-11 21:05:31 +08:00 |
| 呆萌闷油瓶 | 4796f9d914 | feat:add gpt-4-turbo for azure (#4287) | 2024-05-11 13:02:56 +08:00 |
| Sebastian.W | a588df4371 | Add rerank model type for LocalAI provider (#3952) | 2024-05-11 11:29:28 +08:00 |
| Bowen Liang | 228de1f12a | fix: miss usage of os.path.join for URL assembly and add tests on yarl (#4224) | 2024-05-10 18:14:48 +08:00 |
| sino | 4aa21242b6 | feat: add volcengine maas model provider (#4142) | 2024-05-08 12:45:53 +08:00 |
| Yong723 | 8ce93faf08 | Typo on deepseek.yaml and yi.yaml (#4170) | 2024-05-08 10:52:04 +08:00 |
| Su Yang | 9f440c11e0 | feat: DeepSeek (#4162) | 2024-05-08 00:28:16 +08:00 |
| Joshua | 58bd5627bf | Add-Deepseek (#4157) | 2024-05-07 22:45:38 +08:00 |
| Moonlit | 2fdd64c1b5 | feat: add proxy configuration for Cohere model (#4152) | 2024-05-07 18:12:13 +08:00 |
| VoidIsVoid | 543a00e597 | feat: update model_provider jina to support custom url and model (#4110) (Co-authored-by: Gimling <huangjl@ruyi.ai>, takatost <takatost@gmail.com>) | 2024-05-07 17:43:24 +08:00 |
| Minamiyama | f361c7004d | feat: support vision models from xinference (#4094) (Co-authored-by: Yeuoly <admin@srmxy.cn>) | 2024-05-07 17:37:36 +08:00 |
| Tomy | bb7c62777d | Add support for local ai speech to text (#3921) (Co-authored-by: Yeuoly <admin@srmxy.cn>) | 2024-05-07 17:14:24 +08:00 |
| Charlie.Wei | 087b7a6607 | azure_openai add gpt-4-turbo-2024-04-09 model (#4144) (Co-authored-by: luowei <glpat-EjySCyNjWiLqAED-YmwM>, crazywoola <427733928@qq.com>, crazywoola <100913391+crazywoola@users.noreply.github.com>) | 2024-05-07 15:55:23 +08:00 |
| Weaxs | 6f1911533c | bug fix: update minimax model_apis (#4116) | 2024-05-07 14:40:24 +08:00 |
| Yeuoly | d5d8b98d82 | feat: support openai stream usage (#4140) | 2024-05-07 13:49:45 +08:00 |
| Joshua | 51a9e678f0 | Leptonai integrate (#4079) | 2024-05-05 14:37:47 +08:00 |
| chenx5 | ad76ee76a8 | Update bedrock.yaml add Region Asia Pacific (Sydney) (#4016) | 2024-05-05 10:49:17 +08:00 |
| orangeclk | cbdb861ee4 | add glm-3-turbo max_tokens parameter setting (#4017) (Co-authored-by: 陈力坤 <likunchen@caixin.com>) | 2024-04-30 17:08:04 +08:00 |
| Weaxs | 1e6e8b446d | feat: support minimax abab6.5, abab6.5s (#4012) | 2024-04-30 17:02:01 +08:00 |
| Joshua | 2f84d00300 | fix-nvidia-llama3 (#3973) | 2024-04-29 13:41:15 +08:00 |