| Author | Commit | Message | Date |
| --- | --- | --- | --- |
| Bowen Liang | 063191889d | chore: apply ruff's pyupgrade linter rules to modernize Python code with targeted version (#2419) | 2024-02-09 15:21:33 +08:00 |
| Bowen Liang | 65a02f7d32 | chore: apply F811 linter rule to eliminate redefined imports and methods (#2412) | 2024-02-07 16:28:45 +08:00 |
| 呆萌闷油瓶 | 2166473852 | Feat/add spark3.5 llm (#2314) (Co-authored-by: lux@njuelectronics.com, crazywoola <427733928@qq.com>) | 2024-01-31 17:57:17 +08:00 |
| Bowen Liang | cc9e74123c | improve: introduce isort for linting Python imports (#1983) | 2024-01-12 12:34:01 +08:00 |
| takatost | d069c668f8 | Model Runtime (#1858) (Co-authored-by: StyleZhang <jasonapring2015@outlook.com>, Garfield Dai <dai.hai@foxmail.com>, chenhe <guchenhe@gmail.com>, jyong <jyong@dify.ai>, Joel <iamjoel007@gmail.com>, Yeuoly <admin@srmxy.cn>) | 2024-01-02 23:42:00 +08:00 |
| Charlie.Wei | b0d8d196e1 | azure openai add gpt-4-1106-preview、gpt-4-vision-preview models (#1751) (Co-authored-by: luowei <glpat-EjySCyNjWiLqAED-YmwM>, crazywoola <100913391+crazywoola@users.noreply.github.com>) | 2023-12-14 09:55:30 +08:00 |
| zxhlyh | 451af66be0 | feat: add jina embedding (#1647) (Co-authored-by: takatost <takatost@gmail.com>) | 2023-11-29 14:58:11 +08:00 |
| takatost | 4a55d5729d | feat: add anthropic claude-2.1 support (#1591) | 2023-11-22 01:46:19 +08:00 |
| takatost | d654770732 | feat: supports for new version of openllm (#1554) | 2023-11-17 14:07:36 +08:00 |
| takatost | 41d0a8b295 | feat: [backend] vision support (#1510) (Co-authored-by: Garfield Dai <dai.hai@foxmail.com>) | 2023-11-13 22:05:46 +08:00 |
| takatost | 9de67c586f | feat: update free plan rules of spark (#1515) | 2023-11-13 17:00:36 +08:00 |
| takatost | 4dfbcd0b4e | feat: support chatglm_turbo model #1443 (#1460) | 2023-11-06 04:33:05 -06:00 |
| takatost | 076f3289d2 | feat: add spark v3.0 llm support (#1434) | 2023-10-31 03:13:11 -05:00 |
| takatost | 7c9b585a47 | feat: support weixin ernie-bot-4 and chat mode (#1375) | 2023-10-18 02:35:24 -05:00 |
| takatost | 3efaa713da | feat: use xinference client instead of xinference (#1339) | 2023-10-13 02:46:09 -05:00 |
| takatost | f4be2b8bcd | fix: raise error in minimax stream generate (#1336) | 2023-10-12 23:48:28 -05:00 |
| takatost | 2851a9f04e | feat: optimize minimax llm call (#1312) | 2023-10-11 07:17:41 -05:00 |
| takatost | 875dfbbf0e | fix: openllm completion start with prompt, remove it (#1303) | 2023-10-10 04:44:19 -05:00 |
| takatost | 4ab4bcc074 | feat: support openllm embedding (#1293) | 2023-10-09 23:09:35 -05:00 |
| takatost | 1d4f019de4 | feat: add baichuan llm support (#1294) (Co-authored-by: zxhlyh <jasonapring2015@outlook.com>) | 2023-10-09 23:09:26 -05:00 |
| Garfield Dai | e409895c02 | Feat/huggingface embedding support (#1211) (Co-authored-by: StyleZhang <jasonapring2015@outlook.com>) | 2023-09-22 13:59:02 +08:00 |
| takatost | ae3f1ac0a9 | feat: support gpt-3.5-turbo-instruct model (#1195) | 2023-09-19 02:05:04 +08:00 |
| takatost | 827c97f0d3 | feat: add zhipuai (#1188) | 2023-09-18 17:32:31 +08:00 |
| takatost | c4d8bdc3db | fix: hf hosted inference check (#1128) | 2023-09-09 00:29:48 +08:00 |
| takatost | d75e8aeafa | feat: disable anthropic retry (#1067) | 2023-08-31 16:44:46 +08:00 |
| takatost | 2eba98a465 | feat: optimize anthropic connection pool (#1066) | 2023-08-31 16:18:59 +08:00 |
| takatost | 417c19577a | feat: add LocalAI local embedding model support (#1021) (Co-authored-by: StyleZhang <jasonapring2015@outlook.com>) | 2023-08-29 22:22:02 +08:00 |
| takatost | 0796791de5 | feat: hf inference endpoint stream support (#1028) | 2023-08-26 19:48:34 +08:00 |
| Uranus | 2d9616c29c | fix: xinference last token being ignored (#1013) | 2023-08-25 18:15:05 +08:00 |
| takatost | 9ae91a2ec3 | feat: optimize xinference request max token key and stop reason (#998) | 2023-08-24 18:11:15 +08:00 |
| takatost | bd3a9b2f8d | fix: xinference-chat-stream-response (#991) | 2023-08-24 14:39:34 +08:00 |
| takatost | 18d3877151 | feat: optimize xinference stream (#989) | 2023-08-24 13:58:34 +08:00 |
| takatost | a76fde3d23 | feat: optimize hf inference endpoint (#975) | 2023-08-23 19:47:50 +08:00 |
| takatost | 78d3aa5fcd | fix: embedding init err (#956) | 2023-08-22 17:43:59 +08:00 |
| takatost | 4f3053a8cc | fix: xinference chat completion error (#952) | 2023-08-22 15:58:04 +08:00 |
| takatost | 866ee5da91 | fix: openllm generate cutoff (#945) | 2023-08-22 13:43:36 +08:00 |
| takatost | e0a48c4972 | fix: xinference chat support (#939) | 2023-08-21 20:44:29 +08:00 |
| takatost | 6c832ee328 | fix: remove openllm pypi package because of this package too large (#931) | 2023-08-21 02:12:28 +08:00 |
| takatost | 0cc0b6e052 | fix: error raise status code not exist (#888) | 2023-08-17 15:33:35 +08:00 |
| takatost | f42e7d1a61 | feat: add spark v2 support (#885) | 2023-08-17 15:08:57 +08:00 |
| takatost | c4d759dfba | fix: wenxin error not raise when stream mode (#884) | 2023-08-17 13:40:00 +08:00 |
| takatost | cc52cdc2a9 | Feat/add free provider apply (#829) | 2023-08-14 12:44:35 +08:00 |
| takatost | 5fa2161b05 | feat: server multi models support (#799) | 2023-08-12 00:57:00 +08:00 |