Author | Commit | Message | Date
Jyong | 0e08526428 | fix hybrid search reranking check (#1563) (Co-authored-by: jyong <jyong@dify.ai>) | 2023-11-18 17:06:28 +08:00
Jyong | 4588831bff | Feat/add retriever rerank (#1560) (Co-authored-by: jyong <jyong@dify.ai>) | 2023-11-17 22:13:37 +08:00
takatost | 41d0a8b295 | feat: [backend] vision support (#1510) (Co-authored-by: Garfield Dai <dai.hai@foxmail.com>) | 2023-11-13 22:05:46 +08:00
takatost | d7ae86799c | feat: support basic feature of OpenAI new models (#1476) | 2023-11-07 04:05:59 -06:00
takatost | 4dfbcd0b4e | feat: support chatglm_turbo model #1443 (#1460) | 2023-11-06 04:33:05 -06:00
takatost | 076f3289d2 | feat: add spark v3.0 llm support (#1434) | 2023-10-31 03:13:11 -05:00
takatost | e122d677ad | fix: return wrong when init 0 quota in trial provider (#1394) | 2023-10-21 14:02:38 -05:00
takatost | 4c63cbf5b1 | feat: adjust anthropic (#1387) | 2023-10-20 02:27:46 -05:00
Garfield Dai | fe14130b3c | refactor advanced prompt core. (#1350) | 2023-10-18 20:02:52 +08:00
wayne.wang | 52ebffa857 | fix: app config zhipu chatglm_std model, but it still use chatglm_lit… (#1377) (Co-authored-by: wayne.wang <wayne.wang@beibei.com>) | 2023-10-18 05:07:36 -05:00
takatost | 7c9b585a47 | feat: support weixin ernie-bot-4 and chat mode (#1375) | 2023-10-18 02:35:24 -05:00
takatost | 3efaa713da | feat: use xinference client instead of xinference (#1339) | 2023-10-13 02:46:09 -05:00
takatost | 9822f687f7 | fix: max tokens of OpenAI gpt-3.5-turbo-instruct to 4097 (#1338) | 2023-10-13 02:07:07 -05:00
Garfield Dai | 42a5b3ec17 | feat: advanced prompt backend (#1301) (Co-authored-by: takatost <takatost@gmail.com>) | 2023-10-12 10:13:10 -05:00
takatost | cbf095465c | feat: remove llm client use (#1316) | 2023-10-11 14:02:53 -05:00
takatost | 2851a9f04e | feat: optimize minimax llm call (#1312) | 2023-10-11 07:17:41 -05:00
takatost | c536f85b2e | fix: compatibility issues with the tongyi model. (#1310) | 2023-10-11 05:16:26 -05:00
takatost | 8480b0197b | fix: prompt for baichuan text generation models (#1299) | 2023-10-10 13:01:18 +08:00
takatost | 4ab4bcc074 | feat: support openllm embedding (#1293) | 2023-10-09 23:09:35 -05:00
takatost | 1d4f019de4 | feat: add baichuan llm support (#1294) (Co-authored-by: zxhlyh <jasonapring2015@outlook.com>) | 2023-10-09 23:09:26 -05:00
takatost | 373e90ee6d | fix: detached model in completion thread (#1269) | 2023-10-02 22:27:25 +08:00
takatost | 41d4c5b424 | fix: count down thread in completion db not commit (#1267) | 2023-10-02 10:19:26 +08:00
takatost | 8606d80c66 | fix: request timeout when openai completion (#1265) | 2023-10-01 16:00:23 +08:00
takatost | a31466d34e | fix: db session not commit before long llm call running (#1251) | 2023-09-27 21:40:26 +08:00
takatost | d38eac959b | fix: wenxin model name invalid when llm call (#1248) | 2023-09-27 16:29:13 +08:00
Garfield Dai | e409895c02 | Feat/huggingface embedding support (#1211) (Co-authored-by: StyleZhang <jasonapring2015@outlook.com>) | 2023-09-22 13:59:02 +08:00
takatost | 435f804c6f | fix: gpt-3.5-turbo-instruct context size to 8192 (#1196) | 2023-09-19 02:10:22 +08:00
takatost | ae3f1ac0a9 | feat: support gpt-3.5-turbo-instruct model (#1195) | 2023-09-19 02:05:04 +08:00
takatost | 827c97f0d3 | feat: add zhipuai (#1188) | 2023-09-18 17:32:31 +08:00
takatost | c8bd76cd66 | fix: inference embedding validate (#1187) | 2023-09-16 03:09:36 +08:00
takatost | f9082104ed | feat: add hosted moderation (#1158) | 2023-09-12 10:26:12 +08:00
Jyong | 642842d61b | Feat:dataset retiever resource (#1123) (Co-authored-by: jyong <jyong@dify.ai>, StyleZhang <jasonapring2015@outlook.com>) | 2023-09-10 15:17:43 +08:00
Joel | 2d5ad0d208 | feat: support optional query content (#1097) (Co-authored-by: Garfield Dai <dai.hai@foxmail.com>) | 2023-09-10 00:12:34 +08:00
takatost | c4d8bdc3db | fix: hf hosted inference check (#1128) | 2023-09-09 00:29:48 +08:00
takatost | a7cdb745c1 | feat: support spark v2 validate (#1086) | 2023-09-01 20:53:32 +08:00
takatost | 2eba98a465 | feat: optimize anthropic connection pool (#1066) | 2023-08-31 16:18:59 +08:00
takatost | 417c19577a | feat: add LocalAI local embedding model support (#1021) (Co-authored-by: StyleZhang <jasonapring2015@outlook.com>) | 2023-08-29 22:22:02 +08:00
takatost | 0796791de5 | feat: hf inference endpoint stream support (#1028) | 2023-08-26 19:48:34 +08:00
takatost | 9ae91a2ec3 | feat: optimize xinference request max token key and stop reason (#998) | 2023-08-24 18:11:15 +08:00
takatost | 2c30d19cbe | feat: add baichuan prompt (#985) | 2023-08-24 10:22:36 +08:00
takatost | 9b247fccd4 | feat: adjust hf max tokens (#979) | 2023-08-23 22:24:50 +08:00
takatost | a76fde3d23 | feat: optimize hf inference endpoint (#975) | 2023-08-23 19:47:50 +08:00
takatost | 78d3aa5fcd | fix: embedding init err (#956) | 2023-08-22 17:43:59 +08:00
takatost | e0a48c4972 | fix: xinference chat support (#939) | 2023-08-21 20:44:29 +08:00
takatost | 6c832ee328 | fix: remove openllm pypi package because of this package too large (#931) | 2023-08-21 02:12:28 +08:00
takatost | 25264e7852 | feat: add xinference embedding model support (#930) | 2023-08-20 19:35:07 +08:00
takatost | 18dd0d569d | fix: xinference max_tokens alisa error (#929) | 2023-08-20 19:12:52 +08:00
takatost | 3ea8d7a019 | feat: add openllm support (#928) | 2023-08-20 19:04:33 +08:00
takatost | da3f10a55e | feat: server xinference support (#927) | 2023-08-20 17:46:41 +08:00
takatost | 95b179fb39 | fix: replicate text generation model validate (#923) | 2023-08-19 21:40:42 +08:00