* main: (35 commits)
fix https://github.com/langgenius/dify/issues/9409 (#9433)
update dataset clean rule (#9426)
add clean 7 days datasets (#9424)
fix: resolve overlap issue with API Extension selector and modal (#9407)
refactor: update the default values of top-k parameter in vdb to be consistent (#9367)
fix: incorrect webapp image displayed (#9401)
Fix/economical knowledge retrieval (#9396)
feat: add timezone conversion for time tool (#9393)
fix: Deprecated gemma2-9b model in Fireworks AI Provider (#9373)
feat: storybook (#9324)
fix: use gpt-4o-mini for validating credentials (#9387)
feat: Enable baiduvector integration test (#9369)
fix: remove the stream option of zhipu and gemini (#9319)
fix: add missing vikingdb param in docker .env.example (#9334)
feat: add minimax abab6.5t support (#9365)
fix: (#9336 followup) skip poetry preparation in style workflow when no change in api folder (#9362)
feat: add glm-4-flashx, deprecated chatglm_turbo (#9357)
fix: Azure OpenAI o1 max_completion_token and get_num_token_from_messages error (#9326)
fix: In the output, 'ta' is sometimes reversed to 'at' #8015 (#8791)
refactor: Add an enumeration type and use the factory pattern to obtain the corresponding class (#9356)
...
* main: (121 commits)
fix: remove the latest message from the user that does not have any answer yet (#9297)
Add Volcengine VikingDB as new vector provider (#9287)
chore: translate i18n files (#9288)
chore: add baidu-obs and supabase for .env.example (#9289)
chore: add abstract decorator and output log when query embedding fails (#9264)
Feat/new account page (#9236)
Feat/implement-refresh-tokens (#9233)
feat: refresh-token (#9286)
chore: translate i18n files (#9284)
feat: support baidu vector db (#9185)
Feat: rerank model verification in front end (#9271)
Fix/s3 iam add region name (#7819)
chore: optimize the trace ops slow queries on node executions. (#9282)
chore: use cache instead of re-querying node record during workflow execution (#9280)
chore: fix the misclassification of the opensearch-py package (#9266)
fix: add new domain to whitelist (#9265)
fix: move exception to debug mode (#9258)
feat: add supabase object storage (#9229)
fix: dialog box cannot correctly display LaTeX formulas (#9242)
Fix/agent external knowledge retrieval (#9241)
...
- Introduced `TokenPair` model for managing access and refresh tokens.
- Added `refresh_token` method to generate new tokens upon expiration (see the sketch after this list).
- Updated login/logout processes to handle token pairs and enhanced security.
- Replaced `get_remote_ip` with `extract_remote_ip` for clarity.
- Added endpoint for refreshing tokens to maintain user session continuity.
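
For orientation, a minimal sketch of how such a token-pair refresh flow can work. `TokenPair` and `refresh_token` come from the summary above; the in-memory store, token format, and the `_issue_token_pair` helper are assumptions for illustration, not the actual implementation.

```python
# Illustrative sketch only -- not the actual Dify implementation.
# `TokenPair` and `refresh_token` are named in the summary above; the
# in-memory store and token format below are assumptions.
import secrets
from dataclasses import dataclass

# refresh tokens that have been issued and not yet rotated or revoked
_valid_refresh_tokens: set[str] = set()


@dataclass
class TokenPair:
    access_token: str
    refresh_token: str


def _issue_token_pair() -> TokenPair:
    """Create a new access/refresh token pair and remember the refresh token."""
    pair = TokenPair(
        access_token=secrets.token_urlsafe(32),
        refresh_token=secrets.token_urlsafe(32),
    )
    _valid_refresh_tokens.add(pair.refresh_token)
    return pair


def refresh_token(old_refresh_token: str) -> TokenPair:
    """Rotate tokens when the access token expires: the old refresh token is
    invalidated so it can only be redeemed once, and a fresh pair is returned."""
    if old_refresh_token not in _valid_refresh_tokens:
        raise ValueError("unknown or already-used refresh token")
    _valid_refresh_tokens.discard(old_refresh_token)
    return _issue_token_pair()
```

In this sketch, login would call `_issue_token_pair()` and hand both tokens to the client, while the refresh endpoint would accept the refresh token and respond with the new pair.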
* main: (77 commits)
feat: add voyage ai as a new model provider (#8747)
docs: add english versions for the files customizable_model_scale_out and predefined_model_scale_out (#8871)
fix: #8843 event: tts_message_end always return in api streaming resp… (#8846)
Add Jamba and Llama3.2 model support (#8878)
fix(workflow): update tagging logic in GitHub Actions (#8882)
chore: bump ruff to 0.6.8 for fixing violation in SIM910 (#8869)
refactor: update Callback to an abstract class (#8868)
feat: deprecate gte-Qwen2-7B-instruct embedding model (#8866)
feat: add internlm2.5-20b and qwen2.5-coder-7b model (#8862)
fix: customize model credentials were invalid despite the provider credentials being active (#8864)
fix: update qwen2.5-coder-7b model name (#8861)
fix(workflow/nodes/knowledge-retrieval/use-config): Preserve rerankin… (#8842)
chore: fix wrong VectorType match case (#8857)
feat: add min-connection and max-connection for pgvector (#8841)
feat(Tools): add feishu tools (#8800)
fix: delete harm catalog settings for gemini (#8829)
Add Llama3.2 models in Groq provider (#8831)
feat: deprecate mistral model for siliconflow (#8828)
fix: AnalyticdbVector retrieval scores (#8803)
fix: close log status option raise error (#8826)
...
* main: (40 commits)
feat: allow users to specify timeout for text generations and workflows by environment variable (#8395)
Fix: operation position of answer in logs (#8411)
fix: when the variable does not exist, an error should be reported (#8413)
fix(workflow): the answer node after the iteration node containing the answer was output prematurely (#8419)
fix: logs and remove unused code in CacheEmbedding (#8409)
fix: resolve runtime error when self.folder is None (#8401)
Fix: Support Bedrock cross region inference #8190 (Update Model name to distinguish between different region groups) (#8402)
fix(docker): aliyun oss path env key (#8394)
fix: pyproject.toml typo (#8396)
fix: o1-mini 65563 -> 65536 (#8388)
fix: sandbox issue related httpx and requests (#8397)
chore: improve usage of stripping prefix or suffix of string with Ruff 0.6.5 (#8392)
fix (#8322 followup): resolve the violation of pylint rules (#8391)
chore: refurbish python code by applying Pylint linter rules (#8322)
support hunyuan-turbo (#8372)
chore: update firecrawl scrape to V1 api (#8367)
fix(workflow): both parallel and single branch errors occur in if-else (#8378)
fix: edit load balancing not pass id (#8370)
fix: add before send to remove langfuse defaultErrorResponse (#8361)
fix: when editing load balancing config, do not pass the hidden empty field value (#8366)
...