Remove the axios dependency and replace Baidu Translate with Youdao Translate; the command table now supports keyword filtering. (#434)

* feat: add support for ‘greeting’ and ‘global reply mode’ commands, improve variable naming and remove unnecessary backend output.

* feat: Add support for black and white lists, global reply mode and voice role settings, private chat switch, and active greeting configuration. Refactor some variable names and comment out redundant code for better readability and reduced backend output.

* feat: Improve the help panel for the new features

* docs: Improve the help text for the 'greeting' feature

* Commit Type: feat, bugfix

Add functionality to view plugin command table, fix bug in blacklist/whitelist, and fix bug where chat mode can still be used in private messaging when disabled.


* refactor: Remove redundant log output.

* Refactor: optimize code logic

* Fix: Fix a bug where the drawing command table's commands were hijacked by other handlers.

* Refactor: 1. Add support for automatically translating replies into Japanese before generating voice messages in VITS voice mode (please monitor the remaining quota after enabling). 2. Add a translation function. 3. Add emotion configuration for the Azure voice mode, allowing the bot to select an appropriate emotional style for each reply.

* Refactor: Handle the character-setting limit being exceeded after adding the emotion configuration.

* Fix: fix bugs

* Refactor: Added error feedback to translation service

* Refactor: Added support for viewing the list of supported roles for each language mode, and fixed some bugs in the emotion switching feature of the Azure mode.

* Refactor: Optimized some command feedback and added owner restriction to chat record export function.

* Refactor: Optimized feedback when viewing role list to avoid excessive messages.

* Refactor: Optimized feedback when configuring multi-emotion mode.

* Feature: Added help instructions for translation feature.

* chore: Adjust help instructions for mood settings

* Fix: Fixed an issue where only the first line of multi-line replies was being read, and where Azure voice was pronouncing punctuation marks.

* Fix: Fixed a bug where switching to Azure voice mode incorrectly prompted for a missing key, and restricted viewing the voice role list to voice mode only.

* Refactor: Add an image OCR function and support translation of both quoted text and images.

* fix: Fix issue with error caused by non-image input.

* Refactor: Optimize code to filter emojis that cannot be displayed properly in claude mode.

* Refactor: Optimize some code structures.

* fix: Fix the bug of returning only one result when entering multiple lines of text on Windows systems.

* Refactor: Optimize code logic for better user experience

* Refactor: Fix the command conflict with other plugins' translation commands

* Refactor: Replace Baidu Translate with Youdao Translate to eliminate configuration steps; optimize the translation experience; prompt about missing dependencies instead of crashing. Also improve the experience of switching voice modes and setting the global reply mode.

* Refactor: Remove unused files and dependencies in the project.

* Feature: Add Youdao translation service to provide more comprehensive translation support.

* Refactor: Optimize translation experience


* Feature: Add keyword search to the command table


* Refactor: Remove redundant code

---------

Co-authored-by: Sean <1519059137@qq.com>
Co-authored-by: ikechan8370 <geyinchibuaa@gmail.com>
This commit is contained in:
Sean Murphy 2023-05-18 18:03:43 +08:00 committed by GitHub
parent 82b83bf015
commit f20248a805
No known key found for this signature in database
GPG key ID: 4AEE18F83AFDEB23
8 changed files with 363 additions and 260 deletions


@ -9,7 +9,9 @@ import SydneyAIClient from '../utils/SydneyAIClient.js'
import { PoeClient } from '../utils/poe/index.js'
import AzureTTS from '../utils/tts/microsoft-azure.js'
import VoiceVoxTTS from '../utils/tts/voicevox.js'
import { translate } from '../utils/translate.js'
import fs from 'fs'
import { getImg, getImageOcrText } from './entertainment.js'
import {
render, renderUrl,
getMessageById,
@ -33,10 +35,13 @@ import uploadRecord from '../utils/uploadRecord.js'
import { SlackClaudeClient } from '../utils/slack/slackClient.js'
import { ChatgptManagement } from './management.js'
import { getPromptByName } from '../utils/prompts.js'
import Translate from '../utils/baiduTranslate.js'
import BingDrawClient from '../utils/BingDraw.js'
import emojiStrip from 'emoji-strip'
import XinghuoClient from "../utils/xinghuo/xinghuo.js";
import XinghuoClient from '../utils/xinghuo/xinghuo.js'
try {
await import('emoji-strip')
} catch (err) {
logger.warn('【ChatGPT-Plugin】依赖emoji-strip未安装会导致azure语音模式下朗读emoji的问题建议执行pnpm install emoji-strip安装')
}
try {
await import('keyv')
} catch (err) {
@ -801,40 +806,15 @@ export class chatgpt extends plugin {
speaker = convertSpeaker(trySplit[0])
prompt = trySplit[1]
}
if (Config.imgOcr) {
// 取消息中的图片、at的头像、回复的图片放入e.img
if (e.at && !e.source) {
e.img = [`https://q1.qlogo.cn/g?b=qq&s=0&nk=${e.at}`]
const isImg = await getImg(e)
if (Config.imgOcr && !!isImg) {
let imgOcrText = await getImageOcrText(e)
if (imgOcrText) {
prompt = prompt + '"'
for (let imgOcrTextKey in imgOcrText) {
prompt += imgOcrText[imgOcrTextKey]
}
if (e.source) {
let reply
if (e.isGroup) {
reply = (await e.group.getChatHistory(e.source.seq, 1)).pop()?.message
} else {
reply = (await e.friend.getChatHistory(e.source.time, 1)).pop()?.message
}
if (reply) {
for (let val of reply) {
if (val.type === 'image') {
e.img = [val.url]
break
}
}
}
}
if (e.img) {
try {
let imgOcrText = ''
for (let i in e.img) {
const imgorc = await Bot.imageOcr(e.img[i])
// if (imgorc.language === 'zh' || imgorc.language === 'en') {
for (let text of imgorc.wordslist) {
imgOcrText += `${text.words} \n`
}
// }
}
prompt = imgOcrText + prompt
} catch (err) { }
prompt = prompt + ' "'
}
}
// 检索是否有屏蔽词
@ -1016,6 +996,8 @@ export class chatgpt extends plugin {
}
}
let response = chatMessage?.text
// 过滤无法正常显示的emoji
if (use === 'claude') response = response.replace(/:[a-zA-Z_]+:/g, '')
let mood = 'blandness'
if (!response) {
await e.reply('没有任何回复', true)
@ -1141,7 +1123,14 @@ export class chatgpt extends plugin {
ttsRegex = ''
}
ttsResponse = response.replace(ttsRegex, '')
// 处理azure语音会读出emoji的问题
try {
let emojiStrip
emojiStrip = (await import('emoji-strip')).default
ttsResponse = emojiStrip(ttsResponse)
} catch (error) {
await this.reply('依赖emoji-strip未安装请执行pnpm install emoji-strip安装依赖', true)
}
// 处理多行回复有时候只会读第一行和azure语音会读出一些标点符号的问题
ttsResponse = ttsResponse.replace(/[-:_*;\n]/g, '')
// 先把文字回复发出去,避免过久等待合成语音
@ -1158,17 +1147,9 @@ export class chatgpt extends plugin {
}
}
let wav
if (Config.ttsMode === 'vits-uma-genshin-honkai' && Config.ttsSpace && ttsResponse.length <= Config.ttsAutoFallbackThreshold) {
if (Config.autoJapanese && (_.isEmpty(Config.baiduTranslateAppId) || _.isEmpty(Config.baiduTranslateSecret))) {
await this.reply('请检查翻译配置是否正确。')
return false
}
if (Config.ttsMode === 'vits-uma-genshin-honkai' && Config.ttsSpace) {
if (Config.autoJapanese) {
try {
const translate = new Translate({
appid: Config.baiduTranslateAppId,
secret: Config.baiduTranslateSecret
})
ttsResponse = await translate(ttsResponse, '日')
} catch (err) {
logger.error(err)
@ -1182,6 +1163,11 @@ export class chatgpt extends plugin {
await this.reply('合成语音发生错误~')
}
} else if (Config.ttsMode === 'azure' && Config.azureTTSKey) {
const ttsRoleAzure = userReplySetting.ttsRoleAzure
const isEn = AzureTTS.supportConfigurations.find(config => config.code === ttsRoleAzure)?.language.includes('en')
if (isEn) {
ttsResponse = (await translate(ttsResponse, '英')).replace('\n', '')
}
let ssml = AzureTTS.generateSsml(ttsResponse, {
speaker,
emotion,
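The diff above replaces a hard `import emojiStrip from 'emoji-strip'` with a guarded dynamic import so a missing optional dependency produces a warning instead of a startup crash. A minimal, generic sketch of that pattern (the module name and fallback below are illustrative, not part of the plugin):

```javascript
// Minimal sketch of the optional-dependency pattern: try a dynamic import,
// and warn + fall back instead of crashing when the module is missing.
async function loadOptional (name, fallback) {
  try {
    return (await import(name)).default
  } catch (err) {
    console.warn(`optional dependency ${name} is not installed, using a fallback`)
    return fallback
  }
}

// Strip emoji when emoji-strip is installed, otherwise pass text through unchanged.
const emojiStrip = await loadOptional('emoji-strip', text => text)
```

Resolving the import once at startup (as the diff does) keeps the per-message TTS path free of repeated import attempts.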


@ -5,11 +5,10 @@ import { generateAudio } from '../utils/tts.js'
import fs from 'fs'
import { emojiRegex, googleRequestUrl } from '../utils/emoj/index.js'
import fetch from 'node-fetch'
import { mkdirs } from '../utils/common.js'
import { makeForwardMsg, mkdirs } from '../utils/common.js'
import uploadRecord from '../utils/uploadRecord.js'
import { makeWordcloud } from '../utils/wordcloud/wordcloud.js'
import Translate, { transMap } from '../utils/baiduTranslate.js'
import _ from 'lodash'
import { translate, translateLangSupports } from '../utils/translate.js'
let useSilk = false
try {
await import('node-silk')
@ -18,10 +17,10 @@ try {
useSilk = false
}
export class Entertainment extends plugin {
constructor (e) {
constructor(e) {
super({
name: 'ChatGPT-Plugin 娱乐小功能',
dsc: '让你的聊天更有趣!现已支持主动打招呼、表情合成、群聊词云统计与文本翻译小功能!',
dsc: '让你的聊天更有趣!现已支持主动打招呼、表情合成、群聊词云统计、文本翻译与图片ocr小功能!',
event: 'message',
priority: 500,
rule: [
@ -44,8 +43,12 @@ export class Entertainment extends plugin {
fnc: 'wordcloud'
},
{
reg: '^#((?:寄批踢)?翻.*|chatgpt翻译帮助)',
reg: '^#((寄批踢|gpt|GPT)?翻.*|chatgpt翻译帮助)',
fnc: 'translate'
},
{
reg: '^#ocr',
fnc: 'ocr'
}
]
})
@ -59,44 +62,112 @@ export class Entertainment extends plugin {
}
]
}
async translate (e) {
async ocr (e) {
let replyMsg
let imgOcrText = await getImageOcrText(e)
if (!imgOcrText) {
await this.reply('没有识别到文字', e.isGroup)
return false
}
replyMsg = await makeForwardMsg(e, imgOcrText, 'OCR结果')
await this.reply(replyMsg, e.isGroup)
}
async translate(e) {
const translateLangLabels = translateLangSupports.map(item => item.label).join('、')
const translateLangLabelAbbrS = translateLangSupports.map(item => item.abbr).join('、')
if (e.msg.trim() === '#chatgpt翻译帮助') {
await this.reply('支持中、日、文(文言文)、英、俄、韩语言之间的文本翻译功能,"寄批踢"为可选前缀' +
'\n示例1. #寄批踢翻英 你好' +
'\t2. #翻中 你好' +
'\t3. #寄批踢翻文 hello')
return
await this.reply(`支持以下语种的翻译:
${translateLangLabels}
在使用本工具时请采用简写的方式描述目标语言此外可以引用消息或图片来进行翻译
示例
1. #gpt翻英 你好
2. #gpt翻中 你好
3. #gpt翻译 hello`)
return true
}
const regExp = /^#(寄批踢|gpt|GPT)?翻(.)([\s\S]*)/
const match = e.msg.trim().match(regExp)
let languageCode = match[2] === '译' ? 'auto' : match[2]
let pendingText = match[3]
const isImg = !!(await getImg(e))?.length
let result = []
let multiText = false
if (languageCode !== 'auto' && !translateLangLabelAbbrS.includes(languageCode)) {
e.reply(`输入格式有误或暂不支持该语言,\n当前支持${translateLangLabels}`, e.isGroup)
return false
}
// 引用回复
if (e.source) {
if (pendingText.length) {
await this.reply('引用模式下不需要添加翻译文本,已自动忽略输入文本...((*・∀・)ゞ→→”', e.isGroup)
}
} else {
if (isImg && pendingText) {
await this.reply('检测到图片输入,已自动忽略输入文本...((*・∀・)ゞ→→', e.isGroup)
}
if (!pendingText && !isImg) {
await this.reply('你让我翻译啥呢 ̄へ ̄!', e.isGroup)
return false
}
}
if (isImg) {
let imgOcrText = await getImageOcrText(e)
multiText = Array.isArray(imgOcrText)
if (imgOcrText) {
pendingText = imgOcrText
} else {
await this.reply('没有识别到有效文字(・-・*)', e.isGroup)
return false
}
} else {
if (e.source) {
let previousMsg
if (e.isGroup) {
previousMsg = (await e.group.getChatHistory(e.source.seq, 1)).pop()?.message
} else {
previousMsg = (await e.friend.getChatHistory(e.source.time, 1)).pop()?.message
}
// logger.warn('previousMsg', previousMsg)
if (previousMsg.find(msg => msg.type === 'text')?.text) {
pendingText = previousMsg.find(msg => msg.type === 'text')?.text
} else {
await this.reply('这是什么怪东西!(⊙ˍ⊙)', e.isGroup)
return false
}
if (_.isEmpty(Config.baiduTranslateAppId) || _.isEmpty(Config.baiduTranslateSecret)) {
this.reply('请检查翻译配置是否正确。')
return
}
const regExp = /(#(?:寄批踢)?翻(.))(.*)/
const msg = e.msg.trim()
const match = msg.match(regExp)
let result = ''
if (!(match[2] in transMap)) {
e.reply('输入格式有误或暂不支持该语言,' +
'\n当前支持中、日、文(文言文)、英、俄、韩。', e.isGroup
)
return
}
const PendingText = match[3]
try {
const translate = new Translate({
appid: Config.baiduTranslateAppId,
secret: Config.baiduTranslateSecret
})
result = await translate(PendingText, match[2])
if (multiText) {
result = await Promise.all(pendingText.map(text => translate(text, languageCode)))
} else {
result = await translate(pendingText, languageCode)
}
// logger.warn(multiText, result)
} catch (err) {
logger.error(err)
result = err.message
await this.reply(err.message, e.isGroup)
return false
}
const totalLength = Array.isArray(result)
? result.reduce((acc, cur) => acc + cur.length, 0)
: result.length
if (totalLength > 300 || multiText) {
// 多条翻译结果
if (Array.isArray(result)) {
result = await makeForwardMsg(e, result, '翻译结果')
} else {
result = ('译文:\n' + result.trim()).split()
result.unshift('原文:\n' + pendingText.trim())
result = await makeForwardMsg(e, result, '翻译结果')
}
await this.reply(result, e.isGroup)
return true
}
async wordcloud (e) {
// 保持原格式输出
result = Array.isArray(result) ? result.join('\n') : result
await this.reply(result, e.isGroup)
return true
}
async wordcloud(e) {
if (e.isGroup) {
let groupId = e.group_id
let lock = await redis.get(`CHATGPT:WORDCLOUD:${groupId}`)
@ -105,7 +176,7 @@ export class Entertainment extends plugin {
return true
}
await e.reply('在统计啦,请稍等...')
await redis.set(`CHATGPT:WORDCLOUD:${groupId}`, '1', { EX: 600 })
await redis.set(`CHATGPT:WORDCLOUD:${groupId}`, '1', {EX: 600})
try {
await makeWordcloud(e, e.group_id)
} catch (err) {
@ -118,7 +189,7 @@ export class Entertainment extends plugin {
}
}
async combineEmoj (e) {
async combineEmoj(e) {
let left = e.msg.codePointAt(0).toString(16).toLowerCase()
let right = e.msg.codePointAt(2).toString(16).toLowerCase()
if (left === right) {
@ -166,7 +237,7 @@ export class Entertainment extends plugin {
return true
}
async sendMessage (e) {
async sendMessage(e) {
if (e.msg.match(/^#chatgpt打招呼帮助/) !== null) {
await this.reply('设置主动打招呼的群聊名单,群号之间以,隔开,参数之间空格隔开\n' +
'#chatgpt打招呼+群号:立即在指定群聊发起打招呼' +
@ -197,7 +268,7 @@ export class Entertainment extends plugin {
}
}
async sendRandomMessage () {
async sendRandomMessage() {
if (Config.debug) {
logger.info('开始处理ChatGPT随机打招呼。')
}
@ -231,7 +302,7 @@ export class Entertainment extends plugin {
}
}
async handleSentMessage (e) {
async handleSentMessage(e) {
const addReg = /^#chatgpt设置打招呼[:]?\s?(\S+)(?:\s+(\d+))?(?:\s+(\d+))?$/
const delReg = /^#chatgpt删除打招呼[:\s]?(\S+)/
const checkReg = /^#chatgpt查看打招呼$/
@ -307,3 +378,51 @@ export class Entertainment extends plugin {
return false
}
}
export async function getImg (e) {
// 取消息中的图片、at的头像、回复的图片放入e.img
if (e.at && !e.source) {
e.img = [`https://q1.qlogo.cn/g?b=qq&s=0&nk=${e.at}`]
}
if (e.source) {
let reply
if (e.isGroup) {
reply = (await e.group.getChatHistory(e.source.seq, 1)).pop()?.message
} else {
reply = (await e.friend.getChatHistory(e.source.time, 1)).pop()?.message
}
if (reply) {
let i = []
for (let val of reply) {
if (val.type === 'image') {
i.push(val.url)
}
}
e.img = i
}
}
return e.img
}
export async function getImageOcrText (e) {
const img = await getImg(e)
if (img) {
try {
let resultArr = []
let eachImgRes = ''
for (let i in img) {
const imgOCR = await Bot.imageOcr(img[i])
for (let text of imgOCR.wordslist) {
eachImgRes += (`${text?.words} \n`)
}
if (eachImgRes) resultArr.push(eachImgRes)
eachImgRes = ''
}
// logger.warn('resultArr', resultArr)
return resultArr
} catch (err) {
return false
// logger.error(err)
}
} else {
return false
}
}
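`getImageOcrText` above builds one text blob per image and silently skips images where OCR finds no words. The aggregation logic can be sketched with the OCR call injected (`ocr` standing in for `Bot.imageOcr`), so it is exercisable without a live bot:

```javascript
// Sketch of the per-image OCR aggregation in getImageOcrText.
// `ocr` must resolve to { wordslist: [{ words }] }, mirroring Bot.imageOcr.
async function collectOcrText (imgUrls, ocr) {
  const resultArr = []
  for (const url of imgUrls) {
    const res = await ocr(url)
    let eachImgRes = ''
    for (const text of res.wordslist) {
      eachImgRes += `${text?.words} \n`
    }
    if (eachImgRes) resultArr.push(eachImgRes) // skip images with no recognized text
  }
  return resultArr
}
```

Returning an array (one entry per image) is what lets the translate command later decide between a single reply and a forwarded multi-message reply.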


@ -230,7 +230,7 @@ export class ChatgptManagement extends plugin {
fnc: 'userPage'
},
{
reg: '^#chatgpt(对话|管理|娱乐|绘图|人物设定|聊天记录)?指令表(帮助)?',
reg: '^#(chatgpt)?(对话|管理|娱乐|绘图|人物设定|聊天记录)?指令表(帮助|搜索(.+))?',
fnc: 'commandHelp'
},
{
@ -276,8 +276,21 @@ export class ChatgptManagement extends plugin {
}
await this.reply(roleList)
}
async ttsSwitch (e) {
let userReplySetting = await redis.get(`CHATGPT:USER:${e.sender.user_id}`)
userReplySetting = !userReplySetting
? getDefaultReplySetting()
: JSON.parse(userReplySetting)
if (!userReplySetting.useTTS) {
let replyMsg
if (userReplySetting.usePicture) {
replyMsg = `当前为${!userReplySetting.useTTS ? '图片模式' : ''},请先切换到语音模式吧~`
} else {
replyMsg = `当前为${!userReplySetting.useTTS ? '文本模式' : ''},请先切换到语音模式吧~`
}
await this.reply(replyMsg, e.isGroup)
return false
}
let regExp = /#语音切换(.*)/
let ttsMode = e.msg.match(regExp)[1]
if (['vits', 'azure', 'voicevox'].includes(ttsMode)) {
@ -295,9 +308,10 @@ export class ChatgptManagement extends plugin {
async commandHelp (e) {
if (!this.e.isMaster) { return this.reply('你没有权限') }
if (e.msg.trim() === '#chatgpt指令表帮助') {
if (/^#(chatgpt)?指令表帮助$/.exec(e.msg.trim())) {
await this.reply('#chatgpt指令表: 查看本插件的所有指令\n' +
'#chatgpt(对话|管理|娱乐|绘图|人物设定|聊天记录)指令表: 查看对应功能分类的指令表')
'#chatgpt(对话|管理|娱乐|绘图|人物设定|聊天记录)指令表: 查看对应功能分类的指令表\n' +
'#chatgpt指令表搜索xxx: 查看包含对应关键词的指令')
return false
}
const categories = {
@ -327,7 +341,33 @@ export class ChatgptManagement extends plugin {
commandSet.push({ name, dsc: plugin.dsc, rule })
}
}
if (e.msg.includes('搜索')) {
let cmd = e.msg.trim().match(/^#(chatgpt)?(对话|管理|娱乐|绘图|人物设定|聊天记录)?指令表(帮助|搜索(.+))?/)[4]
if (!cmd) {
await this.reply('(⊙ˍ⊙)')
return 0
} else {
let searchResults = []
commandSet.forEach(plugin => {
plugin.rule.forEach(item => {
if (item.reg.toLowerCase().includes(cmd.toLowerCase())) {
searchResults.push(item.reg)
}
})
})
if (!searchResults.length) {
await this.reply('没有找到符合的结果,换个关键词吧!', e.isGroup)
return 0
} else if (searchResults.length <= 5) {
await this.reply(searchResults.join('\n'), e.isGroup)
return 1
} else {
let msg = await makeForwardMsg(e, searchResults, e.msg.slice(1).startsWith('chatgpt') ? e.msg.slice(8) : 'chatgpt' + e.msg.slice(1))
await this.reply(msg)
return 1
}
}
}
const generatePrompt = (plugin, command) => {
const category = getCategory(e, plugin)
const commandsStr = command.length ? `正则指令:\n${command.join('\n')}\n` : '正则指令: 无\n'
@ -343,7 +383,7 @@ export class ChatgptManagement extends plugin {
prompts.push(generatePrompt(plugin, commands))
}
}
let msg = await makeForwardMsg(e, prompts, e.msg.slice(1))
let msg = await makeForwardMsg(e, prompts, e.msg.slice(1).startsWith('chatgpt') ? e.msg.slice(1) : ('chatgpt' + e.msg.slice(1)))
await this.reply(msg)
return true
}
@ -501,7 +541,7 @@ export class ChatgptManagement extends plugin {
}
async setDefaultReplySetting (e) {
const reg = /^#chatgpt(打开|关闭|设置)?全局((图片模式|语音模式|(语音角色|角色语音|角色).*)|回复帮助)/
const reg = /^#chatgpt(打开|关闭|设置)?全局((文本模式|图片模式|语音模式|(语音角色|角色语音|角色).*)|回复帮助)/
const matchCommand = e.msg.match(reg)
const settingType = matchCommand[2]
let replyMsg = ''
@ -521,6 +561,23 @@ export class ChatgptManagement extends plugin {
} else if (matchCommand[1] === '设置') {
replyMsg = '请使用“#chatgpt打开全局图片模式”或“#chatgpt关闭全局图片模式”命令来设置回复模式'
} break
case '文本模式':
if (matchCommand[1] === '打开') {
Config.defaultUsePicture = false
Config.defaultUseTTS = false
replyMsg = 'ChatGPT将默认以文本回复'
} else if (matchCommand[1] === '关闭') {
if (Config.defaultUseTTS) {
replyMsg = 'ChatGPT将默认以语音回复'
} else if (Config.defaultUsePicture) {
replyMsg = 'ChatGPT将默认以图片回复'
} else {
Config.defaultUseTTS = true
replyMsg = 'ChatGPT将默认以语音回复'
}
} else if (matchCommand[1] === '设置') {
replyMsg = '请使用“#chatgpt打开全局文本模式”或“#chatgpt关闭全局文本模式”命令来设置回复模式'
} break
case '语音模式':
if (!Config.ttsSpace) {
replyMsg = '您没有配置VITS API请前往锅巴面板进行配置'
@ -541,7 +598,7 @@ export class ChatgptManagement extends plugin {
replyMsg = '请使用“#chatgpt打开全局语音模式”或“#chatgpt关闭全局语音模式”命令来设置回复模式'
} break
case '回复帮助':
replyMsg = '可使用以下命令配置全局回复:\n#chatgpt(打开/关闭)全局(语音/图片)模式\n#chatgpt设置全局(语音角色|角色语音|角色)+角色名称(留空则为随机)'
replyMsg = '可使用以下命令配置全局回复:\n#chatgpt(打开/关闭)全局(语音/图片/文本)模式\n#chatgpt设置全局(语音角色|角色语音|角色)+角色名称(留空则为随机)'
break
default:
if (!Config.ttsSpace) {
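The keyword search added to `commandHelp` walks every plugin's rule list and keeps any regex whose source contains the keyword, case-insensitively. A standalone sketch, using the same `commandSet` shape as in the diff:

```javascript
// Sketch of the command-table keyword search: a case-insensitive substring
// match of the keyword against every rule's regex source string.
function searchCommands (commandSet, keyword) {
  const needle = keyword.toLowerCase()
  const results = []
  for (const plugin of commandSet) {
    for (const item of plugin.rule) {
      if (item.reg.toLowerCase().includes(needle)) {
        results.push(item.reg)
      }
    }
  }
  return results
}
```

As in the diff, up to five hits can be replied inline, while longer result lists go out as a forwarded message to avoid flooding the chat.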


@ -159,8 +159,7 @@ export function supportGuoba () {
field: 'autoJapanese',
label: 'vits模式日语输出',
bottomHelpMessage: '使用vits语音时将机器人的文字回复翻译成日文后获取语音。' +
'需要填写下方的翻译配置配置文档http://api.fanyi.baidu.com/doc/21 ' +
'填写配置后另外支持通过本插件使用文字翻译功能,发送"#chatgpt翻译帮助"查看使用方法。',
'若想使用插件的翻译功能,发送"#chatgpt翻译帮助"查看使用方法,支持图片翻译,引用翻译...',
component: 'Switch'
},
{
@ -575,16 +574,6 @@ export function supportGuoba () {
bottomHelpMessage: '可注册2captcha实现跳过验证码收费服务但很便宜。否则可能会遇到验证码而卡住',
component: 'InputPassword'
},
{
field: 'baiduTranslateAppId',
label: '百度翻译应用ID',
component: 'Input'
},
{
field: 'baiduTranslateSecret',
label: '百度翻译密钥',
component: 'Input'
},
{
field: 'ttsSpace',
label: 'vits-uma-genshin-honkai语音转换API地址',


@ -9,11 +9,9 @@
"@slack/bolt": "^3.13.0",
"@waylaidwanderer/chatgpt-api": "^1.33.2",
"asn1.js": "^5.0.0",
"axios": "^1.3.6",
"chatgpt": "^5.1.1",
"delay": "^5.0.0",
"diff": "^5.1.0",
"emoji-strip": "^1.0.1",
"eventsource": "^2.0.2",
"eventsource-parser": "^1.0.0",
"fastify": "^4.13.0",
@ -21,8 +19,8 @@
"https-proxy-agent": "5.0.1",
"keyv": "^4.5.2",
"keyv-file": "^0.2.0",
"md5-node": "^1.0.1",
"microsoft-cognitiveservices-speech-sdk": "^1.27.0",
"emoji-strip": "^1.0.1",
"node-fetch": "^3.3.1",
"openai": "^3.2.1",
"random": "^4.1.0",


@ -1,141 +0,0 @@
import md5 from 'md5-node'
import axios from 'axios'
// noinspection NonAsciiCharacters
export const transMap = { 中: 'zh', 日: 'jp', 文: 'wyw', 英: 'en', 俄: 'ru', 韩: 'kr' }
const errOr = {
52001: '请求超时,请重试。',
52002: '系统错误,请重试。',
52003: '未授权用户请检查appid是否正确或者服务是否开通。',
54000: '必填参数为空,请检查是否少传参数。',
54001: '签名错误,请检查您的签名生成方法。',
54003: '访问频率受限,请降低您的调用频率,或进行身份认证后切换为高级版/尊享版。',
54004: '账户余额不足,请前往管理控制台为账户充值。',
54005: '长query请求频繁请降低长query的发送频率3s后再试。',
58000: '客户端IP非法检查个人资料里填写的IP地址是否正确可前往开发者信息-基本信息修改。',
58001: '译文语言方向不支持,检查译文语言是否在语言列表里。',
58002: '服务当前已关闭,请前往管理控制台开启服务。',
90107: '认证未通过或未生效,请前往我的认证查看认证进度。'
}
function Translate (config) {
this.requestNumber = 0 // 请求次数
this.config = {
showProgress: 1, // 是否显示进度
requestNumber: 1, // 最大请求次数
agreement: 'http', // 协议
...config
}
this.baiduApi = `${this.config.agreement}://api.fanyi.baidu.com/api/trans/vip/translate`
// 拼接url
this.createUrl = (domain, form) => {
let result = domain + '?'
for (let key in form) {
result += `${key}=${form[key]}&`
}
return result.slice(0, result.length - 1)
}
this.translate = async (value, ...params) => {
let result = ''
let from = 'auto'
let to = 'en'
if (params.length === 1) {
to = transMap[params[0]] || to
} else if (params.length === 2) {
from = transMap[params[0]] || from
to = transMap[params[1]] || to
}
if (typeof value === 'string') {
const res = await this.requestApi(value, { from, to })
result = res[0].dst
}
if (Array.isArray(value) || Object.prototype.toString.call(value) === '[object Object]') {
result = await this._createObjValue(value, { from, to })
}
return result
}
this.requestApi = (value, params) => {
if (this.requestNumber >= this.config.requestNumber) {
return new Promise((resolve) => {
setTimeout(() => {
this.requestApi(value, params).then((res) => {
resolve(res)
})
}, 1000)
})
}
this.requestNumber++
const { appid, secret } = this.config
const q = value
const salt = Math.random()
const sign = md5(`${appid}${q}${salt}${secret}`)
const fromData = {
q: encodeURIComponent(q),
sign,
appid,
salt,
from: params.from || 'auto',
to: params.to || 'en'
}
const fanyiApi = this.createUrl(this.baiduApi, fromData)
return new Promise((resolve, reject) => {
axios
.get(fanyiApi)
.then(({ data: res }) => {
if (!res.error_code) {
const resList = res.trans_result
resolve(resList)
} else {
const errCode = res.error_code
if (errOr[errCode]) {
reject(new Error('翻译出错了~' + errOr[errCode]))
} else {
reject(new Error('翻译出错了~' + errCode))
}
}
})
.finally(() => {
setTimeout(() => {
this.requestNumber--
}, 1000)
})
})
}
// 递归翻译数组或对象
this._createObjValue = async (value, parames) => {
let index = 0
const obj = Array.isArray(value) ? [] : {}
const strDatas = Array.isArray(value) ? value : Object.values(value)
const reqData = strDatas
.filter((item) => typeof item === 'string') // 过滤字符串
.join('\n')
const res = reqData ? await this.requestApi(reqData, parames) : []
for (let key in value) {
if (typeof value[key] === 'string') {
obj[key] = res[index].dst
index++
}
if (
Array.isArray(value[key]) ||
Object.prototype.toString.call(value[key]) === '[object Object]'
) {
obj[key] = await this.translate(value[key], parames) // 递归翻译
}
}
return obj
}
return this.translate
}
export default Translate


@ -119,8 +119,6 @@ const defaultConfig = {
azureTTSSpeaker: 'zh-CN-XiaochenNeural',
voicevoxSpace: '',
voicevoxTTSSpeaker: '护士机器子T',
baiduTranslateAppId: '',
baiduTranslateSecret: '',
azureTTSEmotion: false,
enhanceAzureTTSEmotion: false,
autoJapanese: false,

utils/translate.js (new file, 97 lines)

@ -0,0 +1,97 @@
import md5 from 'md5'
import _ from 'lodash'
// 代码参考https://github.com/yeyang52/yenai-plugin/blob/b50b11338adfa5a4ef93912eefd2f1f704e8b990/model/api/funApi.js#L25
export const translateLangSupports = [
{ code: 'ar', label: '阿拉伯语', abbr: '阿', alphabet: 'A' },
{ code: 'de', label: '德语', abbr: '德', alphabet: 'D' },
{ code: 'ru', label: '俄语', abbr: '俄', alphabet: 'E' },
{ code: 'fr', label: '法语', abbr: '法', alphabet: 'F' },
{ code: 'ko', label: '韩语', abbr: '韩', alphabet: 'H' },
{ code: 'nl', label: '荷兰语', abbr: '荷', alphabet: 'H' },
{ code: 'pt', label: '葡萄牙语', abbr: '葡', alphabet: 'P' },
{ code: 'ja', label: '日语', abbr: '日', alphabet: 'R' },
{ code: 'th', label: '泰语', abbr: '泰', alphabet: 'T' },
{ code: 'es', label: '西班牙语', abbr: '西', alphabet: 'X' },
{ code: 'en', label: '英语', abbr: '英', alphabet: 'Y' },
{ code: 'it', label: '意大利语', abbr: '意', alphabet: 'Y' },
{ code: 'vi', label: '越南语', abbr: '越', alphabet: 'Y' },
{ code: 'id', label: '印度尼西亚语', abbr: '印', alphabet: 'Y' },
{ code: 'zh-CHS', label: '中文', abbr: '中', alphabet: 'Z' }
]
const API_ERROR = '出了点小问题,待会再试试吧'
export async function translate (msg, to = 'auto') {
let from = 'auto'
if (to !== 'auto') to = translateLangSupports.find(item => item.abbr == to)?.code
if (!to) return `未找到翻译的语种,支持的语言为:\n${translateLangSupports.map(item => item.abbr).join('、')}\n`
// 翻译结果为空的提示
const RESULT_ERROR = '找不到翻译结果'
// API 请求错误提示
const API_ERROR = '翻译服务暂不可用,请稍后再试'
const qs = (obj) => {
let res = ''
for (const [k, v] of Object.entries(obj)) { res += `${k}=${encodeURIComponent(v)}&` }
return res.slice(0, res.length - 1)
}
const appVersion = '5.0 (Windows NT 10.0; Win64; x64) Chrome/98.0.4750.0'
const payload = {
from,
to,
bv: md5(appVersion),
client: 'fanyideskweb',
doctype: 'json',
version: '2.1',
keyfrom: 'fanyi.web',
action: 'FY_BY_DEFAULT',
smartresult: 'dict'
}
const headers = {
Host: 'fanyi.youdao.com',
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) Chrome/98.0.4758.102',
Referer: 'https://fanyi.youdao.com/',
'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8',
Cookie: 'OUTFOX_SEARCH_USER_ID_NCOO=133190305.98519628; OUTFOX_SEARCH_USER_ID="2081065877@10.169.0.102";'
}
const api = 'https://fanyi.youdao.com/translate_o?smartresult=dict&smartresult=rule'
const key = 'Ygy_4c=r#e#4EX^NUGUc5'
try {
if (Array.isArray(msg)) {
const results = []
for (let i = 0; i < msg.length; i++) {
const item = msg[i]
const lts = '' + new Date().getTime()
const salt = lts + parseInt(String(10 * Math.random()), 10)
const sign = md5(payload.client + item + salt + key)
const postData = qs(Object.assign({ i: item, lts, sign, salt }, payload))
let { errorCode, translateResult } = await fetch(api, {
method: 'POST',
body: postData,
headers
}).then(res => res.json()).catch(err => console.error(err))
if (errorCode !== 0) return API_ERROR
translateResult = _.flattenDeep(translateResult)?.map(item => item.tgt).join('\n')
if (!translateResult) results.push(RESULT_ERROR)
else results.push(translateResult)
}
return results
} else {
const i = msg // 翻译的内容
const lts = '' + new Date().getTime()
const salt = lts + parseInt(String(10 * Math.random()), 10)
const sign = md5(payload.client + i + salt + key)
const postData = qs(Object.assign({ i, lts, sign, salt }, payload))
let { errorCode, translateResult } = await fetch(api, {
method: 'POST',
body: postData,
headers
}).then(res => res.json()).catch(err => console.error(err))
if (errorCode !== 0) return API_ERROR
translateResult = _.flattenDeep(translateResult)?.map(item => item.tgt).join('\n')
if (!translateResult) return RESULT_ERROR
return translateResult
}
} catch (err) {
return API_ERROR
}
}
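The request signing in `translate()` follows the Youdao web client scheme: `lts` is a millisecond timestamp, `salt` appends one random digit to it, and `sign` is an MD5 over `client + text + salt + key`. A self-contained sketch of just the signing step, using Node's built-in `crypto` (the client name and key are taken verbatim from the diff; this does not perform the HTTP request):

```javascript
import { createHash } from 'crypto'

// Sketch of the Youdao web-client request signing used in translate():
// sign = md5(client + text + salt + key), where salt = timestamp + one random digit.
const md5 = s => createHash('md5').update(s).digest('hex')

function signRequest (text, client = 'fanyideskweb', key = 'Ygy_4c=r#e#4EX^NUGUc5') {
  const lts = '' + Date.now()
  const salt = lts + parseInt(String(10 * Math.random()), 10)
  return { i: text, lts, salt, sign: md5(client + text + salt + key) }
}
```

Since this is an undocumented web endpoint rather than an official API, the key and parameter names can change without notice, which is why `translate()` wraps every call in a catch that returns `API_ERROR` instead of throwing.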