feat: support configuring QQ numbers in the blacklist/whitelist; support random role dialogue in all voice modes; add viewing the current user's reply settings; improve the related global settings features; support randomly selecting a role for the Azure speech service; improve setting the global voice role and viewing the role list; refactor the code so existing voice services support the greeting feature; fix the display issue of Azure voice role selection on the Guoba panel. (#436)

* feat: add support for 'greeting' and 'global reply mode' commands; improve variable naming and remove unnecessary backend output.

* feat: Add support for blacklists and whitelists, global reply mode and voice role settings, a private chat switch, and active greeting configuration. Refactor some variable names and comment out redundant code for better readability and less backend output.

* feat: improve the help panel for the new features

* docs: improve the help text for the 'greeting' feature

* feat/fix: Add functionality to view the plugin command table; fix a bug in the blacklist/whitelist; fix a bug where chat mode could still be used in private messages when disabled.

* refactor: Remove redundant log output.

* Refactor: optimize code logic

* Fix: fix the bug where the drawing command table's commands were hijacked by other commands.

* Refactor: 1. Add support for automatically translating replies to Japanese and generating voice messages in VITS voice mode (monitor your remaining quota after enabling). 2. Add a translation function. 3. Add emotion configuration for the Azure voice mode, allowing the bot to select an appropriate emotional style for replies.

* Refactor: Handle the issue where adding the emotion configuration pushed the character setting (persona) past its length limit.

* Fix: fix bugs

* Refactor: Added error feedback to translation service

* Refactor: Added support for viewing the list of supported roles for each voice mode, and fixed some bugs in the emotion-switching feature of the Azure mode.

* Refactor: Optimized some command feedback and added owner restriction to chat record export function.

* Refactor: Optimized feedback when viewing role list to avoid excessive messages.

* Refactor: Optimized feedback when configuring multi-emotion mode.

* Feature: Added help instructions for translation feature.

* chore: Adjust help instructions for mood settings

* Fix: Fixed an issue where only the first line of multi-line replies was read and Azure voice pronounced punctuation marks.

* Fix: Fixed a bug where switching to Azure voice mode incorrectly prompted for a missing key, and restricted viewing the voice role list to when voice mode is active.

* Refactor: Add image OCR function and support translation for both quoted text and image.

* fix: Fix issue with error caused by non-image input.

* Refactor: Optimize code to filter out emojis that cannot be displayed properly in Claude mode.

* Refactor: Optimize some code structures.

* fix: Fix the bug where entering multiple lines of text on Windows returned only one result.

* Refactor: Optimize code logic for better user experience

* Refactor: Fix the conflict issue with other plugin translation commands

* Refactor: Replace Baidu Translation with Youdao Translation to eliminate configuration steps and improve the translation experience; prompt for missing dependencies instead of raising program errors. Also optimize the experience of switching between voice mode and setting the global reply mode.
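
The "prompt for missing dependencies" behaviour is similar to the optional-import guard the plugin already uses for `node-silk`: probe the package at runtime and fall back gracefully. A minimal sketch of that pattern; the helper name `hasModule` is our own illustration, not from the plugin source:

```javascript
// Sketch: probe an optional dependency instead of crashing at import time.
// The helper name `hasModule` is illustrative, not from the plugin source.
async function hasModule (name) {
  try {
    await import(name) // resolves only if the package is actually installed
    return true
  } catch (err) {
    return false
  }
}
```

A caller can then reply with a short install hint when the probe fails, instead of surfacing a stack trace to the chat.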

* Refactor: Remove unused files and dependencies in the project.

* Feature: Add Youdao translation service to provide more comprehensive translation support.

* Refactor: Optimize translation experience

* Feature: Add functionality of keyword search command

* Refactor: Remove redundant code

* Add: Add support for randomly selecting roles for Azure voice. Refactor the code so that existing voice services support the 'greeting' feature. Fix the display issue of Azure voice role selection on the Guoba panel.
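
As the diff below shows, random Azure role selection picks one entry from `supportConfigurations`, derives the translation target from its language, and, when the role supports emotions, also picks a random emotion style. A condensed sketch of that logic (the function wrapper is ours; field names follow the diff):

```javascript
// Sketch of the random Azure role selection added in this change.
// `configs` mirrors AzureTTS.supportConfigurations entries.
function pickRandomAzureRole (configs) {
  const role = configs[Math.floor(Math.random() * configs.length)]
  // English roles map to the translation target '英'; others keep the
  // first character of their languageDetail, as in the diff.
  const prefix = role.languageDetail.charAt(0)
  const language = prefix.startsWith('E') ? '英' : prefix
  let emotion
  if (role.emotion) { // roles with emotion support get a random style too
    const keys = Object.keys(role.emotion)
    emotion = keys[Math.floor(Math.random() * keys.length)]
  }
  return { speaker: role.code, language, emotion }
}
```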

* Refactor: Remove redundant code

* Refactor: Improve the function of setting global voice roles and viewing role lists. Now you can set default roles for each voice service separately or view the supported role list.

* Refactor: Remove redundant code

* Feature: Add new function to support random character dialogues in all voice modes, add the ability to view the current user’s reply settings, and improve related functions in the global settings.

* Refactor: Add compatibility directive for viewing reply settings feature

* Feature: support adding QQ number to blacklist/whitelist
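
With this change the lists mix group numbers with QQ numbers, where a leading `^` marks a QQ number (e.g. `^123456`). A condensed sketch of the check added to `abstractChat` in the diff; the standalone `isAllowed` wrapper is ours, and the logic mirrors the diff, including the non-empty whitelist requiring a matching `^`-prefixed user entry:

```javascript
// Sketch of the conversation whitelist/blacklist check.
// Entries are group IDs ('123456789') or QQ numbers prefixed with '^' ('^123456').
function isAllowed (whitelist, blacklist, groupId, userId) {
  if (whitelist.length > 0) {
    if (groupId && !whitelist.includes(String(groupId))) return false
    // Mirroring the diff: a non-empty whitelist also requires the user
    // to appear as a '^'-prefixed entry.
    const users = whitelist.filter(x => x.startsWith('^')).map(x => x.slice(1))
    if (!users.includes(String(userId))) return false
  }
  if (blacklist.length > 0) {
    if (groupId && blacklist.includes(String(groupId))) return false
    const users = blacklist.filter(x => x.startsWith('^')).map(x => x.slice(1))
    if (users.includes(String(userId))) return false
  }
  return true
}
```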

* fix: fix the issue where the global settings command was captured by the start/stop-work (上班/下班) commands

---------

Co-authored-by: Sean <1519059137@qq.com>
Co-authored-by: ikechan8370 <geyinchibuaa@gmail.com>
Sean Murphy committed on 2023-06-05 11:40:10 +08:00 (committed by GitHub)
parent 7007cacf6f
commit bdad936c70
8 changed files with 704 additions and 447 deletions

@ -7,11 +7,10 @@ import { ChatGPTAPI } from 'chatgpt'
import { BingAIClient } from '@waylaidwanderer/chatgpt-api'
import SydneyAIClient from '../utils/SydneyAIClient.js'
import { PoeClient } from '../utils/poe/index.js'
import AzureTTS from '../utils/tts/microsoft-azure.js'
import AzureTTS, { supportConfigurations } from '../utils/tts/microsoft-azure.js'
import VoiceVoxTTS from '../utils/tts/voicevox.js'
import { translate } from '../utils/translate.js'
import fs from 'fs'
import { getImg, getImageOcrText } from './entertainment.js'
import {
render, renderUrl,
getMessageById,
@ -21,7 +20,7 @@ import {
completeJSON,
isImage,
getUserData,
getDefaultReplySetting, isCN, getMasterQQ
getDefaultReplySetting, isCN, getMasterQQ, getUserReplySetting, getImageOcrText, getImg, processList
} from '../utils/common.js'
import { ChatGPTPuppeteer } from '../utils/browser.js'
import { KeyvFile } from 'keyv-file'
@ -544,12 +543,7 @@ export class chatgpt extends plugin {
}
async switch2Text (e) {
let userSetting = await redis.get(`CHATGPT:USER:${e.sender.user_id}`)
if (!userSetting) {
userSetting = getDefaultReplySetting()
} else {
userSetting = JSON.parse(userSetting)
}
let userSetting = await getUserReplySetting(this.e)
userSetting.usePicture = false
userSetting.useTTS = false
await redis.set(`CHATGPT:USER:${e.sender.user_id}`, JSON.stringify(userSetting))
@ -577,12 +571,7 @@ export class chatgpt extends plugin {
}
break
}
let userSetting = await redis.get(`CHATGPT:USER:${e.sender.user_id}`)
if (!userSetting) {
userSetting = getDefaultReplySetting()
} else {
userSetting = JSON.parse(userSetting)
}
let userSetting = await getUserReplySetting(this.e)
userSetting.useTTS = true
userSetting.usePicture = false
await redis.set(`CHATGPT:USER:${e.sender.user_id}`, JSON.stringify(userSetting))
@ -629,37 +618,35 @@ export class chatgpt extends plugin {
let speaker = e.msg.replace(regex, '').trim() || '随机'
switch (Config.ttsMode) {
case 'vits-uma-genshin-honkai': {
let userSetting = await redis.get(`CHATGPT:USER:${e.sender.user_id}`)
if (!userSetting) {
userSetting = getDefaultReplySetting()
} else {
userSetting = JSON.parse(userSetting)
}
let userSetting = await getUserReplySetting(this.e)
userSetting.ttsRole = convertSpeaker(speaker)
if (speakers.indexOf(userSetting.ttsRole) >= 0) {
await redis.set(`CHATGPT:USER:${e.sender.user_id}`, JSON.stringify(userSetting))
await this.reply(`您的默认语音角色已被设置为”${userSetting.ttsRole}`)
await this.reply(`当前语音模式为${Config.ttsMode},您的默认语音角色已被设置为 "${userSetting.ttsRole}" `)
} else if (speaker === '随机') {
userSetting.ttsRole = '随机'
await redis.set(`CHATGPT:USER:${e.sender.user_id}`, JSON.stringify(userSetting))
await this.reply(`当前语音模式为${Config.ttsMode},您的默认语音角色已被设置为 "随机" `)
} else {
await this.reply(`抱歉,"${userSetting.ttsRole}"我还不认识呢`)
}
break
}
case 'azure': {
let userSetting = await getUserReplySetting(this.e)
let chosen = AzureTTS.supportConfigurations.filter(s => s.name === speaker)
if (chosen.length === 0) {
if (speaker === '随机') {
userSetting.ttsRoleAzure = '随机'
await redis.set(`CHATGPT:USER:${e.sender.user_id}`, JSON.stringify(userSetting))
await this.reply(`当前语音模式为${Config.ttsMode},您的默认语音角色已被设置为 "随机" `)
} else if (chosen.length === 0) {
await this.reply(`抱歉,没有"${speaker}"这个角色目前azure模式下支持的角色有${AzureTTS.supportConfigurations.map(item => item.name).join('、')}`)
} else {
let userSetting = await redis.get(`CHATGPT:USER:${e.sender.user_id}`)
if (!userSetting) {
userSetting = getDefaultReplySetting()
} else {
userSetting = JSON.parse(userSetting)
}
userSetting.ttsRoleAzure = chosen[0].code
await redis.set(`CHATGPT:USER:${e.sender.user_id}`, JSON.stringify(userSetting))
// Config.azureTTSSpeaker = chosen[0].code
const supportEmotion = AzureTTS.supportConfigurations.find(config => config.name === speaker)?.emotion
await this.reply(`您的默认语音角色已被设置为 ${speaker}-${chosen[0].gender}-${chosen[0].languageDetail} ${supportEmotion && Config.azureTTSEmotion ? ',此角色支持多情绪配置,建议重新使用设定并结束对话以获得最佳体验!' : ''}`)
await this.reply(`当前语音模式为${Config.ttsMode},您的默认语音角色已被设置为 ${speaker}-${chosen[0].gender}-${chosen[0].languageDetail} ${supportEmotion && Config.azureTTSEmotion ? ',此角色支持多情绪配置,建议重新使用设定并结束对话以获得最佳体验!' : ''}`)
}
break
}
@ -671,6 +658,13 @@ export class chatgpt extends plugin {
speaker = match[1]
style = match[2]
}
let userSetting = await getUserReplySetting(e)
if (speaker === '随机') {
userSetting.ttsRoleVoiceVox = '随机'
await redis.set(`CHATGPT:USER:${e.sender.user_id}`, JSON.stringify(userSetting))
await this.reply(`当前语音模式为${Config.ttsMode},您的默认语音角色已被设置为 "随机" `)
break
}
let chosen = VoiceVoxTTS.supportConfigurations.filter(s => s.name === speaker)
if (chosen.length === 0) {
await this.reply(`抱歉,没有"${speaker}"这个角色目前voicevox模式下支持的角色有${VoiceVoxTTS.supportConfigurations.map(item => item.name).join('、')}`)
@ -680,15 +674,9 @@ export class chatgpt extends plugin {
await this.reply(`抱歉,"${speaker}"这个角色没有"${style}"这个风格,目前支持的风格有${chosen[0].styles.map(item => item.name).join('、')}`)
break
}
let userSetting = await redis.get(`CHATGPT:USER:${e.sender.user_id}`)
if (!userSetting) {
userSetting = getDefaultReplySetting()
} else {
userSetting = JSON.parse(userSetting)
}
userSetting.ttsRoleVoiceVox = chosen[0].name + (style ? `-${style}` : '')
await redis.set(`CHATGPT:USER:${e.sender.user_id}`, JSON.stringify(userSetting))
await this.reply(`您的默认语音角色已被设置为”${userSetting.ttsRoleVoiceVox}`)
await this.reply(`当前语音模式为${Config.ttsMode},您的默认语音角色已被设置为 "${userSetting.ttsRoleVoiceVox}" `)
break
}
}
@ -698,23 +686,6 @@ export class chatgpt extends plugin {
* #chatgpt
*/
async chatgpt (e) {
if (!e.isMaster && e.isPrivate && !Config.enablePrivateChat) {
// await this.reply('ChatGpt私聊通道已关闭。')
return false
}
if (e.isGroup) {
let cm = new ChatgptManagement()
let [groupWhitelist, groupBlacklist] = await cm.processList(Config.groupWhitelist, Config.groupBlacklist)
// logger.info('groupWhitelist:', Config.groupWhitelist, 'groupBlacklist', Config.groupBlacklist)
const whitelist = groupWhitelist.filter(group => group.trim())
if (whitelist.length > 0 && !whitelist.includes(e.group_id.toString())) {
return false
}
const blacklist = groupBlacklist.filter(group => group.trim())
if (blacklist.length > 0 && blacklist.includes(e.group_id.toString())) {
return false
}
}
let prompt
if (this.toggleMode === 'at') {
if (!e.raw_message || e.msg?.startsWith('#')) {
@ -781,15 +752,24 @@ export class chatgpt extends plugin {
}
async abstractChat (e, prompt, use) {
let userSetting = await redis.get(`CHATGPT:USER:${e.sender.user_id}`)
if (userSetting) {
userSetting = JSON.parse(userSetting)
if (Object.keys(userSetting).indexOf('useTTS') < 0) {
userSetting.useTTS = Config.defaultUseTTS
}
} else {
userSetting = getDefaultReplySetting()
// 关闭私聊通道后不回复
if (!e.isMaster && e.isPrivate && !Config.enablePrivateChat) {
return false
}
// 黑白名单过滤对话
let [whitelist, blacklist] = processList(Config.whitelist, Config.blacklist)
if (whitelist.length > 0) {
if (e.isGroup && !whitelist.includes(e.group_id.toString())) return false
const list = whitelist.filter(elem => elem.startsWith('^')).map(elem => elem.slice(1))
if (!list.includes(e.sender.user_id.toString())) return false
}
if (blacklist.length > 0) {
if (e.isGroup && blacklist.includes(e.group_id.toString())) return false
const list = blacklist.filter(elem => elem.startsWith('^')).map(elem => elem.slice(1))
if (list.includes(e.sender.user_id.toString())) return false
}
let userSetting = await getUserReplySetting(this.e)
let useTTS = !!userSetting.useTTS
let speaker
if (Config.ttsMode === 'vits-uma-genshin-honkai') {
@ -869,10 +849,7 @@ export class chatgpt extends plugin {
}
}
const emotionFlag = await redis.get(`CHATGPT:WRONG_EMOTION:${e.sender.user_id}`)
let userReplySetting = await redis.get(`CHATGPT:USER:${e.sender.user_id}`)
userReplySetting = !userReplySetting
? getDefaultReplySetting()
: JSON.parse(userReplySetting)
let userReplySetting = await getUserReplySetting(this.e)
// 图片模式就不管了,降低抱歉概率
if (Config.ttsMode === 'azure' && Config.enhanceAzureTTSEmotion && userReplySetting.useTTS === true && await AzureTTS.getEmotionPrompt(e)) {
switch (emotionFlag) {
@ -1163,10 +1140,23 @@ export class chatgpt extends plugin {
await this.reply('合成语音发生错误~')
}
} else if (Config.ttsMode === 'azure' && Config.azureTTSKey) {
const ttsRoleAzure = userReplySetting.ttsRoleAzure
const isEn = AzureTTS.supportConfigurations.find(config => config.code === ttsRoleAzure)?.language.includes('en')
if (isEn) {
ttsResponse = (await translate(ttsResponse, '英')).replace('\n', '')
if (speaker !== '随机') {
let languagePrefix = AzureTTS.supportConfigurations.find(config => config.code === speaker).languageDetail.charAt(0)
languagePrefix = languagePrefix.startsWith('E') ? '英' : languagePrefix
ttsResponse = (await translate(ttsResponse, languagePrefix)).replace('\n', '')
} else {
let role, languagePrefix
role = AzureTTS.supportConfigurations[Math.floor(Math.random() * supportConfigurations.length)]
speaker = role.code
languagePrefix = role.languageDetail.charAt(0).startsWith('E') ? '英' : role.languageDetail.charAt(0)
ttsResponse = (await translate(ttsResponse, languagePrefix)).replace('\n', '')
if (role?.emotion) {
const keys = Object.keys(role.emotion)
emotion = keys[Math.floor(Math.random() * keys.length)]
}
logger.info('using speaker: ' + speaker)
logger.info('using language: ' + languagePrefix)
logger.info('using emotion: ' + emotion)
}
let ssml = AzureTTS.generateSsml(ttsResponse, {
speaker,
@ -1177,6 +1167,7 @@ export class chatgpt extends plugin {
speaker
}, await ssml)
} else if (Config.ttsMode === 'voicevox' && Config.voicevoxSpace) {
ttsResponse = (await translate(ttsResponse, '日')).replace('\n', '')
wav = await VoiceVoxTTS.generateAudio(ttsResponse, {
speaker
})
@ -1267,10 +1258,6 @@ export class chatgpt extends plugin {
}
async chatgpt1 (e) {
if (!e.isMaster && e.isPrivate && !Config.enablePrivateChat) {
await this.reply('ChatGpt私聊通道已关闭。')
return false
}
if (!Config.allowOtherMode) {
return false
}
@ -1290,10 +1277,6 @@ export class chatgpt extends plugin {
}
async chatgpt3 (e) {
if (!e.isMaster && e.isPrivate && !Config.enablePrivateChat) {
await this.reply('ChatGpt私聊通道已关闭。')
return false
}
if (!Config.allowOtherMode) {
return false
}
@ -1332,10 +1315,6 @@ export class chatgpt extends plugin {
}
async bing (e) {
if (!e.isMaster && e.isPrivate && !Config.enablePrivateChat) {
await this.reply('ChatGpt私聊通道已关闭。')
return false
}
if (!Config.allowOtherMode) {
return false
}
@ -1355,10 +1334,6 @@ export class chatgpt extends plugin {
}
async claude (e) {
if (!e.isMaster && e.isPrivate && !Config.enablePrivateChat) {
// await this.reply('ChatGpt私聊通道已关闭。')
return false
}
if (!Config.allowOtherMode) {
return false
}
@ -1376,11 +1351,8 @@ export class chatgpt extends plugin {
await this.abstractChat(e, prompt, 'claude')
return true
}
async xh (e) {
if (!e.isMaster && e.isPrivate && !Config.enablePrivateChat) {
// await this.reply('ChatGpt私聊通道已关闭。')
return false
}
if (!Config.allowOtherMode) {
return false
}

@ -5,10 +5,13 @@ import { generateAudio } from '../utils/tts.js'
import fs from 'fs'
import { emojiRegex, googleRequestUrl } from '../utils/emoj/index.js'
import fetch from 'node-fetch'
import { makeForwardMsg, mkdirs } from '../utils/common.js'
import { getImageOcrText, getImg, makeForwardMsg, mkdirs } from '../utils/common.js'
import uploadRecord from '../utils/uploadRecord.js'
import { makeWordcloud } from '../utils/wordcloud/wordcloud.js'
import { translate, translateLangSupports } from '../utils/translate.js'
import AzureTTS from '../utils/tts/microsoft-azure.js'
import VoiceVoxTTS from '../utils/tts/voicevox.js'
let useSilk = false
try {
await import('node-silk')
@ -17,7 +20,7 @@ try {
useSilk = false
}
export class Entertainment extends plugin {
constructor(e) {
constructor (e) {
super({
name: 'ChatGPT-Plugin 娱乐小功能',
dsc: '让你的聊天更有趣现已支持主动打招呼、表情合成、群聊词云统计、文本翻译与图片ocr小功能',
@ -60,12 +63,13 @@ export class Entertainment extends plugin {
{
// 设置十分钟左右的浮动
cron: '0 ' + Math.ceil(Math.random() * 10) + ' 7-23/' + Config.helloInterval + ' * * ?',
// cron: '0 ' + '*/' + Config.helloInterval + ' * * * ?',
// cron: '*/2 * * * *',
name: 'ChatGPT主动随机说话',
fnc: this.sendRandomMessage.bind(this)
}
]
}
async ocr (e) {
let replyMsg
let imgOcrText = await getImageOcrText(e)
@ -76,7 +80,8 @@ export class Entertainment extends plugin {
replyMsg = await makeForwardMsg(e, imgOcrText, 'OCR结果')
await this.reply(replyMsg, e.isGroup)
}
async translate(e) {
async translate (e) {
const translateLangLabels = translateLangSupports.map(item => item.label).join('')
const translateLangLabelAbbrS = translateLangSupports.map(item => item.abbr).join('')
if (e.msg.trim() === '#chatgpt翻译帮助') {
@ -171,7 +176,8 @@ ${translateLangLabels}
await this.reply(result, e.isGroup)
return true
}
async wordcloud(e) {
async wordcloud (e) {
if (e.isGroup) {
let groupId = e.group_id
let lock = await redis.get(`CHATGPT:WORDCLOUD:${groupId}`)
@ -180,7 +186,7 @@ ${translateLangLabels}
return true
}
await e.reply('在统计啦,请稍等...')
await redis.set(`CHATGPT:WORDCLOUD:${groupId}`, '1', {EX: 600})
await redis.set(`CHATGPT:WORDCLOUD:${groupId}`, '1', { EX: 600 })
try {
await makeWordcloud(e, e.group_id)
} catch (err) {
@ -224,7 +230,7 @@ ${translateLangLabels}
}
}
async combineEmoj(e) {
async combineEmoj (e) {
let left = e.msg.codePointAt(0).toString(16).toLowerCase()
let right = e.msg.codePointAt(2).toString(16).toLowerCase()
if (left === right) {
@ -272,7 +278,7 @@ ${translateLangLabels}
return true
}
async sendMessage(e) {
async sendMessage (e) {
if (e.msg.match(/^#chatgpt打招呼帮助/) !== null) {
await this.reply('设置主动打招呼的群聊名单,群号之间以,隔开,参数之间空格隔开\n' +
'#chatgpt打招呼+群号:立即在指定群聊发起打招呼' +
@ -303,23 +309,71 @@ ${translateLangLabels}
}
}
async sendRandomMessage() {
async sendRandomMessage () {
if (Config.debug) {
logger.info('开始处理ChatGPT随机打招呼。')
}
let toSend = Config.initiativeChatGroups || []
for (let i = 0; i < toSend.length; i++) {
if (!toSend[i]) {
for (const element of toSend) {
if (!element) {
continue
}
let groupId = parseInt(toSend[i])
let groupId = parseInt(element)
if (Bot.getGroupList().get(groupId)) {
// 打招呼概率
if (Math.floor(Math.random() * 100) < Config.helloProbability) {
let message = await generateHello()
logger.info(`打招呼给群聊${groupId}` + message)
if (Config.defaultUseTTS) {
let audio = await generateAudio(message, Config.defaultTTSRole)
let audio
const [defaultVitsTTSRole, defaultAzureTTSRole, defaultVoxTTSRole] = [Config.defaultTTSRole, Config.azureTTSSpeaker, Config.voicevoxTTSSpeaker]
let ttsSupportKinds = []
if (Config.azureTTSKey) ttsSupportKinds.push(1)
if (Config.ttsSpace) ttsSupportKinds.push(2)
if (Config.voicevoxSpace) ttsSupportKinds.push(3)
if (!ttsSupportKinds.length) {
logger.warn('没有配置任何语音服务!')
return false
}
const randomIndex = Math.floor(Math.random() * ttsSupportKinds.length)
switch (ttsSupportKinds[randomIndex]) {
case 1 : {
const isEn = AzureTTS.supportConfigurations.find(config => config.code === defaultAzureTTSRole)?.language.includes('en')
if (isEn) {
message = (await translate(message, '英')).replace('\n', '')
}
audio = await AzureTTS.generateAudio(message, {
defaultAzureTTSRole
})
break
}
case 2 : {
if (Config.autoJapanese) {
try {
message = await translate(message, '日')
} catch (err) {
logger.error(err)
}
}
try {
audio = await generateAudio(message, defaultVitsTTSRole, '中日混合(中文用[ZH][ZH]包裹起来,日文用[JA][JA]包裹起来)')
} catch (err) {
logger.error(err)
}
break
}
case 3 : {
message = (await translate(message, '日')).replace('\n', '')
try {
audio = await VoiceVoxTTS.generateAudio(message, {
speaker: defaultVoxTTSRole
})
} catch (err) {
logger.error(err)
}
break
}
}
if (useSilk) {
await Bot.sendGroupMsg(groupId, await uploadRecord(audio))
} else {
@ -337,7 +391,7 @@ ${translateLangLabels}
}
}
async handleSentMessage(e) {
async handleSentMessage (e) {
const addReg = /^#chatgpt设置打招呼[:]?\s?(\S+)(?:\s+(\d+))?(?:\s+(\d+))?$/
const delReg = /^#chatgpt删除打招呼[:\s]?(\S+)/
const checkReg = /^#chatgpt查看打招呼$/
@ -395,8 +449,8 @@ ${translateLangLabels}
return false
} else {
Config.initiativeChatGroups = Config.initiativeChatGroups
.filter(group => group.trim() !== '')
.concat(validGroups)
.filter(group => group.trim() !== '')
.concat(validGroups)
}
if (typeof paramArray[2] === 'undefined' && typeof paramArray[3] === 'undefined') {
replyMsg = `已更新打招呼设置:\n${!e.isGroup ? '群号:' + Config.initiativeChatGroups.join(', ') + '\n' : ''}间隔时间:${Config.helloInterval}小时\n触发概率:${Config.helloProbability}%`
@ -413,51 +467,3 @@ ${translateLangLabels}
return false
}
}
export async function getImg (e) {
// 取消息中的图片、at的头像、回复的图片放入e.img
if (e.at && !e.source) {
e.img = [`https://q1.qlogo.cn/g?b=qq&s=0&nk=${e.at}`]
}
if (e.source) {
let reply
if (e.isGroup) {
reply = (await e.group.getChatHistory(e.source.seq, 1)).pop()?.message
} else {
reply = (await e.friend.getChatHistory(e.source.time, 1)).pop()?.message
}
if (reply) {
let i = []
for (let val of reply) {
if (val.type === 'image') {
i.push(val.url)
}
}
e.img = i
}
}
return e.img
}
export async function getImageOcrText (e) {
const img = await getImg(e)
if (img) {
try {
let resultArr = []
let eachImgRes = ''
for (let i in img) {
const imgOCR = await Bot.imageOcr(img[i])
for (let text of imgOCR.wordslist) {
eachImgRes += (`${text?.words} \n`)
}
if (eachImgRes) resultArr.push(eachImgRes)
eachImgRes = ''
}
// logger.warn('resultArr', resultArr)
return resultArr
} catch (err) {
return false
// logger.error(err)
}
} else {
return false
}
}

@ -4,11 +4,14 @@ import { exec } from 'child_process'
import {
checkPnpm,
formatDuration,
parseDuration,
getAzureRoleList,
getPublicIP,
renderUrl,
getUserReplySetting,
getVitsRoleList,
getVoicevoxRoleList,
makeForwardMsg,
getDefaultReplySetting
parseDuration, processList,
renderUrl
} from '../utils/common.js'
import SydneyAIClient from '../utils/SydneyAIClient.js'
import { convertSpeaker, speakers as vitsRoleList } from '../utils/tts.js'
@ -16,9 +19,11 @@ import md5 from 'md5'
import path from 'path'
import fs from 'fs'
import loader from '../../../lib/plugins/loader.js'
import { supportConfigurations as voxRoleList } from '../utils/tts/voicevox.js'
import VoiceVoxTTS, { supportConfigurations as voxRoleList } from '../utils/tts/voicevox.js'
import { supportConfigurations as azureRoleList } from '../utils/tts/microsoft-azure.js'
let isWhiteList = true
let isSetGroup = true
export class ChatgptManagement extends plugin {
constructor (e) {
super({
@ -135,7 +140,7 @@ export class ChatgptManagement extends plugin {
permission: 'master'
},
{
reg: '^#chatgpt(本群)?(群\\d+)?(打开|开启|启动|激活|张嘴|开口|说话|上班)',
reg: '^#chatgpt(本群)?(群\\d+)?(开启|启动|激活|张嘴|开口|说话|上班)',
fnc: 'openMouth',
permission: 'master'
},
@ -180,7 +185,7 @@ export class ChatgptManagement extends plugin {
permission: 'master'
},
{
reg: '^#chatgpt(打开|关闭|设置)?全局((图片模式|语音模式|(语音角色|角色语音|角色).*)|回复帮助)$',
reg: '^#chatgpt(打开|关闭|设置)?全局((文本模式|图片模式|语音模式|((azure|vits|vox)?语音角色|角色语音|角色).*)|回复帮助)$',
fnc: 'setDefaultReplySetting',
permission: 'master'
},
@ -197,18 +202,18 @@ export class ChatgptManagement extends plugin {
permission: 'master'
},
{
reg: '^#chatgpt(设置|添加)群聊[白黑]名单$',
reg: '^#chatgpt(设置|添加)对话[白黑]名单$',
fnc: 'setList',
permission: 'master'
},
{
reg: '^#chatgpt查看群聊[白黑]名单$',
fnc: 'checkGroupList',
reg: '^#chatgpt(查看)?对话[白黑]名单(帮助)?$',
fnc: 'checkList',
permission: 'master'
},
{
reg: '^#chatgpt(删除|移除)群聊[白黑]名单$',
fnc: 'delGroupList',
reg: '^#chatgpt(删除|移除)对话[白黑]名单$',
fnc: 'delList',
permission: 'master'
},
{
@ -239,49 +244,89 @@ export class ChatgptManagement extends plugin {
permission: 'master'
},
{
reg: '^#chatgpt角色列表$',
reg: '^#(chatgpt)?(vits|azure|vox)?语音(角色列表|服务)$',
fnc: 'getTTSRoleList'
},
{
reg: '^#chatgpt设置后台(刷新|refresh)(t|T)oken$',
fnc: 'setOpenAIPlatformToken'
},
{
reg: '^#(chatgpt)?查看回复设置$',
fnc: 'viewUserSetting'
}
]
})
}
async viewUserSetting (e) {
const userSetting = await getUserReplySetting(this.e)
const replyMsg = `${this.e.sender.user_id}的回复设置:
图片模式: ${userSetting.usePicture === true ? '开启' : '关闭'}
语音模式: ${userSetting.useTTS === true ? '开启' : '关闭'}
Vits语音角色: ${userSetting.ttsRole}
Azure语音角色: ${userSetting.ttsRoleAzure}
VoiceVox语音角色: ${userSetting.ttsRoleVoiceVox}
${userSetting.useTTS === true ? '当前语音模式为' + Config.ttsMode : ''}`
await this.reply(replyMsg.replace(/\n\s*$/, ''), e.isGroup)
return true
}
async getTTSRoleList (e) {
let userReplySetting = await redis.get(`CHATGPT:USER:${e.sender.user_id}`)
userReplySetting = !userReplySetting
? getDefaultReplySetting()
: JSON.parse(userReplySetting)
if (!userReplySetting.useTTS) return
const matchCommand = e.msg.match(/^#(chatgpt)?(vits|azure|vox)?语音(服务|角色列表)/)
if (matchCommand[3] === '服务') {
await this.reply(`当前支持vox、vits、azure语音服务可使用'#(vox|azure|vits)语音角色列表'查看支持的语音角色。
vits语音主要有赛马娘原神中文原神日语崩坏 3 的音色结果有随机性语调可能很奇怪
vox语音Voicevox 是一款由日本 DeNA 开发的语音合成软件它可以将文本转换为自然流畅的语音Voicevox 支持多种语言和声音可以用于制作各种语音内容如动画游戏广告等Voicevox 还提供了丰富的调整选项可以调整声音的音调速度音量等参数以满足不同需求除了桌面版软件外Voicevox 还提供了 Web 版本和 API 接口方便开发者在各种平台上使用
azure语音Azure 语音是微软 Azure 平台提供的一项语音服务它可以帮助开发者将语音转换为文本将文本转换为语音实现自然语言理解和对话等功能Azure 语音支持多种语言和声音可以用于构建各种语音应用程序如智能客服语音助手自动化电话系统等Azure 语音还提供了丰富的 API SDK方便开发者在各种平台上集成使用
`)
return true
}
let userReplySetting = await getUserReplySetting(this.e)
if (!userReplySetting.useTTS && matchCommand[2] === undefined) {
await this.reply('当前不是语音模式,如果想查看不同语音模式下支持的角色列表,可使用"#(vox|azure|vits)语音角色列表"查看')
return false
}
let ttsMode = Config.ttsMode
let roleList = []
if (ttsMode === 'vits-uma-genshin-honkai') {
const [firstHalf, secondHalf] = [vitsRoleList.slice(0, Math.floor(vitsRoleList.length / 2)).join('、'), vitsRoleList.slice(Math.floor(vitsRoleList.length / 2)).join('、')]
const [chunk1, chunk2] = [firstHalf.match(/[^、]+(?:、[^、]+){0,30}/g), secondHalf.match(/[^、]+(?:、[^、]+){0,30}/g)]
const list = [await makeForwardMsg(e, chunk1, `${Config.ttsMode}角色列表1`), await makeForwardMsg(e, chunk2, `${Config.ttsMode}角色列表2`)]
roleList = await makeForwardMsg(e, list, `${Config.ttsMode}角色列表`)
await this.reply(roleList)
return
} else if (ttsMode === 'voicevox') {
roleList = voxRoleList.map(item => item.name).join('、')
} else if (ttsMode === 'azure') {
roleList = azureRoleList.map(item => item.name).join('、')
if (matchCommand[2] === 'vits') {
roleList = getVitsRoleList(this.e)
} else if (matchCommand[2] === 'vox') {
roleList = getVoicevoxRoleList()
} else if (matchCommand[2] === 'azure') {
roleList = getAzureRoleList()
} else if (matchCommand[2] === undefined) {
switch (ttsMode) {
case 'vits-uma-genshin-honkai':
roleList = getVitsRoleList(this.e)
break
case 'voicevox':
roleList = getVoicevoxRoleList()
break
case 'azure':
if (matchCommand[2] === 'azure') {
roleList = getAzureRoleList()
}
break
default:
break
}
} else {
await this.reply('设置错误,请使用"#chatgpt语音服务"查看支持的语音配置')
return false
}
if (roleList.length > 300) {
let chunks = roleList.match(/[^、]+(?:、[^、]+){0,30}/g)
roleList = await makeForwardMsg(e, chunks, `${Config.ttsMode}角色列表`)
roleList = await makeForwardMsg(e, chunks, `${Config.ttsMode}语音角色列表`)
}
await this.reply(roleList)
}
async ttsSwitch (e) {
let userReplySetting = await redis.get(`CHATGPT:USER:${e.sender.user_id}`)
userReplySetting = !userReplySetting
? getDefaultReplySetting()
: JSON.parse(userReplySetting)
let userReplySetting = await getUserReplySetting(this.e)
if (!userReplySetting.useTTS) {
let replyMsg
if (userReplySetting.usePicture) {
@ -308,7 +353,6 @@ export class ChatgptManagement extends plugin {
}
async commandHelp (e) {
if (!this.e.isMaster) { return this.reply('你没有权限') }
if (/^#(chatgpt)?指令表帮助$/.exec(e.msg.trim())) {
await this.reply('#chatgpt指令表: 查看本插件的所有指令\n' +
'#chatgpt(对话|管理|娱乐|绘图|人物设定|聊天记录)指令表: 查看对应功能分类的指令表\n' +
@ -342,8 +386,8 @@ export class ChatgptManagement extends plugin {
commandSet.push({ name, dsc: plugin.dsc, rule })
}
}
if (e.msg.includes('搜索')) {
let cmd = e.msg.trim().match(/^#(chatgpt)?(对话|管理|娱乐|绘图|人物设定|聊天记录)?指令表(帮助|搜索(.+))?/)[4]
if (/^#(chatgpt)?指令表搜索(.+)/.test(e.msg.trim())) {
let cmd = e.msg.trim().match(/#(chatgpt)?指令表搜索(.+)/)[2]
if (!cmd) {
await this.reply('(⊙ˍ⊙)')
return 0
@ -389,134 +433,124 @@ export class ChatgptManagement extends plugin {
return true
}
/**
* 对原始黑白名单进行去重和去除无效群号处理
* @param whitelist
* @param blacklist
* @returns {Promise<any[][]>}
*/
async processList (whitelist, blacklist) {
let groupWhitelist = Array.isArray(whitelist)
? whitelist
: String(whitelist).split(/[,]/)
let groupBlacklist = !Array.isArray(blacklist)
? blacklist
: String(blacklist).split(/[,]/)
groupWhitelist = Array.from(new Set(groupWhitelist)).filter(value => /^[1-9]\d{8,9}$/.test(value))
groupBlacklist = Array.from(new Set(groupBlacklist)).filter(value => /^[1-9]\d{8,9}$/.test(value))
return [groupWhitelist, groupBlacklist]
}
async setList (e) {
this.setContext('saveList')
isWhiteList = e.msg.includes('白')
const listType = isWhiteList ? '白名单' : '黑名单'
await this.reply(`请发送需要设置的群聊${listType},群号间使用,隔开`, e.isGroup)
const listType = isWhiteList ? '对话白名单' : '对话黑名单'
await this.reply(`请发送需要添加的${listType}号码默认设置为添加群号需要添加QQ号时在前面添加^(例如:^123456)。`, e.isGroup)
return false
}
async saveList (e) {
if (!this.e.msg) return
const listType = isWhiteList ? '白名单' : '黑名单'
const inputMatch = this.e.msg.match(/\d+/g)
let [groupWhitelist, groupBlacklist] = await this.processList(Config.groupWhitelist, Config.groupBlacklist)
let inputList = Array.isArray(inputMatch) ? this.e.msg.match(/\d+/g).filter(value => /^[1-9]\d{8,9}$/.test(value)) : []
const listType = isWhiteList ? '对话白名单' : '对话黑名单'
const regex = /^\^?[1-9]\d{5,9}$/
const wrongInput = []
const inputSet = new Set()
const inputList = this.e.msg.split(/[,]/).reduce((acc, value) => {
if (value.length > 11 || !regex.test(value)) {
wrongInput.push(value)
} else if (!inputSet.has(value)) {
inputSet.add(value)
acc.push(value)
}
return acc
}, [])
if (!inputList.length) {
await this.reply('无效输入,请在检查群号是否正确后重新输入', e.isGroup)
let replyMsg = '名单更新失败,请在检查输入是否正确后重新输入。'
if (wrongInput.length) replyMsg += `\n${wrongInput.length ? '检测到以下错误输入:"' + wrongInput.join('') + '",已自动忽略。' : ''}`
await this.reply(replyMsg, e.isGroup)
return false
}
inputList = Array.from(new Set(inputList))
let whitelist = []
let blacklist = []
for (const element of inputList) {
if (listType === '白名单') {
groupWhitelist = groupWhitelist.filter(item => item !== element)
whitelist.push(element)
} else {
groupBlacklist = groupBlacklist.filter(item => item !== element)
blacklist.push(element)
}
}
if (!(whitelist.length || blacklist.length)) {
await this.reply('无效输入,请在检查群号是否正确或重复添加后重新输入', e.isGroup)
return false
let [whitelist, blacklist] = processList(Config.whitelist, Config.blacklist)
whitelist = [...inputList, ...whitelist]
blacklist = [...inputList, ...blacklist]
if (listType === '对话白名单') {
Config.whitelist = Array.from(new Set(whitelist))
} else {
if (listType === '白名单') {
Config.groupWhitelist = groupWhitelist
.filter(group => group !== '')
.concat(whitelist)
} else {
Config.groupBlacklist = groupBlacklist
.filter(group => group !== '')
.concat(blacklist)
}
Config.blacklist = Array.from(new Set(blacklist))
}
let replyMsg = `${listType}已更新,可通过\n"#chatgpt查看${listType}" 查看最新名单\n"#chatgpt移除${listType}" 管理名单${wrongInput.length ? '\n检测到以下错误输入:"' + wrongInput.join('、') + '",已自动忽略。' : ''}`
if (e.isPrivate) {
replyMsg += `\n当前${listType}为:${listType === '对话白名单' ? Config.whitelist : Config.blacklist}`
}
await this.reply(replyMsg, e.isGroup)
this.finish('saveList')
}
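Both saveList and confirmDelList lean on the same validation pattern: an optional `^` prefix marks a QQ number, followed by a 6-to-10-digit number that cannot start with 0. A standalone sketch of what the regex accepts (the sample inputs are hypothetical):

```javascript
// Same pattern as the list parsers: optional ^ prefix, then 6-10 digits, no leading zero.
const regex = /^\^?[1-9]\d{5,9}$/

const samples = ['123456', '^123456', '12345', '0123456', '^abc', '9876543210']
const results = samples.map(s => regex.test(s))
console.log(results) // [ true, true, false, false, false, true ]
```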
async checkList (e) {
if (e.msg.includes('帮助')) {
await this.reply('默认输入为群号;如需配置QQ号,请在号码前添加^(例如:^123456)。可一次性混合输入多个号码,错误项会自动忽略。具体使用指令可通过 "#指令表搜索名单" 查看,白名单优先级高于黑名单。')
return true
}
isWhiteList = e.msg.includes('白')
const list = isWhiteList ? Config.whitelist : Config.blacklist
const listType = isWhiteList ? '对话白名单' : '对话黑名单'
const replyMsg = list.length ? `当前${listType}为:${list}` : `当前没有设置任何${listType}`
await this.reply(replyMsg, e.isGroup)
return false
}
async delList (e) {
isWhiteList = e.msg.includes('白')
const listType = isWhiteList ? '对话白名单' : '对话黑名单'
let replyMsg = ''
if (Config.whitelist.length === 0 && Config.blacklist.length === 0) {
replyMsg = '当前对话(白|黑)名单都是空哒,请先添加吧~'
} else if ((listType === '对话白名单' && !Config.whitelist.length) || (listType === '对话黑名单' && !Config.blacklist.length)) {
replyMsg = `当前${listType}为空,请先添加吧~`
}
if (replyMsg) {
await this.reply(replyMsg, e.isGroup)
return false
}
this.setContext('confirmDelList')
await this.reply(`请发送需要从${listType}中删除的号码,号码间使用,隔开。输入‘全部删除’可清空${listType}${e.isPrivate ? '\n当前' + listType + '为:' + (listType === '对话白名单' ? Config.whitelist : Config.blacklist) : ''}`, e.isGroup)
return false
}
async confirmDelList (e) {
if (!this.e.msg) return
const isAllDeleted = this.e.msg.trim() === '全部删除'
const regex = /^\^?[1-9]\d{5,9}$/
const wrongInput = []
const inputSet = new Set()
const inputList = this.e.msg.split(/[,]/).reduce((acc, value) => {
if (value.length > 11 || !regex.test(value)) {
wrongInput.push(value)
} else if (!inputSet.has(value)) {
inputSet.add(value)
acc.push(value)
}
return acc
}, [])
if (!inputList.length && !isAllDeleted) {
let replyMsg = '名单更新失败,请在检查输入是否正确后重新输入。'
if (wrongInput.length) replyMsg += `\n检测到以下错误输入:"${wrongInput.join('、')}",已自动忽略。`
await this.reply(replyMsg, e.isGroup)
return false
}
let [whitelist, blacklist] = processList(Config.whitelist, Config.blacklist)
if (isAllDeleted) {
Config.whitelist = isWhiteList ? [] : whitelist
Config.blacklist = !isWhiteList ? [] : blacklist
} else {
for (const element of inputList) {
if (isWhiteList) {
Config.whitelist = whitelist.filter(item => item !== element)
} else {
Config.blacklist = blacklist.filter(item => item !== element)
}
}
}
const listType = isWhiteList ? '对话白名单' : '对话黑名单'
let replyMsg = `${listType}已更新,可通过 "#chatgpt查看${listType}" 命令查看最新名单${wrongInput.length ? '\n检测到以下错误输入:"' + wrongInput.join('、') + '",已自动忽略。' : ''}`
if (e.isPrivate) {
const list = isWhiteList ? Config.whitelist : Config.blacklist
replyMsg += list.length ? `\n当前${listType}为:${list}` : `\n当前没有设置任何${listType}`
}
await this.reply(replyMsg, e.isGroup)
this.finish('confirmDelList')
}
async enablePrivateChat (e) {
@ -542,10 +576,14 @@ export class ChatgptManagement extends plugin {
}
async setDefaultReplySetting (e) {
const reg = /^#chatgpt(打开|关闭|设置)?全局((文本模式|图片模式|语音模式|((azure|vits|vox)?语音角色|角色语音|角色)(.*))|回复帮助)/
const matchCommand = e.msg.match(reg)
const settingType = matchCommand[2]
let replyMsg = ''
let ttsSupportKinds = []
if (Config.azureTTSKey) ttsSupportKinds.push(1)
if (Config.ttsSpace) ttsSupportKinds.push(2)
if (Config.voicevoxSpace) ttsSupportKinds.push(3)
switch (settingType) {
case '图片模式':
if (matchCommand[1] === '打开') {
@ -580,8 +618,8 @@ export class ChatgptManagement extends plugin {
replyMsg = '请使用“#chatgpt打开全局文本模式”或“#chatgpt关闭全局文本模式”命令来设置回复模式'
} break
case '语音模式':
if (!ttsSupportKinds.length) {
replyMsg = '您没有配置任何语音服务,请前往锅巴面板进行配置'
break
}
if (matchCommand[1] === '打开') {
@ -599,25 +637,68 @@ export class ChatgptManagement extends plugin {
replyMsg = '请使用“#chatgpt打开全局语音模式”或“#chatgpt关闭全局语音模式”命令来设置回复模式'
} break
case '回复帮助':
replyMsg = '可使用以下命令配置全局回复:\n#chatgpt(打开/关闭)全局(语音/图片/文本)模式\n#chatgpt设置全局(vox|azure|vits)语音角色+角色名称(留空则为随机)\n'
break
default:
if (!ttsSupportKinds.length) {
replyMsg = '您没有配置任何语音服务,请前往锅巴面板进行配置'
break
}
if (settingType.match(/(语音角色|角色语音|角色)/)) {
const voiceKind = matchCommand[5]
let speaker = matchCommand[6] || ''
if (voiceKind === undefined) {
await this.reply('请选择需要设置的语音类型。使用"#chatgpt语音服务"查看支持的语音类型')
return false
}
if (!speaker.length || speaker === '随机') {
replyMsg = `设置成功,ChatGPT将在${voiceKind}语音模式下随机挑选角色进行回复`
if (voiceKind === 'vits') Config.defaultTTSRole = '随机'
if (voiceKind === 'azure') Config.azureTTSSpeaker = '随机'
if (voiceKind === 'vox') Config.voicevoxTTSSpeaker = '随机'
} else {
if (ttsSupportKinds.includes(1) && voiceKind === 'azure') {
if (getAzureRoleList().includes(speaker)) {
Config.azureTTSSpeaker = azureRoleList.find(s => s.name === speaker).code
replyMsg = `ChatGPT默认语音角色已被设置为“${speaker}”`
} else {
await this.reply(`抱歉,没有"${speaker}"这个角色,目前azure模式下支持的角色有:${azureRoleList.map(item => item.name).join('、')}`)
return false
}
} else if (ttsSupportKinds.includes(2) && voiceKind === 'vits') {
const ttsRole = convertSpeaker(speaker)
if (vitsRoleList.includes(ttsRole)) {
Config.defaultTTSRole = ttsRole
replyMsg = `ChatGPT默认语音角色已被设置为“${ttsRole}”`
} else {
replyMsg = `抱歉,我还不认识“${ttsRole}”这个语音角色,可使用'#vits角色列表'查看可配置的角色`
}
} else if (ttsSupportKinds.includes(3) && voiceKind === 'vox') {
let regex = /^(.*?)-(.*)$/
let match = regex.exec(speaker)
let style = null
if (match) {
speaker = match[1]
style = match[2]
}
if (getVoicevoxRoleList().includes(speaker)) {
let chosen = VoiceVoxTTS.supportConfigurations.filter(s => s.name === speaker)
if (chosen.length === 0) {
await this.reply(`抱歉,没有"${speaker}"这个角色,目前voicevox模式下支持的角色有:${VoiceVoxTTS.supportConfigurations.map(item => item.name).join('、')}`)
break
}
if (style && !chosen[0].styles.find(item => item.name === style)) {
await this.reply(`抱歉,"${speaker}"这个角色没有"${style}"这个风格,目前支持的风格有${chosen[0].styles.map(item => item.name).join('、')}`)
break
}
Config.ttsRoleVoiceVox = chosen[0].name + (style ? `-${style}` : '')
replyMsg = `ChatGPT默认语音角色已被设置为“${speaker}”`
} else {
await this.reply(`抱歉,没有"${speaker}"这个角色,目前voicevox模式下支持的角色有:${voxRoleList.map(item => item.name).join('、')}`)
return false
}
} else {
replyMsg = `${voiceKind}语音角色设置错误,请检查语音配置~`
}
}
} else {
@ -993,27 +1074,27 @@ export class ChatgptManagement extends plugin {
poe: 'Poe'
}
let modeText = modeMap[mode || 'api']
let message = `API模式和浏览器模式如何选择
// eslint-disable-next-line no-irregular-whitespace
API模式会调用 OpenAI 官方提供的 gpt-3.5-turbo API只需要提供 API Key一般情况下该种方式响应速度更快不会像 chatGPT 官网一样总出现不可用的现象但要注意 gpt-3.5-turbo API 调用是收费的新用户有 $5 的试用金可用于支付价格为 $0.0020/1K tokens问题和回答加起来算 token
API3 模式会调用官网反代 API它会帮你绕过 CF 防护需要提供 ChatGPT Token效果与官网和浏览器一致设置 Token 指令#chatgpt设置token
必应Bing将调用微软新必应接口进行对话需要在必应网页能够正常使用新必应且设置有效的Bing 登录Cookie方可使用#chatgpt设置必应token
自建ChatGLM模式会调用自建的ChatGLM-6B服务器API进行对话需要自建参考https://github.com/ikechan8370/SimpleChatGLM6BAPI
Claude模式会调用Slack中的Claude机器人进行对话与其他模式不同的是全局共享一个对话配置参考https://ikechan8370.com/archives/chatgpt-plugin-for-yunzaipei-zhi-slack-claude
Poe模式会调用Poe中的Claude-instant进行对话需要提供cookie#chatgpt设置PoeToken
浏览器模式通过在本地启动 Chrome 等浏览器模拟用户访问 ChatGPT 网站使得获得和官方以及 API2 模式一模一样的回复质量同时保证安全性缺点是本方法对环境要求较高需要提供桌面环境和一个可用的代理能够访问 ChatGPT IP 地址且响应速度不如 API而且高峰期容易无法使用
必应(Bing)将调用微软新必应接口进行对话,需要在必应网页能够正常使用新必应,且设置有效的 Bing 登录 Cookie 方可使用:#chatgpt设置必应token。
自建 ChatGLM 模式会调用自建的 ChatGLM-6B 服务器 API 进行对话需要自建参考 https://github.com/ikechan8370/SimpleChatGLM6BAPI。
Claude 模式会调用 Slack 中的 Claude 机器人进行对话与其他模式不同的是全局共享一个对话配置参考 https://ikechan8370.com/archives/chatgpt-plugin-for-yunzaipei-zhi-slack-claude。
Poe 模式会调用 Poe 中的 Claude-instant 进行对话,需要提供 Cookie:#chatgpt设置PoeToken。
星火模式会调用科大讯飞推出的新一代认知智能大模型“星火认知大模型”进行对话,需要提供Cookie:#chatgpt设置星火token。
您可以使用 "#chatgpt切换浏览器/API/API3/Bing/ChatGLM/Claude/Poe/星火" 来切换到指定模式
当前为 ${modeText} 模式`
await this.reply(message)
}


@ -70,8 +70,8 @@
"sydneyApologyIgnored": true,
"enforceMaster": false,
"enablePrivateChat": false,
"whitelist": [],
"blacklist": [],
"ttsRegex": "/匹配规则/匹配模式",
"baiduTranslateAppId": "",
"baiduTranslateSecret": "",


@ -1,7 +1,7 @@
import { Config } from './utils/config.js'
import { speakers } from './utils/tts.js'
import { supportConfigurations as azureRoleList } from './utils/tts/microsoft-azure.js'
import { supportConfigurations as voxRoleList } from './utils/tts/voicevox.js'
// 支持锅巴
export function supportGuoba () {
return {
@ -40,15 +40,15 @@ export function supportGuoba () {
component: 'InputTextArea'
},
{
field: 'whitelist',
label: '对话白名单',
bottomHelpMessage: '只有在白名单内的QQ号或群组才能使用本插件进行对话。如果需要添加QQ号,请在号码前面加上^符号(例如:^123456),多个号码之间请用英文逗号(,)隔开。白名单优先级高于黑名单。',
component: 'Input'
},
{
field: 'blacklist',
label: '对话黑名单',
bottomHelpMessage: '名单内的群或QQ号将无法使用本插件进行对话。如果需要添加QQ号,请在QQ号前面加上^符号(例如:^123456),并用英文逗号(,)将各个号码分隔开。',
component: 'Input'
},
{
@ -102,7 +102,10 @@ export function supportGuoba () {
bottomHelpMessage: 'vits-uma-genshin-honkai语音模式下未指定角色时使用的角色。若留空将使用随机角色回复。若用户通过指令指定了角色将忽略本设定',
component: 'Select',
componentProps: {
options: [{
label: '随机',
value: '随机'
}].concat(speakers.map(s => { return { label: s, value: s } }))
}
},
{
@ -111,12 +114,16 @@ export function supportGuoba () {
bottomHelpMessage: '微软Azure语音模式下未指定角色时使用的角色。若用户通过指令指定了角色将忽略本设定',
component: 'Select',
componentProps: {
options: [{
label: '随机',
value: '随机'
},
...azureRoleList.flatMap(item => [
item.roleInfo
]).map(s => ({
label: s,
value: s
}))]
}
},
{
@ -125,11 +132,17 @@ export function supportGuoba () {
bottomHelpMessage: 'VoiceVox语音模式下未指定角色时使用的角色。若留空将使用随机角色回复。若用户通过指令指定了角色将忽略本设定',
component: 'Select',
componentProps: {
options: [{
label: '随机',
value: '随机'
},
...voxRoleList.flatMap(item => [
...item.styles.map(style => `${item.name}-${style.name}`),
item.name
]).map(s => ({
label: s,
value: s
}))]
}
},
{
@ -786,7 +799,19 @@ export function supportGuoba () {
for (let [keyPath, value] of Object.entries(data)) {
// 处理黑名单
if (keyPath === 'blockWords' || keyPath === 'promptBlockWords' || keyPath === 'initiativeChatGroups') { value = value.toString().split(/[,;\|]/) }
if (Config[keyPath] !== value) { Config[keyPath] = value }
}
// 正确储存azureRoleSelect结果
const azureSpeaker = azureRoleList.find(config => (config.roleInfo || config.code) === data.azureTTSSpeaker)
if (azureSpeaker) {
Config.azureTTSSpeaker = azureSpeaker.code
}
return Result.ok({}, '保存成功~')
}
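The Guoba select now displays Azure roles by their human-readable `roleInfo`, while the config must keep the Azure voice code; the lookup above restores the code on save. A standalone sketch of that mapping, using a hypothetical two-entry subset of the role list:

```javascript
// Hypothetical subset of azureRoleList: the panel displays roleInfo,
// but Config.azureTTSSpeaker has to store the Azure voice code.
const azureRoleList = [
  { code: 'zh-CN-XiaoxiaoNeural', name: '晓晓', roleInfo: '晓晓-女-中文(普通话,简体)' },
  { code: 'zh-CN-YunxiNeural', name: '云希', roleInfo: '云希-男-中文 (普通话,简体)' }
]

function resolveAzureSpeaker (selected) {
  const hit = azureRoleList.find(c => (c.roleInfo || c.code) === selected)
  // '随机' and values that are already codes pass through unchanged
  return hit ? hit.code : selected
}

console.log(resolveAzureSpeaker('晓晓-女-中文(普通话,简体)')) // zh-CN-XiaoxiaoNeural
console.log(resolveAzureSpeaker('随机')) // 随机
```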


@ -8,6 +8,9 @@ import buffer from 'buffer'
import yaml from 'yaml'
import puppeteer from '../../../lib/puppeteer/puppeteer.js'
import { Config } from './config.js'
import { speakers as vitsRoleList } from './tts.js'
import { supportConfigurations as voxRoleList } from './tts/voicevox.js'
import { supportConfigurations as azureRoleList } from './tts/microsoft-azure.js'
// export function markdownToText (markdown) {
// return remark()
// .use(stripMarkdown)
@ -19,7 +22,7 @@ let _puppeteer
try {
const Puppeteer = (await import('../../../renderers/puppeteer/lib/puppeteer.js')).default
let puppeteerCfg = {}
let configFile = './renderers/puppeteer/config.yaml'
if (fs.existsSync(configFile)) {
try {
puppeteerCfg = yaml.parse(fs.readFileSync(configFile, 'utf8'))
@ -335,7 +338,7 @@ export async function renderUrl (e, url, renderCfg = {}) {
'Content-Type': 'application/json'
},
body: JSON.stringify({
url,
option: {
width: renderCfg.Viewport.width || 1280,
height: renderCfg.Viewport.height || 720,
@ -350,7 +353,7 @@ export async function renderUrl (e, url, renderCfg = {}) {
})
if (resultres.ok) {
const buff = Buffer.from(await resultres.arrayBuffer())
if (buff) {
const base64 = segment.image(buff)
if (renderCfg.retType === 'base64') {
return base64
@ -363,7 +366,7 @@ export async function renderUrl (e, url, renderCfg = {}) {
}
}
}
await _puppeteer.browserInit()
const page = await _puppeteer.browser.newPage()
let base64
@ -401,7 +404,8 @@ export function getDefaultReplySetting () {
usePicture: Config.defaultUsePicture,
useTTS: Config.defaultUseTTS,
ttsRole: Config.defaultTTSRole,
ttsRoleAzure: Config.azureTTSSpeaker,
ttsRoleVoiceVox: Config.voicevoxTTSSpeaker
}
}
@ -679,16 +683,106 @@ export async function getUserData (user) {
return JSON.parse(data)
} catch (error) {
return {
user,
passwd: '',
chat: [],
mode: '',
cast: {
api: '', // API设定
bing: '', // 必应设定
bing_resource: '', // 必应扩展资料
slack: '' // Slack设定
}
}
}
}
}
export function getVoicevoxRoleList () {
return voxRoleList.map(item => item.name).join('、')
}
export function getAzureRoleList () {
return azureRoleList.map(item => item.name).join('、')
}
export async function getVitsRoleList (e) {
const [firstHalf, secondHalf] = [vitsRoleList.slice(0, Math.floor(vitsRoleList.length / 2)).join('、'), vitsRoleList.slice(Math.floor(vitsRoleList.length / 2)).join('、')]
const [chunk1, chunk2] = [firstHalf.match(/[^、]+(?:、[^、]+){0,30}/g), secondHalf.match(/[^、]+(?:、[^、]+){0,30}/g)]
const list = [await makeForwardMsg(e, chunk1, 'vits角色列表1'), await makeForwardMsg(e, chunk2, 'vits角色列表2')]
return await makeForwardMsg(e, list, 'vits角色列表')
}
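The chunking regex `/[^、]+(?:、[^、]+){0,30}/g` used above splits the long 、-joined role string into groups of at most 31 names, so no single forwarded message becomes oversized. A standalone sketch with generated sample names:

```javascript
// Split a '、'-joined list into chunks of at most 31 names each.
const roles = Array.from({ length: 70 }, (_, i) => `角色${i + 1}`)
const chunks = roles.join('、').match(/[^、]+(?:、[^、]+){0,30}/g)

console.log(chunks.length) // 3
console.log(chunks.map(c => c.split('、').length)) // [ 31, 31, 8 ]
```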
export async function getUserReplySetting (e) {
let userSetting = await redis.get(`CHATGPT:USER:${e.sender.user_id}`)
if (userSetting) {
userSetting = JSON.parse(userSetting)
if (Object.keys(userSetting).indexOf('useTTS') < 0) {
userSetting.useTTS = Config.defaultUseTTS
}
} else {
userSetting = getDefaultReplySetting()
}
return userSetting
}
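getUserReplySetting back-fills fields that older per-user records predate (here `useTTS`) from the global defaults. A standalone sketch of the same merge, with a hypothetical stored record and hypothetical defaults:

```javascript
// Hypothetical defaults in the shape of getDefaultReplySetting()
const defaults = { usePicture: false, useTTS: true, ttsRole: '随机' }

// Parse a stored setting and back-fill useTTS when the record predates that field
function withDefaults (stored) {
  const setting = stored ? JSON.parse(stored) : { ...defaults }
  if (!('useTTS' in setting)) setting.useTTS = defaults.useTTS
  return setting
}

const merged = withDefaults('{"usePicture":true,"ttsRole":"晓晓"}')
console.log(merged) // { usePicture: true, ttsRole: '晓晓', useTTS: true }
```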
export async function getImg (e) {
// 取消息中的图片、at的头像、回复的图片放入e.img
if (e.at && !e.source) {
e.img = [`https://q1.qlogo.cn/g?b=qq&s=0&nk=${e.at}`]
}
if (e.source) {
let reply
if (e.isGroup) {
reply = (await e.group.getChatHistory(e.source.seq, 1)).pop()?.message
} else {
reply = (await e.friend.getChatHistory(e.source.time, 1)).pop()?.message
}
if (reply) {
let i = []
for (let val of reply) {
if (val.type === 'image') {
i.push(val.url)
}
}
e.img = i
}
}
return e.img
}
export async function getImageOcrText (e) {
const img = await getImg(e)
if (img) {
try {
let resultArr = []
let eachImgRes = ''
for (let i in img) {
const imgOCR = await Bot.imageOcr(img[i])
for (let text of imgOCR.wordslist) {
eachImgRes += (`${text?.words} \n`)
}
if (eachImgRes) resultArr.push(eachImgRes)
eachImgRes = ''
}
// logger.warn('resultArr', resultArr)
return resultArr
} catch (err) {
return false
// logger.error(err)
}
} else {
return false
}
}
// 对原始黑白名单进行去重和去除无效群号处理,并处理通过锅巴面板添加错误配置时可能导致的问题
export function processList (whitelist, blacklist) {
whitelist = Array.isArray(whitelist)
? whitelist
: String(whitelist).split(/[,]/)
blacklist = Array.isArray(blacklist)
? blacklist
: String(blacklist).split(/[,]/)
whitelist = Array.from(new Set(whitelist)).filter(value => /^\^?[1-9]\d{5,9}$/.test(value))
blacklist = Array.from(new Set(blacklist)).filter(value => /^\^?[1-9]\d{5,9}$/.test(value))
return [whitelist, blacklist]
}
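Since the Guoba panel may hand back either a comma-joined string or an array, processList normalizes both forms, deduplicates, and drops anything that is not an optionally ^-prefixed number. A standalone sketch of the same normalization (sample values are hypothetical; this version splits on both full-width and ASCII commas):

```javascript
// Normalize a list that may arrive as an array or a comma-joined string,
// then deduplicate and keep only valid (optionally ^-prefixed) numbers.
function normalizeList (list) {
  const arr = Array.isArray(list) ? list : String(list).split(/[,,]/)
  return Array.from(new Set(arr)).filter(v => /^\^?[1-9]\d{5,9}$/.test(v))
}

console.log(normalizeList('123456,^654321,123456,abc')) // [ '123456', '^654321' ]
console.log(normalizeList(['111111', '', '111111'])) // [ '111111' ]
```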


@ -99,8 +99,8 @@ const defaultConfig = {
live2dOption_rotation: 0,
groupAdminPage: false,
enablePrivateChat: false,
whitelist: [],
blacklist: [],
ttsRegex: '/匹配规则/匹配模式',
slackUserToken: '',
slackBotUserToken: '',


@ -1,6 +1,7 @@
import crypto from 'crypto'
import { getDefaultReplySetting, mkdirs } from '../common.js'
import { Config } from '../config.js'
import { translate } from '../translate.js'
let sdk
try {
@ -20,20 +21,29 @@ async function generateAudio (text, option = {}, ssml = '') {
let filename = `${_path}/data/chatgpt/tts/azure/${crypto.randomUUID()}.wav`
let audioConfig = sdk.AudioConfig.fromAudioFileOutput(filename)
let synthesizer
let speaker = option?.speaker || '随机'
let context = text
// 打招呼用
if (speaker === '随机') {
speaker = supportConfigurations[Math.floor(Math.random() * supportConfigurations.length)].code
let languagePrefix = supportConfigurations.find(config => config.code === speaker).languageDetail.charAt(0)
languagePrefix = languagePrefix.startsWith('E') ? '英' : languagePrefix
context = (await translate(context, languagePrefix)).replace('\n', '')
}
if (ssml) {
synthesizer = new sdk.SpeechSynthesizer(speechConfig, audioConfig)
await speakSsmlAsync(synthesizer, ssml)
} else {
speechConfig.speechSynthesisLanguage = option?.language || supportConfigurations.find(config => config.code === speaker).language
speechConfig.speechSynthesisVoiceName = speaker
logger.info('using speaker: ' + speaker)
logger.info('using language: ' + speechConfig.speechSynthesisLanguage)
synthesizer = new sdk.SpeechSynthesizer(speechConfig, audioConfig)
await speakTextAsync(synthesizer, context)
}
console.log('synthesis finished.')
synthesizer.close()
synthesizer = undefined
return filename
}
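When the greeting picks a random Azure voice, the text is first translated into that voice's language; the translate() target is derived from the first character of `languageDetail`, with English detail strings folded into '英'. A standalone sketch of that derivation (the entries are hypothetical samples):

```javascript
// Derive the translation target language from a voice's languageDetail:
// '中文(...)' -> '中', '日语(...)' -> '日', 'English (...)' -> '英'
function translateTarget (config) {
  const prefix = config.languageDetail.charAt(0)
  return prefix.startsWith('E') ? '英' : prefix
}

console.log(translateTarget({ languageDetail: '中文(普通话,简体)' })) // 中
console.log(translateTarget({ languageDetail: 'English (United States)' })) // 英
console.log(translateTarget({ languageDetail: '日语(日本)' })) // 日
```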
@ -73,11 +83,27 @@ async function speakSsmlAsync (synthesizer, ssml) {
})
}
async function generateSsml (text, option = {}) {
let speaker = option?.speaker || '随机'
let emotionDegree, role, emotion
// 打招呼用
if (speaker === '随机') {
role = supportConfigurations[Math.floor(Math.random() * supportConfigurations.length)]
speaker = role.code
if (role?.emotion) {
const keys = Object.keys(role.emotion)
emotion = keys[Math.floor(Math.random() * keys.length)]
}
logger.info('using speaker: ' + speaker)
logger.info('using emotion: ' + emotion)
emotionDegree = 2
} else {
emotion = option.emotion
emotionDegree = option.emotionDegree
}
const expressAs = emotion !== undefined ? `<mstts:express-as style="${emotion}" styledegree="${emotionDegree || 1}">` : ''
return `<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis"
xmlns:mstts="https://www.w3.org/2001/mstts" xml:lang="zh-CN">
<voice name="${voiceName}">
<voice name="${speaker}">
${expressAs}${text}${expressAs ? '</mstts:express-as>' : ''}
</voice>
</speak>`
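generateSsml only wraps the text in `mstts:express-as` when an emotion was chosen (by the user, or randomly for a greeting). A standalone sketch of the resulting markup shape:

```javascript
// Build the same SSML shape: express-as appears only when an emotion is set.
function buildSsml (speaker, text, emotion, emotionDegree = 1) {
  const expressAs = emotion ? `<mstts:express-as style="${emotion}" styledegree="${emotionDegree}">` : ''
  return `<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis"
 xmlns:mstts="https://www.w3.org/2001/mstts" xml:lang="zh-CN">
 <voice name="${speaker}">
 ${expressAs}${text}${expressAs ? '</mstts:express-as>' : ''}
 </voice>
</speak>`
}

const withEmotion = buildSsml('zh-CN-XiaoxiaoNeural', '你好', 'cheerful', 2)
console.log(withEmotion.includes('styledegree="2"')) // true
console.log(buildSsml('zh-CN-XiaoxiaoNeural', '你好').includes('express-as')) // false
```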
@ -91,7 +117,7 @@ async function getEmotionPrompt (e) {
let emotionPrompt = ''
let ttsRoleAzure = userReplySetting.ttsRoleAzure
const configuration = Config.ttsMode === 'azure' ? supportConfigurations.find(config => config.code === ttsRoleAzure) : ''
if (configuration !== '' && configuration?.emotion) {
// 0-1 感觉没啥区别说实话只有1和2听得出差别。。
emotionPrompt = `\n在回复的最开始使用[]在其中表示你这次回复的情绪风格和程度(1-2)最小单位0.1
\n例如['angry',2]表示你极度愤怒
@ -110,28 +136,32 @@ export const supportConfigurations = [
name: '晓北',
language: 'zh-CN',
languageDetail: '中文(东北官话,简体)',
gender: '女',
roleInfo: '晓北-女-中文(东北官话,简体)'
},
{
code: 'zh-CN-henan-YundengNeural',
name: '云登',
language: 'zh-CN',
languageDetail: '中文(中原官话河南,简体)',
gender: '男',
roleInfo: '云登-男-中文(中原官话河南,简体)'
},
{
code: 'zh-CN-shaanxi-XiaoniNeural',
name: '晓妮',
language: 'zh-CN',
languageDetail: '中文(中原官话陕西,简体)',
gender: '女',
roleInfo: '晓妮-女-中文(中原官话陕西,简体)'
},
{
code: 'zh-CN-henan-YundengNeural',
name: '云翔',
language: 'zh-CN',
languageDetail: '中文(冀鲁官话,简体)',
gender: '男',
roleInfo: '云翔-男-中文(冀鲁官话,简体)'
},
{
code: 'zh-CN-XiaoxiaoNeural',
@ -157,7 +187,8 @@ export const supportConfigurations = [
'poetry-reading': '读诗时带情感和节奏的语气',
sad: '表达悲伤语气',
serious: '严肃、命令的语气'
},
roleInfo: '晓晓-女-中文(普通话,简体)'
},
{
code: 'zh-CN-YunxiNeural',
@ -178,7 +209,8 @@ export const supportConfigurations = [
newscast: '用于新闻播报,表现出庄重、严谨的语气',
sad: '表达悲伤、失落的语气',
serious: '表现出认真、严肃的语气'
},
roleInfo: '云希-男-中文 (普通话,简体)'
},
{
code: 'zh-CN-YunyangNeural',
@ -190,7 +222,8 @@ export const supportConfigurations = [
customerservice: '以亲切友好的语气为客户提供支持',
'narration-professional': '以专业、稳重的语气讲述',
'newscast-casual': '以轻松自然的语气播报新闻'
},
roleInfo: '云扬-男-中文 (普通话,简体)'
},
{
code: 'zh-CN-YunyeNeural',
@ -207,7 +240,8 @@ export const supportConfigurations = [
fearful: '表达害怕和不安的语气',
sad: '表达悲伤和失落的语气',
serious: '以认真和严肃的态度说话'
},
roleInfo: '云野-男-中文(普通话,简体)'
},
{
code: 'zh-CN-XiaoshuangNeural',
@ -215,37 +249,40 @@ export const supportConfigurations = [
language: 'zh-CN',
languageDetail: '中文(普通话,简体)',
gender: '女',
emotion: { chat: '表达轻松随意的语气' },
roleInfo: '晓双-女-中文(普通话,简体)'
},
{
code: 'zh-CN-XiaoyouNeural',
name: '晓悠',
language: 'zh-CN',
languageDetail: '中文(普通话,简体)',
gender: '女',
roleInfo: '晓悠-女-中文(普通话,简体)'
},
{
code: 'zh-CN-XiaoqiuNeural',
name: '晓秋',
language: 'zh-CN',
languageDetail: '中文(普通话,简体)',
gender: '女',
roleInfo: '晓秋-女-中文(普通话,简体)'
},
{
code: 'zh-CN-XiaochenNeural',
name: '晓辰',
language: 'zh-CN',
languageDetail: '中文(普通话,简体)',
gender: '女',
roleInfo: '晓辰-女-中文(普通话,简体)'
},
{
code: 'zh-CN-XiaoyanNeural',
name: '晓颜',
language: 'zh-CN',
languageDetail: '中文(普通话,简体)',
gender: '女',
roleInfo: '晓颜-女-中文(普通话,简体)'
},
{
code: 'zh-CN-XiaomoNeural',
@ -266,7 +303,8 @@ export const supportConfigurations = [
gentle: '温和、礼貌、愉快的语气,音调和音量较低',
sad: '表达悲伤语气',
serious: '严肃、命令的语气'
},
roleInfo: '晓墨-女-中文(普通话,简体)'
},
{
code: 'zh-CN-XiaoxuanNeural',
@ -283,7 +321,8 @@ export const supportConfigurations = [
fearful: '恐惧、紧张的语气,说话人处于紧张和不安的状态',
gentle: '温和、礼貌、愉快的语气,音调和音量较低',
serious: '严肃、命令的语气'
},
roleInfo: '晓萱-女-中文(普通话,简体)'
},
{
code: 'zh-CN-XiaohanNeural',
@ -302,7 +341,8 @@ export const supportConfigurations = [
gentle: '温和、礼貌、愉快的语气,音调和音量较低',
sad: '表达悲伤语气',
serious: '严肃、命令的语气'
},
roleInfo: '晓涵-女-中文(普通话,简体)'
},
{
code: 'zh-CN-XiaoruiNeural',
@ -315,7 +355,8 @@ export const supportConfigurations = [
calm: '沉着冷静的态度说话。语气、音调和韵律统一',
fearful: '恐惧、紧张的语气,说话人处于紧张和不安的状态',
sad: '表达悲伤语气'
},
roleInfo: '晓睿-女-中文(普通话,简体)'
},
{
code: 'zh-CN-XiaomengNeural',
@ -323,9 +364,8 @@ export const supportConfigurations = [
language: 'zh-CN',
languageDetail: '中文(普通话,简体)',
gender: '女',
emotion: { chat: '表达轻松随意的语气' },
roleInfo: '晓梦-女-中文(普通话,简体)'
},
{
code: 'zh-CN-XiaoyiNeural',
@ -340,7 +380,8 @@ export const supportConfigurations = [
gentle: '温和、礼貌、愉快的语气,音调和音量较低',
sad: '表达悲伤语气',
serious: '严肃、命令的语气'
},
roleInfo: '晓伊-女-中文(普通话,简体)'
},
{
code: 'zh-CN-XiaozhenNeural',
@ -355,7 +396,8 @@ export const supportConfigurations = [
fearful: '恐惧、紧张的语气,说话人处于紧张和不安的状态',
sad: '表达悲伤语气',
serious: '严肃、命令的语气'
},
roleInfo: '晓甄-女-中文(普通话,简体)'
},
{
code: 'zh-CN-YunfengNeural',
@ -371,14 +413,16 @@ export const supportConfigurations = [
fearful: '恐惧、紧张的语气,说话人处于紧张和不安的状态',
sad: '表达悲伤语气',
serious: '严肃、命令的语气'
},
roleInfo: '云枫-男-中文(普通话,简体)'
},
{
code: 'zh-CN-YunhaoNeural',
name: '云皓',
language: 'zh-CN',
languageDetail: '中文(普通话,简体)',
gender: '男',
roleInfo: '云皓-男-中文(普通话,简体)'
},
{
code: 'zh-CN-YunjianNeural',
@ -390,7 +434,8 @@ export const supportConfigurations = [
'narration-relaxed': '以轻松、自然的语气进行叙述',
'sports-commentary': '在解说体育比赛时,使用专业而自信的语气',
'sports-commentary-excited': '在解说激动人心的体育比赛时,使用兴奋和激动的语气'
},
roleInfo: '云健-男-中文(普通话,简体)'
},
{
code: 'zh-CN-YunxiaNeural',
@ -404,7 +449,8 @@ export const supportConfigurations = [
cheerful: '表达积极愉快的语气',
fearful: '表达害怕、紧张的语气',
sad: '表达悲伤和失落的语气'
},
roleInfo: '云夏-男-中文 (普通话,简体)'
},
{
code: 'zh-CN-YunzeNeural',
@ -422,105 +468,120 @@ export const supportConfigurations = [
fearful: '表达害怕、不安的情绪',
sad: '用悲伤的语气表达悲伤和失落',
serious: '以严肃的语气和态度表现出对事情的重视和认真对待'
},
roleInfo: '云泽-男-中文 (普通话,简体)'
},
{
code: 'zh-HK-HiuGaaiNeural',
name: '曉佳',
language: 'zh-CN',
languageDetail: '中文(粤语,繁体)',
gender: '女',
roleInfo: '曉佳-女-中文(粤语,繁体)'
},
{
code: 'zh-HK-HiuMaanNeural',
name: '曉曼',
language: 'zh-CN',
languageDetail: '中文(粤语,繁体)',
gender: '女',
roleInfo: '曉曼-女-中文(粤语,繁体)'
},
{
code: 'zh-HK-WanLungNeural',
name: '雲龍',
language: 'zh-CN',
languageDetail: '中文(粤语,繁体)',
gender: '男',
roleInfo: '雲龍-男-中文(粤语,繁体)'
},
{
code: 'en-GB-AbbiNeural',
name: 'Abbi',
language: 'en-GB',
languageDetail: '英语(英国)',
gender: 'female',
roleInfo: 'Abbi-女-英语(英国)'
},
{
code: 'en-GB-AlfieNeural',
name: 'Alfie',
language: 'en-GB',
languageDetail: '英语(英国)',
gender: 'male',
roleInfo: 'Alfie-男-英语(英国)'
},
{
code: 'en-GB-BellaNeural',
name: 'Bella',
language: 'en-GB',
languageDetail: '英语(英国)',
gender: 'female',
roleInfo: 'Bella-女-英语(英国)'
},
{
code: 'en-GB-ElliotNeural',
name: 'Elliot',
language: 'en-GB',
languageDetail: '英语(英国)',
gender: 'male',
roleInfo: 'Elliot-男-英语(英国)'
},
{
code: 'en-GB-EthanNeural',
name: 'Ethan',
language: 'en-GB',
languageDetail: '英语(英国)',
gender: 'male',
roleInfo: 'Ethan-男-英语(英国)'
},
{
code: 'en-GB-HollieNeural',
name: 'Hollie',
language: 'en-GB',
languageDetail: '英语(英国)',
gender: 'female',
roleInfo: 'Hollie-女-英语(英国)'
},
{
code: 'en-GB-LibbyNeural',
name: 'Libby',
language: 'en-GB',
languageDetail: '英语(英国)',
gender: 'female',
roleInfo: 'Libby-女-英语(英国)'
},
{
code: 'en-GB-MaisieNeural',
name: 'Maisie',
language: 'en-GB',
languageDetail: '英语(英国)',
gender: 'female',
roleInfo: 'Maisie-女-英语(英国)'
},
{
code: 'en-GB-NoahNeural',
name: 'Noah',
language: 'en-GB',
languageDetail: '英语(英国)',
gender: 'male',
roleInfo: 'Noah-男-英语(英国)'
},
{
code: 'en-GB-OliverNeural',
name: 'Oliver',
language: 'en-GB',
languageDetail: '英语(英国)',
gender: 'male',
roleInfo: 'Oliver-男-英语(英国)'
},
{
code: 'en-GB-OliviaNeural',
name: 'Olivia',
language: 'en-GB',
languageDetail: '英语(英国)',
gender: 'female',
roleInfo: 'Olivia-女-英语(英国)'
},
{
code: 'en-GB-RyanNeural',
@ -528,11 +589,8 @@ export const supportConfigurations = [
language: 'en-GB',
languageDetail: '英语(英国)',
gender: 'male',
emotion: { chat: '表达轻松随意的语气', cheerful: '表达积极愉快的语气' },
roleInfo: 'Ryan-男-英语(英国)'
},
{
code: 'en-GB-SoniaNeural',
@ -540,46 +598,48 @@ export const supportConfigurations = [
language: 'en-GB',
languageDetail: '英语(英国)',
gender: 'female',
emotion: { cheerful: '表达积极愉快的语气', sad: '表达悲伤语气' },
roleInfo: 'Sonia-女-英语(英国)'
},
{
code: 'en-GB-ThomasNeural',
name: 'Thomas',
language: 'en-GB',
languageDetail: '英语(英国)',
gender: 'male',
roleInfo: 'Thomas-男-英语(英国)'
},
{
code: 'ja-JP-AoiNeural',
name: '葵',
language: 'ja-JP',
languageDetail: '日语(日本)',
gender: '女',
roleInfo: '葵-女-日语(日本)'
},
{
code: 'ja-JP-DaichiNeural',
name: '大地',
language: 'ja-JP',
languageDetail: '日语(日本)',
gender: '男',
roleInfo: '大地-男-日语(日本)'
},
{
code: 'ja-JP-KeitaNeural',
name: '慶太',
language: 'ja-JP',
languageDetail: '日语(日本)',
gender: '男',
roleInfo: '慶太-男-日语(日本)'
},
{
code: 'ja-JP-MayuNeural',
name: '真由',
language: 'ja-JP',
languageDetail: '日语(日本)',
gender: '女',
roleInfo: '真由-女-日语(日本)'
},
{
code: 'ja-JP-NanamiNeural',
@ -591,49 +651,56 @@ export const supportConfigurations = [
chat: '表达轻松随意的语气',
cheerful: '表达积极愉快的语气',
customerservice: '以友好热情的语气为客户提供支持'
},
roleInfo: '七海-女-日语(日本)'
},
{
code: 'ja-JP-NaokiNeural',
name: '直樹',
language: 'ja-JP',
languageDetail: '日语(日本)',
gender: '男',
roleInfo: '直樹-男-日语(日本)'
},
{
code: 'ja-JP-ShioriNeural',
name: '栞',
language: 'ja-JP',
languageDetail: '日语(日本)',
gender: '女',
roleInfo: '栞-女-日语(日本)'
},
{
code: 'en-US-AIGenerate1Neural1',
name: 'AI Generate 1',
language: 'en-US',
languageDetail: 'English (United States)',
gender: '男',
roleInfo: 'AI Generate 1-男-英语(美国)'
},
{
code: 'en-US-AIGenerate2Neural1',
name: 'AI Generate 2',
language: 'en-US',
languageDetail: 'English (United States)',
gender: '女',
roleInfo: 'AI Generate 2-女-英语(美国)'
},
{
code: 'en-US-AmberNeural',
name: 'Amber',
language: 'en-US',
languageDetail: 'English (United States)',
gender: '女',
roleInfo: 'Amber-女-英语(美国)'
},
{
code: 'en-US-AnaNeural',
name: 'Ana',
language: 'en-US',
languageDetail: 'English (United States)',
gender: '女性、儿童',
roleInfo: 'Ana-女性、儿童-英语(美国)'
},
{
code: 'en-US-AriaNeural',
@ -658,35 +725,40 @@ export const supportConfigurations = [
'narration-professional': '以专业、客观的语气朗读内容',
'newscast-casual': '以通用、随意的语气发布一般新闻',
'newscast-formal': '以正式、自信和权威的语气发布新闻'
},
roleInfo: 'Aria-女-英语(美国)'
},
{
code: 'en-US-AshleyNeural',
name: 'Ashley',
language: 'en-US',
languageDetail: 'English (United States)',
gender: '女',
roleInfo: 'Ashley-女-英语(美国)'
},
{
code: 'en-US-BrandonNeural',
name: 'Brandon',
language: 'en-US',
languageDetail: 'English (United States)',
gender: '男',
roleInfo: 'Brandon-男-英语(美国)'
},
{
code: 'en-US-ChristopherNeural',
name: 'Christopher',
language: 'en-US',
languageDetail: '英语(美国)',
gender: '男',
roleInfo: 'Christopher-男-英语(美国)'
},
{
code: 'en-US-CoraNeural',
name: 'Cora',
language: 'en-US',
languageDetail: '英语(美国)',
gender: '女',
roleInfo: 'Cora-女-英语(美国)'
},
{
code: 'en-US-DavisNeural',
@@ -705,21 +777,24 @@ export const supportConfigurations = [
terrified: '非常害怕的语气,语速快且声音颤抖。不稳定的疯狂状态',
unfriendly: '表达一种冷淡无情的语气',
whispering: '说话非常柔和,发出的声音小且温柔'
},
roleInfo: 'Davis-男-英语(美国)'
},
{
code: 'en-US-ElizabethNeural',
name: 'Elizabeth',
language: 'en-US',
languageDetail: '英语(美国)',
gender: '女',
roleInfo: 'Elizabeth-女-英语(美国)'
},
{
code: 'en-US-EricNeural',
name: 'Eric',
language: 'en-US',
languageDetail: '英语(美国)',
gender: '男',
roleInfo: 'Eric-男-英语(美国)'
},
{
code: 'en-US-GuyNeural',
@@ -739,15 +814,16 @@ export const supportConfigurations = [
unfriendly: '表达一种冷淡无情的语气',
whispering: '说话非常柔和,发出的声音小且温柔',
newscast: '以正式专业的语气叙述新闻'
},
roleInfo: 'Guy-男-英语(美国)'
},
{
code: 'en-US-JacobNeural',
name: 'Jacob',
language: 'en-US',
languageDetail: '英语(美国)',
gender: '男',
roleInfo: 'Jacob-男-英语(美国)'
},
{
code: 'en-US-JaneNeural',
@@ -766,7 +842,8 @@ export const supportConfigurations = [
terrified: '非常害怕的语气,语速快且声音颤抖。不稳定的疯狂状态',
unfriendly: '表达一种冷淡无情的语气',
whispering: '说话非常柔和,发出的声音小且温柔'
},
roleInfo: 'Jane-女-英语(美国)'
},
{
code: 'en-US-JasonNeural',
@@ -785,14 +862,8 @@ export const supportConfigurations = [
terrified: '非常害怕的语气,语速快且声音颤抖。不稳定的疯狂状态',
unfriendly: '表达一种冷淡无情的语气',
whispering: '说话非常柔和,发出的声音小且温柔'
},
roleInfo: 'Jason-男-英语(美国)'
},
{
code: 'en-US-JennyNeural',
@@ -815,22 +886,24 @@ export const supportConfigurations = [
chat: '表达轻松随意的语气',
customerservice: '以友好热情的语气为客户提供支持',
newscast: '以正式专业的语气叙述新闻'
},
roleInfo: 'Jenny-女-英语(美国)'
},
{
code: 'en-US-MichelleNeural',
name: 'Michelle',
language: 'en-US',
languageDetail: '英语(美国)',
gender: '女',
roleInfo: 'Michelle-女-英语(美国)'
},
{
code: 'en-US-MonicaNeural',
name: 'Monica',
language: 'en-US',
languageDetail: '英语(美国)',
gender: '女',
roleInfo: 'Monica-女-英语(美国)'
},
{
code: 'en-US-NancyNeural',
@@ -849,14 +922,16 @@ export const supportConfigurations = [
terrified: '非常害怕的语气,语速快且声音颤抖。不稳定的疯狂状态',
unfriendly: '表达一种冷淡无情的语气',
whispering: '说话非常柔和,发出的声音小且温柔'
},
roleInfo: 'Nancy-女-英语(美国)'
},
{
code: 'en-US-RogerNeural',
name: 'Roger',
language: 'en-US',
languageDetail: '英语(美国)',
gender: '男',
roleInfo: 'Roger-男-英语(美国)'
},
{
code: 'en-US-SaraNeural',
@@ -875,15 +950,16 @@ export const supportConfigurations = [
terrified: '非常害怕的语气,语速快且声音颤抖。不稳定的疯狂状态',
unfriendly: '表达一种冷淡无情的语气',
whispering: '说话非常柔和,发出的声音小且温柔'
},
roleInfo: 'Sara-女-英语(美国)'
},
{
code: 'en-US-SteffanNeural',
name: 'Steffan',
language: 'en-US',
languageDetail: '英语(美国)',
gender: '男',
roleInfo: 'Steffan-男-英语(美国)'
},
{
code: 'en-US-TonyNeural',
@@ -902,21 +978,24 @@ export const supportConfigurations = [
terrified: '非常害怕的语气,语速快且声音颤抖。不稳定的疯狂状态',
unfriendly: '表达一种冷淡无情的语气',
whispering: '说话非常柔和,发出的声音小且温柔'
},
roleInfo: 'Tony-男-英语(美国)'
},
{
code: 'en-IN-NeerjaNeural',
name: 'Neerja',
language: 'en-IN',
languageDetail: '英语(印度)',
gender: '女',
roleInfo: 'Neerja-女-英语(印度)'
},
{
code: 'en-IN-PrabhatNeural',
name: 'Prabhat',
language: 'en-IN',
languageDetail: '英语(印度)',
gender: '男',
roleInfo: 'Prabhat-男-英语(印度)'
}
]
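A minimal sketch (not part of the original file) of how entries like the ones above can be consumed for the random-role feature mentioned in the PR description. `voices` is a tiny hand-copied excerpt of `supportConfigurations`; `buildRoleInfo` and `pickRandomRole` are hypothetical helper names, shown only to illustrate how the `roleInfo` string is derived and how a random role could be drawn from the list.

```javascript
// Small excerpt of supportConfigurations so this sketch is self-contained.
const voices = [
  { code: 'ja-JP-KeitaNeural', name: '慶太', language: 'ja-JP', languageDetail: '日语(日本)', gender: '男' },
  { code: 'en-US-AmberNeural', name: 'Amber', language: 'en-US', languageDetail: '英语(美国)', gender: '女' }
]

// Derives the display string stored in each entry's roleInfo field:
// "<name>-<gender>-<languageDetail>".
function buildRoleInfo (v) {
  return `${v.name}-${v.gender}-${v.languageDetail}`
}

// Picks a random entry, optionally restricted to one language code;
// returns null when nothing matches.
function pickRandomRole (list, language) {
  const pool = language ? list.filter(v => v.language === language) : list
  return pool.length ? pool[Math.floor(Math.random() * pool.length)] : null
}

console.log(buildRoleInfo(voices[0])) // → 慶太-男-日语(日本)
console.log(pickRandomRole(voices, 'en-US').code) // → en-US-AmberNeural (only en-US entry in this excerpt)
```

Precomputing `roleInfo` in the data, as the diff does, keeps the role-list rendering path free of string formatting at display time.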