Forum² Admin · Posted August 7

Today we are announcing the retirement of the Discourse AI - Toxicity module in favor of Discourse AI Post Classifier - Automation rules, which leverage the power of Large Language Models (LLMs) to provide a superior experience. This is a beta feature, so expect changes as it evolves.

Why are we doing this?

Previously, using the Toxicity module meant:

- You were stuck with a single pre-defined model
- No customization for your community-specific needs
- Confusing threshold metrics
- Subpar performance

LLMs have come a long way and can now provide a better-performing, customizable experience.

What's new?

The Discourse AI Post Classifier - Automation rule can be used to triage posts for toxicity (among other things) and hold communities to their specific code of conduct. This means:

- Multiple LLMs supported for different performance requirements
- Easy to define what content should be caught and how it should be treated
- Customizable prompts for community-specific needs
- Flag content for review
- and much more

A conceptual sketch of this kind of prompt-based triage is included at the end of this post.

To assist with the transition, we have already written two guides:

- Setting up toxicity/code of conduct detection in your community
- Setting up spam detection in your community

What happens to Toxicity?

This announcement should be considered very early notice: you can continue to use the Toxicity module until we are ready to decommission it. When that happens, we will remove the module's code from the Discourse AI plugin and its associated services from our servers.
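For readers curious about the general mechanism, here is a minimal, hypothetical sketch of LLM-based post triage in Python. It is not the Discourse AI plugin's actual code: `call_llm`, `SYSTEM_PROMPT`, and `should_flag` are made-up names, and the prompt wording and trigger keyword are only examples of the kind of customization the automation rule allows.

```python
# Hypothetical sketch of LLM-based post triage, not the Discourse AI
# implementation. All names below are illustrative.

def call_llm(system_prompt: str, post_text: str) -> str:
    """Placeholder for a call to whichever LLM provider you configure.

    Wire this up to your provider of choice (hosted API or local model).
    """
    raise NotImplementedError("connect this to your LLM provider")


# A community-specific prompt: asking the model for a single-word verdict
# makes the response trivial to match against.
SYSTEM_PROMPT = (
    "You are a moderator for this community. If the post below is toxic or "
    "breaks the code of conduct, reply with the single word FLAG. "
    "Otherwise reply with the single word OK."
)


def should_flag(post_text: str) -> bool:
    """Return True when the model's verdict contains the trigger keyword."""
    verdict = call_llm(SYSTEM_PROMPT, post_text)
    return "FLAG" in verdict.upper()


# A post for which should_flag() returns True would then be flagged or sent
# to the review queue, depending on how the automation rule is configured.
```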