

Today we are saying goodbye to the Discourse AI - Toxicity module :wave: in favor of Discourse AI Post Classifier - Automation rules, which leverage the power of Large Language Models (LLMs) to provide a superior experience.

:information_source: This will be a beta experience, so expect changes to the feature.

Why are we doing this?

Previously using the Toxicity module meant…

  • You were stuck using a single pre-defined model
  • No customization for your community-specific needs
  • Confusing threshold metrics
  • Subpar performance

LLMs have come a long way and can now provide a better-performing, customizable experience.

What's new?

The Discourse AI Post Classifier - Automation rule can be used to triage posts for toxicity (among other things) and to enforce a community's specific code of conduct. This means…

  • Multiple LLMs supported for different performance requirements
  • Easy to define which content should be flagged and how it should be handled
  • Customizable prompts for community-specific needs (see the example below)
  • Flag content for review

and much more.
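
To give a sense of what a customizable prompt can look like, here is a purely illustrative example of the kind of instruction you might give the classifier (the wording below is hypothetical, not a shipped default):

    You are a moderator for a community with a strict code of conduct. Classify the post below as either "toxic" or "ok". Treat personal attacks, slurs, and harassment as toxic. Reply with a single word.

The automation rule then acts on the classification, for example by flagging posts marked toxic for human review.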

To assist with the transition, we have already written two guides.

What happens to Toxicity?

This announcement should be considered an early notice; until we are ready to decommission the module, you can continue to use Toxicity. When that time comes, we will remove the module and all of its code from the Discourse AI plugin, along with the associated services from our servers.
