Protection of the First Amendment right to freedom of expression is one of the leading controversies of the digital millennium. It is crucial that the operator of a public forum (for example, Facebook, Inc. and its namesake platform) remain transparent about its responsibility for content on the site. By allowing unregulated (i.e., user-submitted) content to be displayed, the operator exposes itself to effectively unbounded risk for comparatively little reward.
In the United States, the general understanding is that so long as a site operator maintains a policy against hosting illegal content, and provides a documented means of requesting its removal (such as a reporting system), the site remains immune from legal consequences. (Bear in mind, though, that this stance is changing quickly as the pro-censorship crowd spins up the political machine to "break up big tech"; in Europe, states such as Germany have already enacted more authoritarian policies requiring site operators to remove illegal content on their own initiative, within as little as two hours.)
Where this understanding blurs even further is in the context of layered platforms – in Discord's case, independent bots that serve as a "platform within a platform." As a private platform, Discord reserves the right to terminate any user, bots included, for any reason at any time. From the perspective of Discord's legal person*, a bot posting illegal content, intentionally or not, is a highly compelling reason to terminate its contract with the platform. The recent ban of NotSoBot appears to be such a case – a user-created tag contained some form of content that violated the Terms of Service, and the bot was terminated.
Keep in mind that Discord's Trust and Safety division has not been cooperative in helping to address the illegal content. A single e-mail was received, on a non-business day, containing little more than a message ID. Remember: to delete a message via the API, you also need the channel ID. Without maintaining a local log of all outgoing messages, a bot operator has no way to comply with a deletion request like this one.
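A minimal sketch of such a local log, keeping with the point above: record the channel ID of every message the bot sends, keyed by message ID, so that a takedown request citing only a message ID can be resolved to a deletable (channel, message) pair. The class and method names here are hypothetical, not part of any Discord library:

```python
import sqlite3


class OutgoingMessageLog:
    """Maps the ID of every message the bot sends to the channel it
    was sent in, so a deletion request that cites only a message ID
    can still be acted on."""

    def __init__(self, path: str = ":memory:") -> None:
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS outgoing "
            "(message_id INTEGER PRIMARY KEY, channel_id INTEGER NOT NULL)"
        )

    def record(self, message_id: int, channel_id: int) -> None:
        # Call this immediately after every successful send.
        self.db.execute(
            "INSERT OR REPLACE INTO outgoing VALUES (?, ?)",
            (message_id, channel_id),
        )
        self.db.commit()

    def channel_for(self, message_id: int):
        # Returns the channel ID for a logged message, or None if the
        # message was never recorded.
        row = self.db.execute(
            "SELECT channel_id FROM outgoing WHERE message_id = ?",
            (message_id,),
        ).fetchone()
        return row[0] if row else None
```

With a log like this in place, an e-mail containing only a message ID becomes actionable: look up the channel, then issue the delete call through the API.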
Connecting back to the earlier point, it becomes clear that accepting user input is an increasingly dangerous proposition for a bot operator. Historically, if a tag was known to be malicious, a moderator of the bot could remove it before it became problematic. At the same time, Discord had never been overly eager to strike down bots without warning. The risk of accepting tags was relatively low: filter the content, and remove any bad tags if they came to light.
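The "filter the content" step above was typically nothing more than a blocklist check at tag-creation time. A sketch of that practice, with purely hypothetical placeholder terms and names:

```python
# Hypothetical blocklist applied when a user submits a new tag.
# Real bots would use a far larger list, or an external service.
BLOCKED_TERMS = {"badword", "malware.example"}


def is_tag_allowed(tag_content: str) -> bool:
    """Reject a tag if any blocked term appears in its content
    (case-insensitive substring match)."""
    lowered = tag_content.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)
```

A substring blocklist obviously cannot catch images, links that turn malicious later, or obfuscated text, which is precisely why this passive approach no longer appears to satisfy Discord.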
The same moderation standard applied to public servers: if a user posted illegal content in your server, you were expected to ban the user and, usually, contact support. Leaving the content intact did not guarantee that the server owner would be banned, though partnered servers faced the threat of departnering. Encouraging such content was, of course, never permitted.
With the recent suspension of NotSoBot, it has become evident that the former, relatively passive "deal with it as you see it" style of tag moderation is no longer acceptable to Discord. Should a message sent by a bot ever be deemed unlawful by a Trust and Safety employee, the bot operator will have no means of rectification. At this time, our recommendation is that bot operators actively moderate tags created through the bot – for instance, have your bot log every created tag to a "moderation queue"-style channel, where your moderators comb through the submissions and determine whether each is acceptable content. Whether this is a feasible solution for most bots and their (likely unpaid) moderation staff remains to be seen.
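The moderation-queue recommendation above can be sketched as a simple hold-until-approved data structure: newly submitted tags are pending by default and cannot be invoked until a moderator approves them. All class and method names here are hypothetical, illustrating the workflow rather than any particular bot's implementation:

```python
from dataclasses import dataclass
from typing import Dict, List, Optional


@dataclass
class Tag:
    name: str
    content: str
    approved: bool = False  # pending until a moderator signs off


class TagModerationQueue:
    """Holds user-created tags in a pending state; only tags a
    moderator has approved are returned to callers."""

    def __init__(self) -> None:
        self.tags: Dict[str, Tag] = {}

    def submit(self, name: str, content: str) -> None:
        # A user creates a tag; it enters the queue unapproved.
        self.tags[name] = Tag(name, content)

    def pending(self) -> List[str]:
        # What the moderators would see in the queue channel.
        return [t.name for t in self.tags.values() if not t.approved]

    def approve(self, name: str) -> None:
        self.tags[name].approved = True

    def reject(self, name: str) -> None:
        del self.tags[name]

    def get(self, name: str) -> Optional[str]:
        # Tag invocation: unapproved tags behave as if they don't exist.
        tag = self.tags.get(name)
        return tag.content if tag and tag.approved else None
```

The cost of this design is exactly the concern raised above: every tag now requires a human decision before it becomes usable, which scales poorly for a large bot with volunteer moderators.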
As always, remember: whatever you want your bot to do is probably "not a supported use case."
*Person is used here because the individual representing Discord has neither published credentials nor a stated gender. Labeling them a "lawyer" could be disingenuous to those who proudly display their hard-earned credentials, and we certainly strive to avoid any misgendering controversy.