Sentiment-sensing Slackbot

Build your own Modzy-powered Slackbot

:rocket: What's it do?

Sad Bot is a Slack bot that responds with a short message when mentioned in a Slack channel or conversation. If your message seems positive, Sad Bot responds positively; if your message seems negative, Sad Bot shares an encouraging quote.

A short conversation with sad-bot, a bot that senses and reacts to the sentiment of your messages.

:fork-and-knife: Ingredients

  • A Slack account
  • Python, Flask, and a few other Python libraries
  • Modzy's Python SDK
  • For local development, you can use ngrok to forward Slack event traffic to your localhost
  • An open-source sentiment analysis model running on Modzy
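To see how these pieces connect: Slack delivers mentions to your bot as Events API callbacks, and the first job of the Flask endpoint is to pull the user's message out of that payload. A minimal, stdlib-only sketch (the payload shape follows Slack's `app_mention` event; verify field names against your app's event subscriptions):

```python
import json
import re

def extract_mention_text(payload: str) -> str:
    """Pull the user's message out of a Slack Events API callback,
    stripping the bot's own <@USERID> mention before analysis."""
    event = json.loads(payload).get("event", {})
    if event.get("type") != "app_mention":
        return ""
    # Slack encodes mentions as <@U12345>; remove them from the text.
    return re.sub(r"<@\w+>", "", event.get("text", "")).strip()

# Example callback body (abridged) as Slack would POST it:
payload = json.dumps({
    "event": {"type": "app_mention",
              "text": "I'm on top of the world today! <@U0SADBOT>"}
})
print(extract_mention_text(payload))  # I'm on top of the world today!
```

The cleaned text is what gets submitted to the sentiment model; everything else in the callback (channel, timestamp) is only needed when posting the reply back.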

:books: Instructions

Follow this tutorial, but instead of using the bot.py and requirements.txt files provided there, use the code found in the Sad Bot GitHub repo.
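At its core, the bot's reply logic is just a mapping from a sentiment score to one of three responses, as shown in the Final Dish below. A sketch of that mapping (the threshold values and reply strings here are illustrative, not necessarily the ones in the repo):

```python
import random

# Illustrative cutoffs on a compound sentiment score in [-1, 1];
# the repo's actual thresholds may differ.
POSITIVE_CUTOFF = 0.05
NEGATIVE_CUTOFF = -0.05

QUOTES = [
    "How wonderful it is that nobody need wait a single moment "
    "before starting to improve the world.",
]

def build_reply(compound_score: float) -> str:
    """Choose Sad Bot's reply from a compound sentiment score."""
    if compound_score >= POSITIVE_CUTOFF:
        return "You seem happy! :smile:"
    if compound_score <= NEGATIVE_CUTOFF:
        return ("You seem sad :slightly_frowning_face:\n"
                ":butterfly: " + random.choice(QUOTES))
    return ("You seem to be a bit bleh :neutral_face:\n"
            "How about some coffee :coffee: ?")

print(build_reply(0.9))  # You seem happy! :smile:
```

Wiring this function between the Modzy job result and Slack's `chat.postMessage` call is essentially all the bot does.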

:cake: Final Dish

My Name 4:29pm

I'm on top of the world today! @Sad Bot

Sad Bot APP 4:29pm

You seem happy! :smile:

My Name 4:30pm

I have a lot of errands to run today @Sad Bot

Sad Bot APP 4:30pm

You seem to be a bit bleh :neutral_face:

How about some coffee :coffee: ?

My Name 4:31pm

I'm very very sad @Sad Bot

Sad Bot APP 4:31pm

You seem sad :slightly_frowning_face:

:butterfly: How wonderful it is that nobody need wait a single moment before starting to improve the world.

:8ball: Models Used

The model is not trained on labeled data; instead, it is built from a generalizable, valence-based, human-curated sentiment lexicon. The lexicon is sensitive to both the polarity and the intensity of sentiment expressed in social media, but it is applicable to other text sources as well. A rule base captures conventional grammatical and syntactical cues for assessing sentiment intensity. The model performs well on social media text, where its correlation to ground truth (the judgment of human reviewers) was 0.88, with precision of 0.99, recall of 0.94, and an F1 score of 0.96. On other types of text these metrics were lower, but still higher than those of other lexicon-based tools.
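The lexicon-plus-rules idea can be illustrated with a toy example. Note that the lexicon entries, booster weight, and scoring formula below are made up for demonstration; the real model's lexicon is far larger, human-curated, and its rules are more sophisticated:

```python
# Toy valence lexicon: word -> sentiment score (positive = happy).
LEXICON = {"happy": 2.7, "sad": -2.1, "wonderful": 2.7}
BOOSTERS = {"very": 0.3}      # intensifiers increase magnitude
NEGATORS = {"not", "never"}   # negation flips polarity

def toy_sentiment(text: str) -> float:
    """Average the valence of known words, applying simple intensity
    and negation rules based on the immediately preceding word."""
    words = text.lower().split()
    scores = []
    for i, word in enumerate(words):
        if word not in LEXICON:
            continue
        score = LEXICON[word]
        if i > 0 and words[i - 1] in BOOSTERS:
            score += BOOSTERS[words[i - 1]] * (1 if score > 0 else -1)
        if i > 0 and words[i - 1] in NEGATORS:
            score = -score
        scores.append(score)
    return sum(scores) / len(scores) if scores else 0.0

print(round(toy_sentiment("very happy"), 2))  # 3.0
print(toy_sentiment("not happy"))             # -2.7
print(toy_sentiment("a lot of errands"))      # 0.0 (no lexicon hits)
```

The real model layers many more such rules (punctuation emphasis, capitalization, contrastive conjunctions) on top of a much richer lexicon, which is what drives the reported accuracy figures.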

