RSS Feed Summarizer

Build your own RSS feed pipeline with Modzy!

:rocket: What's it do?

Modzy includes a variety of natural language processing (NLP) models, such as sentiment analysis, summarization, named entity recognition, and machine translation, to get you started. In this simple example, we apply three models – Named Entity Recognition, Text Topic Modeling, and Text Summarization – to an RSS feed to create a summary of the articles and a characterization of the feed.
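The pipeline boils down to: parse the feed, then run each article through the three models. Here is a minimal stdlib-only sketch of that flow; `summarize` and `top_topics` are hypothetical placeholders standing in for the Modzy job submissions (they are not the Modzy SDK API), and the feed is parsed with `xml.etree`.

```python
import xml.etree.ElementTree as ET

# Hypothetical stand-ins for the Modzy model calls; in the real pipeline
# each would submit a job to the corresponding hosted model.
def summarize(text):
    return text[:200]  # placeholder: truncation instead of abstractive summary

def top_topics(text):
    return sorted(set(text.lower().split()))[:10]  # placeholder topic words

def summarize_feed(rss_xml):
    """Parse an RSS feed and build a per-article digest."""
    root = ET.fromstring(rss_xml)
    digest = []
    for item in root.iter("item"):
        digest.append({
            "title": item.findtext("title", default=""),
            "date": item.findtext("pubDate", default=""),
            "summary": summarize(item.findtext("description", default="")),
            "topics": top_topics(item.findtext("description", default="")),
        })
    return digest
```

Each entry in the digest mirrors the "Final Dish" layout below: title, date, summary, and topic list.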

:fork_and_knife: Ingredients

:books: Instructions

:cake: Final Dish

Data Drift Detection
Thu, 08 Jul 2021
If a model asked to make a prediction based upon drifted data, the model is unlikely to achieve its reported performance. During training, a model attempts to learn the most pertinent features to the train dataset. Most important features of the training Dataset are not universal to all data. Even in situations where objects of interest remain the same, data drift can occur.
['data', 'software', 'systems', 'information', 'user', 'web', 'file', 'version', 'application', 'code']

Got Trust Issues? Explain, protect, and trust AI with Modzy
Tue, 06 Jul 2021
explainable AI is the key to trustworthy AI, which ensures AI decisions are transparent, accountable. Ai Framework is inserted into AI pipeline, enabling engineers, data scientists. Image of a cat was fed to modzy's image classification model. MODZY AI framework produces explanations five times faster than open source methods. The two images below demonstrate the solution in action.
['data', 'software', 'systems', 'information', 'user', 'web', 'file', 'version', 'application', 'code']

Crossing the AI Valley of Death: Deploying and Monitoring Models in Production at Scale
Mon, 28 Jun 2021
You built another Ai model that will never see the light of day because it won't make it "Valley of Death" The between data science and engineering teams is fraught with friction, outstanding questions around governance and accountability. The Patchwork approach leaves many organizations open to risks. You'll discover why it's never too early to plan for operationalization of Models.
['data', 'software', 'systems', 'information', 'user', 'web', 'file', 'version', 'application', 'code']

Models Used

Also known as entity extraction, this model detects and classifies named entities in an English text into four categories: persons, locations, organizations, and miscellaneous. This bidirectional, pre-trained model is based on Google's BERT (Bidirectional Encoder Representations from Transformers) architecture and uses the TensorFlow deep learning framework. It was trained on the CoNLL-2003 training dataset of news wire articles from the Reuters Corpus and has a precision of 98.15%, recall of 90.61%, and F1 score of 89.72%.
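Models like this one emit token-level BIO tags (e.g. `B-PER`, `I-PER`, `O`), which a post-processing step groups into entity spans. This is an illustrative sketch of that common step, not Modzy's actual output format:

```python
def group_entities(tokens, tags):
    """Collapse BIO tags into (entity text, label) spans."""
    entities, current, label = [], [], None
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):  # a new entity begins
            if current:
                entities.append((" ".join(current), label))
            current, label = [token], tag[2:]
        elif tag.startswith("I-") and current and tag[2:] == label:
            current.append(token)  # continuation of the open entity
        else:  # "O" tag or a malformed continuation closes the span
            if current:
                entities.append((" ".join(current), label))
            current, label = [], None
    if current:
        entities.append((" ".join(current), label))
    return entities
```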

This model, based on the unsupervised Latent Dirichlet Allocation (LDA) algorithm and trained on the entire English Wikipedia corpus, takes unstructured text as input and returns the top ten topics.
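LDA itself needs a corpus-level trained model, so it can't be reproduced in a few lines; as a rough illustration of the same input/output shape (free text in, a top-ten term list out), here is a simple frequency-based keyword extractor. It is a crude stand-in, not the LDA algorithm:

```python
from collections import Counter

# Small illustrative stopword list; a real pipeline would use a fuller one.
STOPWORDS = {"the", "a", "an", "of", "to", "and", "is", "in", "on", "for", "that"}

def top_terms(text, k=10):
    """Return the k most frequent non-stopword terms from the text."""
    words = [w.strip(".,!?;:()").lower() for w in text.split()]
    counts = Counter(w for w in words if w and w not in STOPWORDS)
    return [w for w, _ in counts.most_common(k)]
```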

This model, derived from Fast Abstractive Summarization-RL, summarizes a text document. It was trained on over 300,000 news articles from CNN and the Daily Mail, along with their human-drafted summaries, and achieves a ROUGE score of 0.33. Our implementation, which uses a GPU and parallel decoding, yields a 10-20x performance increase over the previous best neural abstractive summarization system.
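ROUGE measures n-gram overlap between a generated summary and a human reference. A minimal sketch of ROUGE-1 recall (the full metric family also covers ROUGE-2 and ROUGE-L) looks like this:

```python
from collections import Counter

def rouge1_recall(candidate, reference):
    """ROUGE-1 recall: fraction of reference unigrams recovered by the
    candidate summary, with counts clipped to the reference frequency."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum(min(cand[w], n) for w, n in ref.items())
    return overlap / sum(ref.values()) if ref else 0.0
```

For example, a candidate covering half the reference's unigrams scores 0.5.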

