I was at a company that tried "sentiment analysis" 10 or 15 years ago. It was impossible to get right. The results were useless. It's funny how nothing seems to have changed. There's always some executive who's heard of "sentiment" and thinks they can spend a week shoving a bunch of text into a database, and then have it tell you whether the words are perceived as good or bad.
- I typed "Chevrolet" and it says "mpg" and "venerable" have negative sentiment, while "underachieving", "supposedly", "kilometers", and "cars" have positive sentiment. "Stunning" and "awkward" are both neutral.
- I tried "Mazda" and it was only slightly better. "Praise" is neutral, "regret" is positive, and "touring" is negative.
- I tried "Petzl" and only shows 16 words, all neutral, and that includes stop words like "had" and "and" and "etc".
Those are at least companies with unique names. For companies with less distinctive names, you'd be lucky to get keywords related to the right company at all.
I think there can be good uses for word clouds, but they are few and far between, and this isn't one of them. Just make three lists, side by side, titled "positive", "neutral", and "negative". Instead of using font size, put the more common words higher on their respective lists. The only reason I can see to use a word cloud here is to hide how bad the analysis is.
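Something like this, roughly (a hypothetical sketch of the layout logic; the data shape is made up for illustration):

```python
# Hypothetical sketch of the three-list layout: one ranked column per
# sentiment class, most frequent words first, instead of a word cloud.
def three_lists(words):
    """words: iterable of (word, count, sentiment) tuples, where
    sentiment is one of "positive", "neutral", "negative"."""
    columns = {"positive": [], "neutral": [], "negative": []}
    for word, count, sentiment in words:
        columns[sentiment].append((count, word))
    # Sort each column by descending count, keeping only the words.
    return {label: [w for _, w in sorted(col, reverse=True)]
            for label, col in columns.items()}

print(three_lists([("venerable", 12, "negative"),
                   ("cars", 30, "positive"),
                   ("stunning", 5, "neutral")]))
```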
Let me elaborate a bit more on how the app computes sentiment. For a particular word, its sentiment is the average sentiment of the sentences that contain both the word and the brand name (this is meant to capture the sentiment targeted at the brand, not just the overall sentiment).
For example, in the case of Mazda, where you say "regret" is classified as positive: if you look at the message it comes from, you can see the original sentence: "Buy a Mazda, you won't regret it :)"
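In code terms it's roughly this (a simplified sketch; the tokenization and the sentence_sentiment callable are placeholders, not the app's actual implementation):

```python
# Simplified sketch: a word's sentiment is the average sentiment of the
# sentences containing both that word and the brand name.
from collections import defaultdict
from statistics import mean

def word_sentiments(sentences, brand, sentence_sentiment):
    """sentence_sentiment: any callable returning a score in [-1, 1]."""
    brand = brand.lower()
    scores = defaultdict(list)
    for sentence in sentences:
        tokens = [t.strip(".,!?:;()") for t in sentence.lower().split()]
        if brand not in tokens:
            continue  # only keep sentiment aimed at the brand
        score = sentence_sentiment(sentence)
        for token in set(tokens):
            if token and token != brand:
                scores[token].append(score)
    return {word: mean(vals) for word, vals in scores.items()}

# Toy usage with a stand-in sentiment function: "regret" comes out
# positive because the sentence it appears in is positive.
sents = ["Buy a Mazda, you won't regret it :)", "My Mazda broke down again"]
print(word_sentiments(sents, "Mazda",
                      lambda s: 1.0 if ":)" in s else -1.0))
```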
I agree with you that the word cloud is not useful on its own, which is why you can click on a word to see the actual messages. Think of the word cloud as merely an entry point into a more detailed analysis by a human.
This is a really cool idea. There are other products out there that do the same thing, but there is definitely room to differentiate yourself.
One interesting issue: you might need to read the context a bit more to gauge whether the post is actually talking about the brand. For example, "Target" is a common English word, and if you look at the results, only one usage actually referred to the store Target.
One bonus feature that would be useful: I'd like to be able to compare sentiment by subreddit. If I'm marketing on Reddit, I would not target Reddit as a whole.
Thanks for the feedback! That's a great idea; I'll definitely implement some Named Entity Recognition or another contextual model in the future to distinguish brand mentions from common words.
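For what it's worth, a minimal sketch of that kind of check with spaCy (assuming the en_core_web_sm model is installed; this is not what the app does today):

```python
# Hypothetical sketch: keep a message only if the brand name is tagged
# as an organization, to filter out "target" the common noun.
import spacy

nlp = spacy.load("en_core_web_sm")  # requires the model to be downloaded

def mentions_brand(text, brand="Target"):
    doc = nlp(text)
    return any(ent.label_ == "ORG" and ent.text.lower() == brand.lower()
               for ent in doc.ents)

print(mentions_brand("I bought socks at Target yesterday."))
print(mentions_brand("We need to target a younger demographic."))
```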
As for the subreddit, it's already on my next features list :)
That's a fair criticism. Sentiment analysis is quite hard to get right on social media messages because of the diversity and subtlety of the language, among other things. From my experience with similar commercial (and very expensive!) products, their accuracy is far from perfect too.
Also consider the lack of labeled data for HN and Reddit messages: I had to use Twitter messages to train the classifiers.
That's why I tried playing with BERT, to see if I could get a model that generalizes well from Twitter messages alone. From my experiments, if you activate BERT (which makes the app much slower), you should be able to get 60-70% accuracy.
It's not perfect, but it's not too bad either if you are averaging over a large number of messages.
Overall it's still a work in progress; I expect to greatly improve the accuracy over the coming weeks!
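For anyone who wants to experiment along the same lines, here is a minimal sketch using the Hugging Face transformers pipeline; note it loads a default English sentiment model, not my Twitter-trained one:

```python
# Minimal sketch: BERT-style sentiment with the transformers pipeline.
# The default model is not fine-tuned on tweets, so treat this as a demo.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # slow on CPU, as noted above
results = classifier(["Buy a Mazda, you won't regret it :)",
                      "facebook is bad amiriteguyze?!?!?!?"])
for r in results:
    print(r["label"], round(r["score"], 3))
```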
I came here to say the same thing as the GP. I don't understand why some words are red or green.
For example, you can type in non-brand words as well. I typed in "houses" and the word "homeless" came up in green!
With a brand, facebook, I got the word "amiriteguyze" in red, and clicking on it shows:
    Negative
    11/19/2019, 12:13:31 PM
    facebook is bad amiriteguyze?!?!?!?
Why is that even a word that would show up in the word cloud? I can't imagine it was entered many times, and I can't see any correlation between the colors, sizes, and words that show up in the clouds.
The algorithm tries to give more importance to words that appear rarely overall and are used mostly alongside the chosen brand name (similar to TF-IDF). This is why weird words can sometimes surface in the word cloud, especially when the sample of messages is small.
To prevent those words from appearing, I was thinking of implementing a dictionary check to allow only meaningful words. However, this approach also has a drawback: you restrict people's vocabulary and can miss important new concepts.
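The weighting is roughly like this (a simplified sketch, not the exact code):

```python
# Simplified sketch of the TF-IDF-style weighting: a word that is common
# in the brand's messages but rare in the overall corpus gets a large
# weight, which translates into a large font in the word cloud.
import math

def word_weights(brand_counts, corpus_doc_freq, n_docs):
    """brand_counts: {word: occurrences in messages mentioning the brand}
    corpus_doc_freq: {word: number of corpus documents containing it}"""
    weights = {}
    for word, tf in brand_counts.items():
        idf = math.log(n_docs / (1 + corpus_doc_freq.get(word, 0)))
        weights[word] = tf * idf  # rare-but-brand-specific words dominate
    return weights
```

A token like "amiriteguyze" has a corpus document frequency near zero, so even a single occurrence next to the brand gives it a large weight.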
To be clear: you made something that doesn't work, posted it, and got attention because you asserted that it worked; and when people point out that it doesn't work, you say "it's hard, and other people's software doesn't work either".
It's one thing if companies want to find out what people are saying about them so they can improve. However, I'd be surprised if that's how it is used.
This feels more like a way to measure the effectiveness of their astroturfing or low-key marketing efforts.
Good job, I certainly see some potential. Our company used Clara in the past. I actually wrote my own in Excel, if you can believe it, but it was highly vertical and tightly scoped to one topic, so it was relatively easy. Where are you physically located?
Thanks! The basic (default) version of the sentiment analysis is based on the TextBlob library, but you can choose to activate deep learning to analyze sentiment with Google AI's BERT (trained on Twitter messages), though it is quite slow at the moment because inference runs on a CPU rather than a GPU.
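The TextBlob path really is just its polarity score (a minimal sketch):

```python
# The default sentiment path boils down to TextBlob's polarity score,
# a float from -1 (most negative) to 1 (most positive).
from textblob import TextBlob

def polarity(text):
    return TextBlob(text).sentiment.polarity

print(polarity("Buy a Mazda, you won't regret it :)"))  # float in [-1, 1]
print(polarity("facebook is bad amiriteguyze?!?!?!?"))  # float in [-1, 1]
```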
The back-end is just Python/Flask, and I use the free Algolia and Pushshift.io APIs to source the messages from HN and Reddit (big thanks to them!)
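For reference, both are plain unauthenticated JSON endpoints, so sourcing messages is just a couple of HTTP calls (a sketch with the parameters trimmed down; the Pushshift endpoint is as it exists today):

```python
# Sketch of sourcing messages: Algolia's HN Search API and Pushshift's
# Reddit API are both free JSON endpoints.
import requests

def fetch_hn(query):
    r = requests.get("https://hn.algolia.com/api/v1/search",
                     params={"query": query, "tags": "comment"})
    return [hit["comment_text"] for hit in r.json()["hits"]
            if hit.get("comment_text")]

def fetch_reddit(query):
    r = requests.get("https://api.pushshift.io/reddit/search/comment/",
                     params={"q": query, "size": 100})
    return [item["body"] for item in r.json()["data"]]
```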
This looks really great, with such a simple UI. Last year I tried to do real-time sentiment analysis on Twitter messages using TextBlob. It was fast but not very accurate. Can you suggest any other library that might work fast enough on real-time messages?
For inference speed, I recommend a Naive Bayes model. I've tried this on Twitter messages and got close to 90% accuracy on the 3-class problem (positive, negative, neutral).
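With scikit-learn that's only a few lines (a sketch; you would need your own labeled tweets, the three examples here are toys):

```python
# Sketch of a fast 3-class Naive Bayes sentiment model with scikit-learn.
# Assumes labeled training data (e.g. tweets); toy examples shown here.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = ["I love this car", "worst phone ever", "it arrived on Tuesday"]
labels = ["positive", "negative", "neutral"]

model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), MultinomialNB())
model.fit(texts, labels)
print(model.predict(["this car is the worst"]))  # fast CPU inference
```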
- I typed "Chevrolet" and it says "mpg" and "venerable" have negative sentiment, while "underachieving", "supposedly", "kilometers", and "cars" have positive sentiment. "Stunning" and "awkward" are both neutral.
- I tried "Mazda" and it was only slightly better. "Praise" is neutral, "regret" is positive, and "touring" is negative.
- I tried "Petzl" and only shows 16 words, all neutral, and that includes stop words like "had" and "and" and "etc".
Those are at least companies with unique names. For companies with less common names, you might be lucky to get keywords related to the right company at all.
I think there can be good uses for word clouds, but they are few and far between, and this isn't one. Just make 3 lists, side by side, and title them "positive", "neutral", and "negative". Instead of font size, put the more common words higher on their respective list. The only reason I can see to use a word cloud here is to hide how bad the analysis is.