At Pulsar, we remain committed to providing our clients with best-in-class insights through advanced AI and smart algorithms. We are therefore excited to announce a significant enhancement to our Topics and Entities analysis: a leap forward in how we extract Topics and Entities, moving from traditional taxonomy-driven extraction to LLM-inferred analysis. This transition brings improved accuracy, relevance, and linguistic diversity to our Topic and Entity analysis.
More meaningful topics, smarter entity recognition, and support for a broader range of languages will lead to more relevant insights, regardless of where the conversation takes place.
Here’s what you can expect:
Enhanced Topics: Topics will now be more granular and dynamic, uncovering emerging themes and contextual nuances beyond the limits of a predefined taxonomy.
Smarter Entities: Entities extracted from content in your searches will be more relevant and detailed, as we’re transitioning from a fixed taxonomy structure to AI-based inference.
Why are we doing this?
This is a natural evolution of our current capabilities on TRAC, designed to help you stay ahead in a fast-changing digital world. With this update, you’ll benefit from:
Superior Contextual Understanding: LLMs are trained on huge, regularly updated datasets, giving them a stronger grasp of context. They can handle longer, more complex text and better capture the meaning and relationships between words, making the insights we extract more precise.
Better Accuracy and Relevance: The advanced natural language understanding of LLMs gives us a deeper semantic grasp of your content, allowing you to identify nuanced mentions and implicit references that traditional methods might miss; a simplified sketch of the approach follows this list.
Improved Language Support: With this update, all languages detected on Pulsar will be supported equally, so you get consistently high-quality insights no matter the language. Topics and Entities on Pulsar will now appear in the native language of the content, helping to localise the insights.
Future-Proof Intelligence: By moving to LLMs, we’re laying the foundation for continuous improvements in language understanding and content classification, ensuring your analysis evolves as the online world does.
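For the technically curious, here is a minimal, hypothetical sketch of what LLM-inferred extraction can look like in general. This is not Pulsar's production pipeline; the prompt wording, model name, and helper function are illustrative assumptions only.

```python
# Illustrative only: a generic LLM-based topic/entity extraction sketch,
# not Pulsar's production implementation. Model name and prompt are assumptions.
import json
from openai import OpenAI

client = OpenAI()  # assumes an API key is available in the environment

PROMPT = (
    "Extract the main topics and named entities from the post below. "
    'Respond with JSON: {"topics": [...], "entities": [{"text": "...", "type": "..."}]}. '
    "Keep topics and entities in the post's original language.\n\nPost:\n"
)

def extract_topics_and_entities(post_text: str) -> dict:
    """Ask the model to infer topics and entities, rather than matching a fixed taxonomy."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        response_format={"type": "json_object"},
        messages=[{"role": "user", "content": PROMPT + post_text}],
    )
    return json.loads(response.choices[0].message.content)

print(extract_topics_and_entities(
    "Tesco shoppers report longer queues at self-checkouts this week."
))
```

Because the model infers topics from the text itself rather than matching against a predefined list, emerging themes and language-specific phrasing can surface naturally.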
Where do I find Topic and Entity insights on Pulsar?
Topics and Entities have been integrated across the platform's key touchpoints, so you can easily identify popular topics and entities at both dashboard and feed level.
Dashboard level insights
To explore the most common Topics and Entities in your search, go to Content Insights. There, a set of visualisations gives you distinct views of the main Topics and Entities in the conversation.
Below is a brief overview of the insights you get from each visualisation:
Treemap chart: This chart shows the most common topics or entities found in a search, split by channel. The tile size represents the volume of posts mentioning that topic or entity within that data source. Use the treemap to see the most common topics and entities, and how frequently they appear within specific social channels.
Word Cloud chart: This chart shows the most common topics or entities we've identified, and the sentiment associated with them. The size of the word represents the volume of posts about that topic or entity. Our Sentiment model looks for words that carry an explicit positive or negative meaning and then figures out which person or place those words are referring to.
Segments chart: This chart groups topics or entities that frequently appear in the same posts into distinct segments, showing the relationships between them. Use segments for content recommendations, for example as guidance on producing content around the key subject areas identified in the network graph.
Stream chart: This chart shows how the topics or entities identified in the search emerge, remain consistent, or fade over time. Use the Stream graph to find emerging topics or entities, and see how their frequency changes over time.
Bundle chart: This chart displays the associations between the topics or entities identified in a search and how they appear within the same context. Use the Bundle graph to uncover the relationships between topics or between entities, and how they are frequently used in conjunction with each other.
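For readers curious about the idea behind the Segments and Bundle views, here is a simplified, hypothetical sketch of topic co-occurrence counting, the kind of signal such charts are typically built on. It is an illustration of the concept, not a description of Pulsar's actual charting code, and the example posts and topics are made up.

```python
# Simplified illustration of topic co-occurrence, the kind of relationship the
# Segments and Bundle charts visualise. Not Pulsar's actual implementation.
from collections import Counter
from itertools import combinations

# Hypothetical posts, each already tagged with extracted topics
posts = [
    {"topics": ["self-checkout", "queues", "customer service"]},
    {"topics": ["loyalty cards", "pricing", "self-checkout"]},
    {"topics": ["queues", "self-checkout"]},
]

# Count how often each pair of topics appears in the same post
pair_counts = Counter()
for post in posts:
    for a, b in combinations(sorted(set(post["topics"])), 2):
        pair_counts[(a, b)] += 1

# The strongest pairs are the ones a segment or network view would cluster together
for (a, b), count in pair_counts.most_common(3):
    print(f"{a} + {b}: co-occur in {count} post(s)")
```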
Feed level insights
On the Results page, the content card displays all the Topics and Entities that have been extracted from a single post or article.
Now let’s take a look at a couple of comparisons between taxonomy-led and LLM-inferred analysis. The comparison below relates to a search about UK supermarkets, and demonstrates the level of granularity available from LLM-inferred analysis.
LLM-inferred Topics:
Taxonomy-led Topics:
Filtering for Topics & Entities
As part of this update, we’ve also improved Topic and Entity filtering, significantly improving the user experience. The new Topic filter allows for granular exploration of specific themes, cutting through the noise for more focused insights. The revamped Entities filter organises entities into categories (such as People and Organizations), streamlining the way you refine results.
Expanded Language Support
This enhancement significantly expands language support for both Topics and Entities: Topic support increases from 16 to 68 languages, and Entity support extends beyond English to 68 languages. This enables Pulsar to recognize a wider range of entities across all supported languages. In addition, following feedback from clients, Topic insights will now be available in the native language of the post or article that has been analysed. An illustrative example is shown below:
LLM-inferred Topics:
Taxonomy-led Topics:
This is great! Do I need to do anything?
No action is required from you. The new Topics and Entities analysis will be available in your TRAC searches by default, so you will automatically see enhanced insights on newly ingested content from today onwards.
If you need to update historical content within a search, reach out to support and we’ll do our best to help!