Delivering Text Analytics in an Enterprise Data Portal

Project Background


  • Siloed data with little visibility within the enterprise
  • Lack of consistent data resulting in different analytical outcomes
  • Duplication of data due to human error
  • Time wasted on gathering data from multiple sources


As more organisations move away from manual data extraction into cloud-based solutions, designers face an uphill battle against traditional organisational behaviour. Multiple layers of technical guidance are often required to bring data access to users with or without coding capabilities. More crucially, investment opportunities are time-sensitive and subject to constant market fluctuations.


To create a data platform for discovery, access and extraction of clean, well-structured and linked data in a timely manner via web browser and APIs.


Through user interviews, we discovered three main personas of our target users.

Our primary persona consists of junior business users, especially in Investment groups. They often have to trawl through large amounts of raw data from different sources to consolidate findings and present them to their leads. They have some data literacy but lack programming skills.

Our secondary persona consists of senior members of teams that perform routine tasks and ad-hoc reporting, so access to data is critical to their work. On top of that, they also need to share sensitive data related to the company’s portfolio with selected teams in senior management.

This group often has urgent ad-hoc data needs but lacks the tools and skills to manipulate data for modelling.

Our third persona is made up of Data Analysts with advanced data and coding capabilities. They have the knowledge and skills to predict and observe market abnormalities but often struggle with the process of accessing clean data.

In our interviews, they mentioned that they waste a lot of time sourcing, accessing, and cleaning data whilst wishing more time could be spent on the analysis itself.

A key challenge for our product is to accommodate varying levels of data competency and skillset. Proofs of concept were conducted to ensure our platform integrates advanced tools for the needs of this persona.


The purpose of this Analytics feature is to allow users to find trends for a particular keyword or phrase in the industry and observe how they change over time.

A use case for this Talent Acquisition dataset (prototype below) is understanding how Snowflake as a skillset trends beyond the Snowflake company itself. We may want to know who is hiring and how in demand Snowflake has been as a skillset over the years. This text analytics implementation should also scale to any dataset that contains free text or transcript-type data.


Image showing the transition into Text Analytics page. All sensitive data have been removed.

Taking the requirements and the enormous amount of data into consideration, we often have to push interaction design to its limits, ensuring the UI and UX writing are intuitive and that users are not overwhelmed by too much information.

That sometimes means re-organising the way data is aggregated on the back-end. Once I had confirmed the front-end could support the design, we held a few rounds of discussion with the Data Engineers and the Product Manager to decide how best to implement the breakdown of keyword matches by time.

In this data table, the user can observe how the number of keyword occurrences changes over time.

They can narrow down further to a company and see how often that keyword appears within that company in a given year/quarter/month.

This helps our business group capitalise on investment opportunities based on keyword trends in the industry or company.
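The breakdown of keyword matches by time described above can be sketched in a few lines. This is a hypothetical illustration, not the project's actual back-end: the sample postings, the function name and the period labels are all made up for the example.

```python
import re
from collections import Counter
from datetime import date

# Hypothetical sample records: (posting date, free-text job description).
postings = [
    (date(2021, 3, 1), "Looking for a data engineer with Snowflake and SQL."),
    (date(2021, 8, 15), "Snowflake experience required; Snowflake certification a plus."),
    (date(2022, 2, 10), "Analyst role: Python, Tableau."),
    (date(2022, 6, 5), "Senior engineer: Snowflake, dbt, Airflow."),
]

def keyword_counts_by_period(records, keyword, period="year"):
    """Count whole-word keyword matches, grouped by year or year-quarter."""
    pattern = re.compile(rf"\b{re.escape(keyword)}\b", re.IGNORECASE)
    counts = Counter()
    for posted, text in records:
        if period == "year":
            bucket = str(posted.year)
        else:  # year-quarter bucket, e.g. "2021-Q1"
            bucket = f"{posted.year}-Q{(posted.month - 1) // 3 + 1}"
        counts[bucket] += len(pattern.findall(text))
    return dict(counts)

print(keyword_counts_by_period(postings, "Snowflake", period="year"))
# {'2021': 3, '2022': 1}
```

In the real product this aggregation happens on the back-end so that the table can be re-bucketed by year, quarter or month without shipping every transcript to the browser.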

Image showing the data table with keyword occurrence collated by year
Image showing a transcript modal opening from a data cell

Finally, we want users to be able to deep-dive into individual transcripts by clicking on a data cell. The existing ingested model does not allow users to view the full text unless they download it into a spreadsheet.

This enhancement lets them view each transcript right from our platform and also see the number of keyword occurrences, with the keyword highlighted in yellow.
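The highlighting behaviour could be sketched as below. This is only an illustration under assumed details (the function name and sample text are hypothetical, and the yellow highlight is assumed to come from styling a `<mark>` tag):

```python
import re

def highlight_keyword(transcript, keyword):
    """Wrap each whole-word, case-insensitive match of `keyword` in a
    <mark> tag and return the highlighted text plus the match count."""
    pattern = re.compile(rf"\b({re.escape(keyword)})\b", re.IGNORECASE)
    # subn substitutes and returns the number of replacements made,
    # giving both the highlighted markup and the occurrence count.
    highlighted, count = pattern.subn(r"<mark>\1</mark>", transcript)
    return highlighted, count

text = "Snowflake adoption is growing; snowflake skills are in demand."
html, n = highlight_keyword(text, "snowflake")
print(n)  # 2
```

Capturing the match with `\1` preserves the transcript's original casing inside the highlight, so "Snowflake" and "snowflake" both stay as written.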

Final Thoughts

Working on this project has taught me a great deal about the role of UX and its importance in data management. It was my first end-to-end delivery (i.e. research to production) on this scale, but thanks to the wonderful and collaborative team of Data Engineers and Full-stack Developers, I was able to quickly get up to speed with the technology we used on this project.

Working under an Agile framework on a data project of this scale also pushed me out of my comfort zone. With the helpful guidance of the Product Manager, senior UX Designer and Front-end Developers, I was able to own this feature from research to development handoff. It wasn’t an easy journey, but it was extremely satisfying to see it go live in the product.

Moving forward, I hope to take on more complex projects like this to improve my skills as a UX Designer. What I learned most of all in an Agile environment is that iteration is not just for technology: processes and even people should continue to improve based on evolving needs and context.