NLU Optimization



Natural Language Recognition Framework

Scope • Discover • Architect • Curate • Train • Test • Optimize

The natural language recognition framework I follow provides a well-structured intent recognition strategy from kickoff to go-live and beyond. I use it to organize intent information and disambiguation guidelines, which lets me build and maintain NLP models more effectively and efficiently.



Intent scope

When defining the intent recognition scope for the virtual assistant, I establish what knowledge we want the assistant to have and how we want it to act on that knowledge. Scoping the intents draws a clear distinction between use cases that are in scope and those that are out of scope, leaving me with clear boundaries for each use case.
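
To make the scope actionable, it can help to capture it as structured data. The sketch below is a minimal, hypothetical Python example; the intent names, fields, and boundary notes are placeholders, not from a real project.

```python
# Minimal sketch of an intent scope definition; all intent names,
# descriptions, and boundaries here are hypothetical placeholders.
INTENT_SCOPE = {
    "check_order_status": {
        "in_scope": True,
        "description": "User asks where their order is or when it arrives.",
        "boundary": "Excludes cancellations and returns (separate intents).",
    },
    "cancel_order": {
        "in_scope": True,
        "description": "User wants to cancel an existing order.",
        "boundary": "Excludes refund-status questions.",
    },
    "legal_advice": {
        "in_scope": False,  # out of scope: routed to a fallback/handover flow
        "description": "Requests for legal guidance.",
        "boundary": "Always escalated to a human agent.",
    },
}

def is_in_scope(intent: str) -> bool:
    """Return True when the intent is in scope for the assistant."""
    entry = INTENT_SCOPE.get(intent)
    return bool(entry and entry["in_scope"])
```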



Natural language discovery

By studying the natural language trends of the user group our solution caters to, I uncover our users’ mental models and linguistic tendencies. The more nuanced that understanding is, the better the virtual assistant’s training will be. Discovering how intents are (likely to be) expressed has a direct impact on the assistant’s ability to learn and perform.
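
One lightweight way to surface those tendencies is to count frequent phrasings in raw transcripts. The sketch below, using invented utterances, extracts the most common bigrams as candidate expressions of an intent.

```python
from collections import Counter
import re

# Hypothetical transcript snippets standing in for real user utterances.
transcripts = [
    "where is my order",
    "where's my package, it was due yesterday",
    "I want to cancel my order",
    "can you cancel the order I placed",
]

def ngrams(text: str, n: int = 2):
    """Yield word n-grams from a lowercased, lightly tokenized string."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return zip(*(tokens[i:] for i in range(n)))

counts = Counter(bigram for t in transcripts for bigram in ngrams(t))
# Surface the most frequent bigrams as candidate phrasings to review.
print(counts.most_common(5))
```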



Intent recognition architecture

To shape intents and form disambiguation strategies, I create a matrix of use cases and intents. It offers a high-level overview of the intent-to-use-case mapping, expected volume, priority, and the NLP layers aiding recognition. The architecture keeps all stakeholders on the same page about the NLP strategy and helps maintain control as use cases are added, removed, or changed.



A sneak peek into the intent matrix for use cases developed on a proprietary NLP tool.

The matrix is designed to cover three main elements: structure, scope, and detection mechanisms.
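
As a rough illustration of those three elements, one row of such a matrix could be modelled as structured data. The fields below mirror the ones named above (use-case mapping, scope, volume, priority, NLP layers); all values are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class IntentMatrixRow:
    """One row of the intent matrix: mapping, scope, and detection mechanisms.
    Field values used below are illustrative, not from a real project."""
    use_case: str
    intent: str
    in_scope: bool
    expected_volume: str                                 # e.g. "high" / "medium" / "low"
    priority: int                                        # 1 = highest
    nlp_layers: list[str] = field(default_factory=list)  # mechanisms aiding recognition

matrix = [
    IntentMatrixRow("Order tracking", "check_order_status", True, "high", 1,
                    ["intent classifier", "entity extraction"]),
    IntentMatrixRow("Order cancellation", "cancel_order", True, "medium", 2,
                    ["intent classifier", "confirmation prompt"]),
]
```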

Data curation

By sourcing, cleaning, modelling, and annotating language data, I curate what we need to train, validate, and test the classifier models. I collect the data from a variety of sources: transcripts, recordings, subject matter experts, agents, crowdsourcing platforms, and so on. The labelled corpus is created as per the intent recognition architecture.
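
A minimal curation step might look like the sketch below: normalizing utterances and dropping exact duplicates before the (utterance, intent) pairs enter the labelled corpus. The rows and file name are hypothetical.

```python
import csv

# Hypothetical labelled corpus: one (utterance, intent) pair per row.
raw_rows = [
    ("Where is my order??", "check_order_status"),
    ("where is my order", "check_order_status"),   # duplicate after cleaning
    ("cancel my order please", "cancel_order"),
]

def clean(text: str) -> str:
    """Lowercase and strip punctuation/whitespace noise before labelling."""
    return " ".join(text.lower().replace("?", "").replace("!", "").split())

seen, corpus = set(), []
for utterance, intent in raw_rows:
    u = clean(utterance)
    if u not in seen:  # drop exact duplicates
        seen.add(u)
        corpus.append((u, intent))

with open("labelled_corpus.csv", "w", newline="") as f:
    csv.writer(f).writerows([("utterance", "intent"), *corpus])
```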



Classifier training

While preparing the training data, I consider multiple aspects: linguistic analysis of the data, preprocessing, feature engineering, and choosing the right algorithm to train with. The resulting classification model depends heavily on the algorithm, the training parameters, and the type and quality of the data.
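
As one possible instantiation, not necessarily the setup used here, the sketch below pairs TF-IDF word-and-bigram features with a logistic regression classifier in scikit-learn. The training utterances are invented toy data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Tiny illustrative training set; a real project uses the curated corpus.
utterances = [
    "where is my order", "track my package", "order status please",
    "cancel my order", "please cancel the order", "stop my purchase",
]
intents = [
    "check_order_status", "check_order_status", "check_order_status",
    "cancel_order", "cancel_order", "cancel_order",
]

# Feature engineering (word n-gram TF-IDF) feeding one candidate algorithm.
model = Pipeline([
    ("features", TfidfVectorizer(ngram_range=(1, 2), lowercase=True)),
    ("classifier", LogisticRegression(max_iter=1000)),
])
model.fit(utterances, intents)

print(model.predict(["can you cancel my order"]))  # expect ['cancel_order'] on this toy data
```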



Testing & optimization

Once the model is trained and validated against a fixed validation set, I work towards improving intent recognition and avoiding regression, aiming for a model that is well fit and balanced. I optimize the model’s performance by measuring it against held-out test data and analyzing the gaps. This happens continuously from development through go-live and beyond.
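
A typical measurement step, sketched below with the `model` from the training example and invented test utterances, compares predictions against held-out labels using per-intent precision, recall, and F1.

```python
from sklearn.metrics import classification_report, confusion_matrix

# Held-out test utterances never seen during training (illustrative).
test_utterances = ["is my package on the way", "I'd like to cancel that order"]
expected = ["check_order_status", "cancel_order"]

predicted = model.predict(test_utterances)  # `model` from the training sketch

# Per-intent precision/recall/F1 highlight where recognition needs work.
print(classification_report(expected, predicted, zero_division=0))
print(confusion_matrix(expected, predicted))
```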

Say hi!

Looking to hire a designer, discuss design needs, or just chat about AI disruptions and innovative solutions?

Let’s start a conversation.

Drop a message here and I’ll get back to you. Or you can get in touch with me at chadharaagini@gmail.com.

