
What are the 5 Components of NLP?

by Yashaswini S. P.

Last updated : October 10, 2025


Natural Language Processing (NLP) is a key area of artificial intelligence that empowers machines to process human language. It powers everyday technologies such as chatbots, voice assistants, search engines, and customer service tools, serving as the underlying framework for human-computer language interaction.

But language is not straightforward. It is full of rules, exceptions, cultural nuances, and even ambiguities. Language can be tricky. When someone says “I saw her duck,” do they mean she’s got a pet waddling nearby, or that she ducked to dodge a ball? Both are valid. Machines need a structured way to break down such complexities, and that is exactly what the five key components of NLP do.

You might wonder, how does a computer understand the different forms and shapes of the words we use every day?

It does so by breaking language down into manageable parts, analysing each layer of meaning step by step. This article will walk you through the five fundamental building blocks of NLP. We’ll start with the smallest pieces of language and gradually build up to understanding context and hidden intent, just like a human would. By the end, you will have a clear and practical understanding of how NLP bridges the gap between human communication and machine intelligence.

For a quick answer, the 5 core components of NLP that work together to enable this understanding are:

  1. Morphological and Lexical Analysis: Breaking text down into words and their root forms.
  2. Syntactic Analysis: Analysing grammar and sentence structure.
  3. Semantic Analysis: Extracting the literal meaning of words and full sentences.
  4. Discourse Integration: Figuring out the context from surrounding sentences.
  5. Pragmatic Analysis: Figuring out what the person really means or wants to achieve.

Now, let’s explore what each of these components really involves.

What is Natural Language Processing (NLP)? A Quick Refresher

Before we dismantle the engine, let’s be certain we all know what we’re looking at. Sure, defining NLP is the easy part. The challenge comes when you start unpacking what it actually means for technology and people.

More Than Just Code and Language

At its heart, NLP is a specific area of AI and computer science. It’s all about getting computers to understand, and even create, human language. It’s the technology that lets us talk to a machine like we would to another person. Think of NLP’s mission as giving computers a voice and an ear, so they can process, interpret, and create language in ways that feel meaningful to people.

Think about how you use language. Human language is inherently complex: full of ambiguity, nuance, slang, cultural references, and implicit meaning. That complexity is what makes it so challenging for a machine to process without deep, contextual analysis. For a machine that only follows explicit instructions, making sense of our flexible, messy language is a huge hurdle. Traditional programming is rule-based: you write explicit if-then statements. To filter spam, for example, you might write a rule like “if the email contains ‘free money’, mark it as spam.” This approach is brittle and easily bypassed.

NLP, particularly in its modern form, is different. It is largely statistical and probabilistic. Instead of being told what to look for, NLP models are trained on colossal amounts of text data, books, articles, websites, and conversations. From this corpus of data, these systems acquire the statistical patterns, probabilistic relationships, and structural properties of human language, thereby enabling them to perform advanced linguistic tasks. They learn that the phrase “free money” is very often associated with spam, not because a programmer told them so, but because they have analysed millions of examples and calculated the probability.

The Two Sides of NLP

NLP’s overarching goal is communication, but this breaks down into two distinct, though complementary, subfields:

  • Natural Language Understanding (NLU): This is the focus of our journey today. Think of this as the point where a computer stops just reading and starts to truly understand. This is the stage where the system analyzes the intent behind a query, identifies the most salient information, and interprets the underlying sentiment to gain a comprehensive understanding of the message. It’s about taking the messy input from a human and turning it into a structured format that a computer can work with.
  • Natural Language Generation (NLG): This is the reverse process. NLG involves creating human-like text from structured data. When you ask a weather app for a forecast and it replies, “Today will be sunny with a high of 22 degrees,” it has generated that sentence from raw data points (weather condition: sunny, temperature: 22). NLG is what powers chatbots, automated report summaries, and AI assistants’ conversational replies.

This article will focus on the Natural Language Understanding (NLU) pipeline, which is the systematic sequence of steps that enables a machine to comprehend human language.

Why a Structured Process is Essential

Imagine receiving a shipment of flat-pack furniture with all the pieces mixed together in one box, leaving you to sort, identify, and organise before assembly. You would spend more time separating screws from bolts than actually building. The same principle applies to language: raw text is a box full of words, phrases, and sentences scattered without a pattern, waiting to be sorted into something meaningful.

To make sense of it, NLP employs a pipeline, also known as an architecture or a processing chain. Picture it as a smooth relay race – one stage hands over its results like a baton, and the next stage carries it forward. Each component performs a specific, specialised task, progressively cleaning, structuring, and enriching the text. This systematic approach is what transforms incomprehensible noise into coherent understanding. The five components we are about to explore form the classic, foundational pipeline for this process.

What Are the 5 Components of NLP? A Quick Overview

Before we examine each component in detail, it helps to have a simple analogy. Think of how a person learns to read and comprehend a story.

First, they learn to recognise individual words and what they mean on their own (Lexical Analysis). Then, they learn how these words must be arranged to form a grammatically correct sentence (Syntactic Analysis). After that, they learn to combine the words and grammar to understand the sentence’s literal meaning (Semantic Analysis). As they read on, they connect the meaning of one sentence to the next to follow the plot (Discourse Integration). Finally, it’s as if they start “reading between the lines,” catching the writer’s tone, the deeper themes, and the hidden meaning behind a character’s words (Pragmatic Analysis).

NLP models follow a very similar, layered approach. Each stage picks up where the last one left off, creating a smooth flow of progress. This whole process is often called the “NLP pipeline.”

For clarity, here is a simple breakdown of what each component focuses on:

| Component | What it focuses on | Typical outputs | Example tasks |
| --- | --- | --- | --- |
| Morphological and lexical | Words, subwords, vocabulary | Tokens, lemmas, stems, normalised text | Tokenisation, lemmatisation, spell correction |
| Syntactic | Sentence structure | POS tags, dependency trees | Parsing, grammar checking, information extraction |
| Semantic | Meaning and reference | Entity types, senses, embeddings, roles | NER, entity linking, classification, QA |
| Discourse | Multi-sentence context | Coreference chains, discourse relations | Summarisation, dialogue systems, topic tracking |
| Pragmatic | Intended use and social meaning | Intent labels, sentiment, tone | Sentiment analysis, moderation, customer intent |

Core Components of Natural Language Processing: A Complete Guide

1. Morphological and Lexical Analysis: Understanding the Words

Every great structure begins with a single brick. In NLP, that brick is the word, or more accurately, the token. The initial stage, morphological and lexical analysis, is responsible for dividing raw text into its fundamental components. It’s the lexicographical eye of the NLP storm, surveying the text and identifying the vocabulary it contains.

What is Lexical Analysis?

Lexical Analysis, sometimes called tokenisation, is the process of scanning a stream of text and chopping it up into pieces called tokens. These tokens are the meaningful elements that the rest of the NLP pipeline will work with. Tokens are usually words, but they can also be punctuation, numbers, or even single letters, depending on what is needed.

This step is governed by a simple but critical principle in computing: “garbage in, garbage out.” If the initial breakdown of the text is flawed, every subsequent analysis will be built on a faulty foundation. Imagine trying to understand a sentence if “can’t” were split incorrectly, rather than into “ca” and “n’t” as the Penn Treebank convention does. Getting the words right is paramount.

Key tasks in this stage include:

Tokenisation: 

This is the primary task of lexical analysis. It’s the act of segmenting text into tokens. While it sounds simple in English, it can be surprisingly complex.

  • Standard Case: “NLP is fascinating!” becomes the tokens [“NLP”, “is”, “fascinating”, “!”].
  • Punctuation: Punctuation marks are typically separated into their own tokens. This is important because a question mark carries a different semantic weight than a full stop.
  • Contractions: In English, we combine words. A robust tokeniser must decide whether and how to split them: “Don’t” might become “do” and “n’t”, where “n’t” is a token representing negation, and “I’ll” might become “I” and “’ll”.
  • Hyphenation: What about “state-of-the-art”? Should this be one token, three separate tokens ([“state”, “of”, “the”, “art”]), or something in between? Deciding which to use depends on both the context and the specific NLP objective.
  • Modern Approaches: Newer models, especially transformer-based ones like Gemini, often use a technique called sub-word tokenisation (like Byte-Pair Encoding). This breaks rare or complex words into smaller, known sub-word units. The word “unbelievably” might become [“un”, “believe”, “ably”]. This approach is powerful because it can handle virtually any word, even typos or neologisms, by breaking it down into familiar pieces.
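
To make tokenisation concrete, here is a minimal regex tokeniser in Python. This is an illustrative sketch, not a production tool; note how it mangles the contraction “Don’t”, which is exactly why real tokenisers need special rules for such cases:

```python
import re

def tokenise(text):
    r"""Toy lexical analyser: \w+ grabs runs of word characters, and
    [^\w\s] grabs each punctuation mark as its own token."""
    return re.findall(r"\w+|[^\w\s]", text)

print(tokenise("NLP is fascinating!"))  # ['NLP', 'is', 'fascinating', '!']
print(tokenise("Don't panic."))         # ['Don', "'", 't', 'panic', '.']
```

The second call shows the failure mode: the naive pattern breaks “Don’t” into three meaningless pieces instead of “do” and “n’t”.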

Morphological Analysis

Once we have our tokens, we need to understand their structure. In linguistics, morphological analysis is the study of a word’s internal elements and the rules that govern word formation and the relationships among related words. This process is vital for tasks like search engines, where a search for “running shoes” should ideally also match documents containing the word “ran”.

It primarily involves two techniques:

  • Stemming: This is a crude, fast, and often rule-based approach to reducing a word to its root form. It simply chops off prefixes and suffixes. For example, “studies”, “studying”, and “study” would all be reduced to “studi”. The problem is that “studi” is not a real word, which can be problematic for later stages of analysis. It’s a blunt instrument.
  • Lemmatisation: This is a more linguistically informed approach: lemmatisation reduces a word to its dictionary base form, known as its lemma. To do this correctly, a lemmatiser needs two things: a vocabulary of words and their lemmas, and an understanding of the word’s part of speech (e.g., is it a noun or a verb?). For instance:
    • The word “better” (an adjective) has the lemma “good”.
    • The word “was” (a verb) has the lemma “be”.
    • The word “meeting” could have the lemma “meeting” (if it’s a noun) or “meet” (if it’s a verb).

Compared to stemming, lemmatisation produces proper words, making it significantly more valuable for NLP workflows.
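
The contrast can be sketched in a few lines of Python. The suffix list and the lemma table below are toy stand-ins invented for illustration; real systems use the Porter or Snowball algorithms and dictionary resources such as WordNet:

```python
SUFFIXES = ["ies", "ing", "ed", "es", "s"]

def stem(word):
    # Crude suffix stripping: chop off the first matching ending.
    for suffix in SUFFIXES:
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[:-len(suffix)]
    return word

# A lemmatiser needs a vocabulary plus the part of speech; this tiny
# hand-made table stands in for a real dictionary.
LEMMAS = {
    ("better", "ADJ"): "good",
    ("was", "VERB"): "be",
    ("meeting", "VERB"): "meet",
    ("meeting", "NOUN"): "meeting",
}

def lemmatise(word, pos):
    return LEMMAS.get((word, pos), word)

print(stem("studies"))           # 'stud' (not a real word)
print(lemmatise("was", "VERB"))  # 'be'
```

The stemmer is fast but blunt; the lemmatiser always returns a proper word, at the cost of needing a vocabulary and part-of-speech information.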

Why is Morphological Analysis Important?

Morphological analysis matters because language is dynamic. Without it, machines would treat “run” and “running” as completely unrelated words.

Applications include:

  • Search engines – Matching results for all word variations.
  • Spell-checkers – Detecting typos by recognising word roots.
  • Lemmatisation and stemming – Simplifying words to their base forms for efficient processing.

Real-World Examples

  • Google search shows results for “run” when you type “running.”
  • Sentiment analysis tools accurately detect the tone of the text, whether you write “happy,” “happiest,” or “unhappy.”

Morphological analysis is the starting point for understanding words. It’s how a computer figures out what a word is made of, so it doesn’t just see “unbreakable” as a single word but understands it’s made up of “un-,” “break,” and “-able,” giving it a richer meaning.

2. Syntactic Analysis (Understanding the Grammar)

Now that we have a collection of clean, individual words (tokens), we need to figure out how they fit together. Syntactic analysis, commonly referred to as parsing, interprets a sentence’s underlying grammatical structure. It builds a grammatical hierarchy, determining how words depend on one another and cluster into larger linguistic units. Serving as the grammarian of the NLP process, it validates that structures comply with linguistic norms.

Arranging the Words: The Role of Grammar

Human languages have intricate sets of rules governing how sentences can be constructed. These rules define things like subject-verb agreement (“He runs” vs. “They run”) and word order (in English, sentences generally follow Subject-Verb-Object).

Through formal grammar rules, syntactic analysis produces a representation of the sentence’s structure.

This step is crucial because it helps clarify meaning. Consider the garden-path sentence “The old man the boat.” It looks ungrammatical at first, but a syntactic parser can identify “man” as a verb meaning “to crew or operate” (a noun pressed into service as a verb, a device called anthimeria) and “The old” as the subject, giving the sentence a strange but valid structure. Without syntax, the sentence would be incomprehensible.

Key Concepts in Syntactic Analysis

Computers don’t “feel” grammar; they apply formal rules to text. This is typically done using two main frameworks.

Grammar and Parse Trees

A parser examines a sequence of symbols to determine its structure according to formal grammar rules. For natural language processing, the parser receives tokens and constructs a parse tree. A parse tree is a hierarchical structure showing how words in a sentence relate grammatically.

Here is the parse tree for “The cat sat on the mat,” written in standard bracket notation (S = Sentence, NP = Noun Phrase, VP = Verb Phrase, PP = Prepositional Phrase, DT = Determiner, N = Noun, V = Verb, P = Preposition):

    (S
      (NP (DT The) (N cat))
      (VP (V sat)
          (PP (P on)
              (NP (DT the) (N mat)))))

Reading this tree from the bottom up:

  • “The” (DT – Determiner) and “cat” (N – Noun) combine to form a Noun Phrase (NP), which is the subject of the sentence.
  • “sat” (V – Verb), “on” (P – Preposition), and “the mat” (another NP) form a Verb Phrase (VP), which is the predicate of the sentence.
  • The entire structure forms a complete Sentence (S).

This tree structure makes the relationships between words explicit and unambiguous.
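
In code, such a tree is often represented as nested structures. Here is a minimal Python sketch, using the same labels as the tree above, with a helper that reads the sentence back off the leaves:

```python
# Parse tree for "The cat sat on the mat" as nested (label, children...)
# tuples; leaves are plain word strings.
tree = ("S",
        ("NP", ("DT", "The"), ("N", "cat")),
        ("VP", ("V", "sat"),
               ("PP", ("P", "on"),
                      ("NP", ("DT", "the"), ("N", "mat")))))

def leaves(node):
    """Recover the sentence by walking the tree left to right."""
    if isinstance(node, str):
        return [node]
    _label, *children = node
    return [word for child in children for word in leaves(child)]

print(" ".join(leaves(tree)))  # The cat sat on the mat
```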

Dependency Grammar

An alternative to phrase-structure trees is dependency grammar. Instead of building phrases, dependency grammar captures the syntactic relations between individual words. It draws arcs from a “head” word to its “dependent” word. For “The cat sat on the mat”:

  • sat is the ROOT of the sentence.
  • cat serves as the nsubj (nominal subject) linked to sat.
  • sat has a prep (prepositional) dependency on on.
  • mat holds a pobj (prepositional object) relationship with on.

Both strategies are designed to yield the same result, namely a structured representation of the sentence’s grammar.
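
A dependency parse is naturally stored as a set of (head, relation, dependent) triples. The sketch below mirrors the arcs listed above; the two `det` arcs for the determiners are an addition for completeness:

```python
# Dependency parse of "The cat sat on the mat".
arcs = [
    ("sat", "nsubj", "cat"),
    ("sat", "prep", "on"),
    ("on", "pobj", "mat"),
    ("cat", "det", "The"),
    ("mat", "det", "the"),
]

def dependents(head):
    # All (relation, dependent) pairs hanging off a given head word.
    return [(rel, dep) for h, rel, dep in arcs if h == head]

def root():
    # The root is the one word that heads others but depends on nothing.
    heads = {h for h, _, _ in arcs}
    deps = {d for _, _, d in arcs}
    return (heads - deps).pop()

print(root())             # sat
print(dependents("sat"))  # [('nsubj', 'cat'), ('prep', 'on')]
```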

Ambiguity Resolution

Syntactic analysis is especially valuable for resolving structural ambiguity, which occurs when a sentence allows multiple grammatical interpretations. A standard example is, “I saw the man with the telescope,” which has two possible meanings: either the speaker used the telescope, or the man possessed it.

  • Interpretation 1: I used a telescope to see the man. (The telescope is the instrument of seeing.)
  • Interpretation 2: I saw a man, and that man possessed a telescope. (The telescope is a feature of the man).

A syntactic parser can identify both possible structures. In the first reading, the phrase “with the telescope” attaches to the verb “saw”, describing how the seeing was done. In the second, it attaches to the noun “man” as a modifier. Producing both parse trees allows the system to hand the ambiguity over to semantic analysis, where it is resolved using context and world knowledge.

Real-World Examples

  • Grammar Checkers: Tools like Grammarly or the built-in checker in Microsoft Word rely heavily on syntactic analysis. The system parses sentences to verify grammatical accuracy, identifying issues such as subject-verb disagreement or improper word order.
  • Question-Answering Systems: When you ask Google Assistant, “Who directed the movie Inception?”, it uses syntactic analysis to understand that “who” is the subject you’re asking about, “directed” is the action, and “the movie Inception” is the object of that action.

3. Semantic Analysis (Decoding the Meaning)

We’ve broken the text into words and arranged those words into a grammatical structure. Now for the big question: what does it all mean? This is the domain of Semantic Analysis. This component extends past grammatical analysis to capture the meaning of words, phrases, and complete sentences. It’s where the system starts to grasp the actual message being conveyed.

Moving Beyond Grammar to Intent

Syntax deals with the arrangement of words, whereas semantics deals with their meaning. It aims to answer the question, “What is being discussed here?” This process involves mapping the linguistic structures to real-world concepts and understanding the relationships between them. It’s the detective, taking the grammatically perfect transcript and figuring out who did what to whom, and why.

Core Tasks in Semantic Analysis

Achieving true machine understanding of meaning is a central challenge in AI. Semantic analysis tackles this with several key sub-tasks.

Word Sense Disambiguation (WSD)

Many words have multiple meanings, or senses. WSD involves identifying the particular sense a word carries in context. This is fundamental to an accurate understanding.

  • Example: Consider the word “bank”.
    1. “I deposited money in the bank.” (Financial Institution)
    2. “We sat on the riverbank.” (Geographical Slope)

A semantic analyser uses the surrounding words (“deposited money”, “river”) to correctly disambiguate and assign the appropriate sense. Approaches to WSD range from using dictionary definitions (the Lesk algorithm) to training machine learning models on large text corpora where human annotators have already identified the correct senses.
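
The core idea of the Lesk algorithm fits in a few lines: score each sense by how many content words its dictionary gloss shares with the sentence. The two glosses and the stop-word list below are invented for illustration:

```python
SENSES = {
    "financial institution": "institution where people deposit and withdraw money",
    "river edge": "sloping land along the edge of a river",
}
STOPWORDS = {"the", "a", "of", "in", "on", "i", "we"}

def disambiguate(sentence):
    """Pick the sense whose gloss overlaps most with the sentence."""
    context = set(sentence.lower().split()) - STOPWORDS
    def overlap(sense):
        return len(context & (set(SENSES[sense].split()) - STOPWORDS))
    return max(SENSES, key=overlap)

print(disambiguate("I deposited money in the bank"))    # financial institution
print(disambiguate("We sat on the bank of the river"))  # river edge
```

“deposited money” shares the word “money” with the financial gloss, while “river” matches the geographical gloss, so each sentence selects the right sense.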

Relationship Extraction

This task focuses on identifying and classifying the semantic relationships between entities within a text. Entities are typically proper nouns like people, organisations, locations, and products. The relationships connect them.

  • Example: In the sentence, “Apple’s CEO Tim Cook announced the new iPhone in California,” a relationship extractor would identify:
    • (Apple, is_a, Organisation)
    • (Tim Cook, is_a, Person)
    • (iPhone, is_a, Product)
    • (Tim Cook, position_held, CEO of Apple)
    • (Tim Cook, announced, iPhone)
    • (announcement, located_in, California)

This is the same kind of technology that powers Google’s Knowledge Graph. It’s how Google can give you a quick answer to a question like “Who is the CEO of Apple?” It structures the world’s information into a giant web of interconnected facts.
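
Early relation extractors often worked from hand-written lexical patterns. Here is a toy Python sketch of that idea; the pattern and the relation name are invented for illustration:

```python
import re

# The possessive pattern "X's CEO Y" yields a (Y, position_held, CEO of X)
# triple. Real systems use many such patterns, or learned models.
CEO_PATTERN = re.compile(r"(\w+)'s CEO (\w+ \w+)")

def extract_relations(text):
    return [(person, "position_held", f"CEO of {org}")
            for org, person in CEO_PATTERN.findall(text)]

sentence = "Apple's CEO Tim Cook announced the new iPhone in California"
print(extract_relations(sentence))
# [('Tim Cook', 'position_held', 'CEO of Apple')]
```

Pattern-based extraction is precise but brittle; modern systems learn such patterns statistically rather than enumerating them by hand.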

Semantic Role Labelling (SRL)

Also known as shallow semantic parsing, SRL aims to identify the semantic role of each word or phrase in a sentence relative to a specific verb or predicate. Think of it as answering the “who-did-what-to-whom” questions.

  • Example Sentence: “Mary sold the book to John for ten dollars.”
  • Predicate: sold
  • Semantic Roles:
    • Agent (Seller): Mary
    • Theme (Thing Sold): the book
    • Recipient (Buyer): John
    • Price: ten dollars

SRL provides a detailed, role-based summary of a sentence’s meaning, which is incredibly useful for question-answering systems and information extraction.

Real-World Examples

  • Information Extraction: Financial news analysis tools use NER to automatically extract company names, revenue figures, and executive names from articles to populate databases.
  • Content Recommendation: When you finish an article about “Apple’s new iPhone,” a recommendation engine uses semantic analysis to understand you are interested in the company Apple (an organisation), not the fruit, and suggests other articles about technology.

4. Discourse Analysis (Connecting the Sentences)

So far, our analysis has focused on individual sentences. But language doesn’t exist in a vacuum. We communicate in paragraphs, chapters, and conversations. Discourse Analysis is the component that steps back and looks at the bigger picture, analysing how sentences connect to each other to form a coherent whole. It’s what allows us to understand pronouns and follow a complex argument.

Understanding the Bigger Picture

Discourse analysis examines the structure and organization of language in units larger than the single sentence. It seeks to model the flow of information and the relationships between different parts of a text. It answers questions like: What does “it” refer to in this paragraph? How does this sentence support the previous one?

Ensuring Coherence and Cohesion

For a text to be understandable, it must be both coherent (logically consistent in its meaning) and cohesive (grammatically and lexically connected). Discourse analysis works to ensure both.

Anaphora Resolution / Coreference Resolution

This is arguably the most important task within discourse analysis. Coreference resolution is the process of identifying and grouping all expressions in a text that refer to the same entity. The most common examples are pronouns (he, she, it, they), but it can also involve noun phrases.

  • Simple Example: “The robot picked up the ball because it was heavy.” Coreference resolution determines that “it” refers to “the ball” and not “the robot”. A misinterpretation of this would completely invert the meaning.
  • Complex Example: “Acme Corp announced its quarterly earnings today. The results were better than expected.” Here, “The results” refers back to the “quarterly earnings”, and “its” refers to “Acme Corp”.

This is a notoriously difficult problem, especially across long documents, as the antecedent (the thing being referred to) can be pages earlier. Solving it requires a sophisticated understanding of the entire text so far.
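
One classic baseline is a recency heuristic: link each pronoun to the most recently mentioned entity whose gender or number category matches. The sketch below implements only that baseline; real coreference systems use far richer features:

```python
PRONOUNS = {"he": "male", "she": "female", "it": "neuter", "they": "plural"}

def resolve(tokens, entities):
    """entities maps each entity mention to a gender/number category.
    Returns {pronoun_index: antecedent} links."""
    mentioned, links = [], {}
    for i, tok in enumerate(tokens):
        if tok in entities:
            mentioned.append(tok)
        elif tok.lower() in PRONOUNS:
            wanted = PRONOUNS[tok.lower()]
            # Scan entities from most recent to oldest for a match.
            for candidate in reversed(mentioned):
                if entities[candidate] == wanted:
                    links[i] = candidate
                    break
    return links

tokens = "Sarah emailed the report to David because he needed it".split()
entities = {"Sarah": "female", "David": "male", "report": "neuter"}
print(resolve(tokens, entities))  # {7: 'David', 9: 'report'}
```

Even this crude heuristic resolves “he” to David and “it” to the report; it fails, of course, on harder cases like “the robot picked up the ball because it was heavy”, where world knowledge is needed.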

Connecting Ideas

Beyond pronouns, discourse analysis also looks at discourse markers: words and phrases like “however,” “therefore,” “in addition,” and “on the other hand.” These markers explicitly signal the logical relationship between sentences. “Therefore” establishes a result or effect, whereas “however” introduces a contrasting or opposing point. By tracking these signals, the system can build a map of the text’s argumentative structure.
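
A first approximation of this tracking is a simple marker lookup. The marker-to-relation table below is a hand-made illustration, not a standard inventory:

```python
MARKERS = {
    "on the other hand": "contrast",
    "in addition": "continuation",
    "however": "contrast",
    "therefore": "result",
}

def discourse_relation(sentence):
    """Guess the relation a sentence bears to its predecessor from its
    opening discourse marker, defaulting to 'unspecified'."""
    s = sentence.lower()
    for marker, relation in MARKERS.items():
        if s.startswith(marker):
            return relation
    return "unspecified"

print(discourse_relation("However, sales fell sharply."))     # contrast
print(discourse_relation("Therefore, we revised the plan."))  # result
```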

Real-World Examples

  • Chatbots and Virtual Assistants: Advanced chatbots rely heavily on discourse integration to maintain a coherent conversation, remember previous user statements, and provide contextually relevant answers.
  • Automatic Text Summarisation: To create a good summary, an NLP model must first understand the discourse structure of the document. It needs to identify the main topic sentences and the supporting sentences and understand how they all relate to form the core argument.

5. Pragmatic Analysis (Interpreting Context and Intent)

We have arrived at the final and most challenging component, where everything connects to produce the complete, deep meaning of a text. Lexical, syntactic, semantic, and discourse analysis have given us a structurally sound and semantically rich model of the text. But is this the full story? Consider the difference between reading a transcript of a conversation and actually being in the room. In the room, you also notice how someone says something (their tone), how they act (body language), and what you already know about them. Pragmatic Analysis is the NLP component that attempts to model this layer of real-world, situational understanding. It deals with what is meant, not just what is said.

The Final Layer: Real-World Context

Pragmatics examines how context shapes the meaning of language, going beyond the literal sense of words to interpret intended messages. This is how we figure out the full story. We take the literal words a person uses, and then we fill in the blanks using what we know about the situation to understand what they’re actually getting at. This is where NLP models try to become less like dictionaries and more like perceptive humans.

From Words to World Knowledge

Pragmatic analysis relies heavily on a vast, implicit model of the world and how it works. It involves several key inferential tasks.

Implicature and Inference

We often get what someone means even when they don’t spell it out. This unstated meaning, called an implicature, is something we figure out by using our knowledge and the situation we’re in.

  • Example: At a busy dinner table, one guest says to another, “Can you pass the salt?”
    • Literal Meaning (Semantics): The guest is asking about the other person’s physical ability to lift and move the salt shaker.
    • Intended Meaning (Pragmatics): The guest is making a request.

A pragmatic system understands that in the context of a meal, this question is almost never a literal inquiry about capability. It infers the speaker’s true intention. This requires a model of social conventions and typical scenarios.
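
In a dialogue system, this kind of inference is often implemented as intent mapping. Here is a toy sketch; the pattern and the intent labels are invented for illustration:

```python
import re

# At the dinner table, "Can you X?" is read as a request to do X,
# not a literal question about ability.
REQUEST = re.compile(r"^can you (.+?)\??$", re.IGNORECASE)

def interpret(utterance):
    match = REQUEST.match(utterance.strip())
    if match:
        return ("request", match.group(1))
    return ("statement", utterance)

print(interpret("Can you pass the salt?"))  # ('request', 'pass the salt')
```

A real system would condition this mapping on the situation: the same surface form could be a genuine ability question from a physiotherapist, which is exactly the contextual sensitivity pragmatics is about.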

Dealing with Sarcasm and Irony

Perhaps the greatest challenge in pragmatics is detecting non-literal language like sarcasm. We often use a specific tone to show we’re being sarcastic, saying the opposite of what we actually mean. But in a text message or an article, that tone disappears, making it hard for the reader to know our real intent.

  • Example: After a frustrating meeting, someone writes in a group chat: “Great. Another three hours wasted. I’m thrilled.”

On the surface, the words are positive (“Great,” “thrilled”). A purely literal reading would score this message as happy. A pragmatic system with access to the context of the frustrating meeting, however, would recognise the mismatch between the positive words and the negative situation, correctly identifying the statement as sarcastic.

Contextual Word Embeddings (e.g., Gemini, GPT)

Large language models (LLMs) have fundamentally changed how machines handle context. Instead of being programmed with strict rules, they learn the unspoken clues and subtle meanings in our language from data. Models like Gemini and GPT are trained on enormous text corpora. They don’t analyse text in strict, isolated stages; instead, they read entire sequences of text at once.

Because such a model attends to a word and all the words around it, it builds a much deeper, more complete representation of what that word means in context: not a word in isolation, but part of a bigger idea. These models inherently learn to handle ambiguity, inference, and even some forms of pragmatics. When Gemini processes “The bank can guarantee deposits will eventually cover future tuition costs because it invests in adjustable-rate mortgage securities,” it learns from the surrounding words that “bank” here means a financial institution, not a river edge. This implicit, end-to-end learning is why modern NLP feels so remarkably capable.

Real-World Examples

  • Virtual Assistants: When you tell Siri or Alexa, “I’m hungry,” they don’t reply, “That is an interesting biological state.” They use pragmatic analysis to infer that your intent is to find food and therefore show you a list of nearby restaurants.
  • Social Media Monitoring: Brands use sophisticated sentiment and intent analysis to understand what customers are really saying about them. They can differentiate between a genuine compliment and a sarcastic complaint, allowing them to respond more appropriately.

Putting It All Together: How the 5 Components Create Coherent Understanding

Now that we have explored each component individually, let’s see how they work together in a seamless pipeline. Imagine an NLP system processing this short paragraph:

“After she finished her report, Sarah emailed it to her manager, David, because he needed it for the quarterly review. He was impressed.”

A modern NLP system would perform these steps in rapid succession:

  1. Lexical Analysis: The text is tokenised into individual words and punctuation. Lemmatisation would reduce words like “finished” to “finish” and “needed” to “need” for internal analysis.
  2. Syntactic Analysis: Each sentence is parsed into a grammatical structure. The system identifies “Sarah” as the subject of “emailed,” “report” as the object, and so on. It builds a clear map of the grammar in both sentences.
  3. Semantic Analysis: The system identifies the key entities: Sarah (Person), David (Person), report (Document), quarterly review (Event). It extracts relationships such as (Sarah, sent, report) and (report, sent_to, David). For the verb “emailed,” SRL would label Sarah as the Agent, the report as the Theme, and David as the Recipient.
  4. Discourse Analysis: This is where the magic of connection happens. The coreference resolution process kicks in:
    • In the first sentence, “she” refers to “Sarah”.
    • “it” refers to “her report”.
    • In the second sentence, “He” refers to “David”.
      The system now understands that the entire paragraph is about one continuous event involving these three entities.
  5. Pragmatic Analysis: Finally, the system infers the overall meaning and intent. It understands that emailing the report was done to fulfil a requirement (for the review). It infers a likely positive sentiment from “He was impressed.” It grasps the causal link (“because”) that connects David’s need for the report with Sarah’s action of sending it.

By the end of this pipeline, the computer hasn’t just processed words; it has constructed a detailed, multi-layered model of a real-world situation, capturing who did what, to whom, why, and how they felt about it.

| Component | Task Performed on Example Paragraph | Output/Data Produced |
| --- | --- | --- |
| Lexical Analysis | Tokenisation, Lemmatisation | [“after”, “she”, “finish”, …] |
| Syntactic Analysis | Parsing grammar, identifying structure | Two parse trees showing subjects, objects, and verbs for each sentence. |
| Semantic Analysis | Identifying entities, relationships, roles | Sarah (Person), David (Person), report (Document). (Sarah, sent, report). Roles: Sarah=Agent, report=Theme. |
| Discourse Analysis | Coreference Resolution | Links: she -> Sarah, it -> report, He -> David. |
| Pragmatic Analysis | Inferring Intent/Context | Understands this is a work-related communication. Infers David’s positive reaction. |

Conclusion

It’s incredible how modern tech can take something as simple as a bunch of letters and words and turn it into a deep understanding of what a person is trying to say. That’s the power of NLP. So, we’ve seen how all these five pieces of the puzzle fit together to take our words and figure out what we really mean.

We started with Lexical Analysis, breaking text down into its core tokens. We then proceeded to Syntactic Analysis, which organized the tokens into a grammatical structure. With the rules of grammar established, Semantic Analysis decoded the meaning of words and their relationships. Discourse Analysis then zoomed out, connecting sentences into a coherent whole. Finally, Pragmatic Analysis added the crucial final layer, interpreting context and inferring the speaker’s true intent.

While modern end-to-end models like transformers have revolutionised the field, they do so by implicitly learning these very same principles at an unprecedented scale. A solid grasp of these five components is therefore not just an academic exercise. It is the blueprint that reveals how machines learn to read, the foundation upon which our digital communication is built, and the key to understanding the power and promise of the artificial intelligence that shapes our world. This is also why any forward-looking AI development company, mobile app development company, or web app development company integrates NLP at the core of its solutions: language is the bridge that makes technology truly human.
