{"id":108939,"date":"2026-05-04T16:32:30","date_gmt":"2026-05-04T11:02:30","guid":{"rendered":"https:\/\/www.guvi.in\/blog\/?p=108939"},"modified":"2026-05-04T16:32:31","modified_gmt":"2026-05-04T11:02:31","slug":"transformer-ai-a-beginners-guide","status":"publish","type":"post","link":"https:\/\/www.guvi.in\/blog\/transformer-ai-a-beginners-guide\/","title":{"rendered":"Transformer AI: A Beginner&#8217;s Guide to the Engine Behind Modern AI"},"content":{"rendered":"\n<p>Think about the last time you used ChatGPT, Google Translate, or even the autocomplete on your&nbsp; phone. Behind all of these tools is a powerful idea called the Transformer \u2014 a type of AI&nbsp; architecture that completely changed the way machines understand and generate language. Since&nbsp; it was introduced in a landmark 2017 research paper titled &#8220;Attention Is All You Need&#8221; by&nbsp; researchers at Google, the Transformer has become the foundation for almost every major AI&nbsp; language model in use today.&nbsp;<\/p>\n\n\n\n<p>But why was a new architecture even needed? And what exactly makes the Transformer so special?&nbsp; In this blog, we will walk through these questions step by step \u2014 covering what Transformers are,&nbsp; how attention works, what the encoder and decoder do, and why this single idea triggered the&nbsp; modern AI revolution.&nbsp;<\/p>\n\n\n\n<p><strong>Quick Answer<\/strong><\/p>\n\n\n\n<p>Transformer AI is a deep learning architecture that understands and generates data by analyzing all parts of the input at once using a mechanism called attention. Unlike older models, it processes information in parallel, captures long-range relationships, and powers modern AI systems like chatbots, translation tools, and large language models.&nbsp;<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Before Transformers: The Problem with RNNs&nbsp;<\/strong><\/h2>\n\n\n\n<p>To appreciate why the Transformer was such a breakthrough, we need to understand what came&nbsp; before it. Earlier AI language models used Recurrent Neural Networks (RNNs) and Long Short Term Memory networks (LSTMs). These models read text sequentially \u2014 one word at a time,&nbsp; from left to right, like reading a sentence slowly out loud.&nbsp;<\/p>\n\n\n\n<p>This approach had two major problems. First, it was slow. Since each word depended on the&nbsp; previous one, you could not process words in parallel \u2014 you had to wait. Second, these models&nbsp; struggled with long-range dependencies. Imagine reading a paragraph where the subject of the&nbsp; very first sentence only becomes relevant again in the last sentence. By the time the model got to&nbsp; the end, it had often &#8220;forgotten&#8221; the important detail from the beginning. This is called the&nbsp; vanishing gradient problem.&nbsp;<\/p>\n\n\n\n<p>Think of it like trying to pass a message along a chain of 100 people by whispering. By the time it&nbsp; reaches the end, the message is distorted or lost. The Transformer was designed to let every word&nbsp; talk directly to every other word \u2014 no long chain needed.&nbsp;&nbsp;<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table><tbody><tr><td><strong>Feature&nbsp;<\/strong><\/td><td><strong>RNN \/ LSTM&nbsp;<\/strong><\/td><td><strong>Transformer<\/strong><\/td><\/tr><tr><td><strong>Processing&nbsp;<\/strong><\/td><td>Sequential (word by word)&nbsp;<\/td><td>Parallel (all at once)<\/td><\/tr><tr><td><strong>Long-range context&nbsp;<\/strong><\/td><td>Often forgets early words&nbsp;<\/td><td>Captures full context<\/td><\/tr><tr><td><strong>Training speed&nbsp;<\/strong><\/td><td>Slow, hard to parallelize&nbsp;<\/td><td>Fast, GPU-friendly<\/td><\/tr><tr><td><strong>Scalability&nbsp;<\/strong><\/td><td>Limited by sequence length&nbsp;<\/td><td>Scales with data &amp; compute<\/td><\/tr><tr><td><strong>Popular models&nbsp;<\/strong><\/td><td>Older chatbots, early MT&nbsp;<\/td><td>GPT-4, BERT, Claude, T5<\/td><\/tr><\/tbody><\/table><figcaption class=\"wp-element-caption\"><em>Table 1: A comparison of RNN\/LSTM vs Transformer across key dimensions&nbsp;<\/em><\/figcaption><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>What is a Transformer in AI?&nbsp;<\/strong><\/h2>\n\n\n\n<p>A Transformer is a type of deep learning model \u2014 a mathematical system trained on enormous&nbsp; amounts of text data so it can understand and generate human language. Unlike RNNs, the&nbsp; Transformer does not read text in order. Instead, it looks at the entire input at once and figures out&nbsp; how every word relates to every other word simultaneously. This is made possible through its core&nbsp; innovation: the Attention Mechanism.&nbsp;<\/p>\n\n\n\n<p>Transformers are not limited to text either. Today, they are used in image recognition (Vision&nbsp; Transformers), audio processing, protein structure prediction, and even robotics. But their original&nbsp; home \u2014 and the place where they changed everything \u2014 is Natural Language Processing (NLP).&nbsp;<\/p>\n\n\n\n<p>Do check out the HCL GUVI <a href=\"https:\/\/www.guvi.in\/zen-class\/artificial-intelligence-and-machine-learning-course\/?utm_source=blog&amp;utm_medium=hyperlink&amp;utm_campaign=transformer-ai:-a-beginner's-guide-to-the-engine-behind-modern-ai\" target=\"_blank\" rel=\"noreferrer noopener\">Artificial Intelligence and Machine Learning course<\/a> if you want to turn your understanding of concepts like Transformers into real-world skills. It offers a structured, hands-on learning experience with projects, mentor support, and industry-relevant tools to help you become job-ready in AI and ML.\u00a0<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>The Attention Mechanism: The Heart of the Transformer&nbsp;<\/strong><\/h2>\n\n\n\n<p>If you only remember one concept from this blog, let it be this: Attention is the ability to decide&nbsp; which parts of the input to focus on when processing each word. This is exactly what we do as&nbsp; humans when we read \u2014 our brain does not pay equal attention to every word. Some words are&nbsp; more relevant to understanding a particular part of the sentence.&nbsp;<\/p>\n\n\n\n<p>Consider the sentence: &#8220;The animal did not cross the street because it was too tired.&#8221; What does&nbsp; &#8220;it&#8221; refer to \u2014 the animal or the street? Your brain immediately knows it is the animal, because&nbsp; you paid more attention to &#8220;animal&#8221; and &#8220;tired&#8221; when processing &#8220;it&#8221;. The attention mechanism&nbsp; teaches the model to do the same.&nbsp;<\/p>\n\n\n\n<p><strong>Query, Key, and Value \u2014 The Building Blocks&nbsp;<\/strong><\/p>\n\n\n\n<p>For every word in the input, the Transformer creates three vectors (think of them as smart numeric&nbsp; labels):&nbsp;<\/p>\n\n\n\n<p>\u2022 Query (Q): What information is this word looking for?&nbsp;<\/p>\n\n\n\n<p>\u2022 Key (K): What information does this word offer to others?&nbsp;<\/p>\n\n\n\n<p>\u2022 Value (V): What is the actual content this word contributes?&nbsp;<\/p>\n\n\n\n<p>The model then calculates an attention score by comparing the Query of one word with the Keys&nbsp; of all other words. Higher scores mean the model should pay more attention to that word. The&nbsp;<\/p>\n\n\n\n<p>scores are passed through a softmax function (which converts them into percentages adding up to&nbsp; 100%) and multiplied by the Values to produce a final weighted representation.&nbsp;<\/p>\n\n\n\n<p>Here is a helpful real-world analogy: Imagine you walk into a library and search for a book about&nbsp; machine learning. You have a query (what you want). Each book on the shelf has a key (its title&nbsp; and description) and a value (what it actually contains). You compare your query against every&nbsp; key, find the best match, and read its value. That is self-attention \u2014 done for every word,&nbsp; simultaneously, thousands of times during training.&nbsp;<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Multi-Head Attention: Multiple Perspectives at Once&nbsp;<\/strong><\/h2>\n\n\n\n<p>The Transformer does not run at attention just once. It runs it multiple times in parallel using what is&nbsp; called Multi-Head Attention. Each &#8220;head&#8221; is an independent attention layer that looks at the&nbsp; sentence from a different angle. One head might focus on grammatical relationships (subject-verb&nbsp; agreement), another might focus on semantic meaning, and another on pronoun references.&nbsp;<\/p>\n\n\n\n<p>After all heads finish, their outputs are concatenated (joined together) and passed through one more&nbsp; layer to produce a single, rich representation. The result is a model that can simultaneously&nbsp; understand grammar, meaning, and context \u2014 all from the same input.&nbsp;<\/p>\n\n\n\n<p>Think of it like getting opinions from multiple subject experts before making a decision. A doctor,&nbsp; a lawyer, and an engineer all read the same paragraph \u2014 together, they catch things no single&nbsp; person could. That is multi-head attention in action.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>The Transformer Architecture: Encoder and Decoder<\/strong>&nbsp;<\/h2>\n\n\n\n<p>The original Transformer model from the 2017 paper has two main components that work together:&nbsp; the Encoder and the Decoder. Modern models like BERT use only the Encoder, while models like&nbsp; GPT use only the Decoder. Let us look at what each does.&nbsp;<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>The Encoder \u2014 Understanding the Input&nbsp;<\/strong><\/h3>\n\n\n\n<p>The Encoder reads the input text and converts it into a rich internal representation that captures the&nbsp; meaning of every word in context. It does this through a stack of identical layers (typically 6 to 12&nbsp; layers in modern models), each containing:&nbsp;<\/p>\n\n\n\n<p>\u2022 A Multi-Head Self-Attention sublayer \u2014 so every word can attend to all other words \u2022 An Add &amp; Normalize step \u2014 which stabilizes training by combining the original input&nbsp; with the attention output&nbsp;<\/p>\n\n\n\n<p>\u2022 A Feed-Forward Network \u2014 two simple linear layers with a non-linear activation, applied&nbsp; independently to each word position&nbsp;<\/p>\n\n\n\n<p>\u2022 Another Add &amp; Normalize step&nbsp;<\/p>\n\n\n\n<p>By the time the input passes through all encoder layers, the model has built a deep contextual&nbsp; understanding of the input sentence. Models like BERT (Bidirectional Encoder Representations&nbsp; from Transformers) by Google use only this part and are excellent at tasks like question answering,&nbsp; text classification, and named entity recognition.&nbsp;<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>The Decoder \u2014 Generating the Output<\/strong><\/h3>\n\n\n\n<p>The Decoder takes the encoder&#8217;s rich representation and uses it to generate output text, one token&nbsp; at a time. It has a similar structure to the encoder but with one important addition: Masked Multi Head Self-Attention. This masking prevents the decoder from looking at future words it has not&nbsp; yet generated \u2014 which would be &#8220;cheating&#8221; during training.&nbsp;<\/p>\n\n\n\n<p>The decoder also has a special Cross-Attention layer where it directly attends to the encoder&#8217;s&nbsp; output, ensuring that what it generates is informed by the full input context. Models like GPT&nbsp; (Generative Pre-trained Transformer) by OpenAI use only the decoder and excel at creative text&nbsp; generation, coding assistance, and conversation.&nbsp;<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Positional Encoding: Giving Words a Sense of Order&nbsp;<\/strong><\/h2>\n\n\n\n<p>Here is a subtle but important problem. Since the Transformer processes all words simultaneously,&nbsp; it has no built-in sense of word order. But word order matters enormously \u2014 &#8220;Dog bites man&#8221; and&nbsp; &#8220;Man bites dog&#8221; contain the same words but carry completely different meanings!&nbsp;<\/p>\n\n\n\n<p>To solve this, Transformers use Positional Encoding \u2014 a special numeric signal added to each&nbsp; word&#8217;s representation that encodes its position in the sequence. It is like adding numbered labels&nbsp; to the words before feeding them in. The model learns to interpret these labels and uses them to&nbsp; understand where in the sentence each word sits, without slowing down the parallel processing.&nbsp;<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Feed-Forward Networks and Layer Normalization&nbsp;<\/strong><\/h2>\n\n\n\n<p>After each attention layer, every word&#8217;s representation passes through a small Feed-Forward&nbsp; Network (FFN). This is two linear transformations with a ReLU activation in between. The role&nbsp; of the FFN is to further process and transform the representation, adding more depth and non linearity so the model can learn complex patterns.&nbsp;<\/p>\n\n\n\n<p>Layer <a href=\"https:\/\/www.guvi.in\/blog\/guide-on-normalization-in-dbms\/\" target=\"_blank\" rel=\"noreferrer noopener\">Normalization<\/a> (the &#8220;Add &amp; Norm&#8221; step) is applied after both the attention and FFN\u00a0 sublayers. It works by adjusting the scale of the outputs to prevent values from becoming too large\u00a0 or too small during training \u2014 a common cause of instability. This simple trick makes\u00a0 Transformers much more stable and easier to train on large datasets.\u00a0<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Famous Models Built on the Transformer&nbsp;<\/strong><\/h2>\n\n\n\n<p>The Transformer architecture laid the groundwork for an entire generation of powerful AI models.&nbsp; Here are some of the most significant:&nbsp;<\/p>\n\n\n\n<p><strong>\u2022 GPT-4 (<\/strong><a href=\"https:\/\/www.guvi.in\/blog\/getting-started-with-openai-models\/\" target=\"_blank\" rel=\"noreferrer noopener\"><strong>OpenAI<\/strong><\/a><strong>)<\/strong> \u2014 Decoder-only model powering ChatGPT; capable of writing, coding,\u00a0 reasoning, and more\u00a0<\/p>\n\n\n\n<p><strong>\u2022 BERT (Google)<\/strong> \u2014 Encoder-only model that revolutionized Google Search and NLP&nbsp; benchmarks&nbsp;<\/p>\n\n\n\n<p><strong>\u2022 T5 (Google) <\/strong>\u2014 Encoder-Decoder model that treats every <a href=\"https:\/\/www.guvi.in\/blog\/must-know-nlp-hacks-for-beginners\/\" target=\"_blank\" rel=\"noreferrer noopener\">NLP<\/a> task as a text-to-text\u00a0 problem\u00a0<\/p>\n\n\n\n<p><strong>\u2022 <\/strong><a href=\"https:\/\/www.guvi.in\/blog\/how-to-use-code-llama\/\" target=\"_blank\" rel=\"noreferrer noopener\"><strong>LLaMA<\/strong><\/a><strong> (Meta)<\/strong> \u2014 Open-source Transformer model enabling research and\u00a0 experimentation worldwide\u00a0<\/p>\n\n\n\n<p>\u00a0\u00a0\u00a0\u00a0<strong>\u00a0\u00a0\u2022 <\/strong><a href=\"https:\/\/www.guvi.in\/blog\/anthropic-claude-opus-4-6\/\" target=\"_blank\" rel=\"noreferrer noopener\"><strong>Claude (Anthropic<\/strong><\/a><strong>)<\/strong> \u2014 Safety-focused conversational AI built on Transformer principles<\/p>\n\n\n\n<p><strong>\u2022<\/strong><a href=\"https:\/\/www.guvi.in\/blog\/anthropic-claude-opus-4-6\/\" target=\"_blank\" rel=\"noreferrer noopener\"><strong> Gemini <\/strong><\/a><strong>(Google DeepMind)<\/strong> \u2014 Multimodal Transformer handling text, images, audio, and\u00a0 video\u00a0<\/p>\n\n\n\n<p><strong>\u2022 AlphaFold 2 (DeepMind)<\/strong> \u2014 Uses Transformer-like attention to predict 3D protein&nbsp; structures&nbsp;<\/p>\n\n\n\n<p>Each of these models follows the core Transformer blueprint but differs in scale, training data,&nbsp; fine-tuning approach, and specific architectural choices. The unifying principle \u2014 attention \u2014 remains at the core of all of them.&nbsp;<\/p>\n\n\n\n<p>Do check out the HCL GUVI <a href=\"https:\/\/www.guvi.in\/mlp\/AI-ML-Email-Course?utm_source=blog&amp;utm_medium=hyperlink&amp;utm_campaign=transformer-ai:-a-beginner's-guide-to-the-engine-behind-modern-ai\" target=\"_blank\" rel=\"noreferrer noopener\">AI &amp; ML Email Course<\/a> if you want a quick and beginner-friendly way to understand AI . It\u2019s a 5-day program that covers core concepts, real-world use cases, and career insights through simple, actionable lessons, helping you build a clear roadmap to start your AI journey confidently.\u00a0<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Why Transformers Changed Everything&nbsp;<\/strong><\/h2>\n\n\n\n<p>The Transformer was not just an incremental improvement over previous models \u2014 it was a&nbsp; paradigm shift. Three properties made it transformative:&nbsp;<\/p>\n\n\n\n<p><strong>\u2022 Parallelism:<\/strong> Unlike RNNs, Transformers can process all words simultaneously, making&nbsp; them dramatically faster to train on modern GPU\/TPU hardware&nbsp;<\/p>\n\n\n\n<p><strong>\u2022 Scalability:<\/strong> The more data and compute you give a Transformer, the better it gets \u2014 a&nbsp; property called &#8220;scaling laws&#8221; that no previous architecture exhibited so cleanly <strong>\u2022 Transfer learning:<\/strong> A Transformer pre-trained on billions of words can be fine-tuned for a&nbsp; specific task (like medical Q&amp;A or legal document review) with relatively little additional&nbsp; data&nbsp;<\/p>\n\n\n\n<p>This combination unlocked what researchers call Foundation Models \u2014 large, general-purpose&nbsp; models that serve as the base for hundreds of specialized AI applications. We are now seeing&nbsp; Transformers used in healthcare for drug discovery, in law for contract analysis, in education for&nbsp; personalized tutoring, and in science for climate modeling.&nbsp;<\/p>\n\n\n\n<div style=\"background-color: #099f4e; border: 3px solid #110053; border-radius: 12px; padding: 18px 22px; color: #FFFFFF; font-size: 18px; font-family: Montserrat, Helvetica, sans-serif; line-height: 1.6; box-shadow: 0 4px 12px rgba(0, 0, 0, 0.15); max-width: 750px; margin: 22px auto;\">\n  <h3 style=\"margin-top: 0; font-size: 22px; font-weight: 700; color: #ffffff;\">\ud83d\udca1 Did You Know?<\/h3>\n  <ul style=\"padding-left: 20px; margin: 10px 0;\">\n    <li>The Transformer was introduced in 2017 in the paper \u201cAttention Is All You Need\u201d by researchers at Google, and it completely changed the direction of AI research.<\/li>\n    <li>Popular AI models like ChatGPT, BERT, and GPT-4 are all built using Transformer architecture.<\/li>\n    <li>Transformers are not limited to text\u2014they are also used in image processing, speech recognition, and even protein structure prediction like AlphaFold.<\/li>\n  <\/ul>\n<\/div>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Conclusion&nbsp;<\/strong><\/h2>\n\n\n\n<p>The Transformer is one of the most consequential inventions in the history of artificial intelligence.&nbsp; At its core, it is powered by one elegant idea: paying attention to the right things, in context, all at&nbsp; once. By processing words in parallel and using multi-head self-attention to understand&nbsp; relationships across any distance in a sentence, the Transformer architecture overcame the&nbsp; fundamental limitations of earlier sequential models.&nbsp;<\/p>\n\n\n\n<p>Understanding how Transformers work \u2014 the Query-Key-Value attention mechanism, the roles&nbsp; of the Encoder and Decoder, positional encoding, and multi-head attention \u2014 gives you a solid&nbsp; foundation for understanding virtually all modern AI systems. Whether you are a student, a&nbsp; developer, a business professional, or simply someone curious about the AI tools you use every&nbsp; day, knowing the basics of Transformer AI helps you become a more informed participant in the&nbsp; world being shaped by these technologies.&nbsp;<\/p>\n\n\n\n<p>The next time you get a helpful reply from a chatbot, a surprising translation, or a useful code&nbsp; suggestion, you can think: somewhere inside, attention is being paid \u2014 to every word, from every&nbsp; word, all at once.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>FAQs<\/strong><\/h2>\n\n\n<div id=\"rank-math-faq\" class=\"rank-math-block\">\n<div class=\"rank-math-list \">\n<div id=\"faq-question-1777553281336\" class=\"rank-math-list-item\">\n<h3 class=\"rank-math-question \"><strong>Q1: Do I need to know programming to understand Transformer AI?\u00a0<\/strong><\/h3>\n<div class=\"rank-math-answer \">\n\n<p>Not at all. The core concepts of Transformers \u2014 attention, encoders, decoders, positional encoding\u00a0 \u2014 are fully understandable without writing a single line of code. If you do want to experiment\u00a0 hands-on, Python libraries like HuggingFace Transformers make it very accessible even for\u00a0 beginners, with pre-trained models available for free.<\/p>\n\n<\/div>\n<\/div>\n<div id=\"faq-question-1777553301588\" class=\"rank-math-list-item\">\n<h3 class=\"rank-math-question \"><strong>Q2: Is the Transformer only useful for language tasks?<\/strong><\/h3>\n<div class=\"rank-math-answer \">\n\n<p>Originally, yes \u2014 but the architecture proved remarkably flexible. Today, Vision Transformers\u00a0 (ViTs) apply the same mechanism to image patches for computer vision. Transformers are also\u00a0 used in audio processing, video understanding, and protein structure prediction (AlphaFold). The\u00a0 self-attention mechanism turned out to be a general-purpose tool for learning patterns in any kind\u00a0 of sequential or structured data.<\/p>\n\n<\/div>\n<\/div>\n<div id=\"faq-question-1777553319476\" class=\"rank-math-list-item\">\n<h3 class=\"rank-math-question \"><strong>Q3: What is the difference between GPT and BERT?<\/strong><\/h3>\n<div class=\"rank-math-answer \">\n\n<p>Both are built on the Transformer, but they use different parts of it. BERT uses only the Encoder\u00a0 and reads text bidirectionally (left-to-right and right-to-left simultaneously), making it excellent at\u00a0 understanding and analyzing text \u2014 great for search engines, classification, and Q&amp;A. GPT uses\u00a0 only the Decoder and generates text left-to-right one token at a time, making it ideal for creative\u00a0 writing, coding, and conversation. Think of BERT as a reader and GPT as a writer \u2014 both trained\u00a0 on the same foundational architecture.<\/p>\n\n<\/div>\n<\/div>\n<\/div>\n<\/div>","protected":false},"excerpt":{"rendered":"<p>Think about the last time you used ChatGPT, Google Translate, or even the autocomplete on your&nbsp; phone. Behind all of these tools is a powerful idea called the Transformer \u2014 a type of AI&nbsp; architecture that completely changed the way machines understand and generate language. Since&nbsp; it was introduced in a landmark 2017 research paper [&hellip;]<\/p>\n","protected":false},"author":65,"featured_media":109512,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[933],"tags":[],"views":"27","authorinfo":{"name":"Jebasta","url":"https:\/\/www.guvi.in\/blog\/author\/jebasta\/"},"thumbnailURL":"https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2026\/04\/Transformer-AI-300x115.webp","jetpack_featured_media_url":"https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2026\/04\/Transformer-AI.webp","_links":{"self":[{"href":"https:\/\/www.guvi.in\/blog\/wp-json\/wp\/v2\/posts\/108939"}],"collection":[{"href":"https:\/\/www.guvi.in\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.guvi.in\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.guvi.in\/blog\/wp-json\/wp\/v2\/users\/65"}],"replies":[{"embeddable":true,"href":"https:\/\/www.guvi.in\/blog\/wp-json\/wp\/v2\/comments?post=108939"}],"version-history":[{"count":3,"href":"https:\/\/www.guvi.in\/blog\/wp-json\/wp\/v2\/posts\/108939\/revisions"}],"predecessor-version":[{"id":109517,"href":"https:\/\/www.guvi.in\/blog\/wp-json\/wp\/v2\/posts\/108939\/revisions\/109517"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.guvi.in\/blog\/wp-json\/wp\/v2\/media\/109512"}],"wp:attachment":[{"href":"https:\/\/www.guvi.in\/blog\/wp-json\/wp\/v2\/media?parent=108939"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.guvi.in\/blog\/wp-json\/wp\/v2\/categories?post=108939"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.guvi.in\/blog\/wp-json\/wp\/v2\/tags?post=108939"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}