{"id":88743,"date":"2025-10-04T17:56:57","date_gmt":"2025-10-04T12:26:57","guid":{"rendered":"https:\/\/www.guvi.in\/blog\/?p=88743"},"modified":"2026-02-20T16:36:16","modified_gmt":"2026-02-20T11:06:16","slug":"ai-foundation-models","status":"publish","type":"post","link":"https:\/\/www.guvi.in\/blog\/ai-foundation-models\/","title":{"rendered":"Understanding AI Foundation Models: Everything You Need to Know in 2026"},"content":{"rendered":"\n<p>Remember those sci-fi movies where the hero just talks to the computer? They\u2019d ask for a complex data analysis and, in a calm voice, the computer would instantly reply with the answer. They\u2019d sketch a rough idea, and the system would render a perfect 3D model. For decades, this was pure fantasy. But today, it isn\u2019t. That futuristic computer is no longer a special effect; it has become real, and it\u2019s powered by something called an AI Foundation Model. This isn&#8217;t just a better app or a smarter algorithm. It\u2019s a fundamental shift: the creation of a digital, all-purpose brain that\u2019s learning to see, write, and reason. And understanding how it works is the key to understanding the world that\u2019s already unfolding around us.<\/p>\n\n\n\n<p>In this blog, we\u2019ll discuss everything you need to know about AI Foundation Models: what they are, how they work, why they\u2019re such a big deal, and the profound challenges and opportunities they present for our future.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>What Are AI Foundation Models?<\/strong><\/h2>\n\n\n\n<p>A Foundation Model is a large-scale artificial intelligence model trained on a vast and diverse corpus of data (often encompassing text, code, images, and more) using self-supervised learning. 
Because it is trained on such a broad set of information, it develops a general, foundational understanding that can be adapted (or &#8220;fine-tuned&#8221;) for a wide range of downstream tasks, often without needing to be built from scratch for each new job.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"1200\" height=\"630\" src=\"https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2025\/10\/1-3-1.png\" alt=\"\" class=\"wp-image-91650\" srcset=\"https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2025\/10\/1-3-1.png 1200w, https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2025\/10\/1-3-1-300x158.png 300w, https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2025\/10\/1-3-1-768x403.png 768w, https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2025\/10\/1-3-1-150x79.png 150w\" sizes=\"(max-width: 1200px) 100vw, 1200px\" title=\"\"><\/figure>\n\n\n\n<p>Let&#8217;s break down the key parts of that definition:<\/p>\n\n\n\n<ul>\n<li><strong>Large-Scale:<\/strong> We&#8217;re talking about models with billions or even trillions of parameters (the internal variables the model adjusts during training). This immense size is what allows them to capture incredibly complex patterns.<\/li>\n\n\n\n<li><strong>Vast and Diverse Data:<\/strong> These models are trained on petabytes of data scraped from the internet, books, articles, Wikipedia, code repositories like GitHub, social media, and image databases. This is their &#8220;education.&#8221;<\/li>\n\n\n\n<li><strong>Self-Supervised Learning:<\/strong> This is the secret sauce. Instead of being trained on millions of painstakingly human-labeled examples (e.g., &#8220;this is a picture of a cat&#8221;), the model learns by finding patterns within the data itself. For text, it might be given a sentence with a word missing and learn to predict it. For an image, it might be shown a picture with a portion obscured and learn to reconstruct the missing part. 
Through trillions of these simple exercises, it builds a deep, internal representation of language, vision, and logic.<\/li>\n\n\n\n<li><strong>Adaptable (Fine-Tuned):<\/strong> This is the &#8220;foundation&#8221; part. Once this broad base model is built, it isn&#8217;t just for one thing. It can be specialized for specific applications with a relatively small amount of additional, task-specific data. The same model that learns general English from the web can be fine-tuned to serve as a legal contract reviewer, a customer service chatbot, or a creative writing partner.<\/li>\n<\/ul>\n\n\n\n<p>The most famous examples of AI Foundation Models are OpenAI&#8217;s GPT series (which powers ChatGPT), Google&#8217;s Gemini and PaLM models, and image generators like Stable Diffusion and DALL-E. They are the versatile engines driving the current AI revolution.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>How Do AI Foundation Models Work?<\/strong><\/h2>\n\n\n\n<p>AI Foundation Models operate via a mixture of extensive pre-training, fine-tuning, and prompting. The majority of these models operate on a transformer architecture, which makes them capable of understanding relationships between data points, whether words in a sentence, pixels in an image, or patterns in audio.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"1200\" height=\"630\" src=\"https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2025\/10\/2-3-1.png\" alt=\"\" class=\"wp-image-91651\" srcset=\"https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2025\/10\/2-3-1.png 1200w, https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2025\/10\/2-3-1-300x158.png 300w, https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2025\/10\/2-3-1-768x403.png 768w, https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2025\/10\/2-3-1-150x79.png 150w\" sizes=\"(max-width: 1200px) 100vw, 1200px\" title=\"\"><\/figure>\n\n\n\n<p>So, how do these work in practice? 
Let&#8217;s break it down step by step:<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>1. Pre-Training on Massive Datasets<\/strong><\/h3>\n\n\n\n<ul>\n<li>AI Foundation Models are trained on massive datasets, such as billions of words, images, or code examples.<\/li>\n\n\n\n<li>Instead of learning a discrete task, Foundation Models learn generalized patterns: grammar, facts, reasoning, and so on.<\/li>\n\n\n\n<li>For example, GPT learns to predict the next word in a sentence; by doing this at an immense scale, it develops a deep knowledge of language.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>2. Transformer Architecture and Attention<\/strong><\/h3>\n\n\n\n<ul>\n<li>The critical innovation behind foundation models is the transformer, which was introduced in 2017.<\/li>\n\n\n\n<li>The transformer uses a mechanism called attention to allow the model to focus on the most relevant aspects of the input data.<\/li>\n\n\n\n<li>Example: in the sentence \u201cThe cat sat on the mat because it was tired,\u201d attention lets the model work out that the word \u201cit\u201d refers to \u201cthe cat.\u201d<\/li>\n<\/ul>\n\n\n\n<p>This reliance on context makes foundation models more versatile and powerful than older AI systems.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>3. Fine-Tuning for Specific Tasks<\/strong><\/h3>\n\n\n\n<ul>\n<li>After pre-training, foundation models can be fine-tuned with smaller, specialized datasets.<\/li>\n\n\n\n<li>For example:\n<ul>\n<li>A general language model can be fine-tuned for medical diagnosis using healthcare data.<\/li>\n\n\n\n<li>A vision model can be fine-tuned for self-driving cars using road images.<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n\n\n\n<p>This process saves time and resources because the heavy lifting was already done during pre-training.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>4. 
<\/strong><a href=\"https:\/\/www.guvi.in\/blog\/artificial-intelligence-llms-and-prompting\/\" target=\"_blank\" rel=\"noreferrer noopener\"><strong>Prompting <\/strong><\/a><strong>and In-Context Learning<\/strong><\/h3>\n\n\n\n<ul>\n<li>One of the most exciting features of AI Foundation Models is that they don\u2019t always need retraining.<\/li>\n\n\n\n<li>Instead, you can simply give them a prompt (instructions in natural language).<\/li>\n\n\n\n<li>Example: \u201cWrite a short poem about space exploration.\u201d The model instantly generates a poem without needing task-specific training.<\/li>\n\n\n\n<li>This ability to perform zero-shot (no examples given) or few-shot learning (a few examples given) makes them highly flexible.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>5. Scaling Up = Better Performance<\/strong><\/h3>\n\n\n\n<ul>\n<li>Research shows that as you increase the size of models (parameters) and training data, performance improves significantly.<\/li>\n\n\n\n<li>This is known as the scaling law of AI.<\/li>\n\n\n\n<li>That\u2019s why models like <a href=\"https:\/\/www.guvi.in\/blog\/chatgpt-3-5-vs-4-0\/\" target=\"_blank\" rel=\"noreferrer noopener\">GPT-4<\/a>, PaLM, and Gemini are much more capable than their earlier versions.<\/li>\n<\/ul>\n\n\n\n<p><em>Curious about how AI Foundation Models really work? Start your journey with HCL GUVI\u2019s Free 5-Day<\/em><a href=\"https:\/\/www.guvi.in\/mlp\/AI-ML-Email-Course?utm_source=blog&amp;utm_medium=hyperlink&amp;utm_campaign=AI+Foundation+Model\" target=\"_blank\" rel=\"noreferrer noopener\"><em> AI &amp; ML Email Course<\/em><\/a><em> and get practical lessons straight in your inbox.<\/em><\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Types of Foundation Models<\/strong><\/h2>\n\n\n\n<p>While the term &#8220;Foundation Model&#8221; often brings text generators to mind, the concept extends across several domains. 
They can be categorized by their primary input and output modalities.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>1. Language Foundation Models<\/strong><\/h3>\n\n\n\n<p>These are the most well-known AI Foundation Models. Trained primarily on text data, they excel at understanding and generating human language.<\/p>\n\n\n\n<p><strong>Examples:<\/strong> GPT-4, <a href=\"http:\/\/guvi.in\/blog\/what-is-google-gemini\/\" target=\"_blank\" rel=\"noreferrer noopener\">Gemini<\/a>, Claude, LLaMA.<\/p>\n\n\n\n<p><strong>Capabilities:<\/strong> Writing essays and emails, translating languages, answering questions, summarizing long documents, writing and debugging code, and engaging in open-ended conversation.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>2. Vision Foundation Models<\/strong><\/h3>\n\n\n\n<p>Trained on massive datasets of images and their associated text captions, these models understand the visual world.<\/p>\n\n\n\n<p><strong>Examples:<\/strong> <a href=\"https:\/\/www.guvi.in\/blog\/chatgpt-dall-e-and-generative-ai\/\" target=\"_blank\" rel=\"noreferrer noopener\">DALL-E 3<\/a>, Midjourney, Stable Diffusion, SAM (Segment Anything Model).<\/p>\n\n\n\n<p><strong>Capabilities:<\/strong> Generating realistic images from text prompts (text-to-image), editing existing images based on text instructions, identifying and segmenting objects within images, and classifying visual content.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"1200\" height=\"630\" src=\"https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2025\/10\/3-3-1.png\" alt=\"\" class=\"wp-image-91652\" srcset=\"https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2025\/10\/3-3-1.png 1200w, https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2025\/10\/3-3-1-300x158.png 300w, https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2025\/10\/3-3-1-768x403.png 768w, https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2025\/10\/3-3-1-150x79.png 150w\" 
sizes=\"(max-width: 1200px) 100vw, 1200px\" title=\"\"><\/figure>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>3. Multimodal Foundation Models<\/strong><\/h3>\n\n\n\n<p>These models are trained on multiple types of data simultaneously: text, images, audio, and sometimes even video. This allows them to understand and make connections across different modalities.<\/p>\n\n\n\n<p><strong>Examples:<\/strong> GPT-4V (which can see and analyze images), Gemini (designed from the ground up to be natively multimodal).<\/p>\n\n\n\n<p><strong>Capabilities:<\/strong> Answering questions about an image (&#8220;What&#8217;s funny about this meme?&#8221;), generating an image from a complex textual description, analyzing a graph and writing a summary about it, or even generating a video from a text script (a rapidly advancing field).<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>4. Scientific and Code Foundation Models<\/strong><\/h3>\n\n\n\n<p>Some AI Foundation Models are trained on highly specialized data to power scientific discovery and software development.<\/p>\n\n\n\n<p><strong>Examples:<\/strong> AlphaFold (for predicting protein structures), Codex (the model that powers <a href=\"https:\/\/www.guvi.in\/blog\/guide-on-ai-agents-mcps-and-github-copilot\/\" target=\"_blank\" rel=\"noreferrer noopener\">GitHub Copilot<\/a>), AlphaMissense (for classifying genetic mutations).<\/p>\n\n\n\n<p><strong>Capabilities:<\/strong> Predicting complex 3D structures of proteins, suggesting and auto-completing code, explaining scientific papers, and accelerating drug discovery.<\/p>\n\n\n\n<div style=\"background-color: #099f4e; border: 3px solid #110053; border-radius: 12px; padding: 18px 22px; color: #FFFFFF; font-size: 18px; font-family: Montserrat, Helvetica, sans-serif; line-height: 1.6; box-shadow: 0 4px 12px rgba(0, 0, 0, 0.15); max-width: 750px; margin: 20px auto;\">\n  <ul style=\"margin: 0; padding-left: 22px;\">\n    <li>The term <b>\u201cFoundation Model\u201d<\/b> was first coined by Stanford researchers in 2021 \u2014 and 
within just three years, it became the backbone of tools like <b>ChatGPT<\/b> and <b>Gemini<\/b>!<\/li>\n    <li>Some AI foundation models are trained on datasets larger than the <b>entire Wikipedia multiplied by thousands<\/b>!<\/li>\n    <li>Models like <b>GPT-4<\/b> can understand text, images, and even humor \u2014 yes, they can actually <b>\u201cget the joke\u201d!<\/b><\/li>\n  <\/ul>\n<\/div>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Key Characteristics of AI Foundation Models<\/strong><\/h2>\n\n\n\n<p>AI Foundation Models are noteworthy due to a few key features.<\/p>\n\n\n\n<p><strong>Generalization:<\/strong> AI Foundation models differ from <a href=\"https:\/\/www.guvi.in\/blog\/generative-ai-vs-traditional-ai\/\" target=\"_blank\" rel=\"noreferrer noopener\">traditional AI<\/a> systems because they are not designed for a single task. A single foundation model can summarize a text, translate a language, and write code.<\/p>\n\n\n\n<p><strong>Scalability:<\/strong> AI Foundation models improve as their size and the amount of training data increase. 
Generative pre-trained transformer models such as <a href=\"https:\/\/openai.com\/index\/gpt-4-research\/\" target=\"_blank\" rel=\"noreferrer noopener\">GPT-4<\/a> often display new &#8220;emergent abilities&#8221; not observed in smaller models.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"1200\" height=\"630\" src=\"https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2025\/10\/4-1-1.png\" alt=\"\" class=\"wp-image-91653\" srcset=\"https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2025\/10\/4-1-1.png 1200w, https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2025\/10\/4-1-1-300x158.png 300w, https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2025\/10\/4-1-1-768x403.png 768w, https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2025\/10\/4-1-1-150x79.png 150w\" sizes=\"(max-width: 1200px) 100vw, 1200px\" title=\"\"><\/figure>\n\n\n\n<p><strong>Adaptability:<\/strong> AI Foundation models can be fine-tuned for specific industries (e.g., <a href=\"https:\/\/www.guvi.in\/blog\/ai-in-healthcare-applications\/\" target=\"_blank\" rel=\"noreferrer noopener\">healthcare<\/a>, finance) with little data, or guided with prompting strategies to achieve results without any fine-tuning at all.<\/p>\n\n\n\n<p><strong>Multimodality:<\/strong> Many AI foundation models are also multimodal, which allows them to work across text, images, audio, and video. 
This leads to richer applications, such as captioning images or working with documents that contain both text and images.<\/p>\n\n\n\n<p><strong>Zero-shot &amp; Few-shot Learning:<\/strong> Foundation models can perform new tasks without any prior training (zero-shot) or with only a few examples (few-shot), which makes them both flexible and efficient.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Challenges and Limitations<\/strong><\/h2>\n\n\n\n<p>Despite their power, AI Foundation Models come with challenges:<\/p>\n\n\n\n<ul>\n<li><strong>Bias and Fairness:<\/strong> Models inherit biases from training data, which can lead to unfair outcomes.<\/li>\n\n\n\n<li><strong>Compute and Energy Costs:<\/strong> Training large models requires massive computational resources.<\/li>\n\n\n\n<li><strong>Lack of Interpretability:<\/strong> Foundation models often function as \u201cblack boxes.\u201d<\/li>\n\n\n\n<li><strong>Security Risks:<\/strong> Potential for misuse, such as generating misinformation or malicious code.<\/li>\n\n\n\n<li><strong>Data Privacy:<\/strong> Using large datasets raises privacy and copyright issues.<\/li>\n<\/ul>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"1200\" height=\"630\" src=\"https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2025\/10\/5-2-1.png\" alt=\"\" class=\"wp-image-91654\" srcset=\"https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2025\/10\/5-2-1.png 1200w, https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2025\/10\/5-2-1-300x158.png 300w, https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2025\/10\/5-2-1-768x403.png 768w, https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2025\/10\/5-2-1-150x79.png 150w\" sizes=\"(max-width: 1200px) 100vw, 1200px\" title=\"\"><\/figure>\n\n\n\n<p>Addressing these challenges is crucial for building safe and responsible AI systems.<\/p>\n\n\n\n<p><em>Curious about AI Foundation Models and how they\u2019re transforming the world? 
Learn hands-on with HCL GUVI\u2019s IITM Pravartak &amp; Intel Certified <\/em><a href=\"https:\/\/www.guvi.in\/mlp\/artificial-intelligence-and-machine-learning\/?utm_source=blog&amp;utm_medium=hyperlink&amp;utm_campaign=AI+Foundation+Models\" target=\"_blank\" rel=\"noreferrer noopener\"><em>AI &amp; ML course<\/em><\/a><em>, designed to make you job-ready. From beginner-friendly projects to advanced concepts, you\u2019ll gain the skills to stand out in the fast-growing AI job market.<\/em><\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Wrapping it up\u2026<\/strong><\/h2>\n\n\n\n<p>AI Foundation Models represent a tectonic shift in our relationship with technology. They are not merely tools but partners and amplifiers of human intellect and creativity. They hold a mirror to our own world, reflecting both our collective knowledge and our deep-seated flaws.<\/p>\n\n\n\n<p>Understanding what they are, how they work, and the immense potential and pitfalls they carry is no longer a technical exercise; it is an essential form of modern literacy. As this technology continues to evolve at a staggering rate, our role as a society is to guide its development with wisdom, foresight, and a steadfast commitment to human-centric values. The foundation has been poured; it is now up to us to build a future upon it that is equitable, safe, and beneficial for all.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">FAQs<\/h2>\n\n\n<div id=\"rank-math-faq\" class=\"rank-math-block\">\n<div class=\"rank-math-list \">\n<div id=\"faq-question-1759568054916\" class=\"rank-math-list-item\">\n<h3 class=\"rank-math-question \"><strong>1. 
How are AI Foundation Models different from traditional AI?<\/strong><\/h3>\n<div class=\"rank-math-answer \">\n\n<p>Traditional AI models are usually built for a single task, while foundation models can adapt to many tasks from the same pre-trained core.<\/p>\n\n<\/div>\n<\/div>\n<div id=\"faq-question-1759568078282\" class=\"rank-math-list-item\">\n<h3 class=\"rank-math-question \"><strong>2. Can small businesses use these models?<\/strong><\/h3>\n<div class=\"rank-math-answer \">\n\n<p>Yes. Most pre-trained models are available through APIs and cloud services, so they can be used without a large budget.<\/p>\n\n<\/div>\n<\/div>\n<div id=\"faq-question-1759568100487\" class=\"rank-math-list-item\">\n<h3 class=\"rank-math-question \"><strong>3. Is fine-tuning necessary every single time?<\/strong><\/h3>\n<div class=\"rank-math-answer \">\n\n<p>No. Many tasks can be accomplished with prompts alone, but fine-tuning is beneficial, and sometimes essential, for industry-specific tasks.<\/p>\n\n<\/div>\n<\/div>\n<div id=\"faq-question-1759568126505\" class=\"rank-math-list-item\">\n<h3 class=\"rank-math-question \"><strong>4. What types of skills will someone need to work with foundation models?<\/strong><\/h3>\n<div class=\"rank-math-answer \">\n\n<p>A basic knowledge of ML helps, but even non-technical users can get started with prompt engineering and cloud AI tools.<\/p>\n\n<\/div>\n<\/div>\n<\/div>\n<\/div>","protected":false},"excerpt":{"rendered":"<p>Remember those sci-fi movies, where the hero just talks to the computer? They\u2019d ask for complex data analysis, and in a calm voice, they would get an instant reply with the answer from the computer. They\u2019d sketch a rough idea, and the system would render a perfect 3D model. For decades, this was pure fantasy. 
[&hellip;]<\/p>\n","protected":false},"author":63,"featured_media":91646,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[933],"tags":[],"views":"2299","authorinfo":{"name":"Vishalini Devarajan","url":"https:\/\/www.guvi.in\/blog\/author\/vishalini\/"},"thumbnailURL":"https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2025\/10\/Feature-image-10-300x116.png","jetpack_featured_media_url":"https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2025\/10\/Feature-image-10.png","_links":{"self":[{"href":"https:\/\/www.guvi.in\/blog\/wp-json\/wp\/v2\/posts\/88743"}],"collection":[{"href":"https:\/\/www.guvi.in\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.guvi.in\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.guvi.in\/blog\/wp-json\/wp\/v2\/users\/63"}],"replies":[{"embeddable":true,"href":"https:\/\/www.guvi.in\/blog\/wp-json\/wp\/v2\/comments?post=88743"}],"version-history":[{"count":7,"href":"https:\/\/www.guvi.in\/blog\/wp-json\/wp\/v2\/posts\/88743\/revisions"}],"predecessor-version":[{"id":101848,"href":"https:\/\/www.guvi.in\/blog\/wp-json\/wp\/v2\/posts\/88743\/revisions\/101848"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.guvi.in\/blog\/wp-json\/wp\/v2\/media\/91646"}],"wp:attachment":[{"href":"https:\/\/www.guvi.in\/blog\/wp-json\/wp\/v2\/media?parent=88743"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.guvi.in\/blog\/wp-json\/wp\/v2\/categories?post=88743"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.guvi.in\/blog\/wp-json\/wp\/v2\/tags?post=88743"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}