{"id":81369,"date":"2025-06-13T10:37:08","date_gmt":"2025-06-13T05:07:08","guid":{"rendered":"https:\/\/www.guvi.in\/blog\/?p=81369"},"modified":"2025-10-08T18:00:06","modified_gmt":"2025-10-08T12:30:06","slug":"association-rule-in-data-science","status":"publish","type":"post","link":"https:\/\/www.guvi.in\/blog\/association-rule-in-data-science\/","title":{"rendered":"Association Rule in Data Science: A Complete Guide"},"content":{"rendered":"\n<p>Have you ever wondered how online stores seem to \u201cread your mind\u201d by recommending the exact item you didn\u2019t know you needed? Like when it suggests peanut butter as you add bread to your cart? It\u2019s data science at work.&nbsp;<\/p>\n\n\n\n<p>Specifically, it\u2019s the power of <strong>association rule in data science<\/strong>, a technique used to uncover relationships between items in large datasets. Whether it\u2019s market basket analysis in retail, product recommendations in e-commerce, or patient symptom analysis in healthcare, association rules help data scientists make sense of co-occurrence patterns.<\/p>\n\n\n\n<p>In this article, we\u2019ll explore how association rules work, how to evaluate their usefulness, and where they apply in real-world scenarios. 
So, without further ado, let\u2019s get started!<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>What is the Association Rule in Data Science?<\/strong><\/h2>\n\n\n\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" width=\"1200\" height=\"630\" src=\"https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2025\/06\/2@2x-1-1200x630.webp\" alt=\"What is the Association Rule in Data Science?\" class=\"wp-image-81711\" srcset=\"https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2025\/06\/2@2x-1-1200x630.webp 1200w, https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2025\/06\/2@2x-1-300x158.webp 300w, https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2025\/06\/2@2x-1-768x403.webp 768w, https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2025\/06\/2@2x-1-1536x806.webp 1536w, https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2025\/06\/2@2x-1-2048x1075.webp 2048w, https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2025\/06\/2@2x-1-150x79.webp 150w\" sizes=\"(max-width: 1200px) 100vw, 1200px\" title=\"\"><\/figure>\n\n\n\n<p>Association rule in <a href=\"https:\/\/www.guvi.in\/blog\/what-is-data-science\/\" target=\"_blank\" rel=\"noreferrer noopener\">data science<\/a> is a rule-based <a href=\"https:\/\/www.guvi.in\/blog\/what-is-data-mining\/\" target=\"_blank\" rel=\"noreferrer noopener\">data mining<\/a> technique for discovering interesting relationships between variables in large datasets. It is best known for market basket analysis, where retailers look for items that are frequently bought together.&nbsp;<\/p>\n\n\n\n<p>At its core, association rule in data science works in two main steps: (1) find <em>frequent itemsets<\/em> that meet a minimum support threshold, and (2) generate \u201cif-then\u201d rules from those itemsets that meet a minimum confidence threshold. 
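These two steps can be sketched in plain Python. This is a minimal brute-force illustration, not an optimized miner; the baskets, item names, and thresholds below are made up for the example:

```python
from itertools import combinations

# Toy transaction data (hypothetical baskets, invented for illustration)
transactions = [
    {"Milk", "Bread", "Butter"},
    {"Milk", "Bread"},
    {"Bread", "Butter"},
    {"Milk", "Bread", "Butter"},
    {"Milk"},
]

MIN_SUPPORT = 0.4      # itemset must appear in >= 40% of baskets
MIN_CONFIDENCE = 0.6   # rule must hold in >= 60% of antecedent baskets

def support(itemset):
    """Fraction of transactions containing every item in `itemset`."""
    return sum(itemset <= t for t in transactions) / len(transactions)

# Step 1: find frequent itemsets (brute force over all item combinations)
items = sorted(set().union(*transactions))
frequent = [
    frozenset(c)
    for k in range(1, len(items) + 1)
    for c in combinations(items, k)
    if support(frozenset(c)) >= MIN_SUPPORT
]

# Step 2: generate "if-then" rules X => Y from each frequent itemset,
# keeping only rules whose confidence meets the threshold
rules = []
for itemset in frequent:
    for r in range(1, len(itemset)):
        for antecedent in map(frozenset, combinations(itemset, r)):
            consequent = itemset - antecedent
            conf = support(itemset) / support(antecedent)
            if conf >= MIN_CONFIDENCE:
                rules.append((set(antecedent), set(consequent), conf))

for x, y, conf in rules:
    print(f"{x} => {y}  (confidence {conf:.0%})")
```

Real miners (Apriori, Eclat, discussed below) avoid this exhaustive enumeration by pruning, but the two-step structure is the same.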
For example, given a transaction dataset, we might discover the frequent itemset {Milk, Bread, Butter} and then form the rule {Milk, Bread} \u21d2 {Butter}, meaning \u201ccustomers buying milk and bread often also buy butter.\u201d&nbsp;<\/p>\n\n\n\n<p>Each such rule is scored by measures like <em>support<\/em>, <em>confidence<\/em>, and <em>lift<\/em> (defined below) to gauge its strength and usefulness.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Key Concepts of <\/strong><strong>Association Rule in Data Science<\/strong><\/h2>\n\n\n\n<p>In association analysis, we treat each record (e.g., a shopping basket) as a transaction, which is a set of items. An itemset is any subset of items. An itemset is called <em>frequent<\/em> if it appears in at least a specified fraction (the minimum support) of all transactions.&nbsp;<\/p>\n\n\n\n<p>For example, if \u201cMilk\u201d appears in 30 out of 100 transactions, its support is 30%. If our minimum support is 20%, \u201cMilk\u201d would be considered a frequent 1-itemset.<\/p>\n\n\n\n<p>After identifying frequent itemsets, we form association rules of the form X \u21d2 Y, where X and Y are disjoint itemsets. This rule is interpreted as \u201cif a transaction contains X, it often contains Y too.\u201d In this rule, X is the <em>antecedent<\/em> (left-hand side) and Y is the <em>consequent<\/em> (right-hand side).<\/p>\n\n\n\n<p>For example, a rule could be {Milk, Bread} \u21d2 {Butter}. 
The task of association rule learning is to find all such high-quality rules that have sufficient support and confidence.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Support, Confidence, and Lift: Measures of Association Rule<\/strong><\/h2>\n\n\n\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" width=\"1200\" height=\"630\" src=\"https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2025\/06\/3@2x-1-1200x630.webp\" alt=\"Support, Confidence, and Lift: Measures of Association Rule\" class=\"wp-image-81712\" srcset=\"https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2025\/06\/3@2x-1-1200x630.webp 1200w, https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2025\/06\/3@2x-1-300x158.webp 300w, https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2025\/06\/3@2x-1-768x403.webp 768w, https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2025\/06\/3@2x-1-1536x806.webp 1536w, https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2025\/06\/3@2x-1-2048x1075.webp 2048w, https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2025\/06\/3@2x-1-150x79.webp 150w\" sizes=\"(max-width: 1200px) 100vw, 1200px\" title=\"\"><\/figure>\n\n\n\n<p>Three key metrics measure the quality of an association rule:<\/p>\n\n\n\n<p><strong>Support <\/strong>of an itemset X is the fraction of all transactions that contain X. Equivalently,<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"394\" height=\"96\" src=\"https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2025\/06\/image-4.png\" alt=\"Support, Confidence, and Lift: Measures of Association Rule\" class=\"wp-image-81371\" srcset=\"https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2025\/06\/image-4.png 394w, https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2025\/06\/image-4-300x73.png 300w, https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2025\/06\/image-4-150x37.png 150w\" sizes=\"(max-width: 394px) 100vw, 394px\" title=\"\"><\/figure>\n\n\n\n<p><br>Higher support means X occurs frequently. 
For example, if 25 out of 200 sales include <em>Milk<\/em>, the support of {Milk} is 12.5%. We often use a minimum support threshold (say 5% or 10%) to focus on itemsets that are common enough to be interesting.<br><\/p>\n\n\n\n<p><strong>Confidence<\/strong> of a rule X\u21d2Y measures how often items in Y appear among transactions that contain X. It is defined as<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"313\" height=\"81\" src=\"https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2025\/06\/image-5.png\" alt=\"Confidence of a rule X\u21d2Y measures\" class=\"wp-image-81372\" srcset=\"https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2025\/06\/image-5.png 313w, https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2025\/06\/image-5-300x78.png 300w, https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2025\/06\/image-5-150x39.png 150w\" sizes=\"(max-width: 313px) 100vw, 313px\" title=\"\"><\/figure>\n\n\n\n<p>This is the conditional probability P(Y | X). For instance, if 15 transactions contain {Milk,Bread} and 12 of those also include <em>Butter<\/em>, then confidence({Milk,Bread}\u21d2{Butter}) = 12\/15 = 80%. We typically require rules to have a confidence above a certain threshold (e.g., 60% or 70%) to be considered strong.<br><\/p>\n\n\n\n<p><strong>Lift<\/strong> compares the observed co-occurrence of X and Y with what would be expected if they were statistically independent. 
Mathematically,<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"275\" height=\"71\" src=\"https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2025\/06\/image-6.png\" alt=\"Lift compares the observed co-occurrence of X and Y\" class=\"wp-image-81373\" srcset=\"https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2025\/06\/image-6.png 275w, https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2025\/06\/image-6-150x39.png 150w\" sizes=\"(max-width: 275px) 100vw, 275px\" title=\"\"><\/figure>\n\n\n\n<p>A lift of 1.0 means X and Y are independent. A lift greater than 1 indicates a <em>positive<\/em> association (X and Y occur together more often than chance), while a lift below 1 indicates a <em>negative<\/em> one. For example, if support(X) = 0.4, support(Y) = 0.2, and support(X\u222aY) = 0.12, then lift = 0.12\/(0.4*0.2) = 1.5. This means X and Y occur together 1.5 times more often than they would if they were independent.<\/p>\n\n\n\n<p>Together, support, confidence, and lift help us gauge rule strength: we keep rules with sufficient support, a high confidence (reliable rules), and often with lift &gt; 1 (rules that are truly interesting beyond random chance).<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Association Rule Mining Algorithms<\/strong><\/h2>\n\n\n\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" width=\"1200\" height=\"630\" src=\"https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2025\/06\/4@2x-1-1200x630.webp\" alt=\"Association Rule Mining Algorithms\" class=\"wp-image-81713\" srcset=\"https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2025\/06\/4@2x-1-1200x630.webp 1200w, https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2025\/06\/4@2x-1-300x158.webp 300w, https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2025\/06\/4@2x-1-768x403.webp 768w, https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2025\/06\/4@2x-1-1536x806.webp 1536w, https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2025\/06\/4@2x-1-2048x1075.webp 2048w, https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2025\/06\/4@2x-1-150x79.webp
150w\" sizes=\"(max-width: 1200px) 100vw, 1200px\" title=\"\"><\/figure>\n\n\n\n<p>The process of finding association rules involves two main tasks: mining frequent itemsets and then generating rules from them. Two classic algorithms for this are Apriori and Eclat.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Apriori Algorithm<\/strong><\/h3>\n\n\n\n<p>The Apriori algorithm is the most famous method for association rule mining. Apriori uses a bottom-up, breadth-first approach that relies on the \u201cApriori property\u201d: <em>if an itemset is frequent, then all of its subsets must also be frequent<\/em>.&nbsp;<\/p>\n\n\n\n<p>Conversely, if an itemset is infrequent (below min support), none of its supersets can be frequent. This property allows the algorithm to prune the search space aggressively.<\/p>\n\n\n\n<p>The Apriori process works in iterations:<\/p>\n\n\n\n<ol>\n<li><strong>Generate frequent 1-itemsets:<\/strong> Scan the dataset and count the support of each item. Keep only those items whose support meets the minimum threshold.<br><\/li>\n\n\n\n<li><strong>Generate candidate 2-itemsets: <\/strong>Form all possible pairs from the frequent 1-itemsets, then scan the data to count their support. Discard any pair whose support is below the threshold.<br><\/li>\n\n\n\n<li><strong>Iterate:<\/strong> Use the Apriori property to generate candidate 3-itemsets from the frequent 2-itemsets, prune infrequent candidates, and so on. Each iteration k generates frequent k-itemsets by combining frequent (k\u20131)-itemsets. The process stops when no new frequent itemsets can be found.<br><\/li>\n\n\n\n<li><strong>Generate association rules:<\/strong> For each frequent itemset L and each non-empty subset X of L, form the rule X \u21d2 (L\u2013X) and compute its confidence. 
Keep rules whose confidence (and optionally lift) meets the given thresholds.<\/li>\n<\/ol>\n\n\n\n<p>To put it simply, Apriori repeatedly scans the database: first to find frequent single items, then frequent pairs, triples, etc., pruning at each step using the Apriori property. While simple and effective for moderate data, Apriori can become expensive on very large datasets because of many database passes and candidate generation.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Eclat Algorithm<\/strong><\/h3>\n\n\n\n<p><a href=\"https:\/\/iopscience.iop.org\/article\/10.1088\/1755-1315\/469\/1\/012036\/pdf\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">Eclat (Equivalence Class Transformation)<\/a> is an alternative frequent itemset mining algorithm. Unlike Apriori, which scans the database repeatedly in a horizontal layout, Eclat works in a vertical format.&nbsp;<\/p>\n\n\n\n<p>In Eclat, each item is associated with a list of transaction IDs (TIDs) where it appears. The algorithm finds frequent itemsets by taking intersections of these TID lists. Key differences are:<\/p>\n\n\n\n<ul>\n<li>Apriori uses breadth-first search (BFS) on a horizontal dataset, repeatedly generating larger itemsets and scanning the whole database each time.<br><\/li>\n\n\n\n<li>Eclat uses depth-first search (DFS) on vertical representations, intersecting TID lists of smaller itemsets to quickly compute support of larger ones.<br><\/li>\n<\/ul>\n\n\n\n<p>Because Eclat often requires fewer scans, it can be more memory-efficient and faster for large data. 
In practice, Eclat is a powerful alternative to Apriori when the dataset is large but its TID lists fit in memory.<\/p>\n\n\n\n<p>If you want to read more about how important association rules are in Data Science, consider reading HCL GUVI\u2019s Free Ebook: <a href=\"https:\/\/www.guvi.in\/mlp\/data-science-ebook?utm_source=blog&amp;utm_medium=hyperlink&amp;utm_campaign=association-rule-in-data-science\" target=\"_blank\" rel=\"noreferrer noopener\">Master the Art of Data Science &#8211; A Complete Guide<\/a>, which covers the key <a href=\"https:\/\/www.guvi.in\/blog\/data-science-concepts\/\" target=\"_blank\" rel=\"noreferrer noopener\">concepts of Data Science,<\/a> including foundations like statistics, probability, and linear algebra, along with essential tools.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Applications of Association Rule in Data Science<\/strong><\/h2>\n\n\n\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" width=\"1200\" height=\"630\" src=\"https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2025\/06\/5@2x-1-1200x630.webp\" alt=\"Applications of Association Rule in Data Science\" class=\"wp-image-81715\" srcset=\"https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2025\/06\/5@2x-1-1200x630.webp 1200w, https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2025\/06\/5@2x-1-300x158.webp 300w, https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2025\/06\/5@2x-1-768x403.webp 768w, https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2025\/06\/5@2x-1-1536x806.webp 1536w, https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2025\/06\/5@2x-1-2048x1075.webp 2048w, https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2025\/06\/5@2x-1-150x79.webp 150w\" sizes=\"(max-width: 1200px) 100vw, 1200px\" title=\"\"><\/figure>\n\n\n\n<p>Association rule learning has many real-world applications, especially in retail and recommendation systems:<\/p>\n\n\n\n<ul>\n<li><strong>Market Basket Analysis:<\/strong> Retailers analyze 
transaction data to see which products are often bought together. For example, supermarkets might discover that {Bread, Butter} \u21d2 {Jam}. Such insights can drive promotions (e.g., bundle offers) and personalized recommendations.<br><\/li>\n\n\n\n<li><strong>Recommendation Engines:<\/strong> E-commerce sites (like Amazon) use association rules to suggest items. For example, a shopper who adds \u201cformal shoes\u201d to the cart may see a recommendation, \u201ccustomers who bought these shoes also bought socks.\u201d<br><\/li>\n\n\n\n<li><strong>Web Usage Mining: <\/strong>Association rules can find pages or links that are frequently visited together, helping to structure website navigation or target content.<br><\/li>\n\n\n\n<li><strong>Healthcare and Bioinformatics: <\/strong>Rules can uncover associations between medical symptoms, diagnoses, or genetic markers. For example, an association rule might reveal that patients with symptoms X and Y often have disease Z.<br><\/li>\n\n\n\n<li><strong>Intrusion Detection: <\/strong>In cybersecurity, association rules help detect patterns of system events that precede an attack, flagging unusual combinations of actions.<\/li>\n<\/ul>\n\n\n\n<p>These applications all rely on the core idea: find patterns in data, then form human-readable rules that can guide decisions or actions. In business, the insights from association rules must be validated by domain experts, but the method provides a powerful automated way to sift through large volumes of transactional data.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Interactive Challenge: Put Theory Into Practice<\/strong><\/h2>\n\n\n\n<p>Test your understanding with this quick exercise:<\/p>\n\n\n\n<ul>\n<li>Question 1: In a dataset of 100 transactions, item A appears in 40 transactions, and items A and B appear together in 10 transactions. 
What is the support of {A} and the confidence of the rule {A}\u21d2{B}?<br><\/li>\n\n\n\n<li>Question 2: Suppose support({A}) = 0.4, support({B}) = 0.5, and support({A,B}) = 0.1. Compute the lift of the rule {A}\u21d2{B}. Is this a positive association or independence?<\/li>\n<\/ul>\n\n\n\n<p><em>Try to answer before looking below.<\/em><\/p>\n\n\n\n<p><strong>Answers:<\/strong><\/p>\n\n\n\n<ol>\n<li>Support({A}) = 40\/100 = 0.40 (40%). Confidence({A}\u21d2{B}) = support({A,B}) \/ support({A}) = 10\/40 = 0.25 (25%).<br><\/li>\n\n\n\n<li>Lift({A}\u21d2{B}) = support({A,B}) \/ (support({A})\u00b7support({B})) = 0.1 \/ (0.4\u00b70.5) = 0.1 \/ 0.20 = 0.5. A lift of 0.5 (&lt;1) indicates a <em>negative<\/em> association (A and B occur together less often than if they were independent).<\/li>\n<\/ol>\n\n\n\n<p>If you want to learn more about how Association Rule is crucial for data science and how it is changing the world around us through a structured program that starts from scratch, consider enrolling in HCL GUVI\u2019s IIT-M Pravartak Certified <a href=\"https:\/\/www.guvi.in\/zen-class\/data-science-course\/?utm_source=blog&amp;utm_medium=hyperlink&amp;utm_campaign=association-rule-in-data-science\" target=\"_blank\" rel=\"noreferrer noopener\">Data Science Course<\/a>, which empowers you with the skills and guidance for a successful and rewarding <a href=\"https:\/\/www.guvi.in\/blog\/how-to-become-a-top-data-scientist\/\" target=\"_blank\" rel=\"noreferrer noopener\">data science career<\/a>.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Conclusion<\/strong><\/h2>\n\n\n\n<p>In conclusion, association rule in data science is more than just a technique; it\u2019s a foundational tool for understanding meaningful patterns in seemingly chaotic datasets. 
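The arithmetic in the challenge answers above can be reproduced in a few lines of plain Python (values taken directly from the two questions):

```python
# Values from the two challenge questions above.
n = 100               # total transactions
count_a = 40          # transactions containing A
count_ab = 10         # transactions containing both A and B
support_b = 0.5       # given in Question 2

support_a = count_a / n                          # 40/100 = 0.40
confidence_a_b = count_ab / count_a              # 10/40  = 0.25
lift = (count_ab / n) / (support_a * support_b)  # 0.1 / 0.2 = 0.5

print(f"support(A) = {support_a:.2f}")
print(f"confidence(A=>B) = {confidence_a_b:.2f}")
print(f"lift(A=>B) = {lift:.2f} ->",
      "negative association" if lift < 1 else "positive or independent")
```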
By understanding how items co-occur and evaluating their relationships through support, confidence, and lift, you can derive actionable insights that influence decision-making across industries.&nbsp;<\/p>\n\n\n\n<p>As you move forward in your data science journey, mastering association rules will give you a strong edge in solving real-world problems with confidence.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>FAQs<\/strong><\/h2>\n\n\n\n<p><strong>1. What is an association rule in data mining?<\/strong><\/p>\n\n\n\n<p>An association rule is a pattern that suggests if one item or group appears in a dataset, another is likely to appear too. It&#8217;s commonly written as &#8220;A \u21d2 B&#8221; and used to find item relationships in transactional data.<\/p>\n\n\n\n<p><strong>2. Why are association rules important?<\/strong><\/p>\n\n\n\n<p>They help businesses discover hidden patterns in user behavior, like which products are often bought together, enabling better product placement, cross-selling, and targeted recommendations.<\/p>\n\n\n\n<p><strong>3. What are support, confidence, and lift?<\/strong><\/p>\n\n\n\n<p>Support measures how frequently items appear together, confidence shows how often the rule holds, and lift indicates the strength of the rule compared to random chance.<\/p>\n\n\n\n<p><strong>4. How are association rules generated?<\/strong><\/p>\n\n\n\n<p>Rules are generated in two steps: first, finding frequent itemsets with enough support; second, creating rules from them and filtering by confidence and lift to keep only strong, useful rules.<\/p>\n\n\n\n<p><strong>5. What are the limitations of association rules?<\/strong><\/p>\n\n\n\n<p>They can produce a large number of rules, including irrelevant ones. 
Setting the right thresholds and scaling to large datasets can also be challenging without optimized algorithms.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Have you ever wondered how online stores seem to \u201cread your mind\u201d by recommending the exact item you didn\u2019t know you needed? Do you like suggesting peanut butter when you add bread to your cart? It\u2019s data science at work.&nbsp; Specifically, it\u2019s the power of association rule in data science, a technique used to uncover [&hellip;]<\/p>\n","protected":false},"author":22,"featured_media":81710,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[16],"tags":[],"views":"4546","authorinfo":{"name":"Lukesh S","url":"https:\/\/www.guvi.in\/blog\/author\/lukesh\/"},"thumbnailURL":"https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2025\/06\/1-1-300x116.webp","jetpack_featured_media_url":"https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2025\/06\/1-1.webp","_links":{"self":[{"href":"https:\/\/www.guvi.in\/blog\/wp-json\/wp\/v2\/posts\/81369"}],"collection":[{"href":"https:\/\/www.guvi.in\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.guvi.in\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.guvi.in\/blog\/wp-json\/wp\/v2\/users\/22"}],"replies":[{"embeddable":true,"href":"https:\/\/www.guvi.in\/blog\/wp-json\/wp\/v2\/comments?post=81369"}],"version-history":[{"count":8,"href":"https:\/\/www.guvi.in\/blog\/wp-json\/wp\/v2\/posts\/81369\/revisions"}],"predecessor-version":[{"id":89165,"href":"https:\/\/www.guvi.in\/blog\/wp-json\/wp\/v2\/posts\/81369\/revisions\/89165"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.guvi.in\/blog\/wp-json\/wp\/v2\/media\/81710"}],"wp:attachment":[{"href":"https:\/\/www.guvi.in\/blog\/wp-json\/wp\/v2\/media?parent=81369"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.guvi.in\/b
log\/wp-json\/wp\/v2\/categories?post=81369"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.guvi.in\/blog\/wp-json\/wp\/v2\/tags?post=81369"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}