{"id":109014,"date":"2026-05-02T12:55:33","date_gmt":"2026-05-02T07:25:33","guid":{"rendered":"https:\/\/www.guvi.in\/blog\/?p=109014"},"modified":"2026-05-02T12:55:35","modified_gmt":"2026-05-02T07:25:35","slug":"composer-building-a-fast-frontier-model-with-rl","status":"publish","type":"post","link":"https:\/\/www.guvi.in\/blog\/composer-building-a-fast-frontier-model-with-rl\/","title":{"rendered":"Composer: Building a fast frontier model with RL\u00a0"},"content":{"rendered":"\n<p>Training a large language model is one of the hardest engineering challenges in AI. You need massive compute, clean data, careful tuning, and months of iteration before you see results worth talking about.<\/p>\n\n\n\n<p>Most teams take the slow road. Bigger models. More parameters. More time. More money.<\/p>\n\n\n\n<p>Cursor took a different approach with Composer. Instead of simply scaling up, they focused on making the training process itself smarter using reinforcement learning. The result is a frontier model that is not just capable but genuinely fast, efficient, and built for the demands of real-world use.<\/p>\n\n\n\n<p>This guide breaks down what Composer is, how reinforcement learning fits into the picture, and why this approach represents a meaningful shift in how frontier AI models get built.<\/p>\n\n\n\n<p><strong>Quick TL;DR Summary<\/strong><\/p>\n\n\n\n<ol>\n<li>This guide explains what Composer is and how Cursor used reinforcement learning to build a fast frontier model.<br><\/li>\n\n\n\n<li>You will learn why traditional model training approaches hit a ceiling and what RL does differently.<br><\/li>\n\n\n\n<li>The guide covers the technical ideas behind Composer in plain language anyone can follow.<br><\/li>\n\n\n\n<li>Real comparisons show what makes Composer faster and more capable than models trained with conventional methods.<br><\/li>\n\n\n\n<li>You will understand what this means for the future of AI development and why the training method matters as much 
as the model size.<\/li>\n<\/ol>\n\n\n\n<h2 class=\"wp-block-heading\">What Is Composer and Why Does It Matter?<\/h2>\n\n\n\n<p>Composer is Cursor&#8217;s approach to building a frontier AI model that uses reinforcement learning at its core to optimize not just what the model knows but how efficiently and accurately it reasons, making it faster and more capable without simply relying on scale alone.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Why Traditional Model Training Hits a Wall<\/strong><\/h2>\n\n\n\n<ol>\n<li><strong>Bigger is not always better&nbsp;<\/strong><\/li>\n<\/ol>\n\n\n\n<p>The default assumption in <a href=\"https:\/\/www.guvi.in\/blog\/what-is-artificial-intelligence\/\" target=\"_blank\" rel=\"noreferrer noopener\">AI<\/a> has been that more parameters equal better performance. This is true up to a point. But after a certain size, returns diminish. You spend exponentially more compute for marginal gains. The math stops making sense.<\/p>\n\n\n\n<ol start=\"2\">\n<li><strong>Supervised learning has a ceiling&nbsp;<\/strong><\/li>\n<\/ol>\n\n\n\n<p>Most models are trained by showing them examples and teaching them to mimic the correct answer. This works well for common patterns but breaks down on tasks that require genuine reasoning, planning, or working through novel problems step by step.<\/p>\n\n\n\n<ol start=\"3\">\n<li><strong>Static training data goes stale&nbsp;<\/strong><\/li>\n<\/ol>\n\n\n\n<p>Training on a fixed dataset means the model learns what was true at a point in time. It does not learn how to reason through new situations. The model memorizes rather than understands, which creates brittle performance on anything outside its training distribution.<\/p>\n\n\n\n<ol start=\"4\">\n<li><strong>Speed and capability rarely come together&nbsp;<\/strong><\/li>\n<\/ol>\n\n\n\n<p>Large frontier models are often slow. You get impressive outputs but you wait for them. 
For real-world applications where latency matters, this is a serious problem. Most teams accept the tradeoff. Composer was built to reject it.<\/p>\n\n\n\n<p><strong>Read More: <\/strong><a href=\"https:\/\/www.guvi.in\/blog\/build-ai-apps-with-claude\/\" target=\"_blank\" rel=\"noreferrer noopener\"><strong>How to Build AI Apps with Claude and Share Them Easily<\/strong><\/a><\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>How Cursor Built Composer With Reinforcement Learning<\/strong><\/h2>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Step 1: Start with a strong base model&nbsp;<\/strong><\/h3>\n\n\n\n<p>Before RL enters the picture, you need a model that already understands language, reasoning, and context at a high level. Composer begins with a carefully pre-trained foundation that gives RL a strong starting point to work from.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Step 2: Define what good looks like&nbsp;<\/strong><\/h3>\n\n\n\n<p><a href=\"https:\/\/www.guvi.in\/blog\/what-is-reinforcement-learning\/\" target=\"_blank\" rel=\"noreferrer noopener\">Reinforcement learning<\/a> works by rewarding the model for doing the right thing and penalizing it for doing the wrong thing. Cursor spent significant effort defining precise reward signals that capture not just correctness but quality of reasoning, efficiency, and safety.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Step 3: Let the model explore and improve&nbsp;<\/strong><\/h3>\n\n\n\n<p>Unlike <a href=\"https:\/\/www.guvi.in\/blog\/supervised-and-unsupervised-learning\/\" target=\"_blank\" rel=\"noreferrer noopener\">supervised learning<\/a> where answers are handed to the model, RL lets the model try different approaches and learn from outcomes. Composer generates responses, evaluates them against the reward signal, and updates its behavior accordingly. 
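The generate, evaluate, and update loop described above can be sketched with a toy REINFORCE-style example. Everything here is invented for illustration (the strategy names, the accuracy and latency numbers, and the latency penalty weight); it is a minimal sketch of the general technique, not Composer's actual training code:

```python
import math, random

random.seed(0)

# Toy setup: three response strategies with (accuracy, latency) profiles.
# These numbers are hypothetical, not measurements of any real model.
STRATEGIES = {
    "long_chain":  (0.95, 9.0),   # accurate but slow
    "short_chain": (0.90, 3.0),   # nearly as accurate, much faster
    "guess":       (0.50, 1.0),   # fast but unreliable
}
names = list(STRATEGIES)
logits = {n: 0.0 for n in names}   # the "policy" parameters

def sample():
    """Softmax-sample a strategy from the current policy."""
    zs = [math.exp(logits[n]) for n in names]
    r, acc = random.random() * sum(zs), 0.0
    for n, z in zip(names, zs):
        acc += z
        if r <= acc:
            return n
    return names[-1]

def reward(strategy):
    """Composite signal: correctness minus a latency penalty,
    mirroring the idea of speed as a first-class objective."""
    p_correct, latency = STRATEGIES[strategy]
    correct = 1.0 if random.random() < p_correct else 0.0
    return correct - 0.05 * latency

# REINFORCE-style loop: generate, score, update toward high-reward choices.
lr, baseline = 0.1, 0.0
for step in range(2000):
    s = sample()
    r = reward(s)
    baseline += 0.01 * (r - baseline)          # running-average baseline
    adv = r - baseline
    total = sum(math.exp(logits[m]) for m in names)
    for n in names:                            # gradient of log-softmax
        p = math.exp(logits[n]) / total
        grad = (1.0 - p) if n == s else -p
        logits[n] += lr * adv * grad

best = max(names, key=lambda n: logits[n])
print(best)
```

With the latency penalty included, the policy drifts toward the fast-but-accurate strategy rather than the slowest, most accurate one, which is the intuition behind rewarding efficient reasoning.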
It learns what works through experience.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Step 4: Optimize for speed alongside quality&nbsp;<\/strong><\/h3>\n\n\n\n<p>A key design choice in Composer is that speed is treated as a first-class objective, not an afterthought. The RL process explicitly rewards efficient responses, training the model to reach correct answers faster rather than taking unnecessary reasoning steps.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Step 5: Iterate continuously with feedback&nbsp;<\/strong><\/h3>\n\n\n\n<p>The training loop does not stop after one pass. Composer improves through continuous cycles of generation, evaluation, and refinement. Each iteration makes the model sharper, faster, and more reliable across a wider range of tasks.<\/p>\n\n\n\n<div style=\"background-color: #099f4e; border: 3px solid #110053; border-radius: 12px; padding: 18px 22px; color: #FFFFFF; font-size: 18px; font-family: Montserrat, Helvetica, sans-serif; line-height: 1.7; box-shadow: 0 4px 12px rgba(0, 0, 0, 0.15); max-width: 750px;\">\n  <strong style=\"font-size: 22px; color: #FFFFFF;\">\ud83d\udca1 Did You Know?<\/strong>\n  <br \/><br \/>\n  <strong style=\"color: #110053;\">Reinforcement learning<\/strong> is the same technique behind <strong style=\"color: #110053;\">AlphaGo<\/strong>, the AI system that defeated a world champion in the game of Go.\n  <br \/><br \/>\n  When applied to <strong style=\"color: #110053;\">language models<\/strong>, it helps systems <strong style=\"color: #110053;\">reason through problems<\/strong> rather than simply recall patterns from training data, leading to more <strong style=\"color: #110053;\">structured and intelligent responses<\/strong>.\n  <br \/><br \/>\n<\/div>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>What Reinforcement Learning Actually Does Differently<\/strong><\/h2>\n\n\n\n<ol>\n<li><strong>It teaches reasoning, not just recall&nbsp;<\/strong><\/li>\n<\/ol>\n\n\n\n<p>Supervised learning 
teaches a model to reproduce answers it has seen before. RL teaches a model to work through problems it has never seen by rewarding good reasoning processes, not just correct final answers.<\/p>\n\n\n\n<ol start=\"2\">\n<li><strong>It optimizes for outcomes that matter&nbsp;<\/strong><\/li>\n<\/ol>\n\n\n\n<p>With RL, you can define success in terms that actually matter to real users. Speed, accuracy, safety, and helpfulness can all be encoded into the reward signal. The model learns to optimize for what you actually care about.<\/p>\n\n\n\n<ol start=\"3\">\n<li><strong>It improves through failure&nbsp;<\/strong><\/li>\n<\/ol>\n\n\n\n<p>When a supervised model gets something wrong, it is simply corrected. When an RL model gets something wrong, it learns from that failure and adjusts its approach. This makes RL models more robust in situations where the right answer is not obvious.<\/p>\n\n\n\n<ol start=\"4\">\n<li><strong>It scales more efficiently&nbsp;<\/strong><\/li>\n<\/ol>\n\n\n\n<p>Because RL directly optimizes for performance outcomes, you can get better results with a smaller model than you would need using pure scale. This is the core efficiency insight behind Composer. You do not need the biggest model. You need the best-trained one.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>What Composer Actually Delivers: A Practical Breakdown<\/strong><\/h2>\n\n\n\n<p>Here is what the Composer approach means in practical terms for developers, researchers, and anyone using frontier AI.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Capability 1: Faster Responses Without Sacrificing Quality<\/strong><\/h3>\n\n\n\n<p><strong>Speed as a design goal, not a compromise<\/strong><\/p>\n\n\n\n<p>Composer is built from the ground up to be fast. The RL training process explicitly rewards efficient reasoning, which means the model learns to reach correct answers in fewer steps. 
In real-world use, this translates to noticeably lower latency without a drop in output quality.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Capability 2: Stronger Reasoning on Hard Problems<\/strong><\/h3>\n\n\n\n<p><strong>Working through complexity, not around it<\/strong><\/p>\n\n\n\n<p>Because RL trains the model to reason through problems rather than recall surface-level patterns, Composer handles genuinely difficult tasks better. Multi-step reasoning, complex instructions, and novel problem types all benefit from this training approach.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Capability 3: More Consistent Safety Properties<\/strong><\/h3>\n\n\n\n<p><strong>Safety built into the reward signal<\/strong><\/p>\n\n\n\n<p>Cursor baked safety directly into the reward signal used during RL training. This means safe behavior is not a layer added on top of the model after training. It is part of what the model learned to optimize for from the beginning.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Capability 4: Better Performance at Smaller Scale<\/strong><\/h3>\n\n\n\n<p><strong>Doing more with less compute<\/strong><\/p>\n\n\n\n<p>One of the most practically significant outcomes of the Composer approach is that it achieves frontier-level performance without requiring frontier-level model size. This makes deployment cheaper, faster, and more accessible for real applications.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Capability 5: Improved Instruction Following<\/strong><\/h3>\n\n\n\n<p><strong>Doing what you actually asked<\/strong><\/p>\n\n\n\n<p>RL training with precise reward signals teaches the model to follow instructions more accurately. 
The model learns the difference between technically answering a question and genuinely fulfilling what the user intended to ask.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Capability 6: Reliable Performance Across Domains<\/strong><\/h3>\n\n\n\n<p><strong>Consistent quality, not narrow specialization<\/strong><\/p>\n\n\n\n<p>Composer&#8217;s RL training spans a wide range of tasks and domains. This breadth means the model does not have obvious weak spots that appear when you move outside a narrow area of strength.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Capability 7: A Training Approach That Keeps Improving<\/strong><\/h3>\n\n\n\n<p><strong>Better over time by design<\/strong><\/p>\n\n\n\n<p>The RL loop is continuous. As new feedback comes in and reward signals are refined, Composer gets better. The training methodology is designed for ongoing improvement, not a single fixed release point.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Common Mistakes in Thinking About RL and Frontier Models<\/strong><\/h2>\n\n\n\n<ul>\n<li>Assuming bigger models always win over better-trained smaller ones<\/li>\n\n\n\n<li>Treating speed and quality as a fixed tradeoff rather than a design problem to solve<\/li>\n\n\n\n<li>Underestimating how much the reward signal definition shapes what a model actually learns<\/li>\n\n\n\n<li>Thinking RL is only useful for games and robotics rather than language reasoning<\/li>\n\n\n\n<li>Believing safety and capability are fundamentally in tension rather than jointly optimizable<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Getting the Most From a Model Built With Composer&#8217;s Approach<\/strong><\/h2>\n\n\n\n<ol>\n<li><strong>Trust the reasoning, not just the answer&nbsp;<\/strong><\/li>\n<\/ol>\n\n\n\n<p>Models trained with RL are stronger at working through problems step by step. When you use Composer-based models, ask for reasoning to be shown. 
The process is often as valuable as the conclusion.<\/p>\n\n\n\n<ol start=\"2\">\n<li><strong>Push it on hard problems&nbsp;<\/strong><\/li>\n<\/ol>\n\n\n\n<p>RL-trained models handle novel and complex tasks better than models that learned purely from examples. Do not limit your prompts to simple tasks. Test the edges and you will find the capability holds up.<\/p>\n\n\n\n<ol start=\"3\">\n<li><strong>Give precise instructions&nbsp;<\/strong><\/li>\n<\/ol>\n\n\n\n<p><a href=\"https:\/\/cursor.com\/blog\/composer\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">Composer&#8217;s RL<\/a> training makes it highly responsive to exact instructions. The more specific and clear your prompt, the more accurately the model fulfills your actual intent rather than a generalized interpretation of it.<\/p>\n\n\n\n<ol start=\"4\">\n<li><strong>Use it where latency matters&nbsp;<\/strong><\/li>\n<\/ol>\n\n\n\n<p>If speed has been a bottleneck in your AI workflow, Composer&#8217;s optimized response time is directly relevant. Build applications where low latency was previously a barrier and see what becomes possible.<\/p>\n\n\n\n<ol start=\"5\">\n<li><strong>Expect safety to be consistent&nbsp;<\/strong><\/li>\n<\/ol>\n\n\n\n<p>Because safety is baked into the training process rather than applied as an afterthought, you can expect more consistent safety properties across a wider range of inputs. 
This matters for production deployments.<\/p>\n\n\n\n<div style=\"background-color: #099f4e; border: 3px solid #110053; border-radius: 12px; padding: 18px 22px; color: #FFFFFF; font-size: 18px; font-family: Montserrat, Helvetica, sans-serif; line-height: 1.7; box-shadow: 0 4px 12px rgba(0, 0, 0, 0.15); max-width: 750px;\">\n  <strong style=\"font-size: 22px; color: #FFFFFF;\">\ud83d\udca1 Did You Know?<\/strong>\n  <br \/><br \/>\n  The shift from <strong style=\"color: #110053;\">pure supervised learning<\/strong> to <strong style=\"color: #110053;\">reinforcement learning<\/strong> in training frontier AI models is considered one of the most <strong style=\"color: #110053;\">significant methodological changes<\/strong> since the introduction of the <strong style=\"color: #110053;\">transformer architecture<\/strong>.\n  <br \/><br \/>\n  This transition enables models to <strong style=\"color: #110053;\">reason, adapt,<\/strong> and improve through feedback, rather than relying solely on patterns learned from static datasets.\n  <br \/><br \/>\n<\/div>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>The Infrastructure Behind Training at This Scale<\/strong><\/h2>\n\n\n\n<ol>\n<li><strong>Massive parallel compute requirements&nbsp;<\/strong><\/li>\n<\/ol>\n\n\n\n<p>RL training for large language models requires enormous compute infrastructure. The model generates millions of responses, each of which gets evaluated and used to update training. This happens across thousands of parallel processes running simultaneously.<\/p>\n\n\n\n<ol start=\"2\">\n<li><strong>Reward model development&nbsp;<\/strong><\/li>\n<\/ol>\n\n\n\n<p>A separate model is trained specifically to evaluate the quality of Composer&#8217;s outputs. 
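As a rough illustration of how such a reward model can be fit, here is a minimal pairwise-preference (Bradley-Terry style) sketch. The features and preference pairs are invented for this example; a production reward model is a large neural network scoring full responses, not a linear model over two hand-picked features:

```python
import math

# Toy reward model: score a response by a learned weight per feature.
# The features are hypothetical stand-ins for qualities human raters reward.
def features(resp):
    return [float(resp["correct"]), float(resp["concise"])]

w = [0.0, 0.0]  # reward-model parameters

def score(resp):
    return sum(wi * xi for wi, xi in zip(w, features(resp)))

# Each pair records that human raters preferred `chosen` over `rejected`.
pairs = [
    ({"correct": 1, "concise": 1}, {"correct": 0, "concise": 1}),
    ({"correct": 1, "concise": 0}, {"correct": 0, "concise": 0}),
    ({"correct": 1, "concise": 1}, {"correct": 1, "concise": 0}),
] * 200

# Bradley-Terry objective: maximize log sigmoid(score(chosen) - score(rejected)),
# i.e. push the model to score preferred responses higher.
lr = 0.1
for chosen, rejected in pairs:
    margin = score(chosen) - score(rejected)
    g = 1.0 / (1.0 + math.exp(margin))  # sigmoid(-margin), the ascent weight
    for i, (xc, xr) in enumerate(zip(features(chosen), features(rejected))):
        w[i] += lr * g * (xc - xr)

good = {"correct": 1, "concise": 1}
bad = {"correct": 0, "concise": 0}
print(score(good) > score(bad))  # True: the fitted scorer ranks good above bad
```

Once fitted, a scorer like this can rank candidate outputs during RL training, which is the role the reward model plays in the loop described here.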
This reward model itself requires careful training and validation to ensure it is scoring responses on dimensions that actually matter.<\/p>\n\n\n\n<ol start=\"3\">\n<li><strong>Human feedback integration&nbsp;<\/strong><\/li>\n<\/ol>\n\n\n\n<p>Reinforcement learning from human feedback (RLHF) plays a key role. Human raters evaluate model outputs and their judgments are used to train the reward model, connecting human values directly to the optimization process.<\/p>\n\n\n\n<ol start=\"4\">\n<li><strong>Continuous evaluation pipelines&nbsp;<\/strong><\/li>\n<\/ol>\n\n\n\n<p>Throughout training, Composer is evaluated against hundreds of benchmarks and real-world task types. These evaluations guide decisions about when training is working and when the reward signal needs adjustment.<\/p>\n\n\n\n<ol start=\"5\">\n<li><strong>Safety evaluation at every stage&nbsp;<\/strong><\/li>\n<\/ol>\n\n\n\n<p>Safety testing is not saved for the end of training. It runs continuously throughout the RL process, catching regressions early and ensuring safety properties improve alongside capability rather than trading off against it.<\/p>\n\n\n\n<p>To learn more about Composer and building fast frontier models with RL, do not miss the chance to enroll in HCL GUVI&#8217;s <strong>Intel &amp; IITM Pravartak Certified <\/strong><a href=\"https:\/\/www.guvi.in\/mlp\/artificial-intelligence-and-machine-learning?utm_source=blog&amp;utm_medium=hyperlink&amp;utm_campaign=composer-building-a-fast-frontier-model-with-rl\" target=\"_blank\" rel=\"noreferrer noopener\"><strong>Artificial Intelligence &amp; Machine Learning course. <\/strong><\/a>Endorsed with <strong>Intel certification<\/strong>, this course adds a globally recognized credential to your resume, a powerful edge that sets you apart in the competitive AI job market.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Conclusion<\/strong><\/h2>\n\n\n\n<p>Building a fast frontier model is not just a compute problem. 
It is a training problem. Composer demonstrates that the method you use to teach a model matters as much as the size of the model itself.<\/p>\n\n\n\n<p>Reinforcement learning gives Cursor a fundamentally more powerful tool for shaping model behavior. Instead of showing the model correct answers and hoping it generalizes, RL lets the model learn through experience what good reasoning, fast responses, and safe behavior actually look like in practice.<\/p>\n\n\n\n<p>The result is a model that is genuinely fast, genuinely capable, and genuinely safer by design. Not because those properties were bolted on after the fact, but because they were built into the objective from the very beginning.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>FAQs<\/strong><\/h2>\n\n\n<div id=\"rank-math-faq\" class=\"rank-math-block\">\n<div class=\"rank-math-list \">\n<div id=\"faq-question-1777661207661\" class=\"rank-math-list-item\">\n<h3 class=\"rank-math-question \"><strong>1. What makes Composer different from other frontier models?<\/strong>\u00a0<\/h3>\n<div class=\"rank-math-answer \">\n\n<p>Composer uses reinforcement learning as a core training method rather than relying primarily on supervised learning and scale. This allows it to optimize directly for speed, quality, and safety as explicit objectives rather than emergent properties.<\/p>\n\n<\/div>\n<\/div>\n<div id=\"faq-question-1777661214673\" class=\"rank-math-list-item\">\n<h3 class=\"rank-math-question \"><strong>2. Is reinforcement learning new in AI model training?<\/strong>\u00a0<\/h3>\n<div class=\"rank-math-answer \">\n\n<p>RL has been used in AI for decades, and RLHF has been part of language model training for several years. What makes Composer notable is the depth and centrality of RL in the training process rather than as a final fine-tuning step.<\/p>\n\n<\/div>\n<\/div>\n<div id=\"faq-question-1777661231429\" class=\"rank-math-list-item\">\n<h3 class=\"rank-math-question \"><strong>3. 
Does a smaller RL-trained model actually beat a larger supervised model?<\/strong>\u00a0<\/h3>\n<div class=\"rank-math-answer \">\n\n<p>In many benchmarks and real-world tasks, yes. The efficiency of RL training means you can achieve frontier performance at significantly smaller model sizes, which also translates to faster inference and lower deployment costs.<\/p>\n\n<\/div>\n<\/div>\n<div id=\"faq-question-1777661250283\" class=\"rank-math-list-item\">\n<h3 class=\"rank-math-question \"><strong>4. How does safety get built into RL training?<\/strong>\u00a0<\/h3>\n<div class=\"rank-math-answer \">\n\n<p>Safety is encoded into the reward signal that guides training. The model is rewarded for safe outputs and penalized for unsafe ones throughout the entire training process, making safety a core optimization target rather than a constraint added afterward.<\/p>\n\n<\/div>\n<\/div>\n<div id=\"faq-question-1777661269467\" class=\"rank-math-list-item\">\n<h3 class=\"rank-math-question \"><strong>5. What does this mean for future Cursor models?<\/strong>\u00a0<\/h3>\n<div class=\"rank-math-answer \">\n\n<p>The Composer methodology is a foundation, not a one-time project. The insights and infrastructure built for Composer inform how future models get trained, meaning the benefits of this approach compound over time with each subsequent model generation.<\/p>\n\n<\/div>\n<\/div>\n<\/div>\n<\/div>","protected":false},"excerpt":{"rendered":"<p>Training a large language model is one of the hardest engineering challenges in AI. You need massive compute, clean data, careful tuning, and months of iteration before you see results worth talking about. Most teams take the slow road. Bigger models. More parameters. More time. More money. Cursor took a different approach with Composer. 
Instead [&hellip;]<\/p>\n","protected":false},"author":63,"featured_media":109202,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[933],"tags":[],"views":"29","authorinfo":{"name":"Vishalini Devarajan","url":"https:\/\/www.guvi.in\/blog\/author\/vishalini\/"},"thumbnailURL":"https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2026\/05\/Composer-300x115.webp","jetpack_featured_media_url":"https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2026\/05\/Composer-scaled.webp","_links":{"self":[{"href":"https:\/\/www.guvi.in\/blog\/wp-json\/wp\/v2\/posts\/109014"}],"collection":[{"href":"https:\/\/www.guvi.in\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.guvi.in\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.guvi.in\/blog\/wp-json\/wp\/v2\/users\/63"}],"replies":[{"embeddable":true,"href":"https:\/\/www.guvi.in\/blog\/wp-json\/wp\/v2\/comments?post=109014"}],"version-history":[{"count":3,"href":"https:\/\/www.guvi.in\/blog\/wp-json\/wp\/v2\/posts\/109014\/revisions"}],"predecessor-version":[{"id":109204,"href":"https:\/\/www.guvi.in\/blog\/wp-json\/wp\/v2\/posts\/109014\/revisions\/109204"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.guvi.in\/blog\/wp-json\/wp\/v2\/media\/109202"}],"wp:attachment":[{"href":"https:\/\/www.guvi.in\/blog\/wp-json\/wp\/v2\/media?parent=109014"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.guvi.in\/blog\/wp-json\/wp\/v2\/categories?post=109014"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.guvi.in\/blog\/wp-json\/wp\/v2\/tags?post=109014"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}