{"id":71725,"date":"2025-01-31T16:01:16","date_gmt":"2025-01-31T10:31:16","guid":{"rendered":"https:\/\/www.guvi.in\/blog\/?p=71725"},"modified":"2026-04-24T19:31:42","modified_gmt":"2026-04-24T14:01:42","slug":"how-do-servers-handle-requests","status":"publish","type":"post","link":"https:\/\/www.guvi.in\/blog\/how-do-servers-handle-requests\/","title":{"rendered":"How Do Servers Handle Requests? A Comprehensive Guide"},"content":{"rendered":"\n<p><strong>Quick Answer: <\/strong>Servers handle requests through a structured pipeline that includes accepting connections, assigning resources (threads or event loops), processing the request, and sending back a response. Modern servers use efficient models like thread pooling and event-driven architectures to handle thousands of concurrent requests while maintaining performance and reliability.<\/p>\n\n\n\n<p>Every second, millions of server requests are processed across the internet, powering everything from simple web pages to real-time applications.<\/p>\n\n\n\n<p>Have you ever wondered what happens behind the scenes when you visit a website or send a request to a server? The answer lies in the process of request handling, where a server receives, processes, and responds to client requests efficiently.<\/p>\n\n\n\n<p>Understanding how servers handle requests gives you a clear view of the efficiency, scalability, and performance of modern web applications. This article breaks down how servers handle requests, step by step, and how it works in real-world systems. Let us get started.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>What is Server Request Handling?<\/strong><\/h2>\n\n\n\n<p>Server request handling is the process by which a server receives a client request (via HTTP\/HTTPS), processes it through an execution pipeline, and returns a response. 
It starts with TCP connection setup, followed by request parsing (headers, method, payload), and dispatch to application logic using concurrency models like thread pools or event loops. The server interacts with databases, caches, or APIs, applies business logic, and generates a response, which is then serialized, optionally compressed, and sent back. Efficient handling relies on non-blocking I\/O, proper resource management, and latency optimization to support high concurrency and fast response times.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>How do Servers Handle Requests?<\/strong><\/h2>\n\n\n\n<p>When a server handles a request, it undergoes several stages, from accepting the incoming connection to processing the request and sending the response. Each step involves components of the server like the operating system and the network stack. When a server receives a request (or multiple requests), the process of handling and responding to them depends on several key factors such as the server&#8217;s architecture, threading model, and load-balancing mechanisms.&nbsp;<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>1. Request Acceptance<\/strong><\/h3>\n\n\n\n<p>Once the <a href=\"https:\/\/www.guvi.in\/blog\/internet-protocol-and-transmission-control-protocol\/\" target=\"_blank\" rel=\"noreferrer noopener\">TCP connection<\/a> is established between the server and a client, the server starts accepting the requests.<\/p>\n\n\n\n<ul>\n<li>Servers typically have a <strong>request queue<\/strong> (accept queue) where incoming requests are temporarily held until they can be processed.<\/li>\n\n\n\n<li>The operating system handles this queue and passes each request to the server&#8217;s application when the server is ready to accept it. If the server is overwhelmed or the queue fills up, the server may start rejecting requests or timing out connections.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>2. 
Thread or Event Dispatching<\/strong><\/h3>\n\n\n\n<p>After the server accepts the request, it decides how to allocate resources (threads or event handlers) to process the requests.<\/p>\n\n\n\n<p><strong>2.1. Thread-Per-Request (Blocking I\/O)<\/strong><\/p>\n\n\n\n<ul>\n<li>Some older servers (e.g. <a href=\"https:\/\/www.apache.org\/\" target=\"_blank\" rel=\"noreferrer noopener\">Apache<\/a> in older configurations) create a new thread for each incoming request.<\/li>\n\n\n\n<li>The server receives the request, assigns it to the newly created thread, and that thread handles everything from processing the request to sending the response.<\/li>\n<\/ul>\n\n\n\n<p><strong>Advantages<\/strong>:<\/p>\n\n\n\n<ul>\n<li>Simple to implement.<\/li>\n\n\n\n<li>Handles requests in a straightforward, sequential manner.<\/li>\n<\/ul>\n\n\n\n<p><strong>Disadvantages:<\/strong><\/p>\n\n\n\n<ul>\n<li>This approach can cause inefficiency if there are too many requests because creating, managing, and destroying threads requires significant system resources (CPU, memory).<\/li>\n\n\n\n<li>There is also an overhead from frequent context switching between threads.<\/li>\n<\/ul>\n\n\n\n<p><strong>2.2. Thread Pooling (Most Common in Modern Servers)<\/strong><\/p>\n\n\n\n<ul>\n<li>Instead of creating a new thread for each request, most modern servers use <strong>thread pools<\/strong>.<\/li>\n\n\n\n<li>The server maintains a fixed number of threads in a pool. 
When a request is received, it is placed in a task queue, and an available worker thread from the pool picks it up for processing.<\/li>\n<\/ul>\n\n\n\n<p><strong>Advantages<\/strong>:<\/p>\n\n\n\n<ul>\n<li>Limits the number of threads, reducing resource consumption.<\/li>\n\n\n\n<li>Threads are reused, so thread creation and destruction overhead is minimized.<\/li>\n\n\n\n<li>Scales better under heavy loads.<\/li>\n<\/ul>\n\n\n\n<p><strong>Disadvantage<\/strong>: If all threads are busy, incoming requests must wait in the queue until a thread becomes available.<\/p>\n\n\n\n<p><strong>Example<\/strong>: Most Java-based web servers (e.g., Apache Tomcat) use thread pools to handle HTTP requests efficiently.<\/p>\n\n\n\n<p><strong>2.3. <a href=\"https:\/\/www.guvi.in\/blog\/guide-for-events-in-javascript\/\" target=\"_blank\" rel=\"noreferrer noopener\">Event-Driven Model<\/a> (Non-Blocking I\/O)<\/strong><\/p>\n\n\n\n<ul>\n<li>In this model, the server uses an <strong>event loop<\/strong> to handle multiple connections without dedicating one thread per connection.<\/li>\n\n\n\n<li>When a request comes in, the server registers the request for processing and moves on to handle other tasks without waiting for the request to finish (i.e., non-blocking).<\/li>\n\n\n\n<li>Once the required I\/O (e.g. reading a file or accessing a database) is ready, the server processes the event and sends the response to the client.<\/li>\n<\/ul>\n\n\n\n<p><strong>Advantages<\/strong>:<\/p>\n\n\n\n<ul>\n<li>Extremely efficient in handling thousands of concurrent requests.<\/li>\n\n\n\n<li>Ideal for I\/O-bound applications where there are long waiting times (e.g. 
waiting for a database query).<\/li>\n<\/ul>\n\n\n\n<p><strong>Disadvantages<\/strong>:<\/p>\n\n\n\n<ul>\n<li>Requires more complex coding, especially when dealing with state and concurrency.<\/li>\n\n\n\n<li>Not ideal for CPU-bound tasks unless coupled with worker threads.<\/li>\n<\/ul>\n\n\n\n<p><strong>Example<\/strong>: Node.js and NGINX use event-driven models.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>3. Request Processing<\/strong><\/h3>\n\n\n\n<p>Once a thread (or event handler) picks up the request, the next phase is <strong>processing<\/strong>. This can involve several steps, depending on the application.<\/p>\n\n\n\n<p><strong>3.1. Parsing the Request<\/strong><\/p>\n\n\n\n<p>The server needs to parse the incoming request, which includes:<\/p>\n\n\n\n<ul>\n<li>Reading the <strong>HTTP headers<\/strong> (like Content-Type, Authorization).<\/li>\n\n\n\n<li>Parsing the <strong>request method<\/strong> (e.g. GET, POST, PUT).<\/li>\n\n\n\n<li>Extracting <strong>request parameters<\/strong> (e.g. query strings, body data).<\/li>\n\n\n\n<li>In a typical web server, this involves decoding HTTP messages, which might include reading JSON payloads or multipart form data.<\/li>\n<\/ul>\n\n\n\n<p><strong>3.2. Routing the Request<\/strong><\/p>\n\n\n\n<ul>\n<li>After parsing the request, the server routes it to the appropriate handler based on the request&#8217;s URL and HTTP method.<\/li>\n\n\n\n<li>The routing logic determines which function or controller to invoke to handle the specific request.<\/li>\n\n\n\n<li>Example: In a RESTful API, a GET \/user request might be routed to a function that retrieves a user&#8217;s data from the database.<\/li>\n<\/ul>\n\n\n\n<p><strong>3.3. Business Logic Execution<\/strong><\/p>\n\n\n\n<p>This phase involves executing the business logic for the request. 
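<\/p>\n\n\n\n<p>To make routing and business-logic dispatch concrete, here is a hypothetical Python sketch (the route table, the <code>dispatch<\/code> helper, and the in-memory stand-in for a database are all illustrative, not from any real framework):<\/p>

```python
# Hypothetical route table mapping (HTTP method, URL path) -> handler function.
ROUTES = {}

def route(method: str, path: str):
    """Decorator that registers a handler for a method + path combination."""
    def register(fn):
        ROUTES[(method, path)] = fn
        return fn
    return register

# Stand-in for a real database or cache (illustrative only).
FAKE_DB = {"1": {"id": "1", "name": "Ada"}}

@route("GET", "/user")
def get_user(params: dict):
    """Business logic: look up a user; a real handler would query a database."""
    user = FAKE_DB.get(params.get("id"))
    return (200, user) if user else (404, {"error": "user not found"})

def dispatch(method: str, path: str, params: dict):
    """Routing: pick the handler for the parsed request, or report 404."""
    handler = ROUTES.get((method, path))
    return handler(params) if handler else (404, {"error": "no such route"})
```

<p>Here a parsed <code>GET \/user<\/code> request lands in <code>get_user<\/code>, mirroring the RESTful example above.<\/p>\n\n\n\n<p>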
In most cases, this is the stage where the server performs CPU- or I\/O-intensive tasks, such as:<\/p>\n\n\n\n<ul>\n<li>Accessing databases.<\/li>\n\n\n\n<li>Performing complex calculations.<\/li>\n\n\n\n<li>Communicating with external services (APIs, microservices).<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>4. Generating the Response<\/strong><\/h3>\n\n\n\n<p>Once the request has been processed and the result is ready (e.g. database query results or the rendered <a href=\"https:\/\/www.guvi.in\/blog\/html-tutorial-guide-for-web-development\/\" target=\"_blank\" rel=\"noreferrer noopener\">HTML<\/a> page), the server prepares the response to send back to the client.<\/p>\n\n\n\n<p><strong>4.1. Generating the Response Data<\/strong><\/p>\n\n\n\n<p>This involves formatting the response based on the requested resource and the output format:<\/p>\n\n\n\n<ul>\n<li>For web servers, this could mean rendering an HTML page, returning JSON data, or sending a file.<\/li>\n\n\n\n<li>For RESTful APIs, it usually involves serializing data to JSON or XML format.<\/li>\n<\/ul>\n\n\n\n<p><strong>4.2. Adding HTTP Headers<\/strong><\/p>\n\n\n\n<ul>\n<li>The server also prepares HTTP response headers, such as:\n<ul>\n<li>Content-Type: The type of content (e.g. application\/json, text\/html).<\/li>\n\n\n\n<li>Status Code: Indicates the outcome of the request (e.g. 200 OK, 404 Not Found).<\/li>\n\n\n\n<li>Content-Length: The size of the response body.<\/li>\n<\/ul>\n<\/li>\n\n\n\n<li>Additional headers like caching directives (Cache-Control), cookies, or security headers (e.g. CORS, HSTS) may also be added.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>5. 
Sending the Response to the Client<\/strong><\/h3>\n\n\n\n<p>Once the response is fully constructed, the server sends it back to the client.<\/p>\n\n\n\n<ul>\n<li>The response is typically sent over the same TCP connection established during the request.<\/li>\n\n\n\n<li>In some cases, the server might compress the response data using gzip or another algorithm to reduce bandwidth usage, especially for large payloads like HTML or JSON.<\/li>\n<\/ul>\n\n\n\n<p>After sending the response:<\/p>\n\n\n\n<ul>\n<li>In a <strong>blocking model<\/strong> (thread-per-request), the thread finishes its work and terminates (or goes back to the pool in the case of thread pooling).<\/li>\n\n\n\n<li>In an <strong>event-driven model<\/strong>, the server registers the completion of the event and continues to handle other requests.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>6. Handling Multiple Concurrent Requests<\/strong><\/h3>\n\n\n\n<p>When a server needs to handle multiple requests concurrently, it utilizes the following mechanisms:<\/p>\n\n\n\n<p><strong>6.1. Thread Pooling<\/strong><\/p>\n\n\n\n<ul>\n<li>A fixed number of threads (or worker processes) handle multiple requests in parallel.<\/li>\n\n\n\n<li>Each thread or process picks up a task (request) from a queue, processes it, and then moves to the next one.<\/li>\n\n\n\n<li>The size of the thread pool can often be dynamically adjusted based on the server load.<\/li>\n\n\n\n<li>This model ensures that the server can handle many concurrent connections without creating excessive threads, which would otherwise overwhelm the system.<\/li>\n<\/ul>\n\n\n\n<p><strong>6.2. Non-blocking I\/O<\/strong><\/p>\n\n\n\n<ul>\n<li>Servers using an event-driven model can handle thousands of connections simultaneously with just a few threads by using non-blocking I\/O. 
Instead of waiting for each request to finish, the server handles other tasks while waiting for I\/O.<\/li>\n\n\n\n<li>This approach is extremely efficient for I\/O-bound applications, where most of the time is spent waiting for external data.<\/li>\n<\/ul>\n\n\n\n<p><strong>6.3. Load Balancing<\/strong><\/p>\n\n\n\n<ul>\n<li>In large-scale applications, a <strong>load balancer<\/strong> distributes incoming requests across multiple servers. Each server processes a portion of the total requests, allowing the application to scale horizontally.<\/li>\n\n\n\n<li>If one server is overloaded or fails, the load balancer redirects traffic to other available servers.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>7. Closing the Connection<\/strong><\/h3>\n\n\n\n<p>After the response is sent, the server typically:<\/p>\n\n\n\n<ul>\n<li><strong>Closes the connection<\/strong>: For most HTTP\/1.0 and older HTTP\/1.1 configurations, the server closes the connection after sending the response.<\/li>\n\n\n\n<li><strong>Keeps the connection alive<\/strong>: Modern servers often use persistent connections (via the Connection: keep-alive header). This allows the same connection to handle multiple requests from the same client, reducing the overhead of establishing a new TCP connection for each request.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Real-World Examples of Server Request Handling<\/strong><\/h2>\n\n\n\n<ul>\n<li><strong>NGINX Handles 10K+ Connections Per Worker (Event Loop Efficiency): <\/strong>Uses a single-threaded event loop with epoll on <a href=\"https:\/\/www.guvi.in\/blog\/the-linux-filesystem\/\">Linux<\/a>, allowing one worker process to manage tens of thousands of concurrent connections without thread overhead. 
Commonly used as a reverse proxy to offload static content and route API traffic.<\/li>\n\n\n\n<li><strong>Apache Prefork Can Consume ~20 MB Per Process (Memory Trade-Off): <\/strong>In Prefork mode, each request runs in a separate process. At 500 concurrent users, memory usage can exceed 8\u201310 GB, making it unsuitable for high-concurrency systems without scaling.<\/li>\n\n\n\n<li><a href=\"https:\/\/www.guvi.in\/blog\/best-nodejs-frameworks-guide\/\"><strong>Node.js<\/strong><\/a><strong> Manages 100K Concurrent Requests but Fails on CPU Tasks: <\/strong>Uses a single-threaded event loop with async I\/O. Performs exceptionally well for I\/O-heavy workloads like APIs, but CPU-heavy tasks (e.g., image processing) block the event loop and delay all incoming requests.<\/li>\n\n\n\n<li><strong>Tomcat Thread Pool Saturation Causes Request Queuing: <\/strong>Typical configuration: 200 threads. At 500 concurrent requests, 300 are queued. If wait time exceeds timeout thresholds, users experience failures even when the server is still running.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Common Challenges in Server Request Handling<\/strong><\/h2>\n\n\n\n<ul>\n<li><strong>Slow Database Queries Add 1\u20133 Seconds Per Request: <\/strong>Missing indexes or complex joins can delay the entire request lifecycle, even if the server itself is fast.<\/li>\n\n\n\n<li><strong>Thread Pool Exhaustion Causes Silent Failures: <\/strong>When all threads are busy, new requests wait in queue and eventually time out, leading to failed user requests despite server uptime.<\/li>\n\n\n\n<li><strong>Memory Leaks Crash Servers After Hours of Uptime: <\/strong>Unreleased objects accumulate over time, gradually consuming RAM until the system becomes unstable or crashes.<\/li>\n\n\n\n<li><strong>Single Server Setup Caps Scalability at Peak Traffic: <\/strong>Without load balancing, one server becomes the bottleneck, leading to degraded performance during traffic 
spikes.<\/li>\n\n\n\n<li><strong>Network Instability Causes Random Request Failures: <\/strong>Packet loss, DNS delays, or intermittent connectivity issues can result in failed or delayed responses even when backend systems are healthy.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Best Practices for Efficient Server Request Handling<\/strong><\/h2>\n\n\n\n<ol>\n<li><strong>Cap Thread Pools Based on CPU Cores (Avoid Over-Allocation)- <\/strong>Set thread pool size to ~2\u20134\u00d7 CPU cores for I\/O-heavy apps. Example: 8-core server \u2192 32 threads max. More threads increase context switching and reduce throughput instead of improving it.<\/li>\n\n\n\n<li><strong>Set Request Timeouts (Prevent Resource Locking)- <\/strong>Always define timeouts (e.g., 2\u20135 seconds for APIs). Hanging requests can occupy threads indefinitely, leading to thread exhaustion under load.<\/li>\n\n\n\n<li><strong>Use Reverse Proxy Layer (Offload Critical Work)- <\/strong>Place NGINX or a load balancer in front of your app server to handle SSL termination, caching, and static files. This reduces backend load by up to 30\u201350 percent.<\/li>\n\n\n\n<li><strong>Cache at Multiple Layers (Not Just One)- <\/strong>Implement caching at:\n<ol>\n<li>CDN (static assets)<\/li>\n\n\n\n<li>Server (HTML\/<a href=\"https:\/\/www.guvi.in\/blog\/api-response-structure-best-practices\/\">API responses<\/a>)<\/li>\n\n\n\n<li>Database (query results via Redis)<br>Multi-layer caching reduces latency and backend pressure significantly.<\/li>\n<\/ol>\n<\/li>\n\n\n\n<li><strong>Avoid Blocking Operations in Request Path (Critical for Node Systems)- <\/strong>In Node.js, never run CPU-heavy logic (e.g., image processing) inside request handlers. Move it to worker queues to prevent event loop blocking.<\/li>\n\n\n\n<li><strong>Limit Request Payload Size (Protect Against Abuse)- <\/strong>Set max request size (e.g., 1\u20135 MB for APIs). 
Prevents memory spikes and protects against denial-of-service attempts using large payloads.<\/li>\n\n\n\n<li><strong>Enable Keep-Alive Connections (Reduce TCP Overhead)- <\/strong>Reuse TCP connections for multiple requests. Reduces latency and CPU overhead of repeated handshakes, especially for high-frequency clients.<\/li>\n\n\n\n<li><strong>Monitor Key Metrics (Catch Issues Early)- <\/strong>Track:\n<ol>\n<li>Response time (p95, p99 latency)<\/li>\n\n\n\n<li>Error rate<\/li>\n\n\n\n<li>CPU and memory usage<\/li>\n\n\n\n<li>Active connections<br>Without monitoring, performance issues go unnoticed until failure.<\/li>\n<\/ol>\n<\/li>\n\n\n\n<li><strong>Use Load Balancing Early (Don\u2019t Wait for Failure)- <\/strong>Distribute traffic across multiple servers using round-robin or least-connections strategy. Prevents single-point overload during sudden traffic spikes.<\/li>\n\n\n\n<li><strong>Graceful Degradation Under Load (Fail Smart, Not Hard)- <\/strong>When overloaded:\n<ol>\n<li>Serve cached responses<\/li>\n\n\n\n<li>Disable non-critical features<\/li>\n\n\n\n<li>Return partial data instead of failing completely<\/li>\n<\/ol>\n<\/li>\n<\/ol>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Conclusion<\/strong><\/h2>\n\n\n\n<p>Understanding how servers handle requests is fundamental to building scalable and high-performance applications. From threading models to event-driven architectures, each concept plays a critical role in real-world systems. If you want to master backend systems, APIs, and scalable architecture, start building real-world projects and experiment with different server models. 
The more you build, the deeper your understanding becomes.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">FAQs<\/h2>\n\n\n<div id=\"rank-math-faq\" class=\"rank-math-block\">\n<div class=\"rank-math-list \">\n<div id=\"faq-question-1777039138453\" class=\"rank-math-list-item\">\n<h3 class=\"rank-math-question \"><strong>How does a server process a request step by step?<\/strong><\/h3>\n<div class=\"rank-math-answer \">\n\n<p>A server processes a request by accepting the connection, assigning resources (thread\/event loop), parsing the request, executing business logic, generating a response, and sending it back to the client.<\/p>\n\n<\/div>\n<\/div>\n<div id=\"faq-question-1777039162054\" class=\"rank-math-list-item\">\n<h3 class=\"rank-math-question \"><strong>What is the difference between blocking and non-blocking servers?<\/strong><\/h3>\n<div class=\"rank-math-answer \">\n\n<p>Blocking servers handle one request per thread, while non-blocking servers handle multiple requests using an event loop, improving scalability.<\/p>\n\n<\/div>\n<\/div>\n<div id=\"faq-question-1777039180771\" class=\"rank-math-list-item\">\n<h3 class=\"rank-math-question \"><strong>Why is load balancing important in servers?<\/strong><\/h3>\n<div class=\"rank-math-answer \">\n\n<p>Load balancing distributes traffic across multiple servers, preventing overload and improving performance and availability.<\/p>\n\n<\/div>\n<\/div>\n<div id=\"faq-question-1777039196788\" class=\"rank-math-list-item\">\n<h3 class=\"rank-math-question \"><strong>Which server model is best for high traffic applications?<\/strong><\/h3>\n<div class=\"rank-math-answer \">\n\n<p>Event-driven, non-blocking servers like NGINX and Node.js are best for handling high traffic efficiently.<\/p>\n\n<\/div>\n<\/div>\n<div id=\"faq-question-1777039213754\" class=\"rank-math-list-item\">\n<h3 class=\"rank-math-question \"><strong>What are concurrent requests in servers?<\/strong><\/h3>\n<div class=\"rank-math-answer 
\">\n\n<p>Concurrent requests are multiple client requests handled by a server at the same time using threads, processes, or event-driven mechanisms.<\/p>\n\n<\/div>\n<\/div>\n<\/div>\n<\/div>","protected":false},"excerpt":{"rendered":"<p>Quick Answer: Servers handle requests through a structured pipeline that includes accepting connections, assigning resources (threads or event loops), processing the request, and sending back a response. Modern servers use efficient models like thread pooling and event-driven architectures to handle thousands of concurrent requests while maintaining performance and reliability. Every second, millions of server requests [&hellip;]<\/p>\n","protected":false},"author":60,"featured_media":71810,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[714],"tags":[],"views":"7589","authorinfo":{"name":"Vaishali","url":"https:\/\/www.guvi.in\/blog\/author\/vaishali\/"},"thumbnailURL":"https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2025\/01\/servers-handle-requests-300x112.webp","jetpack_featured_media_url":"https:\/\/www.guvi.in\/blog\/wp-content\/uploads\/2025\/01\/servers-handle-requests.webp","_links":{"self":[{"href":"https:\/\/www.guvi.in\/blog\/wp-json\/wp\/v2\/posts\/71725"}],"collection":[{"href":"https:\/\/www.guvi.in\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.guvi.in\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.guvi.in\/blog\/wp-json\/wp\/v2\/users\/60"}],"replies":[{"embeddable":true,"href":"https:\/\/www.guvi.in\/blog\/wp-json\/wp\/v2\/comments?post=71725"}],"version-history":[{"count":6,"href":"https:\/\/www.guvi.in\/blog\/wp-json\/wp\/v2\/posts\/71725\/revisions"}],"predecessor-version":[{"id":108275,"href":"https:\/\/www.guvi.in\/blog\/wp-json\/wp\/v2\/posts\/71725\/revisions\/108275"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.guvi.in\/blog\/wp-json\/wp\/v2\/
media\/71810"}],"wp:attachment":[{"href":"https:\/\/www.guvi.in\/blog\/wp-json\/wp\/v2\/media?parent=71725"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.guvi.in\/blog\/wp-json\/wp\/v2\/categories?post=71725"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.guvi.in\/blog\/wp-json\/wp\/v2\/tags?post=71725"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}