{"id":473733,"date":"2025-09-02T21:01:47","date_gmt":"2025-09-02T21:01:47","guid":{"rendered":"http:\/\/savepearlharbor.com\/?p=473733"},"modified":"-0001-11-30T00:00:00","modified_gmt":"-0001-11-29T21:00:00","slug":"","status":"publish","type":"post","link":"https:\/\/savepearlharbor.com\/?p=473733","title":{"rendered":"<span>Building a Resume Matcher with tRPC, NLP, and Vertex AI<\/span>"},"content":{"rendered":"<div><!--[--><!--]--><\/div>\n<div id=\"post-content-body\">\n<div>\n<div class=\"article-formatted-body article-formatted-body article-formatted-body_version-2\">\n<div xmlns=\"http:\/\/www.w3.org\/1999\/xhtml\">\n<figure class=\"full-width\"><img decoding=\"async\" src=\"https:\/\/habrastorage.org\/r\/w1560\/getpro\/habr\/upload_files\/aa1\/931\/7dc\/aa19317dc10183bc67c2667f02856095.png\" alt=\"Building a Resume Matcher with tRPC, NLP, and Vertex AI\" title=\"Building a Resume Matcher with tRPC, NLP, and Vertex AI\" width=\"1248\" height=\"832\" sizes=\"auto, (max-width: 780px) 100vw, 50vw\" srcset=\"https:\/\/habrastorage.org\/r\/w780\/getpro\/habr\/upload_files\/aa1\/931\/7dc\/aa19317dc10183bc67c2667f02856095.png 780w,&#10;       https:\/\/habrastorage.org\/r\/w1560\/getpro\/habr\/upload_files\/aa1\/931\/7dc\/aa19317dc10183bc67c2667f02856095.png 781w\" loading=\"lazy\" decode=\"async\"\/><\/p>\n<div><figcaption>Building a Resume Matcher with tRPC, NLP, and Vertex AI<\/figcaption><\/div>\n<\/figure>\n<p>I recently built a small resume matcher app in TypeScript that compares PDF resumes to job postings. I wanted a fast way to prototype an API, so I chose tRPC for the backend. tRPC is a TypeScript-first RPC framework that promises \u201cend-to-end typesafe APIs\u201d, meaning I could share types between client and server without writing OpenAPI schemas or GraphQL SDL. In practice that meant I could focus on writing logic instead of boilerplate. 
Unlike REST or GraphQL, tRPC doesn\u2019t expose a generic schema \u2013 it just exposes procedures (essentially functions) on the server that the client can call, sharing input\/output types directly.<\/p>\n<p>Why is that useful? In short, I was building an internal tool (an MVP) and I was already using TypeScript on both ends. tRPC\u2019s zero-build-step, type-safe model fit the bill. The official tRPC docs even tout automatic type-safety: if I change a server function\u2019s input or output, TypeScript will warn me on the client before I even send a request. That was a big win for catching bugs early. In contrast, with REST or GraphQL I\u2019d have to manually sync or generate schemas. On the flip side, I knew tRPC ties my API and client code closely together (it\u2019s not a language-agnostic API) \u2013 so it\u2019s best for \u201cTypeScript-first\u201d projects like this, not public cross-platform APIs.<\/p>\n<h3>Defining the tRPC Router and Input Validation<\/h3>\n<p>With tRPC set up, I wrote a simple router for the main operation: analyzing two uploaded PDF files (a CV and a job description). Using tRPC together with the <code>zod-form-data<\/code> package, I could validate file uploads easily. 
Here\u2019s a simplified version of the router code:<\/p>\n<pre><code class=\"typescript\">export const matchRouter = router({\n  analyzePdfs: baseProcedure\n    .input(zfd.formData({\n      vacancyPdf: zfd.file().refine(file =&gt; file.type === \"application\/pdf\", {\n        message: \"Only PDF files are allowed\",\n      }),\n      cvPdf: zfd.file().refine(file =&gt; file.type === \"application\/pdf\", {\n        message: \"Only PDF files are allowed\",\n      }),\n    }))\n    .mutation(async ({ input }) =&gt; {\n      \/\/ simplified: generate an id so the client can reference this analysis run\n      const matchRequestId = crypto.randomUUID();\n      const [cvText, vacancyText] = await Promise.all([\n        PDFService.extractText(input.cvPdf),\n        PDFService.extractText(input.vacancyPdf),\n      ]);\n      const result = await MatcherService.match(cvText, vacancyText);\n      return { matchRequestId, ...result };\n    }),\n});<\/code><\/pre>\n<p>Above, the\u00a0<code>analyzePdfs<\/code>\u00a0mutation takes a multipart form with two PDF files. The <code>zfd.file().refine(...)<\/code>\u00a0calls ensure each file is a PDF. Once the files are validated (in the full code a helper\u00a0<code>FileService<\/code>\u00a0handles storage), I use\u00a0<code>PDFService.extractText(...)<\/code>\u00a0to pull out the raw text from each PDF. Then I call\u00a0<code>MatcherService.match(cvText, vacancyText)<\/code>, which does the actual analysis. Because tRPC knows the input\/output types, my frontend gets fully typed results without me writing extra DTOs. This rapid setup and tight type safety saved a lot of time on the MVP.<\/p>\n<h3>Extracting Skills with Basic NLP<\/h3>\n<p>Once I had the plain text of the CV and job description, I needed to extract meaningful keywords or skills from them. I kept it simple: I used a combination of\u00a0<em>natural<\/em>\u00a0(for tokenization),\u00a0<em>compromise<\/em>\u00a0(for part-of-speech tagging, e.g. nouns), and a stopword filter. 
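<\/p>\n<p>The tokenize-and-filter pass that <em>natural<\/em> and the stopword list provide can be sketched without any dependencies (a rough illustration of the idea \u2013 the helper below is not the project\u2019s actual code):<\/p>

```typescript
// Dependency-free sketch of the tokenize-and-filter step
// (the real service uses the `natural` package plus a stopword list).
const STOPWORDS = new Set([
  'the', 'and', 'a', 'an', 'of', 'to', 'in', 'with', 'for', 'on', 'is', 'are',
]);

function tokenize(text: string): string[] {
  // Split on anything that is not a letter, digit, dot, +, #, or hyphen,
  // so tokens like 'node.js' or 'c++' survive intact.
  return text
    .toLowerCase()
    .split(/[^a-z0-9.+#-]+/)
    .filter(t => t.length > 1 && !STOPWORDS.has(t));
}
```

<p>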
For example, in\u00a0<code>MatcherService<\/code>\u00a0I have a helper like this:<\/p>\n<pre><code class=\"typescript\">private static extractSkills(text: string): Set&lt;string&gt; {\n  const doc = nlp(text); \/\/ `compromise` document\n  const nouns = doc.nouns().out(\"array\"); \/\/ nouns are often skills or keywords\n  \/\/ also pick up capitalized words (like frameworks or proper nouns)\n  const capitalizedWords = text.match(\/\\b[A-Z][a-zA-Z0-9.-]+\\b\/g) || [];\n  return new Set([...nouns, ...capitalizedWords].map(w =&gt; w.toLowerCase()));\n}<\/code><\/pre>\n<p>In plain terms, this code runs the text through\u00a0<em>compromise<\/em>\u00a0to grab nouns, regex-matches any capitalized word (which often catches tech names), then lowercases and merges the two lists; the\u00a0<code>Set<\/code>\u00a0removes duplicates. That gives me a set of candidate \u201cskills\u201d from each document. This basic keyword extraction isn\u2019t fancy ML \u2013 it\u2019s just a heuristic \u2013 but it\u2019s fast and served well for highlighting matching skills. (It reminds me of some old-school resume parsers.) No external model needed yet, just some handy libraries and a bit of regex in a shared service class.<\/p>\n<h3>Integrating Vertex AI (Gemini 1.5 Flash) for Matching<\/h3>\n<p>For the core matching logic, I decided to call out to Google\u2019s Vertex AI with the Gemini 1.5 Flash model. This was mostly about getting a structured comparison result (like a score and suggestions) without me implementing complex NLP logic. In\u00a0<code>MatcherService<\/code>, after cleaning up the text and extracting skills, I build a prompt and fetch from Vertex. For example:<\/p>\n<pre><code class=\"typescript\">const aiPrompt = `\nAnalyze the job description and candidate's CV to provide a structured evaluation.\n\nJob Description:\n${cleanedJD}\n\nCandidate CV:\n${cleanedCV}\n\nProvide a structured analysis in JSON format with fields \"score\", \"strengths\", and \"suggestions\". 
`;\n\nconst response = await fetch(process.env.AI_API_ENDPOINT!, {\n  method: \"POST\",\n  headers: {\n    \/\/ Vertex AI expects an OAuth bearer token\n    Authorization: `Bearer ${process.env.AI_API_TOKEN}`,\n    \"Content-Type\": \"application\/json\",\n  },\n  body: JSON.stringify({\n    contents: [\n      { role: \"user\", parts: [{ text: aiPrompt }] }\n    ]\n  })\n});\n\nif (!response.ok) {\n  \/\/ fetch does not throw on HTTP errors, so check the status explicitly\n  throw new AIServiceError(`AI request failed with status ${response.status}`);\n}\n\nconst data = await response.json();\nif (!data?.candidates?.[0]?.content?.parts?.[0]?.text) {\n  throw new AIServiceError('Invalid AI response format');\n}\nconst rawResponse = data.candidates[0].content.parts[0].text;\n\/\/ Then parse rawResponse as JSON for score, strengths, suggestions...<\/code><\/pre>\n<p>Here I\u2019m using\u00a0<code>fetch<\/code>\u00a0to POST to a Vertex AI endpoint (configured in\u00a0<code>AI_API_ENDPOINT<\/code>), passing a user prompt in the request body. The prompt tells the model to compare the job description and CV and output a JSON with a match score, strengths, etc. I then parse the JSON text out of\u00a0<code>data.candidates[0].content.parts[0].text<\/code>. This approach was super helpful \u2013 Gemini churned out a result without me writing a ranking algorithm. It feels like treating the model as a black-box comparator. Of course, this means I\u2019m trusting the AI, and sometimes the output needed cleaning or validation. But overall, embedding Gemini in the service let me focus on UI and data flow, not on LLM prompting. (I did have to handle some errors and rate-limiting around the call.)<\/p>\n<h3>Why tRPC Was a Good Fit (and Its Trade-Offs)<\/h3>\n<p>Using tRPC definitely sped up development. With no API schemas to write, I could spin up the endpoint in minutes. Full-stack TypeScript means the router code I wrote above is shared with the client as a type (tRPC infers the client\u2019s types from the router, with no code generation), so I get compile-time checks. 
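<\/p>\n<p>The mechanism behind those compile-time checks is ordinary TypeScript inference, which can be illustrated without tRPC itself (a minimal sketch \u2013 the names here are illustrative, not the app\u2019s real code):<\/p>

```typescript
// The 'API' is just an object of typed async functions; the caller's result
// type is inferred from the server-side definition, with no schema file or
// code generation in between. This is the idea tRPC builds on.
const api = {
  analyze: async (cvText: string, vacancyText: string) => ({
    score: 0.5, // stand-in for the real matcher output
    strengths: ['typescript'],
  }),
};

// Derived, not hand-written: if the server renames `score`,
// every client access of `.score` fails to compile.
type AnalyzeResult = Awaited<ReturnType<typeof api.analyze>>;

async function showMatch(): Promise<number> {
  const res: AnalyzeResult = await api.analyze('cv text', 'job text');
  return res.score; // typed as number
}
```

<p>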
In practice, when I changed the Zod validation or the return shape, my React UI immediately failed to compile until I adjusted the UI types. This \u201cautocompletion\u201d feel is exactly what the tRPC site promises. And because tRPC has essentially zero boilerplate (no controller classes, no code-gen), the code stayed concise.<\/p>\n<p>On the other hand, I\u2019m aware of tRPC\u2019s limits. It ties my frontend directly to this server implementation, so if I ever needed a public REST or mobile client, I\u2019d have to reconsider. I also had to think about caching and rate limits myself (tRPC doesn\u2019t do caching out of the box like GraphQL might). The Directus blog hit the nail on the head: tRPC is great for internal, TypeScript-heavy tools, but it \u201climits your options\u201d if you need broad compatibility. For this project \u2013 essentially an internal demo \u2013 those trade-offs were acceptable. I even implemented a simple rate limiter middleware just in case my Vertex AI calls overwhelmed the quota.<\/p>\n<h3>Lessons Learned<\/h3>\n<p>Building this project with modern TypeScript APIs was pretty enjoyable. I got end-to-end typing (client knows exactly what\u00a0<code>{ score: number }<\/code>\u00a0shape comes back) and no separate client library to maintain. The code feels very \u201cSDK-like\u201d, just calling functions on\u00a0<code>matchRouter<\/code>\u00a0as if it were local code. On the NLP side, I learned that even simple heuristics (nouns + capitalized words) can do a passable job of keyword extraction in a pinch. And finally, integrating Vertex AI reminded me that a lot of the \u201cAI magic\u201d can be outsourced with a well-crafted prompt.<\/p>\n<p>All that said, nothing is a silver bullet. If I had more time, I\u2019d refine error handling around the AI service and maybe add caching of results (since PDF-to-text and AI calls are expensive). 
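<\/p>\n<p>A minimal version of that result cache could key on a hash of the two extracted texts (an illustrative, in-memory sketch \u2013 a real deployment would probably want TTL eviction or an external store like Redis):<\/p>

```typescript
import { createHash } from 'node:crypto';

type MatchResult = { score: number; strengths: string[]; suggestions: string[] };

const cache = new Map<string, MatchResult>();

// Key on both documents so any change to either text misses the cache.
function cacheKey(cvText: string, vacancyText: string): string {
  return createHash('sha256').update(cvText).update('||').update(vacancyText).digest('hex');
}

async function matchWithCache(
  cvText: string,
  vacancyText: string,
  compute: (cv: string, jd: string) => Promise<MatchResult>,
): Promise<MatchResult> {
  const key = cacheKey(cvText, vacancyText);
  const hit = cache.get(key);
  if (hit) return hit; // skip the expensive PDF/AI round-trip
  const result = await compute(cvText, vacancyText);
  cache.set(key, result);
  return result;
}
```

<p>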
And if this app grew beyond a quick demo, I might swap tRPC for a more conventional REST\/GraphQL API if I needed a public interface. For now though, tRPC gave me exactly what I needed: a fast MVP with end-to-end typesafety and minimal ceremony.<\/p>\n<p>You can find the full code for this project here:\u00a0<a href=\"https:\/\/github.com\/Kapustin2000\/wolf-cv-matcher-technical-task-trpc\" rel=\"noopener noreferrer nofollow\">GitHub Repository<\/a>.<\/p>\n<p><strong>Sources:<\/strong>\u00a0I leaned on several resources while exploring this setup. The tRPC website calls out the \u201cmove fast, break nothing\u201d end-to-end TS approach, and blog posts compare how tRPC fits among REST\/GraphQL, noting its TypeScript-first advantages and constraints.<\/p>\n<ol>\n<li>\n<p><a href=\"https:\/\/trpc.io\" rel=\"noopener noreferrer nofollow\">tRPC Official Site<\/a> \u2013\u00a0<em>Move Fast and Break Nothing. End-to-end typesafe APIs made easy.<\/em>\u00a0(Shows tRPC\u2019s focus on full-stack TypeScript and type safety).<\/p>\n<\/li>\n<li>\n<p>Viljami Kuosmanen,\u00a0<a href=\"https:\/\/dev.to\/anttiviljami\/comparing-rest-graphql-trpc-12n8\" rel=\"noopener noreferrer nofollow\"><em>Comparing REST, GraphQL &amp; tRPC<\/em><\/a>\u00a0(dev.to, Oct 2023) \u2013 Discusses how tRPC exposes RPC-style functions and shares types instead of a generic schema.<\/p>\n<\/li>\n<li>\n<p>Bryant Gillespie,\u00a0<a href=\"https:\/\/directus.io\/blog\/rest-graphql-tprc\" rel=\"noopener noreferrer nofollow\"><em>REST vs. GraphQL vs. 
tRPC<\/em><\/a>\u00a0(Directus blog, Feb 2025) \u2013 Covers tRPC\u2019s strengths (minimal boilerplate, type safety) and trade-offs (TypeScript-only, limited API reach).<\/p>\n<\/li>\n<\/ol>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<p>Link to the original article: <a href=\"https:\/\/habr.com\/ru\/articles\/943236\/\">https:\/\/habr.com\/ru\/articles\/943236\/<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>I recently built a small resume matcher app in TypeScript that compares PDF resumes to job postings. I wanted a fast way to prototype an API, so I chose tRPC for the backend. [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[],"tags":[],"class_list":["post-473733","post","type-post","status-publish","format-standard","hentry"],"_links":{"self":[{"href":"https:\/\/savepearlharbor.com\/index.php?rest_route=\/wp\/v2\/posts\/473733","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/savepearlharbor.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/savepearlharbor.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/savepearlharbor.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/savepearlharbor.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=473733"}],"version-history":[{"count":0,"href":"https:\/\/savepearlharbor.com\/index.php?rest_route=\/wp\/v2\/posts\/473733\/revisions"}],"wp:attachment":[{"href":"https:\/\/savepearlharbor.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=473733"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/savepearlharbor.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=473733"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/savepearlharbor.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=473733"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}