{"id":317126,"date":"2021-01-28T15:03:29","date_gmt":"2021-01-28T15:03:29","guid":{"rendered":"http:\/\/savepearlharbor.com\/?p=317126"},"modified":"-0001-11-30T00:00:00","modified_gmt":"-0001-11-29T21:00:00","slug":"","status":"publish","type":"post","link":"https:\/\/savepearlharbor.com\/?p=317126","title":{"rendered":"LIVENESS DETECTION \u2014 \u043f\u0440\u043e\u0432\u0435\u0440\u043a\u0430 \u0438\u0434\u0435\u043d\u0442\u0438\u0444\u0438\u043a\u0430\u0442\u043e\u0440\u0430 \u043d\u0430 \u043f\u0440\u0438\u043d\u0430\u0434\u043b\u0435\u0436\u043d\u043e\u0441\u0442\u044c \u00ab\u0436\u0438\u0432\u043e\u043c\u0443\u00bb \u043f\u043e\u043b\u044c\u0437\u043e\u0432\u0430\u0442\u0435\u043b\u044e"},"content":{"rendered":"\n<div class=\"post__text post__text_v2\" id=\"post-content-body\">\n<p>\u0422\u0435\u0445\u043d\u043e\u043b\u043e\u0433\u0438\u0435\u0439 \u0440\u0430\u0441\u043f\u043e\u0437\u043d\u0430\u0432\u0430\u043d\u0438\u044f \u043b\u0438\u0446 \u0443\u0436\u0435 \u043d\u0438\u043a\u043e\u0433\u043e \u043d\u0435 \u0443\u0434\u0438\u0432\u0438\u0442\u044c. \u041a\u0440\u0443\u043f\u043d\u044b\u0435 \u043a\u043e\u043c\u043f\u0430\u043d\u0438\u0438 \u0430\u043a\u0442\u0438\u0432\u043d\u043e \u0432\u043d\u0435\u0434\u0440\u044f\u044e\u0442 \u044d\u0442\u0443 \u0442\u0435\u0445\u043d\u043e\u043b\u043e\u0433\u0438\u044e \u0432 \u0441\u0432\u043e\u0438 \u0441\u0435\u0440\u0432\u0438\u0441\u044b \u0438 \u043a\u043e\u043d\u0435\u0447\u043d\u043e, \u043c\u043e\u0448\u0435\u043d\u043d\u0438\u043a\u0438 \u043f\u044b\u0442\u0430\u044e\u0442\u0441\u044f \u0438\u0441\u043f\u043e\u043b\u044c\u0437\u043e\u0432\u0430\u0442\u044c \u0440\u0430\u0437\u043d\u044b\u0435 \u0441\u043f\u043e\u0441\u043e\u0431\u044b, \u0432 \u0442\u043e\u043c \u0447\u0438\u0441\u043b\u0435 \u043f\u043e\u0434\u043c\u0435\u043d\u0443 \u0438\u0434\u0435\u043d\u0442\u0438\u0444\u0438\u043a\u0430\u0442\u043e\u0440\u0430 \u043b\u0438\u0446\u0430 \u0441 \u043f\u043e\u043c\u043e\u0449\u044c\u044e \u043c\u0430\u0441\u043a\u0438, \u0444\u043e\u0442\u043e \u0438\u043b\u0438 \u0437\u0430\u043f\u0438\u0441\u0438 \u0434\u043b\u044f \u043e\u0441\u0443\u0449\u0435\u0441\u0442\u0432\u043b\u0435\u043d\u0438\u044f \u0441\u0432\u043e\u0438\u0445 \u043f\u0440\u0435\u0441\u0442\u0443\u043f\u043d\u044b\u0445 \u0434\u0435\u0439\u0441\u0442\u0432\u0438\u0439. 
Such an attack is called spoofing.

We would like to introduce you to liveness detection, a technology whose job is to check that the presented identifier belongs to a "live" user.

The dataset can be downloaded [here](https://yadi.sk/d/fsYqrmQ7kgwb_w?w=1).

For training, the dataset contains four subclasses:

- real — a "live" face
- replay — frames captured from a video
- printed — a printed photograph
- 2dmask — a worn 2D mask

![Examples of the four subclasses](https://habrastorage.org/getpro/habr/upload_files/54b/972/7dd/54b9727ddd834a728503fdb5ed3324c4.png)

Each sample is represented by a sequence of 5 images.
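The training code further below relies on an `AntispoofDataset` class that the article itself never lists. As a minimal sketch (assuming each sample directory holds the five frames of one sequence, and that items are the `{'path': ..., 'label': ...}` dicts built in `train()`), it could look like this:

```python
import os

import cv2
import torch
from torch.utils.data import Dataset


class AntispoofDataset(Dataset):
    """Hypothetical loader: one item = a sequence of 5 frames plus a 0/1 label."""

    def __init__(self, paths, transform=None):
        self.paths = paths          # list of {'path': dir, 'label': int}
        self.transform = transform

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        item = self.paths[idx]
        frames = []
        for name in sorted(os.listdir(item['path']))[:5]:
            image = cv2.imread(os.path.join(item['path'], name))
            image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
            if self.transform is not None:
                image = self.transform(image)  # ToPILImage accepts an ndarray
            frames.append(image)
        # Shape (5, 3, H, W): the sequence dimension comes first
        return torch.stack(frames), item['label']
```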
### Building the model

To classify image sequences as belonging to a "live" user or not, we will train a neural network using the pytorch framework.

The solution works with the sequence of images as a whole rather than with each image separately. We run a small pretrained ResNet18 over every image in the sequence, stack the resulting feature vectors, apply a 1D convolution, and finish with a fully connected layer that outputs a single class logit.

Our architecture thus looks as follows:

```python
import torch
import torch.nn as nn
import torchvision


class Empty(nn.Module):
    """Identity module used to strip off ResNet18's classification head."""

    def forward(self, x):
        return x


class SpoofModel(nn.Module):
    def __init__(self):
        super(SpoofModel, self).__init__()
        # Pretrained ResNet18 as a per-frame encoder; replacing fc with an
        # identity leaves the 512-dimensional feature vector as its output.
        self.encoder = torchvision.models.resnet18(pretrained=True)
        self.encoder.fc = Empty()
        # 1D convolution over the 5 stacked frame features: with kernel 3,
        # stride 2, padding 1 each 512-dim feature vector is reduced to
        # floor((512 + 2 - 3) / 2) + 1 = 256 values.
        self.conv1d = nn.Conv1d(
            in_channels=5,
            out_channels=1,
            kernel_size=3,
            stride=2,
            padding=1)
        self.fc = nn.Linear(in_features=256, out_features=1)

    def forward(self, x):
        # x has shape (batch, 5, 3, H, W): encode each frame independently
        vectors = []
        for i in range(0, x.shape[1]):
            v = self.encoder(x[:, i])
            v = v.reshape(v.size(0), -1)
            vectors.append(v)
        vectors = torch.stack(vectors)        # (5, batch, 512)
        vectors = vectors.permute((1, 0, 2))  # (batch, 5, 512)
        vectors = self.conv1d(vectors)        # (batch, 1, 256)
        x = self.fc(vectors)                  # (batch, 1, 1)
        return x
```
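As a quick sanity check (not part of the original article), we can confirm the shapes: a batch of sequences of five 3×224×224 frames should produce one logit per sequence.

```python
model = SpoofModel()
x = torch.randn(2, 5, 3, 224, 224)  # batch of 2 sequences, 5 frames each
print(model(x).view(-1).shape)      # torch.Size([2])
```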
As an example we will train the model for 5 epochs with a batch size of 64, which takes roughly an hour on a single 2080 Ti, validation included.

On validation we track three metrics: F1, accuracy, and F2 score.

The validation code:

```python
import torch
from sklearn.metrics import accuracy_score, f1_score, fbeta_score
from tqdm import tqdm


def eval_metrics(outputs, labels, threshold=0.5):
    # `outputs` are probabilities in [0, 1]
    preds = (outputs > threshold).astype(int)
    return {
        'f1': f1_score(y_true=labels, y_pred=preds, average='macro'),
        'accuracy': accuracy_score(y_true=labels, y_pred=preds),
        'fbeta 2': fbeta_score(y_true=labels, y_pred=preds, beta=2,
                               average='weighted'),
        'f1 weighted': f1_score(y_true=labels, y_pred=preds,
                                average='weighted')
    }


def validation(model, val_loader):
    model.eval()
    metrics = []
    batch_size = val_loader.batch_size
    tq = tqdm(total=len(val_loader) * batch_size, position=0, leave=True)
    with torch.no_grad():
        for inputs, labels in val_loader:
            inputs = inputs.cuda()
            labels = labels.cuda()
            # The model outputs logits (we train with BCEWithLogitsLoss),
            # so squash them through a sigmoid before thresholding at 0.5.
            outputs = torch.sigmoid(model(inputs).view(-1))
            tq.update(batch_size)
            metrics.append(eval_metrics(outputs.cpu().numpy(),
                                        labels.cpu().numpy()))
        metrics_mean = mean_metrics(metrics)
    tq.close()
    return metrics_mean
```
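The `mean_metrics` helper is not shown in the article; a minimal sketch that simply averages each metric across the validation batches might be:

```python
import numpy as np


def mean_metrics(metrics):
    # Average every metric over all validation batches.
    return {key: float(np.mean([m[key] for m in metrics]))
            for key in metrics[0]}
```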
As the optimizer we use SGD with a learning rate of 0.001, and BCEWithLogitsLoss as the loss.

We won't use any exotic augmentations: only Resize and RandomHorizontalFlip are applied to the images during training.

The full code of the training function:

```python
import os

import numpy as np
import torch
import torchvision
from torch.utils.data import DataLoader
from tqdm import tqdm

# load_model and save_model are checkpointing helpers the article does not show.


def train():
    path_data = 'data/'
    checkpoints_path = 'model'
    num_epochs = 5
    batch_size = 64
    val_batch_size = 32
    lr = 0.001
    weight_decay = 0.0000001

    model = SpoofModel()
    model.train()
    model = model.cuda()

    # Resume from a checkpoint if one exists
    epoch = 0
    if os.path.exists(os.path.join(checkpoints_path, 'model_.pt')):
        epoch, model = load_model(model, os.path.join(checkpoints_path, 'model_.pt'))

    optimizer = torch.optim.SGD(model.parameters(), lr=lr, weight_decay=weight_decay)
    criterion = torch.nn.BCEWithLogitsLoss()

    # Collect sample paths; every non-'real' subclass gets the spoof label 1
    path_images = []
    for label in ['2dmask', 'real', 'printed', 'replay']:
        videos = os.listdir(os.path.join(path_data, label))
        for video in videos:
            path_images.append({
                'path': os.path.join(path_data, label, video),
                'label': int(label != 'real'),
            })

    # 70/30 train/validation split
    split_on = int(len(path_images) * 0.7)
    train_paths = path_images[:split_on]
    val_paths = path_images[split_on:]

    train_transform = torchvision.transforms.Compose([
        torchvision.transforms.ToPILImage(),
        torchvision.transforms.Resize(224),
        torchvision.transforms.RandomHorizontalFlip(),
        torchvision.transforms.ToTensor(),
        torchvision.transforms.Normalize(
            [0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])
    val_transform = torchvision.transforms.Compose([
        torchvision.transforms.ToPILImage(),
        torchvision.transforms.Resize(224),
        torchvision.transforms.ToTensor(),
        torchvision.transforms.Normalize(
            [0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])

    train_dataset = AntispoofDataset(paths=train_paths, transform=train_transform)
    train_loader = DataLoader(dataset=train_dataset,
                              batch_size=batch_size,
                              shuffle=True,
                              num_workers=8,
                              drop_last=True)
    val_dataset = AntispoofDataset(paths=val_paths, transform=val_transform)
    val_loader = DataLoader(dataset=val_dataset,
                            batch_size=val_batch_size,
                            shuffle=True,
                            num_workers=8,
                            drop_last=False)

    tq = None
    try:
        for epoch in range(epoch, num_epochs):
            # validation() switches the model to eval mode, so switch back
            model.train()
            tq = tqdm(total=len(train_loader) * batch_size, position=0, leave=True)
            tq.set_description(f'Epoch {epoch}, lr {lr}')
            losses = []
            for inputs, labels in train_loader:
                inputs = inputs.cuda()
                labels = labels.cuda()
                optimizer.zero_grad()
                outputs = model(inputs)
                loss = criterion(outputs.view(-1), labels.float())
                loss.backward()
                optimizer.step()
                tq.update(batch_size)
                losses.append(loss.item())
                # Show the running mean of the last 10 batch losses
                intermediate_mean_loss = np.mean(losses[-10:])
                tq.set_postfix(loss='{:.5f}'.format(intermediate_mean_loss))
            epoch_loss = np.mean(losses)
            epoch_metrics = validation(model, val_loader=val_loader)
            tq.close()
            print('\nLoss: {:.4f}\t Metrics: {}'.format(epoch_loss, epoch_metrics))
            save_model(model, epoch, checkpoints_path, name_postfix=f'e{epoch}')
    except KeyboardInterrupt:
        tq.close()
        print('\nCtrl+C, saving model...')
        save_model(model, epoch, checkpoints_path)
```

The final training run looks like this:

![Training progress](https://habrastorage.org/getpro/habr/upload_files/c80/0ab/fbd/c800abfbd69cf1d5186c93dbdda39f65.png)

As the model for evaluation we use the weights from epoch 3.

For evaluation we have 10 examples; let's build a confusion matrix (a sketch of how it can be computed follows).
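Since `save_model` and `load_model` aren't shown, the exact checkpoint format is an assumption. A sketch that loads the epoch-3 weights and computes the confusion matrix shown below might look like this (`test_loader` is a hypothetical DataLoader over the 10 test examples):

```python
import torch
from sklearn.metrics import confusion_matrix


def predict_sequences(model, loader, threshold=0.5):
    """Collect binary predictions (0 = real, 1 = spoof) for every sequence."""
    model.eval()
    y_true, y_pred = [], []
    with torch.no_grad():
        for inputs, labels in loader:
            probs = torch.sigmoid(model(inputs.cuda()).view(-1))
            y_pred.extend((probs > threshold).int().cpu().tolist())
            y_true.extend(labels.tolist())
    return y_true, y_pred


model = SpoofModel().cuda()
# Assumption: save_model() wrote the state dict under the key 'model'
# into a file named after the epoch postfix.
checkpoint = torch.load('model/model_e3.pt')
model.load_state_dict(checkpoint['model'])

y_true, y_pred = predict_sequences(model, test_loader)
print(confusion_matrix(y_true, y_pred))
```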
The resulting confusion matrix:

![Confusion matrix on the 10 test examples](https://habrastorage.org/getpro/habr/upload_files/dd9/bec/ba6/dd9becba6c734833b83c54b8b0d5d573.png)

On these 10 examples we reached 100% accuracy. Of course, a proper evaluation of the model would require considerably more data.

To sum up, in this article I presented one way to implement liveness detection by classifying image sequences with a neural network. The full code is available [here](https://colab.research.google.com/drive/1pSl8iIV3ccQE34Hue4nDOq8An-2ZJfBa?usp=sharing).

Link to the original article: https://habr.com/ru/post/539496/