Prepare a COCO dataset with a specific subset of classes for semantic image segmentation

Create the directory structure below to hold the images and annotation files for the training and validation splits. You can create the folders by hand or with the short snippet shown after the tree.

.
├── annotations
│   ├── train
│   └── validation
└── images
    ├── train
    └── validation
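If you prefer not to create the folders by hand, a short snippet like the one below will do it (assuming you run it from the project root):

# Create the output folders for both splits.
import os

for split in ("train", "validation"):
    os.makedirs(f"annotations/{split}", exist_ok=True)
    os.makedirs(f"images/{split}", exist_ok=True)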

Install Dependency

!pip install -q git+https://github.com/philferriere/cocoapi.git#subdirectory=PythonAPI

Import Required Packages

from pycocotools.coco import COCO
import matplotlib.pyplot as plt
from tqdm import tqdm
import numpy as np
import shutil
import torch
import cv2

 

# Load the 2014 instance annotation files for the training and validation splits.
train_annotations = COCO("instances_train2014.json")
valid_annotations = COCO("instances_val2014.json")

# Collect the category IDs that belong to the "person" supercategory.
cat_ids = train_annotations.getCatIds(supNms=["person"])

# Gather every image ID that has at least one annotation in the selected
# categories; the set() call removes duplicate image IDs.
train_img_ids = []
for cat in cat_ids:
    train_img_ids.extend(train_annotations.getImgIds(catIds=cat))

train_img_ids = list(set(train_img_ids))
print(f"Number of training images: {len(train_img_ids)}")

# Repeat for the validation split.
valid_img_ids = []
for cat in cat_ids:
    valid_img_ids.extend(valid_annotations.getImgIds(catIds=cat))

valid_img_ids = list(set(valid_img_ids))
print(f"Number of validation images: {len(valid_img_ids)}")

Prepare Training Dataset

# Location of the raw train2014 images and the name of the output split folders.
root_path = 'train2014'
_type = 'train'

for img_id in tqdm(train_img_ids):

    # Load the image metadata and build the path to the source JPEG.
    img_data = train_annotations.loadImgs(img_id)
    files = [str(root_path + '/' + img["file_name"]) for img in img_data]

    # Fetch all annotations of the selected categories for this image.
    ann_ids = train_annotations.getAnnIds(
            imgIds=img_data[0]['id'],
            catIds=cat_ids,
            iscrowd=None
        )
    anns = train_annotations.loadAnns(ann_ids)

    # Merge the per-instance binary masks into a single label mask whose pixel
    # values are the COCO category IDs.
    mask = torch.LongTensor(np.max(np.stack([train_annotations.annToMask(ann) * ann["category_id"]
                                                 for ann in anns]), axis=0)).unsqueeze(0)

    # Save the mask as a PNG (cast to uint8 so OpenCV can write it) and copy
    # the image into the corresponding split folder.
    x_arr = mask.squeeze().cpu().detach().numpy().astype(np.uint8)
    temp_str = files[0].split('/')[-1].replace('.jpg', '.png')
    cv2.imwrite(f"annotations/{_type}/{temp_str}", x_arr)

    src = files[0]
    dest = f'images/{_type}'
    shutil.copy(src, dest)
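Before moving on, it is worth eyeballing one of the generated pairs. The snippet below uses the matplotlib import from above; the file name is just an example from the training split, so replace it with one that exists in your output folders:

# Quick visual check of one image/mask pair (example file name).
sample = "COCO_train2014_000000262145"
img = cv2.cvtColor(cv2.imread(f"images/train/{sample}.jpg"), cv2.COLOR_BGR2RGB)
msk = cv2.imread(f"annotations/train/{sample}.png", cv2.IMREAD_GRAYSCALE)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 5))
ax1.imshow(img); ax1.set_title("image"); ax1.axis("off")
ax2.imshow(msk); ax2.set_title("mask (category IDs)"); ax2.axis("off")
plt.show()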

Prepare Validation Dataset

# Same procedure for the validation split.
_type = 'validation'
root_path = 'val2014'

for img_id in tqdm(valid_img_ids):

    img_data = valid_annotations.loadImgs(img_id)
    files = [str(root_path + '/' + img["file_name"]) for img in img_data]

    ann_ids = valid_annotations.getAnnIds(
            imgIds=img_data[0]['id'],
            catIds=cat_ids,
            iscrowd=None
        )
    anns = valid_annotations.loadAnns(ann_ids)

    mask = torch.LongTensor(np.max(np.stack([valid_annotations.annToMask(ann) * ann["category_id"]
                                                 for ann in anns]), axis=0)).unsqueeze(0)

    # Cast to uint8 before writing, as above.
    x_arr = mask.squeeze().cpu().detach().numpy().astype(np.uint8)
    temp_str = files[0].split('/')[-1].replace('.jpg', '.png')
    cv2.imwrite(f"annotations/{_type}/{temp_str}", x_arr)

    src = files[0]
    dest = f'images/{_type}'
    shutil.copy(src, dest)
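The training and validation loops differ only in the COCO object, the image IDs and the paths, so if you prefer, they can be folded into one helper. The function below (export_split is a name introduced here, not part of the steps above) is just a sketch of that refactor:

# Possible refactor: one helper shared by both splits.
def export_split(coco, img_ids, root_path, split):
    for img_id in tqdm(img_ids):
        img_data = coco.loadImgs(img_id)
        file_path = root_path + '/' + img_data[0]["file_name"]

        ann_ids = coco.getAnnIds(imgIds=img_id, catIds=cat_ids, iscrowd=None)
        anns = coco.loadAnns(ann_ids)
        mask = np.max(np.stack([coco.annToMask(ann) * ann["category_id"]
                                for ann in anns]), axis=0).astype(np.uint8)

        mask_name = img_data[0]["file_name"].replace('.jpg', '.png')
        cv2.imwrite(f"annotations/{split}/{mask_name}", mask)
        shutil.copy(file_path, f'images/{split}')

export_split(train_annotations, train_img_ids, 'train2014', 'train')
export_split(valid_annotations, valid_img_ids, 'val2014', 'validation')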

After both loops finish, the resulting structure looks like this (one example file shown per split):

.
├── annotations
│   ├── train
│   │   └── COCO_train2014_000000262145.png
│   └── validation
│       └── COCO_val2014_000000262148.png
└── images
    ├── train
    │   └── COCO_train2014_000000262145.jpg
    └── validation
        └── COCO_val2014_000000262148.jpg
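Because each image and its mask share the same file stem, a segmentation data pipeline can pair them by name. As a rough sketch (the CocoSegFolder class is introduced here only for illustration, assuming a PyTorch-style Dataset):

# Minimal sketch of pairing images with their masks by file stem.
import os
from torch.utils.data import Dataset

class CocoSegFolder(Dataset):
    def __init__(self, split):
        self.img_dir = f"images/{split}"
        self.ann_dir = f"annotations/{split}"
        self.names = sorted(os.listdir(self.img_dir))

    def __len__(self):
        return len(self.names)

    def __getitem__(self, idx):
        name = self.names[idx]
        image = cv2.cvtColor(cv2.imread(f"{self.img_dir}/{name}"), cv2.COLOR_BGR2RGB)
        mask = cv2.imread(f"{self.ann_dir}/{name.replace('.jpg', '.png')}",
                          cv2.IMREAD_GRAYSCALE)
        return image, mask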
