Human Part Segmentation in Depth Images with Annotated Part Positions

Hynes, Andrew and Czarnuch, Stephen (2018) Human Part Segmentation in Depth Images with Annotated Part Positions. Sensors, 18 (6). ISSN 1424-8220

PDF - Published Version
Available under License Creative Commons Attribution Non-commercial.


Abstract

We present a method of segmenting human parts in depth images, given the image positions of the body parts. The goal is to facilitate per-pixel labelling of large datasets of human images, which are used for training and testing algorithms for pose estimation and automatic segmentation. A common technique in image segmentation is to represent an image as a two-dimensional grid graph, with one node for each pixel and edges between neighbouring pixels. We introduce a graph with distinct layers of nodes to model occlusion of the body by the arms. Once the graph is constructed, the annotated part positions are used as seeds for a standard interactive segmentation algorithm. Our method is evaluated on two public datasets containing depth images of humans from a frontal view. It produces a mean per-class accuracy of 93.55% on the first dataset, compared to 87.91% (random forest and graph cuts) and 90.31% (random forest and Markov random field). It also achieves a per-class accuracy of 90.60% on the second dataset. Future work can experiment with various methods for creating the graph layers to accurately model occlusion.
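To illustrate the grid-graph idea described in the abstract, here is a minimal sketch of seeded segmentation on a 4-connected pixel graph. It is not the authors' method: the layered occlusion graph is omitted, edge weights are simply absolute depth differences (an assumption), and the "interactive segmentation algorithm" is stood in for by a cheapest-path label propagation from the seed pixels.

```python
import heapq

def segment_from_seeds(depth, seeds):
    """Seeded segmentation on a 4-connected pixel grid graph.

    depth: 2D list of depth values (one graph node per pixel).
    seeds: dict mapping (row, col) -> part label, playing the role
           of the annotated part positions.
    Each pixel receives the label of the seed reachable by the
    cheapest path, where an edge between neighbouring pixels costs
    the absolute difference of their depth values (an illustrative
    choice, not the weighting used in the paper).
    """
    rows, cols = len(depth), len(depth[0])
    labels = [[None] * cols for _ in range(rows)]
    # Priority queue of (accumulated cost, row, col, label),
    # initialised with the seed pixels at zero cost.
    pq = [(0.0, r, c, lab) for (r, c), lab in seeds.items()]
    heapq.heapify(pq)
    while pq:
        cost, r, c, lab = heapq.heappop(pq)
        if labels[r][c] is not None:
            continue  # already claimed by a cheaper path
        labels[r][c] = lab
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and labels[nr][nc] is None:
                w = abs(depth[r][c] - depth[nr][nc])
                heapq.heappush(pq, (cost + w, nr, nc, lab))
    return labels

# Hypothetical toy example: two flat regions at different depths.
depth = [[1, 1, 9, 9],
         [1, 1, 9, 9]]
seeds = {(0, 0): "torso", (0, 3): "arm"}
labels = segment_from_seeds(depth, seeds)
```

On this toy input, the large depth jump between columns 1 and 2 makes it cheap for each seed's label to flood its own flat region, so the left half is labelled "torso" and the right half "arm".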

Item Type: Article
URI: http://research.library.mun.ca/id/eprint/13705
Item ID: 13705
Additional Information: Memorial University Open Access Author's Fund
Keywords: human parts, interactive image segmentation, occlusion, grid graph
Department(s): Engineering and Applied Science, Faculty of
Date: 11 June 2018
Date Type: Publication
