Moving Objects Segmentation and Background Estimation

In this vertical, we push the boundaries of visual perception to enhance moving object detection and segmentation, as well as background estimation, for improved surveillance and situational awareness.

a. Object Detection Using AdaBoost

In this study, an AdaBoost-based learning algorithm is employed to detect low-level objects, such as edge-corners, that lack a fixed geometric structure. Unlike human faces, edge-corners can appear in arbitrary orientations, which makes them challenging to learn due to the absence of common geometric features.
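As a minimal sketch of the boosting idea (not the study's implementation), the snippet below trains an AdaBoost classifier over decision stumps on synthetic stand-in features; the descriptors are random placeholders for real gradient-based patch features.

```python
# Illustrative AdaBoost sketch on hypothetical 16-D patch descriptors.
# The data here is synthetic; a real detector would use image features.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)
X_pos = rng.normal(loc=1.0, size=(200, 16))    # "edge-corner" patches (dummy)
X_neg = rng.normal(loc=-1.0, size=(200, 16))   # background patches (dummy)
X = np.vstack([X_pos, X_neg])
y = np.hstack([np.ones(200), np.zeros(200)])

# Default weak learner is a depth-1 decision tree (a stump), the classic
# choice for boosted detectors.
clf = AdaBoostClassifier(n_estimators=50)
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```

Each round reweights the samples the previous stumps misclassified, so the ensemble gradually concentrates on the hard patches.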

Related Publications

b. Unsupervised Moving Object Segmentation Using Background Subtraction

In this study, the aim is to segment moving objects using background subtraction. Challenges such as illumination variations, dynamic backgrounds, camouflage, and bootstrapping scenes are effectively handled. The algorithms combine multiple adversarial regularizations with a conventional least squares loss to improve segmentation.
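For intuition, the classical baseline that the adversarial methods improve upon can be sketched as a temporal-median background model with per-pixel thresholding; the frames and threshold below are synthetic assumptions, not the study's setup.

```python
# Baseline background subtraction sketch: temporal median + thresholding.
import numpy as np

rng = np.random.default_rng(1)
H, W, T = 32, 32, 20
frames = rng.normal(100, 2, size=(T, H, W))   # static background plus noise
frames[-1, 10:20, 10:20] += 80                # a bright "moving object" in the last frame

background = np.median(frames[:-1], axis=0)   # background estimate from history
diff = np.abs(frames[-1] - background)
mask = diff > 25                              # foreground where the difference is large

print("foreground pixels:", int(mask.sum()))  # roughly the 10x10 object, ~100 pixels
```

This simple model fails exactly where the study focuses: dynamic backgrounds and camouflage push the per-pixel difference below any fixed threshold.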

Related Publications

c. Moving Objects Segmentation Using Generative Adversarial Modeling

In this study, the aim is to develop generative adversarial frameworks for Moving Objects Segmentation (MOS) that address challenging real-world conditions, including varying illumination, camouflage, dynamic backgrounds, shadows, weather variations, and camera jitter. The approach involves jointly training a classifier discriminator, a representation learning network, and a generator to handle MOS in diverse scenarios.
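The least-squares adversarial objective used in this family of methods can be written down compactly; the sketch below computes the standard LSGAN losses on dummy discriminator scores (the arrays are placeholders, not model outputs).

```python
# LSGAN objective sketch: the discriminator regresses real samples toward
# label b=1 and fakes toward a=0; the generator pushes fake scores toward c=1.
import numpy as np

def lsgan_d_loss(d_real, d_fake, a=0.0, b=1.0):
    """Discriminator loss: 0.5*E[(D(x)-b)^2] + 0.5*E[(D(G(z))-a)^2]."""
    return 0.5 * np.mean((d_real - b) ** 2) + 0.5 * np.mean((d_fake - a) ** 2)

def lsgan_g_loss(d_fake, c=1.0):
    """Generator loss: 0.5*E[(D(G(z))-c)^2]."""
    return 0.5 * np.mean((d_fake - c) ** 2)

d_real = np.array([0.9, 0.8, 0.95])   # dummy scores on real segmentation masks
d_fake = np.array([0.2, 0.1, 0.3])    # dummy scores on generated masks
print("D loss:", lsgan_d_loss(d_real, d_fake))
print("G loss:", lsgan_g_loss(d_fake))
```

Unlike the original cross-entropy GAN loss, the quadratic penalty keeps gradients alive even for fakes the discriminator already rejects confidently.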

Related Publications

d. Background/Foreground Separation: Guided Attention-based Adversarial Modeling

In this study, we explore Guided Attention-based Adversarial Models (GAAMs), which offer an advanced solution for background-foreground separation and appearance generation in computer vision applications. In contrast to existing methods such as Robust Subspace Learning (RSL), GAAMs use a deep neural network to achieve more accurate results, particularly in challenging conditions such as bad weather, illumination variations, and occlusion. GAAMs efficiently extract pixel-level boundaries through an attention map, which guides the generator network for enhanced foreground object segmentation.
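The gating role of the attention map can be illustrated in a few lines; here the map is a hand-made array standing in for a network output, so the values and shapes are assumptions for illustration only.

```python
# Attention-gated separation sketch: a soft attention map splits a frame into
# foreground and background, and thresholding it yields a hard segmentation.
import numpy as np

H, W = 8, 8
attention = np.zeros((H, W))
attention[2:6, 2:6] = 0.9            # high attention over the object region (dummy)

frame = np.full((H, W), 50.0)
frame[2:6, 2:6] = 200.0              # bright foreground object (dummy)

foreground = attention * frame          # attention gates the foreground
background = (1.0 - attention) * frame  # complement keeps the background
mask = attention > 0.5                  # hard segmentation from the soft map

print("segmented pixels:", int(mask.sum()))   # 4x4 region -> 16
```

In the full model the same soft map additionally steers which regions the generator refines, rather than being thresholded once as here.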

Related Publications

e. Unsupervised Moving Object Detection in Complex Scenes Using Adversarial Regularizations

In this study, we explore GAN-based methods that excel in achieving complete detection of moving objects, crucial for applications like depth estimation and scene understanding in computer vision. The method is designed to generate occlusion-free results by conditioning on non-occluded pixels during training, addressing moving object detection in an unsupervised setting.
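Conditioning the loss on non-occluded pixels amounts to masking the reconstruction error; the toy arrays below are dummy data illustrating the mechanism, not the method's actual loss.

```python
# Masked reconstruction sketch: the error is computed only where pixels are
# visible, leaving the network free to hallucinate content in occluded areas.
import numpy as np

pred = np.array([[1.0, 2.0], [3.0, 4.0]])
target = np.array([[1.0, 5.0], [3.0, 9.0]])
visible = np.array([[True, False], [True, False]])   # occluded pixels excluded

masked_l1 = np.abs(pred - target)[visible].mean()
print(masked_l1)   # only the two visible positions contribute -> 0.0
```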

Related Publications

f. Dynamic Background Subtraction using Least Squares Adversarial Learning

Dynamic Background Subtraction (BS) is crucial in vision applications but faces challenges such as illumination variations and shadows. We address these issues using conditional least squares adversarial networks, employing an L1 loss and a perceptual loss during training.
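The combined generator objective described above can be sketched as a weighted sum of the three terms; the loss weights and the toy "feature extractor" below are illustrative assumptions (a real perceptual loss uses a pretrained network).

```python
# Combined objective sketch: least-squares adversarial term + L1 reconstruction
# + perceptual (feature-space) distance, on dummy data.
import numpy as np

def l1_loss(pred, target):
    return np.mean(np.abs(pred - target))

def perceptual_loss(pred, target, feat):
    """L1 distance in a feature space; `feat` stands in for a pretrained net."""
    return np.mean(np.abs(feat(pred) - feat(target)))

def generator_loss(d_fake, pred, target, feat, lam_l1=10.0, lam_perc=1.0):
    adv = 0.5 * np.mean((d_fake - 1.0) ** 2)   # least-squares adversarial term
    return adv + lam_l1 * l1_loss(pred, target) + lam_perc * perceptual_loss(pred, target, feat)

feat = lambda x: x ** 2 / 100.0                # toy stand-in for a feature map
pred = np.array([10.0, 20.0])
target = np.array([12.0, 18.0])
d_fake = np.array([0.4])
print(generator_loss(d_fake, pred, target, feat))
```

The L1 term anchors low-frequency content, while the perceptual term penalizes differences a pixel-wise loss misses, such as texture.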

Related Publications

g. CS-RPCA: Clustered Sparse RPCA for Moving Object Detection

Moving Object Detection (MOD) is crucial in computer vision, and RPCA is a promising solution; however, its performance degrades in complex scenes. To address this, we explore Clustered Sparse RPCA (CS-RPCA), which extracts multiple features and merges sparse subspaces.
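To make the RPCA decomposition concrete, here is a deliberately simplified alternating-thresholding scheme (not the CS-RPCA algorithm itself) that splits a data matrix into a low-rank background and a sparse foreground on synthetic data.

```python
# Toy RPCA sketch: M = L + S via alternating singular-value and entrywise
# soft-thresholding. Simplified for illustration; real solvers use an
# augmented-Lagrangian scheme.
import numpy as np

def soft(X, t):
    return np.sign(X) * np.maximum(np.abs(X) - t, 0.0)

def simple_rpca(M, lam=None, n_iter=100):
    if lam is None:
        lam = 1.0 / np.sqrt(max(M.shape))      # standard sparsity weight
    S = np.zeros_like(M)
    for _ in range(n_iter):
        # Low-rank update: singular-value soft-thresholding of M - S
        U, sig, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = U @ np.diag(soft(sig, 1.0)) @ Vt
        # Sparse update: entrywise soft-thresholding of the residual
        S = soft(M - L, lam)
    return L, S

rng = np.random.default_rng(2)
low_rank = np.outer(rng.normal(size=20), rng.normal(size=30))  # rank-1 "background"
sparse = np.zeros((20, 30))
sparse[5, 5] = 10.0                                            # one "moving object" pixel
L, S = simple_rpca(low_rank + sparse)
print("recovered sparse peak at:", np.unravel_index(np.abs(S).argmax(), S.shape))
```

In video terms, each column of M is a vectorized frame: the correlated background lands in L and the moving objects in S; CS-RPCA additionally enforces that the sparse support forms coherent clusters.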

Related Publications

h. Unsupervised Adversarial Learning for Dynamic Background Modeling

Dynamic Background Modeling (DBM) is vital for various computer vision applications. In this study, we explore end-to-end frameworks, based on GANs, that address DBM in an unsupervised manner. These frameworks adeptly handle challenges like illumination changes, camouflage, and intermittent motion, generating dynamic background information during testing.

Related Publications

i. Moving Object Detection in Complex Scenes Using Spatiotemporal Structured-Sparse RPCA

Moving object detection in complex scenes is commonly tackled using Robust Principal Component Analysis (RPCA). Our approach introduces a spatiotemporal structured-sparse RPCA algorithm that leverages spatial and temporal graph Laplacians derived from superpixel features, which helps preserve the spatiotemporal structure of moving objects.
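The graph Laplacian ingredient can be sketched directly: build an affinity graph over superpixel features and form L = D - W. The features below are random stand-ins, and the Gaussian-kernel bandwidth is an assumed value.

```python
# Graph Laplacian sketch from hypothetical superpixel features.
import numpy as np

rng = np.random.default_rng(3)
features = rng.normal(size=(6, 4))    # 6 superpixels, 4-D descriptors (dummy)

# Gaussian-kernel affinity between superpixel features (bandwidth sigma = 1)
sq_dists = np.sum((features[:, None, :] - features[None, :, :]) ** 2, axis=-1)
W = np.exp(-sq_dists / 2.0)
np.fill_diagonal(W, 0.0)              # no self-loops

D = np.diag(W.sum(axis=1))            # degree matrix
L = D - W                             # unnormalized graph Laplacian

# Sanity checks: rows sum to zero and L is positive semidefinite
print(np.allclose(L.sum(axis=1), 0.0))
print(np.all(np.linalg.eigvalsh(L) >= -1e-10))
```

Used as a regularizer, tr(SᵀLS) penalizes sparse components that differ across strongly connected superpixels, which is what keeps the recovered foreground spatially and temporally coherent.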

Related Publications

j. Unsupervised Deep Context Prediction for Background Estimation and Foreground Segmentation

In many applications, such as tracking and surveillance, background estimation is a crucial initial step. We present a unified approach based on GANs that explores context prediction networks. Hybrid GAN models are used for unsupervised visual feature learning, followed by semantic inpainting networks for texture enhancement. Additionally, we introduce solutions for arbitrary-region inpainting based on the center-region inpainting method and Poisson blending.
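The center-region inpainting setup can be sketched as mask construction alone (the generative network itself is out of scope here): the image center is hidden, and the network would be trained to predict it from the surrounding context. Sizes below are illustrative.

```python
# Center-region inpainting mask sketch: hide the central square of the image.
import numpy as np

H, W = 16, 16
image = np.arange(H * W, dtype=float).reshape(H, W)   # dummy image

mask = np.zeros((H, W), dtype=bool)
mask[H // 4: 3 * H // 4, W // 4: 3 * W // 4] = True   # central square to inpaint

context = image.copy()
context[mask] = 0.0                                    # the network sees only the context

print("masked fraction:", mask.mean())                 # 8x8 of 16x16 -> 0.25
```

For arbitrary foreground-shaped holes, the predicted center patch is adapted to the hole and blended into the surrounding background with Poisson blending to hide seams.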

Related Publications