Computer Vision - ACCV 2020
15th Asian Conference on Computer Vision, Kyoto, Japan, November 30 - December 4, 2020, Revised Selected Papers, Part IV
Edited by: Ishikawa, Hiroshi; Liu, Cheng-Lin; Pajdla, Tomas; Shi, Jianbo
- Paperback
Other customers were also interested in
- Computer Vision - ACCV 2020 (77,99 €)
- Computer Vision - ACCV 2020 (77,99 €)
- Computer Vision - ACCV 2020 (77,99 €)
- Computer Vision - ACCV 2020 (77,99 €)
- Computer Vision - ACCV 2020 (77,99 €)
- Computer Vision - ACCV 2020 Workshops (39,99 €)
- Computer Vision - ACCV 2022 (131,99 €)
The six-volume set LNCS 12622-12627 constitutes the proceedings of the 15th Asian Conference on Computer Vision, ACCV 2020, held in Kyoto, Japan, in November/December 2020. The conference was held virtually.
A total of 254 contributions were carefully reviewed and selected from 768 submissions during two rounds of reviewing and improvement. The papers focus on the following topics:
Part I: 3D computer vision; segmentation and grouping
Part II: low-level vision, image processing; motion and tracking
Part III: recognition and detection; optimization, statistical methods, and learning; robot vision
Part IV: deep learning for computer vision; generative models for computer vision
Part V: face, pose, action, and gesture; video analysis and event recognition; biomedical image analysis
Part VI: applications of computer vision; vision for X; datasets and performance analysis
Product details
- Lecture Notes in Computer Science 12625
- Publisher: Springer / Springer International Publishing / Springer, Berlin
- Publisher's item no.: 978-3-030-69537-8
- 1st ed. 2021
- Number of pages: 736
- Publication date: 25 February 2021
- Language: English
- Dimensions: 235 mm x 155 mm x 40 mm
- Weight: 1095 g
- ISBN-13: 9783030695378
- ISBN-10: 3030695379
- Item no.: 60929859
Deep Learning for Computer Vision
- In-sample Contrastive Learning and Consistent Attention for Weakly Supervised Object Localization
- Exploiting Transferable Knowledge for Fairness-aware Image Classification
- Introspective Learning by Distilling Knowledge from Online Self-explanation
- Hyperparameter-Free Out-of-Distribution Detection Using Cosine Similarity
- Meta-Learning with Context-Agnostic Initialisations
- Second Order enhanced Multi-glimpse Attention in Visual Question Answering
- Localize to Classify and Classify to Localize: Mutual Guidance in Object Detection
- Unified Density-Aware Image Dehazing and Object Detection in Real-World Hazy Scenes
- Part-aware Attention Network for Person Re-Identification
- Image Captioning through Image Transformer
- Feature Variance Ratio-Guided Channel Pruning for Deep Convolutional Network Acceleration
- Learn more, forget less: Cues from human brain
- Knowledge Transfer Graph for Deep Collaborative Learning
- Regularizing Meta-Learning via Gradient Dropout
- Vax-a-Net: Training-time Defence Against Adversarial Patch Attacks
- Towards Optimal Filter Pruning with Balanced Performance and Pruning Speed
- Contrastively Smoothed Class Alignment for Unsupervised Domain Adaptation
- Double Targeted Universal Adversarial Perturbations
- Adversarially Robust Deep Image Super-Resolution using Entropy Regularization
- Online Knowledge Distillation via Multi-branch Diversity Enhancement
- Rotation Equivariant Orientation Estimation for Omnidirectional Localization
- Contextual Semantic Interpretability
- Few-Shot Object Detection by Second-order Pooling
- Depth-Adapted CNN for RGB-D cameras
Generative Models for Computer Vision
- Over-exposure Correction via Exposure and Scene Information Disentanglement
- Novel-View Human Action Synthesis
- Augmentation Network for Generalised Zero-Shot Learning
- Local Facial Makeup Transfer via Disentangled Representation
- OpenGAN: Open Set Generative Adversarial Networks
- CPTNet: Cascade Pose Transform Network for Single Image Talking Head Animation
- TinyGAN: Distilling BigGAN for Conditional Image Generation
- A cost-effective method for improving and re-purposing large, pre-trained GANs by fine-tuning their class-embeddings
- RF-GAN: A Light and Reconfigurable Network for Unpaired Image-to-Image Translation
- GAN-based Noise Model for Denoising Real Images
- Emotional Landscape Image Generation Using Generative Adversarial Networks
- Feedback Recurrent Autoencoder for Video Compression
- MatchGAN: A Self-Supervised Semi-Supervised Conditional Generative Adversarial Network
- DeepSEE: Deep Disentangled Semantic Explorative Extreme Super-Resolution
- dpVAEs: Fixing Sample Generation for Regularized VAEs
- MagGAN: High-Resolution Face Attribute Editing with Mask-Guided Generative Adversarial Network
- EvolGAN: Evolutionary Generative Adversarial Networks
- Sequential View Synthesis with Transformer