jun94

122 Followers

Published in jun-devpBlog

Apr 16, 2021

[Review] 6. YOLO ver 2

1. Contribution: The paper proposes a method to jointly train YOLOv2 on the COCO detection dataset and the ImageNet classification dataset, which enables the model to predict object classes that have no labeled detection data. Instead of simply scaling up or ensembling multiple models, YOLOv2 focuses on simplifying the network while keeping its accuracy. Applied…

Paper Review

6 min read



Published in jun-devpBlog

Apr 8, 2021

[Paper Review] 5. YOLO ver 1

1. Contribution of YOLO: YOLO views the object detection task as detecting spatially separated bounding boxes and their corresponding class distributions. Further, unlike previous research, it uses a single neural network to predict bounding-box locations and the associated class probabilities simultaneously (one-stage, end-to-end). YOLO achieves a real-time evaluation speed (45 frames…

Paper Review

6 min read
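The excerpt above describes YOLO's single-network, one-stage prediction. Below is a minimal sketch (not the post's code) of how the output tensor is commonly interpreted, assuming the paper's S×S grid with B boxes per cell and C classes:

```python
import torch

# Assumed layout from the YOLOv1 paper: S x S grid, B boxes per cell, C classes,
# so the network output has shape (S, S, B*5 + C). This only decodes one cell
# for illustration; the real head and loss are defined in the paper.
S, B, C = 7, 2, 20
pred = torch.rand(S, S, B * 5 + C)   # stand-in for a network forward pass

cell = pred[3, 4]                    # one grid cell
boxes = cell[: B * 5].view(B, 5)     # each row: x, y, w, h, confidence
class_probs = cell[B * 5 :]          # conditional class probabilities

# class-specific confidence = box confidence * class probability
scores = boxes[:, 4:5] * class_probs # shape (B, C)
print(scores.shape)
```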



Published in jun-devpBlog

Apr 8, 2021

[Review] 4. Faster R-CNN

1. Improvement over Fast R-CNN by introducing a Region Proposal Network: The paper introduces a Region Proposal Network (RPN) to replace external region-proposal algorithms such as Selective Search and EdgeBoxes, which cost about 2 and 0.2 seconds per image, respectively. These region-proposal algorithms are slower by an order of magnitude because they run on the CPU, while the detection network runs on the GPU. The RPN takes…

Paper Review

5 min read
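As a quick illustration of the excerpt's point that the RPN lives inside the detector rather than being an external proposal step, here is a minimal sketch using torchvision's bundled Faster R-CNN (assuming torchvision ≥ 0.13; not the post's code):

```python
import torch
import torchvision

# torchvision's Faster R-CNN bundles the RPN and the detection head into one
# model, so no external Selective Search / EdgeBoxes step is needed.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights=None)
model.eval()

images = [torch.rand(3, 480, 640)]   # one dummy RGB image, values in [0, 1]
with torch.no_grad():
    outputs = model(images)          # list of dicts: boxes, labels, scores

print(outputs[0]["boxes"].shape, outputs[0]["scores"].shape)
```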



Published in jun-devpBlog

Apr 8, 2021

[Review] 3. Fast R-CNN

Paper Review

1 min read



Published in jun-devpBlog

Dec 28, 2020

[DL] 13. Convolution and Pooling Variants (Dilated Convolution, SPP, ASPP)

1. Dilated Convolution: Dilated convolution, also often called atrous convolution, was introduced at ICLR 2016. Its main idea is that, when performing convolution, the kernel looks not at directly adjacent pixels but at pixels a certain distance apart. This distance is called the dilation rate r, and the dilated convolution can…

Deep Learning

3 min read
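To make the dilation rate r concrete, here is a minimal PyTorch sketch (not the post's code), assuming r maps to nn.Conv2d's dilation argument:

```python
import torch
import torch.nn as nn

# Standard 3x3 convolution (r = 1) vs. dilated 3x3 convolution (r = 2).
x = torch.rand(1, 3, 64, 64)

standard = nn.Conv2d(3, 8, kernel_size=3, padding=1)              # r = 1
dilated  = nn.Conv2d(3, 8, kernel_size=3, padding=2, dilation=2)  # r = 2

# With padding = r * (k - 1) / 2 both keep the spatial size, but the dilated
# kernel covers a 5x5 receptive field while still using only 3x3 weights.
print(standard(x).shape, dilated(x).shape)
```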



Published in jun-devpBlog

Dec 28, 2020

[DL] 12. Upsampling: Unpooling and Transpose Convolution

1. Motivation: When we study neural network architectures based on an encoder and a decoder, we commonly observe that the network performs downsampling in the encoder and upsampling in the decoder, as illustrated in Fig 1. Common methods for downsampling are max pooling and strided convolution. …

Deep Learning

5 min read
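For reference, a minimal PyTorch sketch (not the post's code) of the two decoder-side upsampling methods the excerpt names, unpooling and transposed convolution:

```python
import torch
import torch.nn as nn

# Downsample with max pooling, then upsample back two ways.
x = torch.rand(1, 1, 8, 8)

pool   = nn.MaxPool2d(2, stride=2, return_indices=True)
unpool = nn.MaxUnpool2d(2, stride=2)

y, idx = pool(x)          # 8x8 -> 4x4, remembering where each maximum was
x_rec  = unpool(y, idx)   # 4x4 -> 8x8, maxima restored, zeros elsewhere

deconv = nn.ConvTranspose2d(1, 1, kernel_size=2, stride=2)
x_up   = deconv(y)        # learned upsampling, 4x4 -> 8x8

print(x_rec.shape, x_up.shape)
```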



Dec 23, 2020

[CV] 14. Image Alignment (Transformation Estimation)

1. Recap: Previously, we saw how local features are extracted from an image using scale-invariant methods such as Harris-Laplace and SIFT. After extracting local features individually from two images, the features can be paired by searching for the counterpart with the highest similarity. More specifically, the basic matching algorithm…

Computer Vision

7 min read
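As a rough sketch of the match-then-estimate pipeline the excerpt hints at, the following uses OpenCV's SIFT, a brute-force matcher, and RANSAC homography estimation (file names are placeholders; not the post's code):

```python
import cv2
import numpy as np

# Load two overlapping views as grayscale images (placeholder file names).
img1 = cv2.imread("left.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("right.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Pair each descriptor in img1 with its most similar descriptor in img2.
matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

# Estimate the transformation (here a homography) robustly with RANSAC.
src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
print(H)
```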



Published in jun-devpBlog

Dec 16, 2020

[CV] 13. Scale-Invariant Local Feature Extraction(3): SIFT

1. SIFT (Scale-Invariant Feature Transform): Another scale-invariant algorithm I want to address is SIFT. With SIFT, the locations of local feature points (interest points) are extracted from an image, and a vector representation of each interest point is generated, which can later be used for feature matching between images. As…

Computer Vision

10 min read
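A minimal OpenCV sketch (not the post's code) of extracting SIFT interest points and their descriptors, assuming OpenCV ≥ 4.4 for cv2.SIFT_create (file name is a placeholder):

```python
import cv2

# Detect SIFT keypoints and compute their descriptors in one call.
img = cv2.imread("image.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(img, None)

# Each keypoint stores its location, scale, and orientation; each descriptor
# is the 128-dimensional vector later used for matching between images.
print(len(keypoints), descriptors.shape)   # (N, 128)
```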



Published in jun-devpBlog

Dec 15, 2020

[CV] 12. Scale-Invariant Local Feature Extraction(2): Harris-Laplace

This article is the second part of the topic: scale-invariant local feature extraction. …

Computer Vision

4 min read



Published in jun-devpBlog

Dec 13, 2020

[CV] 11. Scale-Invariant Local Feature Extraction(1): Auto Scale Selection

1. Motivation: Why scale-invariant? Let’s recall what we studied about the local feature extractors Harris and Hessian in the last article. Harris and Hessian are strong corner detectors and are rotation-invariant. However, they are not scale-invariant, which is a crucial drawback for feature detection. …

Computer Vision

6 min read

