Estimation of Defocus Blur in Virtual Environments Comparing Graph Cuts and Convolutional Neural Network

dc.contributor.advisor: Christopher, Lauren
dc.contributor.author: Chowdhury, Prodipto
dc.contributor.other: King, Brian
dc.contributor.other: Ben-Miled, Zina
dc.date.accessioned: 2018-12-05T21:40:37Z
dc.date.available: 2018-12-05T21:40:37Z
dc.date.issued: 2018-12
dc.degree.date: 2018
dc.degree.discipline: Electrical & Computer Engineering
dc.degree.grantor: Purdue University
dc.degree.level: M.S.E.C.E.
dc.description: Indiana University-Purdue University Indianapolis (IUPUI)
dc.description.abstract: Depth estimation is one of the most important problems in computer vision. It has attracted considerable attention because of its applications in many areas, such as robotics, virtual and augmented reality, and self-driving cars. Using the defocus blur of a camera lens is one method of depth estimation. In this thesis, we investigate this technique in virtual environments, and virtual datasets have been created for this purpose. We apply graph cuts and a convolutional neural network (DfD-Net) to estimate depth from defocus blur using a natural (Middlebury) dataset and a virtual (Maya) dataset. Graph cuts showed similar performance for both the natural and virtual datasets in terms of NMAE and NRMSE; with regard to SSIM, however, graph cuts performs 4% better on Middlebury than on Maya. We trained DfD-Net on the natural dataset, on the virtual dataset, and on a combination of both; the network trained on the virtual dataset performed best for both datasets. Comparing the two approaches, graph cuts outperforms DfD-Net by 7% in terms of SSIM for Middlebury images, while DfD-Net outperforms graph cuts by 2% for Maya images. With regard to NRMSE, graph cuts and DfD-Net show similar performance for Maya images; for Middlebury images, graph cuts is 1.8% better. The algorithms show no difference in performance in terms of NMAE. DfD-Net generates depth maps 500 times faster than graph cuts for Maya images and 200 times faster for Middlebury images.
dc.identifier.uri: https://hdl.handle.net/1805/17925
dc.identifier.uri: http://dx.doi.org/10.7912/C2/2484
dc.language.iso: en_US
dc.rights: Attribution-NonCommercial-ShareAlike 3.0 United States
dc.rights.uri: https://creativecommons.org/licenses/by-nc-sa/3.0/us
dc.subject: 3D
dc.subject: Graph cuts
dc.subject: Depth estimation
dc.subject: Deep learning
dc.subject: Virtual environments
dc.subject: Convolutional neural network
dc.subject: Defocus blur
dc.title: Estimation of Defocus Blur in Virtual Environments Comparing Graph Cuts and Convolutional Neural Network
dc.type: Thesis
Files

Original bundle
Name: 3. estimation-depth-defocus.pdf
Size: 5.93 MB
Format: Adobe Portable Document Format

License bundle
Name: license.txt
Size: 1.99 KB
Description: Item-specific license agreed upon to submission