Browsing by Subject "image processing"
Now showing 1-4 of 4
Item: Automated Fovea Detection Based on Unsupervised Retinal Vessel Segmentation Method (IEEE, 2017-10)
Tavakoli, Meysam; Kelley, Patrick; Nazar, Mahdieh; Kalantari, Faraz; Physics, School of Science

Computer-assisted diagnosis systems can reduce workload and provide objective diagnoses to ophthalmologists. Feature extraction is the fundamental first step in automated screening systems. One such retinal feature is the fovea, a small fossa on the fundus that appears deep red or red-brown in color retinal images. In retinal images, the main vessels diverge from the optic nerve head and follow a specific course that can be modeled geometrically as a parabola, with a common vertex inside the optic nerve head and the fovea located along the axis of this parabola. Based on this assumption, the main retinal blood vessels are segmented and fitted to a parabolic model; the fovea can then be detected in the fundus image with respect to this core vascular structure. For the vessel segmentation, our algorithm addresses the image locally, where homogeneity of features is more likely to occur. The algorithm is composed of four steps: multi-overlapping windows, local Radon transform, vessel validation, and parabolic fitting. To extract the blood vessels, sub-vessels are first extracted in local windows. The high contrast between blood vessels and the image background causes the vessels to appear as peaks in the Radon space. The largest vessels, obtained with a high threshold on the Radon transform, determine the main course of the blood vessels, which, when fitted to a parabola, leads to the localization of the fovea. With an accurate fit, the fovea normally lies along the line joining the vertex and the focus, and the darkest region along this line is indicative of the fovea.
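The parabolic-fitting and fovea-localization steps described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes vessel centerline points have already been segmented, and the function names and the search range along the parabola's axis are illustrative choices of our own.

```python
import numpy as np

def fit_parabola(ys, xs):
    """Least-squares fit of x = a*y^2 + b*y + c to vessel centerline points.

    The vertex of the fitted parabola approximates the optic nerve head;
    the fovea is then searched for along the vertex-focus axis.
    """
    A = np.column_stack([ys**2, ys, np.ones_like(ys)])
    a, b, c = np.linalg.lstsq(A, xs, rcond=None)[0]
    y_v = -b / (2 * a)                 # vertex y-coordinate
    x_v = a * y_v**2 + b * y_v + c     # vertex x-coordinate
    focal = 1.0 / (4 * a)              # signed vertex-to-focus distance
    return (x_v, y_v), focal

def locate_fovea(image, vertex, focal, n_samples=50):
    """Return the darkest pixel along the vertex-focus axis (illustrative).

    `image` is a 2-D grayscale array; the fitted parabola opens along x,
    so we step in x from the vertex toward (and past) the focus.
    """
    x_v, y_v = vertex
    xs = np.linspace(x_v, x_v + 2.5 * focal, n_samples).astype(int)
    xs = xs[(xs >= 0) & (xs < image.shape[1])]
    y = int(round(y_v))
    intensities = image[y, xs]
    return xs[np.argmin(intensities)], y
```

Given a synthetic vessel course sampled from x = 0.1(y-10)^2 + 5, the fit recovers the vertex (5, 10) and focal distance 2.5, and the darkest pixel along that axis is returned as the fovea candidate.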
To evaluate our method, we used 220 fundus images from a rural database (MUMS-DB) and a public one (DRIVE). Among the 20 images of the public DRIVE database, we detected the fovea in 85% of them; among the 200 images of the MUMS-DB database, we detected the fovea correctly in 83% of them.

Item: CNN-based network has Network Anisotropy - work harder to learn rotated feature than non-rotated feature (IEEE, 2022-10)
Dale, Ashley S.; Qui, Mei; Christopher, Lauren; Krogg, Wen; William, Albert; Electrical and Computer Engineering, School of Engineering and Technology

Successful object identification and classification in a generic Convolutional Neural Network (CNN) depends on object orientation. We expect CNN-based architectures to work harder to learn a rotated version of a feature than to learn the same feature in its default orientation. We name this phenomenon "Network Anisotropy". A data set of 6000 RGB and grayscale images was created with rotated orientations of a feature predetermined and evenly distributed across four classes: 0°, 30°, 60°, and 90°. Four ResNet classifier architectures (ResNet-18, -34, -50, and -101) were trained, and the confidence scores were used to represent prediction accuracy. The results show that in all networks, training performance lags by several epochs for the 30° and 60° rotation predictions compared to the 0° and 90° rotations, indicating a quantifiable network anisotropy. Because 0° and 90° both lie along a single rectilinear axis that coincides with the convolutional kernel of the CNN, we expect the classifier to perform better on these two classes than on the 30° and 60° classes. This work confirms that CNN architectures may exhibit weaker performance based on feature orientation alone, independent of the feature distribution within the data set or the correlation of features within an image.

Item: Extracting the phase information from atomic memory by intensity correlation measurement (OSA, 2015-04)
Guo, Jinxian; Zhang, Kai; Chen, L. Q.; Yuan, Chun-Hua; Bian, Cheng-ling; Ou, Z. Y.; Zhang, Weiping; Department of Physics, School of Science

We experimentally demonstrate controlled storage and retrieval of optical phase information in a higher-order interference scheme based on the Raman process in 87Rb atomic vapor cells. An interference pattern is observed in an intensity correlation measurement between the write Stokes field and the delayed read Stokes field as the phase of the Raman write field is scanned. This result implies that the phase information of the Raman write field can be written into the atomic spin wave via the Raman process in a high-gain regime and subsequently read out via a spin-wave-enhanced Raman process, thus achieving optical storage of phase information. This technique should find applications in optical phase image storage, holography, and information processing.

Item: Multiresolution variance-based image fusion (2013-05)
Ragozzino, Matthew; Salama, Paul; Christopher, Lauren; Rizkalla, Maher E.; King, Brian

Multiresolution image fusion is an emerging area of research for military and commercial applications. While many methods for image fusion have been developed, improvements can still be made: in many cases, image fusion methods are tailored to specific applications and are limited as a result. To improve general-purpose image fusion, novel methods have been developed based on the wavelet transform and empirical variance. One particular novelty is the use of directional filtering in conjunction with wavelet transforms: instead of treating the vertical, horizontal, and diagonal sub-bands of a wavelet transform the same, each sub-band is handled independently by applying custom filter windows. Results of the new methods exhibit better performance across a wide range of images highlighting different situations.
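The variance-based fusion rule with directional sub-band handling can be sketched as below. This is a minimal sketch, not the thesis implementation: it assumes a single-level Haar wavelet (the abstract does not name the wavelet), and the per-sub-band window shapes (wide for horizontal detail, tall for vertical, square for diagonal) are illustrative stand-ins for the custom filter windows described.

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar transform: approximation plus (H, V, D) detail bands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0      # approximation
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0      # horizontal detail
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0      # vertical detail
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0      # diagonal detail
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Inverse of haar_dwt2 (exact reconstruction)."""
    a = np.empty((ll.shape[0], 2 * ll.shape[1]))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    out = np.empty((2 * a.shape[0], a.shape[1]))
    out[0::2, :], out[1::2, :] = a + d, a - d
    return out

def box_mean(x, wh, ww):
    """Mean over a (wh, ww) window via edge-padded shifts (small odd windows)."""
    p = np.pad(x, ((wh // 2,) * 2, (ww // 2,) * 2), mode="edge")
    acc = sum(p[i:i + x.shape[0], j:j + x.shape[1]]
              for i in range(wh) for j in range(ww))
    return acc / (wh * ww)

def local_variance(x, wh, ww):
    """Empirical variance of coefficients inside a sliding (wh, ww) window."""
    return box_mean(x**2, wh, ww) - box_mean(x, wh, ww) ** 2

def fuse(img_a, img_b):
    """Variance-based wavelet fusion with directional per-sub-band windows.

    Each detail coefficient is taken from whichever input has the larger
    local variance; the window shape follows the sub-band orientation.
    Window sizes here are illustrative, not from the thesis.
    """
    wins = {"lh": (1, 5), "hl": (5, 1), "hh": (3, 3)}
    A, B = haar_dwt2(img_a), haar_dwt2(img_b)
    fused = [(A[0] + B[0]) / 2.0]   # average the approximation bands
    for (a, b), (wh, ww) in zip(zip(A[1:], B[1:]), wins.values()):
        mask = local_variance(a, wh, ww) >= local_variance(b, wh, ww)
        fused.append(np.where(mask, a, b))
    return haar_idwt2(*fused)
```

Because the Haar pair reconstructs exactly, fusing an image with itself returns the original image, which makes a convenient sanity check for the transform and the selection rule.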