Browsing by Author "Lee, John J."
Now showing 1 - 10 of 18
Item: Adaptive-optics Optical Coherence Tomography Processing Using a Graphics Processing Unit (Institute of Electrical and Electronics Engineers, 2014)
Shafer, Brandon A.; Kriske, Jeffery E.; Kocaoglu, Omer P.; Turner, Timothy L.; Liu, Zhuolin; Lee, John J.; Miller, Donald T.; Department of Engineering Technology, School of Engineering and Technology
Graphics processing units are increasingly being used for scientific computing because of their powerful parallel processing abilities and moderate price compared to supercomputers and computing grids. In this paper we use a general-purpose graphics processing unit to process adaptive-optics optical coherence tomography (AOOCT) images in real time. Increasing the processing speed of AOOCT is an essential step in moving this super-high-resolution technology closer to clinical viability.

Item: ASIC implemented MicroBlaze-based Coprocessor for Data Stream Management Systems (2020-05)
Balasubramanian, Linknath Surya; Lee, John J.; Christopher, Lauren A.; Rizkalla, Maher E.
The drastic increase in Internet usage demands the processing of data in real time with higher efficiency than ever before. The Symbiote Coprocessor Unit (SCU), developed by Dr. Pranav Vaidya, is a hardware accelerator with the potential to provide data processing speedups of up to 150x compared with traditional data stream processors. However, the SCU implementation is complex, fixed, and uses an outdated host interface, which limits future improvement. Mr. Tareq S. Alqaisi, an MSECE graduate from IUPUI, worked on curbing these limitations. In his architecture, he used a Xilinx MicroBlaze microcontroller to reduce the complexity of the SCU, along with a few other modifications. The objective of this study is to make the SCU suitable for mass production while reducing its power consumption and delay. To accomplish this, the execution unit of the SCU has been implemented as an application-specific integrated circuit, and modules such as the ACG/OCG, sequential comparator, and D-word multiplier/divider are integrated into the design. Furthermore, techniques such as operand isolation, buffer insertion, cell swapping, and cell resizing are applied to the design. As a result, the new design attains a dynamic power of 67.9435 µW, compared to 74.0012 µW before power optimization (along with a small increase in static power), and a clock period of 39.47 ns, as opposed to 52.26 ns before timing optimization.

Item: Building a Private LoRaWAN Platform (BEIESP, 2019)
Lee, John J.; Souryal, Youssef; Tam, Darren; Kim, Dongsoo; Kang, Kyubyung; Koo, Dan D.; Electrical and Computer Engineering, School of Engineering and Technology
LoRaWAN has been available for several years as one of the LPWAN technologies. At a minimum, a LoRaWAN deployment consists of end nodes, a gateway, a network server, and an application server. The servers have been exclusive products of commercial companies, and not many experimental or academic ones are available. Recently, one such software package has been developed; however, few fully functional academic platforms have been reported.
In this study, we implement a fully functional, private, independent LoRaWAN platform for academic research on LPWAN Internet of Things (IoT), and we demonstrate that our platform supports not only end-to-end LoRaWAN communication but also a graphical user interface on an embedded system with limited computing power.

Item: Emergency Evacuation Assistance (IEEE, 2020-01)
Lee, John J.; Koo, Dan; Tadesse, Dinaol; Jain, Atharv; Shettar, Sushmitha; Kim, Dongsoo; Electrical and Computer Engineering, School of Engineering and Technology
There have been more casualties than necessary due to a lack of intelligence in emergency evacuation mechanisms such as exit signs. Although large or complex buildings and facilities have many exit doors, in an emergency people may not be able to escape quickly enough because of a sudden loss of direction and difficulty finding safe routes to exit doors. Anyone who has been in such a situation would wish for smart escape-route assistance, or at least smart exit signs, that safely and quickly guide people to safety; that wish is what we try to make come true. In this paper, we propose a graph mapping scheme and a new safe-evacuation-route algorithm for emergency evacuation assistance, with the aid of the Internet of Things (IoT). The gist of our approach is to control the direction indicators on exit signs so that people are not routed through, or even toward, any area where fire or toxic gas is detected. Experiments performed with our methodology show that the proposed technology may be able to save more lives.
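The routing idea in the Emergency Evacuation Assistance item above can be illustrated with a minimal sketch: model the building as a graph, exclude any node where a sensor reports fire or toxic gas, and search for the nearest reachable exit. This is only an illustration of the general approach described in the abstract, not the authors' actual graph mapping scheme or algorithm; the building layout, sensor readings, and function names are assumptions for the example.

```python
# Illustrative sketch only: hazard-aware evacuation routing on a building graph.
# The layout, hazard set, and names below are assumptions, not the paper's scheme.
from collections import deque

def safe_route(graph, start, exits, hazardous):
    """Breadth-first search that never enters a node flagged as hazardous.

    graph      -- dict mapping node -> iterable of adjacent nodes
    start      -- node where the evacuee currently is
    exits      -- set of exit-door nodes
    hazardous  -- set of nodes where fire/toxic gas was detected
    Returns a list of nodes from start to the nearest reachable exit, or None.
    """
    if start in hazardous:
        return None
    parent = {start: None}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node in exits:
            path = []
            while node is not None:          # walk parents back to start
                path.append(node)
                node = parent[node]
            return path[::-1]
        for nxt in graph[node]:
            if nxt not in parent and nxt not in hazardous:
                parent[nxt] = node
                queue.append(nxt)
    return None                               # no safe exit reachable

if __name__ == "__main__":
    building = {
        "lobby": ["hall_a", "hall_b"],
        "hall_a": ["lobby", "exit_1"],
        "hall_b": ["lobby", "exit_2"],
        "exit_1": ["hall_a"],
        "exit_2": ["hall_b"],
    }
    # Fire detected in hall_a: the route through exit_1 is excluded.
    print(safe_route(building, "lobby", {"exit_1", "exit_2"}, {"hall_a"}))
    # -> ['lobby', 'hall_b', 'exit_2']
```

In a deployment along the lines the abstract describes, the first hop of each node's computed safe path would drive the direction indicator on that node's exit sign.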
Item: An Energy Efficient Register File Architecture for VLIW Streaming Processors on FPGAs (2019-12)
Vaidya, Pranav S.; Yadav, Avinash; Surya, Linknath; Lee, John J.; Electrical and Computer Engineering, School of Engineering and Technology
Designing a register file with large scalability, high bandwidth, and energy efficiency is a major issue in implementing streaming Very Long Instruction Word (VLIW) processors on Field Programmable Gate Arrays (FPGAs). The problem arises because building multi-ported register files that use optimized on-chip memory resources while enabling maximum sharing of register operands is difficult, given that FPGA on-chip memory resources support only up to two ports. To address this issue, an Inverted Distributed Register File (IDRF) architecture is proposed in this article. The new IDRF is compared with the existing Central Register File (CRF) and Distributed Register File (DRF) architectures on parameters such as kernel performance, circuit area, access delay, dynamic power, and energy. Experimental results show that IDRF matches the kernel performance of the CRF architecture and achieves a 10.4% improvement in kernel performance over the DRF architecture. Experimental results for circuit area, dynamic power, and energy are also discussed in the article.

Item: GPU-OSDDA: A Bit-Vector GPU-based Deadlock Detection Algorithm for Single-Unit Resource Systems (Taylor & Francis, 2015-09)
Abell, Stephen; Nhan, Do; Lee, John J.; Department of Electrical and Computer Engineering
This article presents a GPU-based single-unit deadlock detection methodology and its algorithm, GPU-OSDDA. Our GPU-based design utilizes the parallel hardware of the GPU to perform computations and is thus able to overcome the major limitation of prior hardware-based approaches: it can handle thousands of processes and resources while achieving real-world run-times. By utilizing a bit-vector technique for storing algorithm matrices and designing novel, efficient algorithmic methods, we not only reduce memory usage dramatically but also achieve two orders of magnitude speedup over CPU equivalents. Additionally, GPU-OSDDA acts as an interactive service to the CPU, because all of the aforementioned computations and matrix management techniques take place on the GPU, requiring minimal interaction with the CPU. GPU-OSDDA is implemented on three GPU cards: Tesla C2050, Tesla K20c, and Titan X. Our design shows overall speedups of 6-595X over CPU equivalents.
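The bit-vector storage idea mentioned in the GPU-OSDDA abstract can be sketched as follows: a boolean allocation or request matrix is packed so that each machine word holds 32 entries, which both shrinks memory and lets one bitwise operation cover many columns at once. The word size, layout, and names below are assumptions for illustration; the paper's actual matrix layout and GPU kernels are not reproduced here.

```python
# Illustrative sketch only: packing a boolean matrix row into bit-vectors so one
# word-level bitwise operation covers 32 columns. Not the GPU-OSDDA kernel code.
WORD = 32  # bits per packed word (a CUDA kernel would typically use 32-bit words)

def pack_row(row_bits):
    """Pack a list of 0/1 values into a list of WORD-bit integers."""
    words = [0] * ((len(row_bits) + WORD - 1) // WORD)
    for col, bit in enumerate(row_bits):
        if bit:
            words[col // WORD] |= 1 << (col % WORD)
    return words

def row_or(dst_words, src_words):
    """OR one packed row into another, 32 columns per word operation."""
    for i, w in enumerate(src_words):
        dst_words[i] |= w

if __name__ == "__main__":
    allocation = pack_row([0, 1, 0, 0, 1] + [0] * 60)   # 65 columns -> 3 words
    request    = pack_row([1, 0, 0, 0, 0] + [0] * 60)
    row_or(allocation, request)                          # merge in a few word ops
    print([hex(w) for w in allocation])                  # ['0x13', '0x0', '0x0']
```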
Item: Imaging and quantifying ganglion cells and other transparent neurons in the living human retina (National Academy of Sciences, 2017-11-28)
Liu, Zhuolin; Kurokawa, Kazuhiro; Zhang, Furu; Lee, John J.; Miller, Donald T.; Engineering Technology, School of Engineering and Technology
Ganglion cells are the primary building block of retinal neural circuitry but have been elusive to observe and quantify in the living human eye. Here, we show a light microscopy modality that reveals not only the somas of these cells but also their 3D packing geometry, primary subtypes, and spatial projection to other neurons. The method provides a glimpse of the rich tapestry of neurons, glia, and blood vessels that compose the retina, thus exposing the anatomical substrate for neural processing of visual information. Clinically, high-resolution images of retinal neurons in living eyes hold promise for improved diagnosis and assessment of treatment of ganglion cell and other neuron loss in retinal disease.

Ganglion cells (GCs) are fundamental to retinal neural circuitry, processing photoreceptor signals for transmission to the brain via their axons. However, much remains unknown about their role in vision and their vulnerability to disease leading to blindness. A major bottleneck has been our inability to observe GCs and their degeneration in the living human eye. Despite two decades of development of optical technologies to image cells in the living human retina, GCs remain elusive due to their high optical translucency. Failure of conventional imaging, which uses predominantly singly scattered light, to reveal GCs has led to a focus on multiply scattered, fluorescence, two-photon, and phase imaging techniques to enhance GC contrast. Here, we show that singly scattered light actually carries substantial information that reveals GC somas, axons, and other retinal neurons and permits their quantitative analysis. We perform morphometry on GC layer somas, including projection of GCs onto photoreceptors and identification of the primary GC subtypes, even beneath nerve fibers. We obtained singly scattered images by (i) marrying adaptive optics to optical coherence tomography to avoid optical blurring of the eye, (ii) performing 3D subcellular image registration to avoid motion blur, and (iii) using organelle motility inside somas as an intrinsic contrast agent. Moreover, through-focus imaging offers the potential to spatially map individual GCs to underlying amacrine, bipolar, horizontal, photoreceptor, and retinal pigment epithelium cells, thus exposing the anatomical substrate for neural processing of visual information. This imaging modality is also a tool for improving clinical diagnosis and assessing treatment of retinal disease.

Item: INDF: Efficient Transaction Publishing in Blockchain (IEEE, 2021-06)
Kumar, Valli Sanghami Shankar; Lee, John J.; Hu, Qin; Electrical and Computer Engineering, School of Engineering and Technology
Blockchain is a distributed ledger technology built on an underlying peer-to-peer network. In this paper, we focus on improving the chances of a transaction being packaged into a valid block and thus recorded on the main chain. Blockchain nodes typically broadcast the transactions they receive to the whole network. Hence, to record transactions on the blockchain more efficiently, it becomes essential to determine influential nodes to publish transactions, where influential nodes are those more actively involved in mining, recording, or broadcasting transactions in the network. To that end, we propose an Influential Node Determination Framework (INDF) using a series of significant factors, such as hash rate, latency, active time, and degree of a node. Specifically, INDF consists of two parallel schemes: the first identifies influential pools according to their hash rates, where a truth-telling mechanism is employed to encourage pool nodes to report their true hash rate values; the second determines influential individual nodes based on an improved L-H index algorithm. Remarkably, the proposed truth-telling mechanism is proved to be incentive-compatible. Our improved L-H index algorithm is comparatively studied to reflect the impacts of different node parameters on a node's ranking. Extensive experiments are conducted to demonstrate the effectiveness of our proposed framework.
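The INDF abstract ranks individual nodes with an improved L-H index. As a rough point of reference only, the classic lobby index of a node is the largest k such that the node has at least k neighbors each of degree at least k. The sketch below computes that baseline measure on a toy peer-to-peer topology; it is an assumption-laden illustration, not the improved algorithm from the paper, which also incorporates hash rate, latency, and active time.

```python
# Illustrative sketch only: a baseline lobby-index (H-index-style) ranking of
# peer-to-peer nodes by their neighbors' degrees. The paper's improved L-H index,
# which also weighs hash rate, latency, and active time, is not reproduced here.
def lobby_index(graph, node):
    """Largest k such that `node` has at least k neighbors of degree >= k."""
    degrees = sorted((len(graph[n]) for n in graph[node]), reverse=True)
    k = 0
    for i, d in enumerate(degrees, start=1):
        if d >= i:
            k = i
        else:
            break
    return k

if __name__ == "__main__":
    # Toy overlay: node 'a' touches three well-connected peers, 'e' only one.
    topology = {
        "a": ["b", "c", "d"],
        "b": ["a", "c", "d"],
        "c": ["a", "b", "d"],
        "d": ["a", "b", "c", "e"],
        "e": ["d"],
    }
    ranking = sorted(topology, key=lambda n: lobby_index(topology, n), reverse=True)
    print([(n, lobby_index(topology, n)) for n in ranking])
```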
Item: An Internet-of-Things (IoT) system development and implementation for bathroom safety enhancement (Elsevier, 2016)
Koo, Dan D.; Lee, John J.; Sebastiani, Aleksei; Kim, Jonghoon; Department of Engineering Technology, School of Engineering and Technology
Statistics show that the bathroom is one of the most hazardous places, especially for older people. Older people typically have greater difficulty with mobility and balance, making them more vulnerable to fall and slip injuries in a bathroom and to the resulting serious health issues affecting short- and long-term well-being. Various bathroom components, including the shower, tub, floor, and toilet, have been redesigned and their ergonomics and safety independently upgraded; however, the number of bathroom injuries remains consistently high. The Internet of Things (IoT) is a concept applicable almost anywhere and to almost any man-made object: wireless sensors detect abnormalities and send data through the network. A large amount of data can be collected from multiple IoT systems and utilized for big data analysis, which may reveal hidden positive outcomes beyond the initially intended purposes. A few commercial IoT applications, such as wearable health monitoring and intelligent transportation systems, are available. Nevertheless, an IoT application for the bathroom is not currently known. Unlike other applications, bathrooms have unique aspects such as privacy and a wet environment. This paper presents a holistic conceptual approach to the development and implementation of an IoT system to enhance bathroom safety. The concept focuses on application in a large nursing care facility as a pilot test bed. In this paper, the authors propose 1) sensor selection and application, 2) integration of a local wireless sensor network system, 3) a design concept for IoT implementation, and 4) a big data analysis system model.

Item: Machine-Learning-Enhanced Blockchain Consensus With Transaction Prioritization for Smart Cities (IEEE, 2022-04-15)
Sanghami, S. Valli; Lee, John J.; Hu, Qin; Electrical and Computer Engineering, School of Engineering and Technology
In this technology-driven era, smart cities are the next frontier, and they aim to improve the quality of people's lives. In this article, we introduce future Internet of Things (IoT)-based smart cities that leverage blockchain technology. Particularly when multiple parties are involved, blockchain helps improve the security and transparency of the system in an efficient manner. However, if current fee-based or first-come, first-served processing is used, emergency events may be delayed, even threatening people's lives. Thus, there is a need for transaction prioritization based on the priority of information, together with a dynamic block creation mechanism, for efficient data recording and faster event response. Our system also focuses on a consortium blockchain maintained by a group of members working across different organizations, which provides more efficiency. The leader election procedure in such a consortium blockchain becomes all the more important if the transaction prioritization process is to take place honestly. Hence, in our proposed consensus protocol, we deploy a machine-learning (ML) algorithm to achieve efficient leader election, based on which a novel dynamic block creation algorithm is designed. Also, to ensure honest block generation by the leader, a peer-prediction-based verification mechanism is proposed. Both security analysis and simulation experiments are carried out to demonstrate the robustness, accuracy, and efficiency of our proposed scheme.
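The transaction-prioritization and dynamic block creation idea in the last item can be pictured roughly as a priority queue in front of the block builder: emergency transactions jump ahead of fee- or arrival-ordered traffic and can trigger immediate block creation rather than waiting for a block to fill. The thresholds, priority levels, and class below are assumptions for illustration only, not the consensus protocol or ML-based leader election designed in the paper.

```python
# Illustrative sketch only: priority-ordered transaction selection with a simple
# dynamic block-creation trigger. Thresholds, priority levels, and names are
# assumptions for the example, not the protocol proposed in the paper.
import heapq

class TxPool:
    def __init__(self, block_size=4, emergency_level=0):
        self.block_size = block_size          # max transactions per block
        self.emergency_level = emergency_level
        self.heap = []                        # (priority, arrival, tx); lower = more urgent
        self.counter = 0

    def submit(self, tx, priority):
        heapq.heappush(self.heap, (priority, self.counter, tx))
        self.counter += 1
        # Dynamic trigger: seal a block immediately for emergency traffic,
        # otherwise wait until a full block's worth of transactions is queued.
        if priority == self.emergency_level or len(self.heap) >= self.block_size:
            return self.seal_block()
        return None

    def seal_block(self):
        take = min(self.block_size, len(self.heap))
        return [heapq.heappop(self.heap)[2] for _ in range(take)]

if __name__ == "__main__":
    pool = TxPool()
    pool.submit("parking payment", priority=2)
    pool.submit("utility reading", priority=2)
    print(pool.submit("fire alarm, sector 7", priority=0))
    # -> ['fire alarm, sector 7', 'parking payment', 'utility reading']
```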