Browsing by Subject "Crowdsourcing"
Now showing 1 - 2 of 2
Item: Comparing Crowdsourcing and Friendsourcing: A Social Media-Based Feasibility Study to Support Alzheimer Disease Caregivers (JMIR Publications, 2017-04-10)
Bateman, Daniel Robert; Brady, Erin; Wilkerson, David A.; Yi, Eun-Hye; Karanam, Yamini; Callahan, Christopher M.; Psychiatry, School of Medicine

BACKGROUND: In the United States, over 15 million informal caregivers provide unpaid care to people with Alzheimer disease (AD). Compared with others in their age group, AD caregivers have higher rates of stress, and medical and psychiatric illnesses. Psychosocial interventions improve the health of caregivers. However, constraints of time, distance, and availability inhibit the use of these services. Newer online technologies, such as social media, online groups, friendsourcing, and crowdsourcing, present alternative methods of delivering support. However, limited work has been done in this area with caregivers.

OBJECTIVE: The primary aims of this study were to determine (1) the feasibility of innovating peer support group work delivered through social media with friendsourcing, (2) whether the intervention provides an acceptable method for AD caregivers to obtain support, and (3) whether caregiver outcomes were affected by the intervention. A Facebook app provided support to AD caregivers through collecting friendsourced answers to caregiver questions from participants' social networks. The study's secondary aim was to descriptively compare friendsourced answers versus crowdsourced answers.

METHODS: We recruited AD caregivers online to participate in a 6-week-long asynchronous, online, closed group on Facebook, where caregivers received support through moderator prompts, group member interactions, and friendsourced answers to caregiver questions. We surveyed and interviewed participants before and after the online group to assess their needs, views on technology, and experience with the intervention. Caregiver questions were pushed automatically to the participants' Facebook News Feed, allowing participants' Facebook friends to see and post answers to the caregiver questions (friendsourced answers). Of these caregiver questions, 2 were pushed to crowdsource workers through the Amazon Mechanical Turk platform. We descriptively compared characteristics of these crowdsourced answers with the friendsourced answers.

RESULTS: In total, 6 AD caregivers completed the initial online survey and semistructured telephone interview. Of these, 4 AD caregivers agreed to participate in the online Facebook closed group activity portion of the study. Friendsourcing and crowdsourcing answers to caregiver questions had similar rates of acceptability as rated by content experts: 90% (27/30) and 100% (45/45), respectively. Rates of emotional support and informational support for both groups of answers appeared to trend with the type of support emphasized in the caregiver question (emotional vs informational support question). Friendsourced answers included more shared experiences (20/30, 67%) than did crowdsourced answers (4/45, 9%).

CONCLUSIONS: We found an asynchronous, online, closed group on Facebook to be generally acceptable as a means to deliver support to caregivers of people with AD. This pilot is too small to make judgments on effectiveness; however, results trended toward an improvement in caregivers' self-efficacy, sense of support, and perceived stress, but these results were not statistically significant. Both friendsourced and crowdsourced answers may be an acceptable way to provide informational and emotional support to caregivers of people with AD.

Item: NuCLS: A scalable crowdsourcing approach and dataset for nucleus classification and segmentation in breast cancer (Oxford University Press, 2022)
Amgad, Mohamed; Atteya, Lamees A.; Hussein, Hagar; Mohammed, Kareem Hosny; Hafiz, Ehab; Elsebaie, Maha A.T.; Alhusseiny, Ahmed M.; AlMoslemany, Mohamed Atef; Elmatboly, Abdelmagid M.; Pappalardo, Philip A.; Sakr, Rokia Adel; Mobadersany, Pooya; Rachid, Ahmad; Saad, Anas M.; Alkashash, Ahmad M.; Ruhban, Inas A.; Alrefai, Anas; Elgazar, Nada M.; Abdulkarim, Ali; Farag, Abo-Alela; Etman, Amira; Elsaeed, Ahmed G.; Alagha, Yahya; Amer, Yomna A.; Raslan, Ahmed M.; Nadim, Menatalla K.; Elsebaie, Mai A.T.; Ayad, Ahmed; Hanna, Liza E.; Gadallah, Ahmed; Elkady, Mohamed; Drumheller, Bradley; Jaye, David; Manthey, David; Gutman, David A.; Elfandy, Habiba; Cooper, Lee A.D.; Pathology and Laboratory Medicine, School of Medicine

Background: Deep learning enables accurate high-resolution mapping of cells and tissue structures that can serve as the foundation of interpretable machine-learning models for computational pathology. However, generating adequate labels for these structures is a critical barrier, given the time and effort required from pathologists.

Results: This article describes a novel collaborative framework for engaging crowds of medical students and pathologists to produce quality labels for cell nuclei. We used this approach to produce the NuCLS dataset, containing >220,000 annotations of cell nuclei in breast cancers. This builds on prior work labeling tissue regions to produce an integrated tissue region- and cell-level annotation dataset for training that is the largest such resource for multi-scale analysis of breast cancer histology. This article presents data and analysis results for single and multi-rater annotations from both non-experts and pathologists. We present a novel workflow that uses algorithmic suggestions to collect accurate segmentation data without the need for laborious manual tracing of nuclei. Our results indicate that even noisy algorithmic suggestions do not adversely affect pathologist accuracy and can help non-experts improve annotation quality. We also present a new approach for inferring truth from multiple raters and show that non-experts can produce accurate annotations for visually distinctive classes.

Conclusions: This study is the most extensive systematic exploration of the large-scale use of wisdom-of-the-crowd approaches to generate data for computational pathology applications.
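For readers unfamiliar with multi-rater aggregation, the sketch below illustrates the general idea behind inferring a consensus label from several raters. It is a minimal, hypothetical example (simple per-nucleus majority vote with an agreement fraction), not the inference approach described in the NuCLS paper; the function name and data layout are assumptions made for illustration.

from collections import Counter

def majority_vote(labels_by_rater):
    """Illustrative consensus step: for each nucleus, take the class label
    chosen by the most raters and report the fraction of raters who agree.

    labels_by_rater: dict mapping nucleus_id -> list of class labels,
    one label per rater (hypothetical data layout, not the NuCLS format).
    Returns: dict mapping nucleus_id -> (consensus_label, agreement_fraction).
    """
    consensus = {}
    for nucleus_id, labels in labels_by_rater.items():
        label, votes = Counter(labels).most_common(1)[0]
        consensus[nucleus_id] = (label, votes / len(labels))
    return consensus

# Example: three raters label two nuclei
print(majority_vote({
    "n1": ["tumor", "tumor", "lymphocyte"],   # 2 of 3 agree -> ('tumor', 0.67)
    "n2": ["stromal", "stromal", "stromal"],  # unanimous    -> ('stromal', 1.0)
}))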