Browsing by Subject "Metrics"
Item: Critical Assessment of Single-Use Ureteroscopes in an In Vivo Porcine Model (Hindawi, 2020-04-27)
Ceballos, Brian; Nottingham, Charles U.; Bechis, Seth K.; Sur, Roger L.; Matlaga, Brian R.; Krambeck, Amy E.; Urology, School of Medicine

Methods: A female pig was placed under general anesthesia and positioned supine, and retrograde access to the renal collecting system was obtained. The LithoVue (Boston Scientific) and Uscope (Pusen Medical) were evaluated by three experienced surgeons, and each surgeon started with a new scope. The following parameters were compared between the ureteroscopes: time to navigate to the upper and lower pole calyces with and without implements in the working channel (1.9 F basket, 200 μm laser fiber, and 365 μm laser fiber for the upper pole only), as well as subjective evaluations of maneuverability, irrigant flow through the scope, lever force, ergonomics, and scope optics.

Results: Navigation to the lower pole calyx was significantly faster with LithoVue than with Uscope when the working channel was empty (24.3 vs. 49.4 seconds, p < 0.01) and with a 200 μm fiber (63.6 vs. 94.4 seconds, p = 0.04), but not with the 1.9 F basket. Navigation to the upper pole calyx was similar in all categories except that LithoVue was faster with the 365 μm fiber (67.1 vs. 99.7 seconds, p = 0.02). Subjective assessments of scope maneuverability to the upper and lower pole calyces, both with an empty working channel and with implements, favored LithoVue in all categories, as did assessments of irrigant flow, illumination, image quality, and field of view. Both scopes received similar scores for lever force and ergonomics.

Conclusions: In an in vivo porcine model, the type of single-use ureteroscope employed affected navigation times and subjective assessments of maneuverability and visualization. In all cases, LithoVue provided metrics equivalent or superior to those of Uscope. Further clinical studies are necessary to determine the implications of these findings.
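The abstract reports the timing comparisons with p-values but does not state which statistical test was used. The snippet below is a minimal, hypothetical sketch of how per-attempt navigation times for two scopes could be compared with an unpaired two-sample test; the timing arrays are illustrative placeholders, not data from the study.

```python
# Hypothetical sketch: comparing navigation times between two scopes.
# The values below are illustrative placeholders, NOT data from the study.
from scipy import stats

lithovue_times = [22.1, 25.0, 26.3, 23.8, 24.5]  # seconds, placeholder values
uscope_times = [47.9, 51.2, 48.7, 50.1, 49.0]    # seconds, placeholder values

# Welch's t-test (does not assume equal variances between groups)
t_stat, p_value = stats.ttest_ind(lithovue_times, uscope_times, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```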
Item: Demonstrating the Impact of Community Engagement: Realistic and Doable Strategies (2017-10-07)
Norris, Kristin; Wendling, Lauren; Keyne, Lisa

Most campuses are eager to answer questions like "How are students, faculty, and staff on campus working to address civic issues and public problems?", "To what extent is our engagement making a difference?", and "How can we better support community engagement?" Discover how to track, monitor, assess, and evaluate community-engaged activities, which include curricular, co-curricular, and project-based activities done in partnership with the community, in order to tell a more comprehensive story of engagement. Whether you are interested in community outcomes, student outcomes, partnership assessment, or faculty/staff engagement, campuses confront an array of challenges when trying to combine and align these questions into a comprehensive assessment plan. This session will give participants tools, strategies, and information to design, initiate, and/or enhance a systematic mechanism for monitoring and assessing community-engaged activities.

Item: Understanding metric-related pitfalls in image analysis validation (ArXiv, 2023-09-25)
Reinke, Annika; Tizabi, Minu D.; Baumgartner, Michael; Eisenmann, Matthias; Heckmann-Nötzel, Doreen; Kavur, A. Emre; Rädsch, Tim; Sudre, Carole H.; Acion, Laura; Antonelli, Michela; Arbel, Tal; Bakas, Spyridon; Benis, Arriel; Blaschko, Matthew B.; Buettner, Florian; Cardoso, M. Jorge; Cheplygina, Veronika; Chen, Jianxu; Christodoulou, Evangelia; Cimini, Beth A.; Collins, Gary S.; Farahani, Keyvan; Ferrer, Luciana; Galdran, Adrian; Van Ginneken, Bram; Glocker, Ben; Godau, Patrick; Haase, Robert; Hashimoto, Daniel A.; Hoffman, Michael M.; Huisman, Merel; Isensee, Fabian; Jannin, Pierre; Kahn, Charles E.; Kainmueller, Dagmar; Kainz, Bernhard; Karargyris, Alexandros; Karthikesalingam, Alan; Kenngott, Hannes; Kleesiek, Jens; Kofler, Florian; Kooi, Thijs; Kopp-Schneider, Annette; Kozubek, Michal; Kreshuk, Anna; Kurc, Tahsin; Landman, Bennett A.; Litjens, Geert; Madani, Amin; Maier-Hein, Klaus; Martel, Anne L.; Mattson, Peter; Meijering, Erik; Menze, Bjoern; Moons, Karel G. M.; Müller, Henning; Nichyporuk, Brennan; Nickel, Felix; Petersen, Jens; Rafelski, Susanne M.; Rajpoot, Nasir; Reyes, Mauricio; Riegler, Michael A.; Rieke, Nicola; Saez-Rodriguez, Julio; Sánchez, Clara I.; Shetty, Shravya; Summers, Ronald M.; Taha, Abdel A.; Tiulpin, Aleksei; Tsaftaris, Sotirios A.; Van Calster, Ben; Varoquaux, Gaël; Yaniv, Ziv R.; Jäger, Paul F.; Maier-Hein, Lena; Pathology and Laboratory Medicine, School of Medicine

Validation metrics are key for the reliable tracking of scientific progress and for bridging the current chasm between artificial intelligence (AI) research and its translation into practice. However, increasing evidence shows that, particularly in image analysis, metrics are often chosen inadequately in relation to the underlying research problem. This could be attributed to a lack of accessibility of metric-related knowledge: while taking into account the individual strengths, weaknesses, and limitations of validation metrics is a critical prerequisite to making educated choices, the relevant knowledge is currently scattered and poorly accessible to individual researchers. Based on a multi-stage Delphi process conducted by a multidisciplinary expert consortium, as well as extensive community feedback, the present work provides the first reliable and comprehensive common point of access to information on pitfalls related to validation metrics in image analysis. Focusing on biomedical image analysis but with the potential to transfer to other fields, the addressed pitfalls generalize across application domains and are categorized according to a newly created, domain-agnostic taxonomy. To facilitate comprehension, illustrations and specific examples accompany each pitfall. As a structured body of information accessible to researchers of all levels of expertise, this work enhances global comprehension of a key topic in image analysis validation.
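As a generic illustration of the kind of pitfall the paper addresses (this example is not taken from the paper itself): on a highly imbalanced segmentation task, pixel accuracy can look excellent while an overlap-based metric such as the Dice score reveals that the structure of interest was barely segmented.

```python
# Generic illustration of a metric pitfall (not an example from the paper):
# with a small foreground object, pixel accuracy is dominated by background
# pixels and stays high even when most of the object is missed.
import numpy as np

ground_truth = np.zeros((100, 100), dtype=bool)
ground_truth[45:55, 45:55] = True   # small foreground object (1% of pixels)

prediction = np.zeros((100, 100), dtype=bool)
prediction[45:50, 45:50] = True     # only a quarter of the object is found

accuracy = (prediction == ground_truth).mean()
dice = 2 * np.logical_and(prediction, ground_truth).sum() / (
    prediction.sum() + ground_truth.sum()
)
print(f"Pixel accuracy: {accuracy:.3f}")  # ~0.993, looks excellent
print(f"Dice score:     {dice:.3f}")      # 0.400, exposes the poor segmentation
```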