Browsing by Subject "metrics"
Item: Quality, Not Quantity: Metrics Made for Community Engaged Research (2024-09-19)
Price, Jeremy

Without a doubt, an intense focus on numbers and metrics is the current zeitgeist of higher education. When community-engaged researchers and their supporters meet, the question "What are our metrics?" often arises. While metrics are typically used to support market-driven goals and can oversimplify complex research, such as community-engaged scholarship, they can also be valuable when grounded in a humanizing and relational stance. Dr. Jeremy Price, supported by the CUMU-Collaboratory Fellowship, has been developing a set of metrics that better capture the true essence of community-engaged research. These new metrics consider the full context, relationships, and efforts involved in this type of work. Drawing inspiration from Neil Postman's constructive approach to change, Dr. Price will discuss the benefits, challenges, and opportunities that come with using these more human-centered metrics.

Item: Using Machine Learning Techniques to Classify and Predict Static Code Analysis Tool Warnings (IEEE, 2018-10)
Alikhashashneh, Enas A.; Raje, Rajeev R.; Hill, James H.; Computer and Information Science, School of Science

This paper discusses our work on using software engineering metrics (i.e., source code metrics) to classify an error message generated by a Static Code Analysis (SCA) tool as a true-positive, false-positive, or false-negative. Specifically, we compare the performance of Support Vector Machine (SVM), K-Nearest Neighbor (KNN), Random Forests, and Repeated Incremental Pruning to Produce Error Reduction (RIPPER) over eight datasets. The performance of the techniques is assessed by computing the F-measure, defined as the weighted harmonic mean of the precision and recall of the predicted model. The overall results of the study show that the F-measure of the model generated using the Random Forests technique ranges from 83% to 98%. Additionally, the Random Forests technique outperforms the other techniques. Lastly, our results indicate that the complexity and coupling metrics have the most impact on whether a SCA tool will generate a false-positive warning.
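The F-measure used in this evaluation can be sketched in a few lines. This is an illustrative implementation of the weighted harmonic mean of precision and recall (the F-beta score), not the authors' actual evaluation code; the counts in the example are hypothetical, not taken from the paper's datasets.

```python
def f_measure(true_pos: int, false_pos: int, false_neg: int, beta: float = 1.0) -> float:
    """F-beta score: the weighted harmonic mean of precision and recall.

    beta = 1.0 gives the balanced F1 score; beta > 1 weights recall
    more heavily, beta < 1 weights precision more heavily.
    """
    precision = true_pos / (true_pos + false_pos)
    recall = true_pos / (true_pos + false_neg)
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Hypothetical confusion counts for a classifier labeling SCA warnings:
# 90 warnings correctly flagged, 10 false alarms, 10 missed.
# precision = recall = 0.9, so F1 = 0.9.
print(round(f_measure(true_pos=90, false_pos=10, false_neg=10), 3))
```

With balanced precision and recall, the F1 score equals both; the harmonic mean matters when they diverge, since it is pulled toward the lower of the two.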