INVESTIGATING THE IMPACT OF INTERACTION DESIGN ON THE DELIVERY OF ONLINE PHARMACEUTICAL COURSES: ADAPTING ONLINE COURSE GRAPHIC DESIGN FOR IMPROVED CONTENT RETENTION Melynda Buher Submitted to the faculty of the School of Informatics in partial fulfillment of the requirements for the degree of Master of Science Human-Computer Interaction, Indiana University December 2007 Accepted by the Faculty of Indiana University, in partial fulfillment of the requirements for the degree of Master of Science in Human-Computer Interaction Master’s Thesis Committee ________________________________________ Anthony Faiola, Director, Media Informatics and Human-Computer Interaction, Chair ________________________________________ Martin Siegel, Associate Dean for Graduate Studies and Chairperson, Department of Informatics ________________________________________ Tammy Cullen, Adjunct Faculty, Indiana University ii © 2007 Melynda Buher ALL RIGHTS RESERVED iii Dedicated to my family and friends. iv TABLE OF CONTENTS Page LIST OF TABLES ..................................................................................................................................... VI LIST OF FIGURES ................................................................................................................................... vii ACKNOWLEDGEMENTS ...................................................................................................................... viii ABSTRACT ..................................................................................................................................................ix CHAPTER ONE: INTRODUCTION & BACKGROUND .......................................................................1 INTRODUCTION TO SUBJECT ........................................................................................................................1 IMPORTANCE OF SUBJECT............................................................................................................................1 CHAPTER TWO: LITERATURE REVIEW ............................................................................................2 ONLINE LEARNING ......................................................................................................................................2 INCORPORATING TECHNOLOGY ...................................................................................................................5 INTERACTION DESIGN .................................................................................................................................7 KNOWLEDGE RETENTION ............................................................................................................................9 RESEARCH QUESTIONS .............................................................................................................................. 10 CHAPTER THREE: METHODOLOGY ................................................................................................. 11 PARTICIPANTS ........................................................................................................................................... 11 TREATMENTS ............................................................................................................................................ 11 PROCEDURES ............................................................................................................................................. 
19 DATA ANALYSIS ....................................................................................................................................... 21 CHAPTER FOUR: FINDINGS ................................................................................................................. 23 LEARNER RETENTION ................................................................................................................................ 23 LEARNER SATISFACTION ........................................................................................................................... 30 CHAPTER FIVE: DISCUSSION .............................................................................................................. 34 EXPLANATION OF OUTCOMES ................................................................................................................... 34 IMPLICATIONS OF RESULTS OF OUTCOMES................................................................................................ 37 CHAPTER SIX: CONCLUSION .............................................................................................................. 38 LIMITATIONS ............................................................................................................................................. 38 FUTURE RESEARCH ................................................................................................................................... 39 SUMMARY ................................................................................................................................................. 40 REFERENCES ............................................................................................................................................ 41 APPENDIX A: E-MAIL COMMUNICATION ................................................................................................... 44 APPENDIX B: INFORMED CONSENT FORM ................................................................................................. 46 IUPUI INFORMED CONSENT STATEMENT FOR ............................................................................. 46 APPENDIX C: SAMPLE TEST QUESTIONS ................................................................................................... 48 APPENDIX D: DEMOGRAPHIC QUESTIONS FROM PRE-TEST ....................................................................... 49 APPENDIX E: SATISFACTION QUESTIONS FROM POST-TEST ...................................................................... 52 VITA ............................................................................................................................................................. 59 Note: Due to the proprietary nature of the software used in this study, portions of all course screen caps and test questions have been blocked out or omitted. 
LIST OF TABLES

Table 3.1: Treatment Distribution .......... 12
Table 4.1: Score Change by Treatment .......... 23
Table 4.2: ANOVA score .......... 24
Table 4.3: Mean/Standard Deviation by Treatment .......... 24
Table 4.4: Retention-test Timing .......... 25
Table 4.5: Model Summary for Treatment A .......... 26
Table 4.6: Model Summary for Treatment B .......... 27
Table 4.7: Model Summary for Treatment C – all data points .......... 28
Table 4.8: Model Summary for Treatment C – all data points except outliers 27 and 58 .......... 28
Table 4.9: Model Summary for Treatment D .......... 29

LIST OF FIGURES

Figure 3.1: Instructional Material Process Flow .......... 12
Figure 3.2: Example of the HTML template used for all course materials in this study .......... 13
Figure 3.3: Course example from Treatment A .......... 14
Figure 3.4: Course example from Treatment B .......... 15
Figure 3.5: Course example from Treatment C .......... 16
Figure 3.6: Course example from Treatment D .......... 17
Figure 3.7: Question and answer tracking sheet .......... 21
Figure 3.8: Excerpt from the Score Progression spreadsheet prior to SPSS .......... 22
Figure 4.1: Scatterplot for Treatment A .......... 26
Figure 4.2: Scatterplot for Treatment B .......... 27
Figure 4.3: Scatterplot for Treatment C .......... 28
Figure 4.4: Scatterplot for Treatment D .......... 29

ACKNOWLEDGEMENTS

Althea Gibson, American sportswoman, once said, “No matter what accomplishments you make, somebody helps you.” This resonates with me, as I could not have persevered through this long and arduous process without the support of my colleagues, family and friends. To my thesis committee – you successfully coached me every little step of the way; without you, I would probably still be looking for a viable topic. Thank you for your direction, persistence, encouragement and support over the past six years. To my study group – you kept me motivated and gave me hope that I might just complete this crazy process. I am honored to call you colleagues and friends. To my participants and reviewers – thank you for taking time out of your busy schedules to help me complete this project and, ultimately, my degree. To my aunts, uncles, cousins, friends and everyone at the winery – thank you for your support and patience during this extensive ordeal.
To my best friends – you know who you are – for your patience throughout my many stressed-out rants, for understanding when I couldn’t come out to play and for encouraging me when I needed reassurance that I was going to survive. Most importantly, thank you to my mom, dad and sister – You encouraged me to start the degree program many years ago, you were there for me when I needed encouragement and you kept me motivated when I often got burned out. Thank you from the bottom of my heart! viii ABSTRACT Melynda Buher INVESTIGATING THE IMPACT OF INTERACTION DESIGN ON THE DELIVERY OF ONLINE PHARMACEUTICAL COURSES: ADAPTING ONLINE COURSE GRAPHIC DESIGN FOR IMPROVED CONTENT RETENTION In recent years, the use of online courses has emerged as a way to quickly and easily deliver content to large numbers of trainees. In writing these courses, pharmaceutical course developers often use traditional instructional design models and techniques to design course content for online learning. But is this truly enough? Interaction design principles and practices can also be incorporated to increase the quality of learning by improving learner comprehension and retention. Using pharmaceutical content and learners, this research investigated how interaction design impacts online learning by measuring the effect of applying different graphical user interfaces. The results were surprising, as the data showed no significant improvement in retention rates between graphical treatments. However, the incorporation of graphics did slightly improve overall course satisfaction. ix CHAPTER ONE: INTRODUCTION & BACKGROUND Introduction to Subject In recent years, the number of corporate education courses provided online has increased when compared to traditional classroom delivery. Many companies use the improved functionality of internet tools to quickly and easily deliver content to large numbers of people. According to Wisher, Curnow, and Seidel (2001), “distance learning has proven to be a useful instructional delivery strategy” (p. 20). But are people really learning what they need to know for job performance? This study applied interaction design processes to e-learning instructional design techniques. More specifically, the research sought to determine whether the addition of static graphics and/or simulations improved learner satisfaction and content retention, which according to Wisher et al. (2001) is a key learning outcome. Importance of Subject In regulated environments, such as the pharmaceutical industry, it is critical to have employees who are qualified to perform the tasks of their job. Companies offer numerous training courses, including online courses, to help employees understand the intricacies of the many aspects of their job. These relevant courses are developed to reduce the time spent in training while still gaining valuable knowledge. This study’s goal was to understand how graphics can be optimally incorporated in online learning, thereby enhancing the learner’s experience and enabling them to retain the appropriate knowledge for performing the tasks of their job. 1 CHAPTER TWO: LITERATURE REVIEW Online Learning 1 According to Kazmer and Haythornthwaite (2004), at least one million people are taking an online course on any given day. This volume shows that e-learning has become widely recognized as a valuable tool for flexible course delivery, as per Calder and McCollum (1998). 
Just as the US Army is moving from a classroom-centric methodology to a learner-centric distance learning strategy, similar moves are underway in industry to increase employees’ e-learning opportunities (Wisher et al., 2001). Daniels and Salisbury (2002) assert that online learning incorporates technology to offer corporate learning outside of the traditional classroom, providing a “structured, interactive approach to education, communication and conveyance of information” (p. 814). Employees are able to gain the necessary job-related knowledge when convenient (Sadler-Smith and Smith, 2004). By using instructional design methodology to design courses, it stands to reason that course material can be effectively communicated. With the growth of technological tools, such as HTML, XML, Java and Flash, course material no longer needs to be delivered in a textual, straightforward and deliberate manner. According to Stewart, Waight, Norwood and Ezell (2004), presenting information in different formats is a great strength of e-learning. Zhu and Grabowski (2006) believe that technological advances allow e-learning to contain multimedia instruction that includes “motion, voice, data, text, graphics, and still images” (Moore, Burton and Myers, 2003, p. 980).

1 The following terms are used interchangeably throughout this document: e-learning, online learning, online training and distance learning.

There are many advantages to using e-learning. To begin with, it has the potential to be learner-centric, allowing people to have an active role in the learning process through online activities and exercises, such as interactive simulations. Learners can review materials multiple times, as necessary, or pass over topics in which they have demonstrated competency. For many, the e-learning environment can be less intimidating than a traditional classroom since individuals are able to learn at their own pace. Course material will not vary by instructor; therefore, learners will always receive the same consistent course material. Most importantly, e-learning retention and comprehension rates have been shown to improve 25-60% over traditional classroom training (Daniels and Salisbury, 2002).

With all of these advantages, why doesn’t everyone use e-learning? Not surprisingly, there are also disadvantages. In the beginning, course developers simply copied existing course material into the e-learning interface. This method provided so much information to learners that they often had difficulty absorbing the material, which obviously did not enhance their educational experience (Teo and Gay, 2006). Also, e-learning does not always provide the interaction some learners find necessary to a well-rounded training event (Daniels and Salisbury, 2002). Teo and Gay (2006) discuss their views on technology and e-learning as follows:

More importantly, instead of designing reusable and learner-centric content, many developers are placing too much emphasis on the technology aspects of e-learning [McCalla 2004]. This has lead [sic] to an unfortunate situation where most content developers are now more concerned with showcasing their technology-enhanced products, showing little interest in enhancing the “knowledge aspect” of e-learning, which should be at the heart of it. (p. 2)

A final disadvantage of e-learning is that designers may find it difficult and time-consuming to provide a “fulfilling educational experience to the learners” (Mishra, 2005, p. 569).
Additional challenges for e-learning must be addressed in the corporate environment. First, materials must often reach large, global audiences in a timely manner. They need to be location-independent, allowing anyone anywhere with the proper credentials to access the training material. Second, organizations need to commit not only funds to technology and training, but also time for the learners to be trained (Snoddy and Novick, 2004). Even with strict time and budget challenges (Zaharias, 2004), organizations are spending an increasing amount of money on end-user training (Gupta, 2006). However, according to Paul Walliker at Caterpillar, “The cost to deliver online training is less [as much as] three times less expensive” (Bartholomew, 2005, p. 35). With the advent of e-learning in the corporate environment, manufacturing companies are seeing resistance by older employees (Bartholomew, 2005). In addition to ensuring all personnel understand the content delivered via online training, companies may need to train people in basic computer skills. Lastly, many industries, including pharmaceutical, are regulated by government agencies that mandate all employees are current on any compliance-driven training related to their particular tasks. These “exacting and uncompromising” regulatory standards require a significant training program for employees (Lindeman and Boerner, 1993, p. 5). With these challenges, it is essential to capture the attention of the learner in an efficient and effective manner.

Incorporating Technology

As discussed earlier, new technologies have contributed to the growth of online training. Learners can now access any topic at any time throughout the course, giving them the option of a high level of interactivity (Kilby, 2001). By using computer graphics and simulation in instructional materials, learning can potentially be enhanced in an affordable manner (Hamel and Ryan-Jones, 1997). Multimedia materials often combine text with video recordings, images, sound files and interactive simulations. Interactive instructional programs have been found to be more effective than those with little or no interactivity (Thurman, 1993). Simulations can also be effective in explaining complex concepts and ideas (Lam and McNaught, 2006). However, Zhu and Grabowski (2006) feel that not enough research has been performed on simulations to evaluate their instructional effectiveness. As previously discussed, effectively incorporating multimedia, particularly graphics and simulations, is crucial to e-learning in today’s environment. Many people, especially those who were raised in the television generation, tend to watch material pass by on their screens (Ryan, 2001). It is the responsibility of the instructional designer to grab the learner’s attention through multimedia to help the learner retain the information being taught. Learners who were given any type of illustration have shown more interest and motivation to learn than those given text-only courses (Park and Lim, 2007). Graphics in e-learning have also been helpful to convey procedural knowledge, such as in software training courses, and give learners practice opportunities (Hamel and Ryan-Jones, 1997, p. 77; Schnotz and Rasch, 2005, p. 47). Although these methods require a large time investment from the course developer, taking advantage of multimedia tools is vital, as in general “we remember 20 percent of what we read, 30 percent of what we do and 90 percent of what we see, hear, say and do” (Rose and Nicholl, 1997, p.
71). Determining the best type of graphics to include in e-learning courses has led to much debate over the past several years. When reviewing both individual and collaborative group learning, one study found initial evidence that simulation graphics resulted in higher retention rates when compared to static pictures (Sangin, Molinari, and Dillenbourg, 2006). Another study speculated that interactive graphics would be an improvement over non-interactive graphics (Scaife and Rogers, 1996). Despite the lack of conclusive evidence, educators and learning professionals continue to list interactive multimedia as a powerful tool for e-learning (Passerini, 2007). On the other hand, research has failed to show definitive proof of benefits of animated graphics over fixed graphics (Sangin et al., 2006). In another study, Park and Lim (2007) found that the type of graphic “did not have an effect on learners’ information recall or on comprehension” (p. 141). Zhu and Grabowski (2006) raise an interesting question by asking if it is worthwhile to develop simulations when static graphics have shown to be just as effective. Simulations add significant resources to the course development cycle. Therefore, if course material comprehension and content retention are not drastically improved with interactive simulations, it may be more cost effective to use static, non-interactive graphics. According to Schnotz and Rasch (2005), although simulations “are inherently attractive, they are not always beneficial for learning” (p. 47). This study was designed to further investigate this issue. 6 Interaction Design An interaction designer inherently seeks to improve the effectiveness, efficiency and satisfaction in whatever they are doing. Zaharias (2004) described effectiveness as the ability of the instructional interface to function properly and act as expected for the learner. He defined efficiency as the ability of the instructional interface to present minimal obstacles and frustrations for the learner. Lastly, he describes satisfaction as how comfortable the learner is in the overall learning environment. In the context of pharmaceutical e-learning, this implies that the learner completes the course in a timely manner, retains the course material, and does so without frustration. So many times, needed interface and navigational decisions are treated as an afterthought by instructional course designers to the detriment of their learners (Ryan, 2001). By incorporating good design practices into e-learning courses, learners should see an improved learning environment. Many researchers feel a uniform appearance, including navigation and structure, are important (Ryan, 2001). McFarland (1995) believes keeping the amount of content on each screen between one and three topics and providing passive screens seems to give the learner time to process information. Lastly, a study by Franco, da Cruz and Lopes (2006) addressed potential problems successfully with information visualization techniques. There are methods to ensure interaction design principles and guidelines are met, ranging from a full-blown usability study to a quick heuristic evaluation. On the high end of the cost and time investment spectrum, a formal usability testing session on the full interface could be held to enhance the e-learning system (Masemola and de Villiers, 2006). In many business or academic settings, the economics of e-learning production do 7 not support this type of usability evaluation (Zaharias, 2004). 
On a smaller scale, a heuristic usability test can be performed which applies usability guidelines to systematically evaluate the e-learning environment. This inexpensive and simple usability test, often performed by a usability expert, can use general usability guidelines or those specifically written for the context being reviewed. Zaharias (2004) discusses the Squires and Preece (1999) proposal for a set of “learning with software” heuristics, which include navigational fidelity, appropriate levels of learner control, understandable and meaningful symbolic representations, and curriculum matching. Whatever method or methods are used, performing usability tests are essential to ensuring effective, efficient and satisfying training is delivered to the learner. In order to best design courses for e-learners, it is important to understand who they are and how they learn. To begin, one must understand “the people who will use the software” (Quesenbery, 2001, ¶2) and the resulting training. Thinking of the end-users, “interface design must not only organize the content for easy access, but must incorporate the right combination of technologies and interaction techniques to allow users to work”, or perform their job duties (Quesenbery, 2001, ¶7). Traditional instructional course development, by design, takes a systematic approach to developing course material, thus leaving the end-users out of the design process. Instead, user-centered design identifies the dimensions of usability important to the user by working with the people destined to complete the training. Their practical knowledge potentially creates a course that is more likely to be one that users can use and learn from effectively (Blythe, 2001). One must keep in mind, however, that “user-centered design does not translate easily to instructional design.” (Blythe, 2001, p. 336) 8 Over the years, instructional design challenges have developed because e-learners have increasingly diverse backgrounds (Chen and Macredie, 2004; Mupinga Nora and Yaw, 2006), learning styles and preparedness. To combat these challenges, interaction design should be utilized in hopes of improving the e-learning environment and the amount of knowledge retained by the learner. Knowledge Retention A key outcome expected when delivering e-learning training, particularly in a corporate training environment, is content retention. According to Wisher et al. (2001), retention of knowledge is basically the knowledge that is not forgotten after the original learning period. Despite the importance of retention, research on content retention seems to be largely absent from e-learning evaluations (Wisher et al., 2001). Perhaps this is because it is difficult to measure the exact reasons why content is retained. Often, people practice and use what they have learned from the e-learning course prior to completing a content retention-test; thereby eliminating the ability to focus specifically on the course itself as a source of knowledge. To accurately measure e-learning content retention, it is vital that the participants have no additional practice or learning on the topic at hand between taking the e-learning course and the content retention-test. This time period is sometimes called the retention interval (Wisher et al, 2001). According to Snoddy and Novick (2004), learning should continue after the formal training ends. Studies have shown that about fifty percent of learners did not retain their new skills after eight weeks. (Olfman, L., and Bostrom, R. 
P., 1988; Olfman, L., Sein, M., and Bostrom, R. P., 1989; Shayo, C., Olfman, L., Teitelroit, R., Nordahl, C., and Rodriguez, M., 1996). In a study by Wisher et al. (2001), the researchers found that most of the knowledge decay occurs within the first four weeks. By using information visualization systems, interactive techniques and multimedia tools in teaching and learning experiences, content retention can be improved for up to 90% of learners (Franco et al., 2006). All in all, course developers must find ways to design e-learning that is efficient, effective, engaging and ultimately matches the user’s needs (Quesenbery, 2001).

Research Questions

This research study focused on melding interaction and instructional design principles and practices to improve the online learner’s training environment, which should lead to improved learner retention. By creating an improved training environment for pharmaceutical learners, it attempted to answer these questions:

RQ 1: To what degree does human computer design in online instructional pharmaceutical content have an impact on learner retention when measuring recall and recognition?

RQ 2: To what degree does human computer design in online instructional pharmaceutical content have an impact on overall learning experience when measuring learner satisfaction?

Courses containing text, graphics, and simulations were delivered to pharmaceutical learners, who were then tested to determine if interaction design graphic elements do, in fact, improve learner retention.

CHAPTER THREE: METHODOLOGY

Participants

Study participants were recruited from personnel employed at a pharmaceutical company during the summer of 2007 in order to accurately represent learners in the pharmaceutical industry. Sixty-nine people originally volunteered, with 54 ultimately completing the entire study. The majority of the participants were between the ages of 26 and 55, split evenly between males and females. They were well-educated, with 93% having a college or post-graduate degree, and had longevity with this company, with 69% of the participants employed there for at least six years. All participants were familiar with technology, as they had used computers for more than six years and regularly used the Microsoft Windows environment on a daily basis as part of their job tasks. Eighty-five percent of the participants had completed several online courses in the past year, showing their familiarity with online course delivery. Prior to course completion, all but two people felt that online training courses were generally a good way to provide training in this corporate environment, as long as the course design is “well thought out and end-user focused”.

Treatments

Four treatment groups were created for this research study. Each treatment contained one course and three tests: a pre-test, post-test and retention-test. All courses contained the same detailed material; however, the method of graphical presentation was different in each treatment (see Table 3.1 for a treatment summary).

Table 3.1 Treatment Distribution

Treatment group        Treatment ID   Treatment Description
Participant group 1    A              all text, no graphics
Participant group 2    B              mostly text, cartoon graphics
Participant group 3    C              limited text, detailed graphics
Participant group 4    D              limited text, detailed interactive graphics

Participants completed the pre-test prior to beginning the course. Once they finished taking the course, they answered the post-test questions.
Approximately two weeks later, they were asked to complete a retention-test to finish the study. An illustration of this process can be seen in Figure 3.1.

Figure 3.1 Instructional Material Process Flow

Online Pharmaceutical Courses

Four online courses were prepared, one for each treatment. The foundation for the courses used in this study was an existing course currently offered by this pharmaceutical company. General instructional design techniques were incorporated when the original course was written. The course material outlined tasks within a proprietary software program used by this pharmaceutical company. All four courses were prepared with Macromedia Dreamweaver using a locally-developed HTML course template which framed the content with a header and navigation arrows (see Figure 3.2). This template is currently used to deliver courses to many of the participants within this company; therefore, the participants did not have to learn a new interface to take part. All of the courses were delivered from a local training server using a web browser.

Figure 3.2 Example of the HTML template used for all course materials in this study.

The first course, Treatment A, allowed participants to review detailed text instructions for tasks without displaying any graphics. The course contained an introduction page, twelve pages of content, and a conclusion page, which allowed participants to easily launch the post-test. Navigation arrows were available on the lower right portion of the screen enabling participants to move through the course at their own pace and review previously displayed content as necessary. Figure 3.3 shows an example of the course for Treatment A.

Figure 3.3 Course example from Treatment A

The second course, Treatment B, was similar to the first course. The structure and navigation of the course were identical. The difference was the addition of simple graphics which appeared on each content page. These cartoon-like graphics were selected based upon their loose relationship to the content (i.e., when giving steps related to a package, the cartoon-like graphic included was a birthday present). Figure 3.4 shows an example of the course for Treatment B.

Figure 3.4 Course example from Treatment B

For Treatment C, the third course, participants saw the same text-based instructions featured in the first two treatments but, instead of cartoon-like graphics, were presented with screen captures of the software, where appropriate. These static graphics enabled the participants to view menus and task steps in context to the system’s pages. Figure 3.5 shows an example of the course for Treatment C.

Figure 3.5 Course example from Treatment C

In the fourth and final course, Treatment D, participants navigated the majority of task instructions using detailed interactive graphics in lieu of text-based instruction. Six simulations, created using Adobe Captivate, were incorporated into the course material, enabling participants to practice performing the tasks in a shell of the software. They could also move forward and backward through the simulation to properly learn the material. The content on a few pages did not lend itself to simulations; therefore, even within Treatment D, the remaining content pages simply displayed static graphics of the system and related menus. Figure 3.6 shows an example of the course for Treatment D.
Figure 3.6 Course example from Treatment D

Instruments: Tests and surveys

To measure learned knowledge and retention on all tasks included in the course content, 50 questions were written to be used on all tests: pre-test, post-test and retention-test. These questions were multiple-choice, true-false and matching. Participants completed each test online using surveymonkey.com. All three tests for each treatment contained the same 50 questions pertaining to the course material. By using the same questions on all tests, the participants’ learned knowledge and retention were measured by tracking their actual scores. To ensure participants did not display any bias towards the questions, the question order was different across each pre-test, post-test and retention-test. To ensure the questions were not easy to answer correctly, a trial run was conducted where the content questions were answered by people who had no knowledge of the proprietary software. After the questions were validated, one version of each test (pre-test, post-test and retention-test) for each treatment was published using surveymonkey.com.

In order to gather the participant’s baseline knowledge on the topic, a pre-test was designed to be given at the beginning of the study. The pre-test contained questions related to the soon-to-be-learned content, as well as questions designed to gather general demographic information. If a participant scored 75% or higher on the content questions, they demonstrated an existing understanding of the material and thus were removed from the study. No participants met this criterion; therefore, no participants were excluded from the study.

To measure their immediate understanding of the course material, a post-test was created for participants to complete immediately after taking the course. This post-test served as the baseline for learned knowledge in later statistical evaluations. The post-test also contained questions covering learner satisfaction and the use of interaction design elements throughout the course, based upon the treatment assigned. In addition to measuring how well the participants were satisfied with the course itself, the interaction design questions measured how well participants were able to maneuver through the course and comprehend the course material using the interaction design elements provided.

For the final knowledge assessment, a retention-test was created containing the same content questions from the previous two tests. It was designed to be given a designated amount of time after the course was completed to measure content retention. In Velayo (1993), retention was measured after one week. Since retention continues to diminish as more time passes, this study was designed to measure content retention after two weeks.

Procedures

Since participants had four separate interactions with the instructional material in this study, it was important to make each phase as easy to use as possible in order to maintain a high level of participation. By doing so, the focus was kept on the study’s instructional material and not on potential internal and external variables. The study was designed so that each person completed the following activities in this order:

a. Completed knowledge pre-test
b. Reviewed the course material for their treatment
c. Completed knowledge post-test
d. Completed the knowledge retention-test, approximately two weeks later

As previously mentioned, participants were asked to voluntarily take part in this study via e-mail (see appendix A).
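Although the tests themselves were authored and published by hand on surveymonkey.com, the rules described above – a single pool of 50 questions reused on every test, a different question order on each test, and the 75% pre-test exclusion threshold – can be made concrete with a small sketch. The Python fragment below is illustrative only; the question identifiers and seeds are hypothetical and are not part of the actual study materials.

```python
import random

QUESTIONS = [f"Q{i:02d}" for i in range(1, 51)]   # the same 50 items appear on every test
TREATMENTS = ["A", "B", "C", "D"]
TESTS = ["pre", "post", "retention"]

def shuffled_questions(seed):
    """Return the 50 questions in a shuffled order for one test version."""
    rng = random.Random(seed)                      # fixed seed -> reproducible ordering
    order = QUESTIONS[:]
    rng.shuffle(order)
    return order

# One distinct question order per (treatment, test) combination.
test_orders = {
    (treatment, test): shuffled_questions(seed)
    for seed, (treatment, test) in enumerate(
        (t, x) for t in TREATMENTS for x in TESTS
    )
}

def excluded_from_study(pre_test_correct, total=50, threshold=0.75):
    """Screening rule: a pre-test content score of 75% or higher removed the
    participant from the study (no one met this criterion in practice)."""
    return pre_test_correct / total >= threshold
```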
In order to ensure confidentiality throughout the study, two separate spreadsheets were created to track participant information. In the first file, the participant’s name was entered as they volunteered for the study and they were assigned a participant number. In the second file, the participant number was assigned a treatment group: Treatment A, Treatment B, Treatment C or Treatment D. 2 Therefore, the treatment group and participant name were never viewed concurrently. Once the volunteer pool was complete, the confidential participant-name file was closed and not referred to during the data analysis phase. As participants signed up, the procedure listed above was followed to assign the treatment group.

2 This second file, created prior to the study onset, was populated by randomly drawing treatments A, B, C and D, then assigning the treatment to a participant number in numerical order.

After participants reviewed and signed the IUPUI Informed Consent Statement 3 (see appendix B), each participant was sent an assigned participant number and initial study link via e-mail (see appendix A). To enable participants to easily complete the study, an automated process flow for each treatment was incorporated into the surveys. As such, the initial link sent to the participants was for their assigned treatment’s pre-test. Upon completion of the pre-test, the survey automatically launched the appropriate course material. Once the course was completed, participants were then automatically taken to the post-test. Approximately two weeks after the first three interactions were completed, each participant was sent a link to the assigned treatment’s retention-test. The last test only included content-related questions.

3 As indicated on the IUPUI Informed Consent Statement, participants’ responses did not result in any advantages or disadvantages to their employment.

Participants were encouraged to complete the activities at their own workstation in their office to allow for convenient participation; however, some participants chose to complete the study in their home offices. Completing the activities in their own environment better simulated real-work situations, such as interruptions. Since Dagada and Jakovljevic (2004) found that it was difficult to conduct focus group interviews in the corporate education environment, participants responded to all research questions using instruments published on surveymonkey.com.

Data Analysis

Data was gathered and analyzed in multiple phases during this research study. By using surveymonkey.com to gather data, all test responses were downloaded into Microsoft Excel spreadsheets for further analysis. To begin, test scores were calculated and incorporated into a score progression spreadsheet. Demographic data was then examined on a per-treatment and overall basis. Lastly, satisfaction answers were explored to determine if any treatment could be recommended as the treatment of choice. All data was statistically analyzed using Microsoft Excel 2003 or SPSS v. 15.
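The score-calculation phase mentioned above was carried out in Excel; purely as an illustration, the same step could be expressed in Python roughly as follows. The file names and column names are hypothetical stand-ins for the downloaded surveymonkey.com exports and the answer key described in the grading subsection that follows.

```python
import pandas as pd

# Hypothetical inputs: responses exported from surveymonkey.com and an answer
# key listing each question's correct answer and its position on each test.
key = pd.read_csv("answer_key.csv")                 # question_id, correct_answer,
                                                    # pre_position, post_position, retention_position
responses = pd.read_csv("post_test_responses.csv")  # participant_id, question_id, answer

# Mark each response right or wrong against the key.
graded = responses.merge(key[["question_id", "correct_answer"]], on="question_id")
graded["correct"] = graded["answer"] == graded["correct_answer"]

# Number of questions answered correctly per participant on this test; the
# same step would be repeated for the pre-test and retention-test exports.
post_scores = graded.groupby("participant_id")["correct"].sum()
post_scores.to_csv("post_test_scores.csv")
```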
Grading tests

Using the same 50 questions allowed for a smooth grading process, so a spreadsheet was created which captured all test questions and answers. This file also contained the number where the test question appeared on the pre-test, post-test and retention-test (see Figure 3.7 for more details).

Figure 3.7 Question and answer tracking sheet

Determining score progression per participant

Next, three spreadsheets were created, one for each test type (pre-test, post-test and retention-test), that took each participant’s test answers and calculated the number of questions they got right and wrong. From there, each participant’s results were copied into the score progression spreadsheet (see Figure 3.8 for a sample of this file). By having all scores centrally located in one file, the data could then be loaded into SPSS and statistical analysis performed.

Figure 3.8 Excerpt from the Score Progression spreadsheet prior to SPSS

Statistics – Learner Retention and Learner Satisfaction

To evaluate learner retention, SPSS was used to calculate basic descriptive statistics (mean and standard deviation) and to perform an ANOVA. To assess learner satisfaction, the data downloaded into spreadsheets from surveymonkey.com was used to look at individual question response percentages by treatment and overall.

CHAPTER FOUR: FINDINGS

This section includes research findings of the evaluation, statistics and participant comments. As previously discussed, the research questions addressed two matters related to human computer design in online instructional content:

• Learner retention when measuring recall and recognition
• Overall learning experience when measuring learner satisfaction

Learner Retention

Basic descriptive statistics were performed on the score progression data in an attempt to determine which treatment enabled participants to retain the most content. The results can be seen in Tables 4.1 and 4.2. The standard deviation for each treatment was very large, indicating that no single treatment was statistically preferable for maintaining a high retention rate. Similarly, Zhu and Grabowski (2006, p. 343) found that “participants in the static graphics groups performed equally as well as those in the [interactive simulation] strategies group” which “adds to the growing literature that supports the power of static graphics and questions the instructional value of [simulation].”

Table 4.1 Score Change by Treatment

Treatment   N    Mean    Std. Deviation   Std. Error   95% CI Lower Bound   95% CI Upper Bound   Minimum   Maximum
A           11   -5.45   8.054            2.428        -10.87               -.04                 -27        1
B           13   -4.08   3.013             .836         -5.90               -2.26                 -9        0
C           16   -3.88   4.588            1.147         -6.32               -1.43                -12        4
D           14   -7.07   4.938            1.320         -9.92               -4.22                -21       -2
Total       54   -5.07   5.291             .720         -6.52               -3.63                -27        4

Note. On the score progression data, the “score change” between the post-test score and the retention-test score is a negative number (e.g., post-test score=42, retention-test score=33, score change= -9). As the mean in Table 4.1 is based upon the “score change”, the mean was also expected to be negative.

Table 4.2 ANOVA score

Source           Sum of Squares   df   Mean Square   F       Sig.
Between Groups         93.375      3        31.125   1.119   .350
Within Groups        1390.329     50        27.807
Total                1483.704     53
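The figures in Tables 4.1 and 4.2 were produced in SPSS. For readers who want to trace the computation, a rough Python equivalent is sketched below; it assumes a hypothetical CSV export of the score progression spreadsheet with participant_id, treatment, post and retention columns.

```python
import pandas as pd
from scipy import stats

scores = pd.read_csv("score_progression.csv")    # participant_id, treatment, pre, post, retention
scores["score_change"] = scores["retention"] - scores["post"]   # negative values indicate forgetting

# Descriptive statistics by treatment (compare Table 4.1).
print(scores.groupby("treatment")["score_change"].agg(["count", "mean", "std"]))

# One-way ANOVA across the four treatments (compare Table 4.2).
groups = [g["score_change"].to_numpy() for _, g in scores.groupby("treatment")]
f_stat, p_value = stats.f_oneway(*groups)
print(f"F = {f_stat:.3f}, p = {p_value:.3f}")
```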
Taking the statistical evaluation one step further, the mean and standard deviation were calculated not only by treatment but also by the type of test, as seen in Table 4.3. Again, no treatment materialized as a forerunner.

Table 4.3 Mean/Standard Deviation by Treatment

Measure           Treatment A (n=11)   Treatment B (n=13)   Treatment C (n=16)   Treatment D (n=14)
                  Mean     SD          Mean     SD          Mean     SD          Mean     SD
Pre-test          25.64    3.414       22.62    4.194       23.19    2.588       24.07    5.255
Post-test         34.27    5.676       32.92    5.590       31.69    6.353       35.43    3.936
Retention-test    28.36    5.982       28.85    5.535       27.81    6.585       28.36    5.982

Since no treatment was emerging as an ideal option, the time elapsed between the post-test and retention-test completion was analyzed. E-mails containing the retention link were carefully sent to participants on the 14th day in hopes of receiving accurate retention-test scores which could be directly compared across the study. However, as indicated in Table 4.4, only 28% of the participants completed the retention-test right at 14 days. Wisher et al. (2001, p. 23) discussed that “most of the knowledge decay occurred within the first four weeks”; therefore, the data was categorized into 7-day increments. In this study, participants who completed the retention-test in less than four weeks did not show a large retention-test score drop, while those who waited longer than four weeks showed a mean retention-test score drop of almost 7.5 points. This supports the theory discussed by Wisher et al. (2001).

Table 4.4 Retention-test Timing

Number of days between post-test and retention-test   14 days   15-21 days   22-28 days   More than 28 days
Number of participants                                 15        15            9           15
Percent of participants                                27.8%     27.8%        16.6%        27.8%
Mean score change                                      -3.67     -5.13        -3.33        -7.47

Next, each individual treatment was reviewed for retention-test score trends and the specific length of time which occurred between the post-test and retention-test. Upon running scatterplot diagrams and linear regression analysis for each treatment, an interesting data anomaly began to appear. As seen in Figure 4.1 and Table 4.5, the retention-test score change for Treatment A seemed to trend towards a better score the longer participants waited to complete the retention-test. With the exception of the outlying result in the lower right corner, a linear regression showed an R value of .679, indicating a tight linear correlation.

Figure 4.1 Scatterplot for Treatment A (score change vs. days between post-test and retention-test)

Table 4.5 Model Summary for Treatment A

Model   R         R Square   Adjusted R Square   Std. Error of the Estimate
1       .679(a)   .461       .393                3.051
a Predictors: (Constant), date difference

For Treatment B, the regression also shows a relatively strong R value. As expected from prior research, there is a downward trend, supporting theories that retention drops as time passes. See Figure 4.2 and Table 4.6 for more details.

Figure 4.2 Scatterplot for Treatment B (score change vs. days between post-test and retention-test)

Table 4.6 Model Summary for Treatment B

Model   R         R Square   Adjusted R Square   Std. Error of the Estimate
1       .572(a)   .327       .266                2.582
a Predictors: (Constant), date difference
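The per-treatment model summaries (Tables 4.5 through 4.9) were produced in SPSS. A rough Python equivalent of that calculation is sketched below for illustration only; the file name and the date_difference column are hypothetical, and the optional outlier exclusion mirrors the Treatment C re-analysis discussed next.

```python
import pandas as pd
from scipy import stats

scores = pd.read_csv("score_progression.csv")    # assumes a date_difference column (days between tests)
scores["score_change"] = scores["retention"] - scores["post"]

def model_summary(df, exclude_ids=()):
    """R, R Square, Adjusted R Square and the standard error of the estimate
    for score_change regressed on date_difference, optionally dropping
    outlier participants (as done for Treatment C)."""
    df = df[~df["participant_id"].isin(exclude_ids)]
    x, y = df["date_difference"], df["score_change"]
    fit = stats.linregress(x, y)
    n = len(df)
    r_sq = fit.rvalue ** 2
    adj_r_sq = 1 - (1 - r_sq) * (n - 1) / (n - 2)
    residuals = y - (fit.intercept + fit.slope * x)
    std_err_estimate = (residuals.pow(2).sum() / (n - 2)) ** 0.5
    return abs(fit.rvalue), r_sq, adj_r_sq, std_err_estimate

for treatment, group in scores.groupby("treatment"):
    print(treatment, model_summary(group))

# Treatment C with the two extreme outliers removed (compare Table 4.8).
treatment_c = scores[scores["treatment"] == "C"]
print("C without outliers:", model_summary(treatment_c, exclude_ids=(27, 58)))
```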
At first glance, data points within Treatment C seem to be widely varied. However, when removing the outlying data points, this treatment also demonstrates an upward trend. Figure 4.3 shows a wider variety of data points, while Tables 4.7 and 4.8 show that the R value can become somewhat significant if the extreme outliers are removed.

Figure 4.3 Scatterplot for Treatment C (score change vs. days between post-test and retention-test)

Table 4.7 Model Summary for Treatment C – all data points

Model   R         R Square   Adjusted R Square   Std. Error of the Estimate
1       .190(a)   .036       -.033               4.663
a Predictors: (Constant), date difference

Table 4.8 Model Summary for Treatment C – all data points except outliers 27 and 58

Model   R         R Square   Adjusted R Square   Std. Error of the Estimate
1       .597(a)   .356       .302                3.556
a Predictors: (Constant), date difference

Lastly, an analysis of Treatment D shows a concentration of people completing the retention-test near the recommended fourteen-day timeline. This concentration supports earlier theories that retention is strongest shortly after completing a learning event. However, it also prevents the research from determining a precise linear regression and searching for expected or unexpected trends.

Figure 4.4 Scatterplot for Treatment D (score change vs. days between post-test and retention-test)

Table 4.9 Model Summary for Treatment D

Model   R         R Square   Adjusted R Square   Std. Error of the Estimate
1       .549(a)   .301       .243                4.296
a Predictors: (Constant), date difference

The relatively high correlations and upward trends for Treatments A and C do not correspond to current research showing that retention decreases as time passes, leading me to believe other confounding variables were at work. One possibility is that the participants ignored the request to complete the tests individually and discussed answers with other participants prior to completing the retention-test. Another possibility is that the small number of participants was not enough to truly identify trends in the data; further research in this area could enhance the validity of these results. Nonetheless, a noticeable difference between treatments has yet to materialize when based on retention scores alone.

Learner Satisfaction

Overall, participants preferred the courses with quality graphics – both system screen-captures and simulations. Approximately 86% of Treatment D participants and 75% of Treatment C participants rated the course as well designed or better. Similarly, over 70% of the participants in the graphical treatments thought the course appearance was attractive, compared with only 18% of the text-based course participants. Although they viewed the courses as well designed, participants were only moderately satisfied with the course itself. Participants in the treatments with cartoon graphics (Treatment B) and simulations (Treatment D) scored just over 50% in learner satisfaction. Many people felt that the pages contained too much detailed content to read and not enough interaction, particularly in the text-based (Treatment A) and text with related graphics (Treatment C) courses.

Participants encountered frustrations throughout all facets of the study. Combining the percentages for the three courses with graphics, almost seventy percent of the participants in these groups were frustrated during the course. Shockingly, the least frustrated treatment group was Treatment A, the all-text course. Some people in Treatment D thought the overlays in the simulations “forced the focus of the participant to the instructions rather than seeing the system”.
Along these same lines, when asked if they had fun taking this course, participants responded with a resounding 46% no, while only 24% said yes. 4 Not surprisingly, the treatment with the lowest satisfaction percentage was the all-text course, while the highest percentage came from the cartoon graphics treatment.

4 Only 18% of participants in Treatment A felt the course was fun; 31% of participants in Treatment B had fun taking the course.

Despite the irritations felt throughout the course, a resounding 87% felt their post-test score was higher than their pre-test score and most of them (76%) felt the course content display helped them achieve a better score. One participant from the simulation treatment (D) stated, “Due to my participation with the course animation, I was able to remember some of the content.” Although many participants commented that there was too much detailed content to remember for a post-test, let alone a retention-test, pharmaceutical training is often very technical by nature and thus needs focused questions to measure understanding.

Usability heuristics, sometimes called “rules of thumb” (Nielsen, 2005), are a key factor in evaluating user interface design. Participants were asked questions on the post-test surrounding these heuristics, including consistency, aesthetic design, error correction/prevention, and user control/navigation. Over 91% of participants in all treatments felt the courses were displayed in a consistent manner. Those who received courses with some type of graphics were the most satisfied with the content presentation: 62% in Treatment B and 50% in Treatments C and D, compared with only 28% in Treatment A. Of the participants in Treatment D, 79% felt the amount of content displayed on screen was appropriate. All other groups thought the screens held too much content, with not enough “white space to give eyes and brain a break between learning”. Only 11% of the participants in all treatments received an error. While some people stated they had trouble answering the required questions on the post-test, three participants had difficulty using the simulations in Treatment D. Lastly, over 80% of the entire pool of participants thought the navigation was easy to maneuver. Based upon post-test comments, it seems those who had trouble only did so because they were not able to go back to previous questions while taking the tests 5 or did not like to see the navigation buttons within the course move. 6

5 Participants were not given the option to return to previous questions on the pre-test, post-test and retention-test because some later questions potentially gave away answers to earlier questions. Therefore, the tests functioned as designed.

6 Course navigation arrows were displayed at the end of each page of content. If there was a lot of content on a page, the navigation arrows changed position.

Participants in Treatments A, B and C were asked questions on the post-test related to the text content. Eighty percent of these participants felt the text was organized in a clear manner and 93% thought the text instructions were clear and concise. When asked if the amount of text in future e-learning courses should change, the participants were split. Approximately half believed the amount of text was just right, while the other half felt there should be less text in future e-learning courses. Stated one participant, “I don’t like the courses that are just walls of text.” When commenting, participants suggested improvements such as “mnemonic trigger devices (pictures, screen shots, diagrams, etc.)”, “make use of the functionality of the computer” and text “could have been greatly aided by additional graphical or interactive content”.

When asked about the graphical displays incorporated into Treatments B, C and D, responses were not surprising. Those in Treatment B, the cartoon graphics course, were relatively pleased (62%) with the number of pictures incorporated, but only 31% rated the same pictures helpful in learning the course content. Participants stated that the graphics were too “cartoony” and “relatively unhelpful to the training” while the “use of different media would have helped”. For the course with text instructions and static system screen captures, Treatment C, only 12% were not pleased with the amount of graphics and 56% felt the graphics were helpful to their learning. One participant liked that portions of the system were “presented in little bits…I didn’t need to see the whole screen to understand that I should click a menu option” while others felt the text and graphic combination was “simply distracting” and “too crowded”. When asked if the graphics were appropriate to the content, over eighty percent answered “yes”. On the post-test, one participant commented, “it makes it less intimidating when you actually go to use” the software. Lastly, participants in the simulation course, Treatment D, were pleased with the amount of graphics (100%), thought the graphics were helpful (93%) and felt the simulations were appropriate to the course (93%). When asked specifically about the use of simulations, 86% thought they were easy to use. Based upon the comments received, people thought the “step by step instruction was beneficial”, helping “to stay somewhat focused”. On the flip side, the remaining 14% either thought they were just clicking through the screens to get through or wanted to be practicing in the actual system. One participant stated “the focus becomes completing the walkthrough rather than the information that is being presented”, while another mentioned “all I really had to do was click on the box and the typing was done by the system”.

CHAPTER FIVE: DISCUSSION

Explanation of Outcomes

As mentioned in the literature review, a debate has emerged among researchers attempting to determine whether graphical user interfaces are valuable (Park and Lim, 2007; Schnotz and Rasch, 2005; Wisher et al., 2001; Zhu and Grabowski, 2006). These results show that there is not a clearly effective frontrunner for e-learning courses. Although this outcome is consistent with the aforementioned studies, modifications in research study design have provided new findings for the interaction design field. To begin, the number of questions administered to the participants was much larger in this study as compared to previous studies (Park and Lim, 2007; Zhu and Grabowski, 2006), leading to a more accurate retention score analysis. Also, the study performed by Wisher et al. (2001) consisted of a six-phase course lasting 24 hours per month for 10 months. The opportunity for outside influences to affect retention rates was very high. On the other hand, participants in this study were focused on one topic which required less than one hour of their time, thus minimizing the potential for external factors to interfere.
The type of material being taught was different as well. In prior studies, detailed science material was delivered to participants (Park and Lim, 2007; Schnotz and Rasch, 2005), which could have been difficult to absorb depending on their backgrounds and cognitive skills. In contrast, this study used a participant pool that was well-versed in technology, thus reducing the cognitive load. Finally, of the studies compared, this was the only one to incorporate user satisfaction results, an often overlooked factor.

When this research began, it was anticipated that the simulations in Treatment D would emerge as the graphical option of choice for e-learning courses. As the conflicting literature was reviewed, that assumption came into question. After evaluating the study results, it can be stated that the preferred graphical user interface depends on many factors, such as the type of navigation, the level of interaction, and the use of technology. From the research questions, two aspects of graphical user interfaces in e-learning were considered: content retention and user satisfaction. Graphics make a course more visually attractive, but are people really retaining the knowledge delivered through graphics-infused e-learning courses? For the most part, simulations, or the thought of them, were recommended across the board. One participant commented that simulations would improve their performance since “when you can practice something you just learned without fear of messing something up in the real world, it helps the confidence level.” By modeling the application, learners can get a real feel for how the system works, as opposed to piecing together text instructions and screen captures.

Throughout the learner satisfaction comments, an additional theme started to emerge. In a “grass is always greener” manner, many participants in Treatments A and B wanted more interactivity and graphics, while some of those who had graphics and interactivity wanted less interruption. According to Ryan (2001, p. 51), it is important to “engage the learner”. However, some participants in Treatments A and B stated “I just wasn’t engaged” and asked that the “course be created in a program such as Captivate… you would feel like you are working in the actual program” (Captivate is a software program available through Adobe that can be used to create simulations), while other participants in Treatments C and D wanted simple overviews and the chance to get hands-on practice in the actual system, stating that “learning a software application begs for interactivity during training”.

A review of the results revealed other factors at play within this study. Several negative responses across all treatments centered on the test questions. Each of the three tests contained fifty questions in an attempt to reduce scores inflated by guessing. Instead of remaining focused throughout each test and answering each question accurately, some participants commented that they simply clicked a response to finish the test. Although the test questions were not specifically designed to be part of the satisfaction evaluation, as they were not related to the courses’ graphical user interfaces, many people responded to the satisfaction questions with the test length at the forefront of their minds. Another factor that contributed to the study’s results is the preconceived notion that online training is not effective.
When asked if online training was effective, participants were split between thinking it is a “great way to deliver training” and “generally good, but doesn’t replace classroom learning”. Some comments, however, skewed towards thinking online training is not such a good idea, including “online training never works well for me” and “system training of this type should be instructor led”. If participants held an inflexible attitude towards e-learning courses, that attitude may have been reflected in the way they reviewed the course and approached the test questions. Similarly, some participants thought the course was too focused on the software itself. They remarked that they “wanted to understand the business process” and see how the software could “solve a problem”. Although a business-process focus was originally considered, the decision was made to concentrate on the use of the software itself. However, participants may have factored their attitudes towards the specific content into their satisfaction responses. Lastly, even though the pre-test was essential in determining existing knowledge on the course topic, some participants admitted they paid special attention in the course to topics they remembered from the original test. Wisher et al. (2001, p. 21) found that “pre-test measures may sensitize students to the knowledge-based items, and they may score higher on the post-test regardless of the delivery method”. This sensitivity may therefore have contributed to inflated post-test scores in this study.

Implications of Results of Outcomes

Particularly in the pharmaceutical industry, instructional designers seek the best method of delivering courses that conveys relevant content while remaining cost effective. Based strictly on the content retention data outlined in Chapter Four, text-based e-learning courses can enable students to remember what they have learned. Learners will continue to retain content, and organizations need not spend precious development and financial resources creating e-learning courses with superfluous graphical user interfaces. That said, an important aspect of human-computer interaction is user satisfaction. Learners should also have a sense of contentment and investment in the course material; otherwise, they may feel that training is unnecessary and irrelevant.

CHAPTER SIX: CONCLUSION

As previously discussed, conflicting earlier studies found that simulations either had a great effect on learners or made no significant difference. In attempting to find the best type of graphical user interface for e-learning courses, this study found that additional graphics are simply not necessary when judged on retention scores alone. Nevertheless, learners were more satisfied when additional graphics were incorporated. If resource constraints are minimal, then the inclusion of graphics is reasonable; but if the focus is solely on retention scores, text-based courses are sufficient. Further studies are warranted to determine the best graphical method of presentation in e-learning courses.

Limitations

As with any corporate environment, certain limitations exist when trying to gather research data. First and foremost, it was more difficult than originally expected to recruit participants.
Job tasks and deadlines exist in participants’ normal day-to-day activities, and these naturally take precedence over involvement in this study. When participants originally volunteered, their schedules may have shown availability; however, with a study spanning two to four weeks, it was inevitable that other priorities would emerge and keep some participants from completing the second half of the study. Due to the smaller-than-expected turnout, the statistical significance of the data was not ideal. Having more users complete the study could have led to a stronger statistical outcome and a better-defined frontrunner among the graphical user interface treatment options. Additionally, while every attempt was made to create tests that examined the learning objectives and provided measurable results, using the same test questions on each test may have biased participants. Creating a larger pool of questions and drawing separate tests from it could have led to more significant results.

Future Research

Several aspects of this study could be expanded in future research. Most important would be to perform the study with a larger sample size to improve statistical significance. Also, Treatment B could be removed and the study repeated using only three treatments: text only, screen captures, and simulations. Enhancements could also be made to the graphical content, such as adding audio clips to account for multiple learning styles or modifying the simulations to encourage deeper thought. The type of learning, such as procedural versus knowledge-based, could also be incorporated into future work. Further studies could also be performed using non-technical content. The course content delivered in this study attempted to teach people how to use a proprietary software package; as a result, the simulations were designed around pointing and clicking through a mock-up of the software. If a non-software topic were chosen for the course, different results might be discovered. Additionally, learning checks could be incorporated throughout the course material, and interim practice sessions could be held between post-test and retention-test completion. A few participants commented that because they were not able to practice on the software, they lost most of what they had learned.

Summary

This study investigated how the use of graphical user interfaces in e-learning courses could improve content retention and learner satisfaction. The results indicate that content retention is not significantly improved by creating extensive graphics or simulations. Therefore, straightforward text-based e-learning courses can be just as effective as other courses in supporting retention. Participants did show a slightly higher level of satisfaction when graphics were incorporated, but given today’s time and budget constraints, simple courses can be good enough.

REFERENCES

Bartholomew, D. (2005, June). Taking the e-train. Industry Week, 34-37.
Blythe, S. (2001). Designing online courses: User-centered practices. Computers and Composition, 18, 329-346.
Calder, J., & McCollum, A. (1998). Open and flexible learning in vocational education and training. London: Kogan Page.
Chen, S. Y., & Macredie, R. D. (2004). Cognitive modeling of student learning in web-based instructional programs. International Journal of Human-Computer Interaction, 17(3), 375-402.
Dagada, R., & Jakovljevic, M. (2004). ‘Where have all the trainers gone?’ E-learning strategies and tools in the corporate training environment. Paper presented at SAICSIT 2004.
Daniels, W. J., & Salisbury, S. (2002). Using the internet to achieve your workplace training objectives. Applied Occupational and Environmental Hygiene, 17(12), 814-817.
Franco, J. F., da Cruz, S. R., & Lopes, R. d. D. (2006). Computer graphics, interactive technologies and collaborative learning synergy supporting individuals' skills development.
Gupta, S., & Bostrom, R. P. (2006, April). End-user training methods: What we know, need to know. Paper presented at SIGMIS-CPR '06, Claremont, CA.
Hamel, C. J., & Ryan-Jones, D. L. (1997). Using three-dimensional interactive graphics to teach equipment procedures. Educational Technology Research and Development, 45(4), 77-87.
Kazmer, M. M., & Haythornthwaite, C. (2004). Multiple perspectives on online learning. SIGGROUP Bulletin, 25(1), 7-11.
Kilby, T. (2001). The direction of web-based training: A practitioner's view. The Learning Organization, 8, 194-199.
Lam, P., & McNaught, C. (2006). Design and evaluation of online courses containing media-enhanced learning materials. Educational Media International, 43(3), 199-218.
Lindeman, L. W., & Boerner, H. (1993, October). The development of a generic pharmaceutical training institute. Paper presented at the National Council for Occupational Education, Atlanta, Georgia.
Masemola, S. S. T., & de Villiers, M. R. R. (2006). Towards a framework for usability testing of interactive e-learning applications in cognitive domains, illustrated by a case study. Paper presented at SAICSIT 2006.
McCalla, G. (2004). The ecological approach to the design of e-learning environments: Purpose-based capture and use of information about learners. Journal of Interactive Media in Education, 7. Special issue on the educational semantic web.
McFarland, R. (1995). Ten design points for the human interface to instructional multimedia. T.H.E. Journal, 22, 67.
Mishra, S. (2005). Learning from the online learners. British Journal of Educational Technology, 36(3), 569-574.
Moore, D. M., Burton, J. K., & Myers, R. J. (2003). Multiple-channel communication: The theoretical and research foundations of multimedia. In D. H. Jonassen (Ed.), Handbook of research on educational communications and technology (2nd ed., pp. 979-1005). Mahwah, NJ: Lawrence Erlbaum.
Mupinga, D. M., Nora, R. T., & Yaw, D. C. (2006). The learning styles, expectations, and needs of online students. College Teaching, 54(1), 185-189.
Nielsen, J. (2007). Ten usability heuristics. Retrieved October 15, 2007, from http://www.useit.com/papers/heuristic/heuristic_list.HTML
Olfman, L., & Bostrom, R. P. (1988, March). The influence of training on the use of end-user software. Paper presented at the ACM SIGOIS and IEEE-CS TC-OA conference on office information systems, 110-117.
Olfman, L., Sein, M., & Bostrom, R. P. (1989). EUC training: Comparison of methods and the role of individual differences. Paper presented at the Twenty-Second Annual Hawaii International Conference on System Sciences, IV.
Park, S., & Lim, J. (2007). Promoting positive emotion in multimedia learning using visual illustrations. Journal of Educational Multimedia and Hypermedia, 16(2), 141-162.
Passerini, K. (2007). Performance and behavioral outcomes in technology-supported learning: The role of interactive multimedia. Journal of Educational Multimedia and Hypermedia, 16(2), 183-211.
Quesenbery, W. (2001). On beyond help: Meeting user needs for useful online information [Electronic version]. Technical Communication, 48(2).
Rose, C., & Nicholl, M. J. (1997). Accelerated learning for the 21st century. London: Judy Platkus.
Ryan, C. D. (2001). The human-computer interface: Challenges for educational multimedia and web designers. inroads SIGCSE Bulletin, 33(4), 51-54.
Sadler-Smith, E., & Smith, P. J. (2004). Strategies for accommodating individuals' styles and preferences in flexible learning programmes. British Journal of Educational Technology, 35(4), 395-412.
Sangin, M., Molinari, G., & Dillenbourg, P. (2006). Collaborative learning with animated pictures: The role of verbalizations. Paper presented at the International Conference on Learning Sciences 2006.
Scaife, M., & Rogers, Y. (1996). External cognition: How do graphical representations work? International Journal of Human-Computer Studies, 45, 185-213.
Schnotz, W., & Rasch, T. (2005). Enabling, facilitating, and inhibiting effects of animations in multimedia learning: Why reduction of cognitive load can have negative results on learning. ETR&D, 53(3), 47-58.
Shayo, C., Olfman, L., Teitelroit, R., Nordahl, C., & Rodriguez, M. (1996). Training pre-assessment: Is it feasible? Proceedings of the 1996 conference on ACM SIGCPR/SIGMIS, 244-258.
Snoddy, S., Jr., & Novick, D. (2004, October). Post-training support for learning technology. Paper presented at SIGDOC '04, Memphis, TN.
Stewart, B. L., Waight, C. L., Norwood, M. M., & Ezell, S. D. (2004). Formative and summative evaluation of online courses. The Quarterly Review of Distance Education, 5(2), 101-109.
Teo, C. B., & Gay, R. K. L. (2006). A knowledge-driven model to personalize e-learning. ACM Journal of Educational Resources in Computing, 6(1), 1-15.
Thurman, R. A. (1993). Instructional simulation from a cognitive psychology viewpoint. Educational Technology Research and Development, 41, 75-89.
Velayo, R. S. (1993). Retention of content as a function of presentation mode and perceived difficulty. Reading Improvement, 30, 216-227.
Wisher, R. A., K, C. C., & Seidel, R. J. (2001). Knowledge retention as a latent outcome measure in distance learning. The American Journal of Distance Education, 15(3), 20-35.
Zaharias, P. (2004, June). Usability and e-learning: The road towards integration. eLearn Magazine, 2004, 4.
Zhu, L., & Grabowski, B. L. (2006). Web-based animation or static graphics: Is the extra cost of animation worth it? Journal of Educational Multimedia and Hypermedia, 15(3), 329-347.

Appendix A: E-mail Communication

Appendix B: Informed Consent Form

IUPUI INFORMED CONSENT STATEMENT FOR Investigating the Impact of Interaction Design on Pharmaceutical Online Course Delivery: Adapting Online Course Graphic Design for Improved Content Retention

You are invited to participate in a research study investigating online course delivery enhancements intended to improve content retention and overall satisfaction for online pharmaceutical learners. We ask that you read this form and ask any questions you may have before agreeing to be in the study. The study is being conducted by Melynda Buher, a Human-Computer Interaction graduate student in the School of Informatics.

STUDY PURPOSE
The purpose of this research is to enhance online course delivery to improve content retention and overall satisfaction for online pharmaceutical learners, as online courses are an integral part of the education process for pharmaceutical employees.
The results of the study will help guide instructional designers to better develop materials for pharmaceutical employee education.

NUMBER OF PEOPLE TAKING PART IN THE STUDY:
If you agree to participate, you will be one of approximately 100 subjects who will be participating in this research locally.

PROCEDURES FOR THE STUDY:
If you agree to be in the study, you will do the following things:
Phase 1:
1. Complete a knowledge pre-test and demographic survey
2. Learn about a pharmaceutical-related topic through an online course
3. Take a knowledge post-test and satisfaction survey
Phase 2 (approximately 2 weeks later):
1. Take a knowledge retention survey
In all, your involvement should take approximately one hour. All surveys and courses will be delivered electronically and are designed to be completed in your own workspace.

RISKS OF TAKING PART IN THE STUDY:
There are no risks associated with this study.

BENEFITS OF TAKING PART IN THE STUDY:
There is no direct benefit to taking part in this study. However, you will learn how to interact with a software program used within this company.

CONFIDENTIALITY
Efforts will be made to keep your personal information confidential. We cannot guarantee absolute confidentiality. Your personal information may be disclosed if required by law. Your identity will be held in confidence in reports in which the study may be published. Organizations that may inspect and/or copy your research records for quality assurance and data analysis include the study investigator and his/her research associates, the IUPUI Institutional Review Board or its designees, the study sponsor, and (as allowed by law) state or federal agencies, specifically the Office for Human Research Protections (OHRP).

COSTS
There are no costs associated with taking part in this study.

PAYMENT
You will not receive payment for taking part in this study. However, you will be entered into a random drawing with all participants for a $50 Target Gift Card.

CONTACTS FOR QUESTIONS OR PROBLEMS
For questions about the study or your participation in it, contact the researcher Melynda Buher at (317) 696-6336 or via e-mail at mbuher@indiana.edu.

VOLUNTARY NATURE OF STUDY
Taking part in this study is voluntary and has been approved by your management team. You may choose not to take part or may leave the study at any time. Leaving the study will not result in any penalty or loss of benefits to which you are entitled. Your decision whether or not to participate in this study will not affect your current or future relations with this pharmaceutical company.

SUBJECT’S CONSENT
In consideration of all of the above, I give my consent to participate in this research study. I will be given a copy of this informed consent statement to keep for my records.
SUBJECT’S SIGNATURE: Date: (must be dated by the subject)
SIGNATURE OF PERSON OBTAINING CONSENT: Date:

Appendix C: Sample Test Questions

Appendix D: Demographic Questions from Pre-Test

Appendix E: Satisfaction Questions from Post-Test

VITA

Melynda Buher
mbuher@alumni.indiana.edu
(317) 696-6336
5243 Carrollton Ave, Indianapolis, IN 46220

Education
Master of Science in Human-Computer Interaction, expected December 2007, School of Informatics, Indiana University Purdue University at Indianapolis (IUPUI). Thesis: Investigating the Impact of Interaction Design on Pharmaceutical Online Course Delivery: Adapting Online Course Graphic Design for Improved Content Retention. Research Advisor: Anthony Faiola.
BS, Business, Computer Information Systems, May 1993, Indiana University.

Experience
Software Artistry/Tivoli Systems (a division of IBM), Indianapolis, IN, 7/96-6/99: Implementation Consultant, Technical Trainer
Pharmaceutical Company, Indianapolis, IN, 6/02-present: Senior Training Associate
Theoris, Inc., under contract with a pharmaceutical company, Indianapolis, IN, 9/01-11/01: Lead Technical Trainer
Bradford-Scott Data Corporation, Indianapolis, IN, 6/93-7/96: Software Support Specialist
Interactive Intelligence, Indianapolis, IN, 8/99-7/01: Knowledge Engineer
Corepoint Technologies/IBM Software Group, Indianapolis, IN, 6/99-8/99: Sr. Implementation Consultant