Browsing by Author "Smith, Benjamin D."
Showing items 1–10 of 13.
Advancing Expert Human-Computer Interaction Through Music (Michigan Publishing, 2012-09)
Smith, Benjamin D.; Garnett, Guy E.

One of the most important challenges for computing over the next decade is discovering ways to augment and extend human control over ever more powerful, complex, and numerous devices and software systems. New high-dimensional input devices and control systems provide these affordances, but require extensive practice and learning on the part of the user. This paper describes a system created to leverage existing human expertise with a complex, high-dimensional interface, in the form of a trained violinist and violin. A machine listening model is employed to provide the musician with direct control over a complex simulation running on a high-performance computing system.

ArraYnger: New Interface for Interactive 360° Spatialization (2017)
Andersen, Neal; Smith, Benjamin D.; Music and Arts Technology, School of Engineering and Technology

Interactive real-time spatialization of audio over large immersive speaker arrays poses significant interface and control challenges for live performers. Fluidly moving and mixing numerous sound objects over unique speaker configurations requires specifically designed software interfaces and systems. Currently available software solutions either impose configuration limitations, require extreme degrees of expertise, or demand extensive configuration time. A new system design, focusing on simplicity, ease of use, and live interactive spatialization, is described. Automation of array calibration and tuning is included to facilitate rapid deployment and configuration. Comparisons with other solutions show favorable results in terms of complexity, depth of control, and required features.

Big Tent: A Portable Immersive Intermedia Environment (2016)
Smith, Benjamin D.; Cox, Robin; Department of Music and Arts Technology, School of Engineering and Technology

Big Tent, a large-scale portable environment for 360-degree immersive video and audio artistic presentation and research, is described and initial experiences are reported. Unlike other fully-surround environments of considerable size, Big Tent can be easily transported and set up in any space with an adequate footprint, allowing immersive, interactive content to be brought to non-typical audiences and environments. The construction and implementation of Big Tent focused on maximizing portability by minimizing setup and teardown time, crew requirements, maintenance costs, and transport costs. A variety of performance and installation events are discussed, exploring the possibilities Big Tent presents for contemporary multimedia artistic creation.

The effect of music on body sway when standing in a moving virtual environment (PLOS, 2021-09-28)
Dent, Shaquitta; Burger, Kelley; Stevens, Skyler; Smith, Benjamin D.; Streepey, Jefferson W.; Kinesiology, School of Health and Human Sciences

Movement of the visual environment presented through virtual reality (VR) has been shown to invoke postural adjustments measured by increased body sway. The effect of auditory information on body sway appears to depend on context, with sounds such as white noise, tones, and music being used to amplify or suppress sway. This study aims to show that music manipulated to match VR motion further increases body sway. Twenty-eight subjects stood on a force plate and experienced combinations of three visual conditions (VR translation in the anterior-posterior (AP) direction at 0.1 Hz, no translation, and eyes closed) and four music conditions (Mozart's Jupiter Symphony modified to scale volume at 0.1 Hz and 0.25 Hz, unmodified music, and no music). Body sway was assessed by measuring center of pressure (COP) velocities and RMS. Cross-coherence between the body sway and the 0.1 Hz and 0.25 Hz stimuli was also determined. VR translations at 0.1 Hz matched with 0.1 Hz shifts in music volume did not lead to more body sway than observed in the no-music and unmodified-music conditions. Researchers and clinicians may consider manipulating sound to enhance VR-induced body sway, but the findings of this study do not support using volume to do so.
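As a rough illustration of the cross-coherence measurement described in this abstract, the following sketch estimates coherence between a simulated center-of-pressure trace and a 0.1 Hz reference stimulus using scipy.signal.coherence. All parameters (sampling rate, trial length, noise level, entrainment strength) are illustrative assumptions, not values from the study.

```python
import numpy as np
from scipy.signal import coherence

# Illustrative parameters (assumptions, not values from the study)
fs = 100.0          # force plate sampling rate, Hz
duration = 120.0    # trial length, seconds
t = np.arange(0, duration, 1.0 / fs)

# Reference stimulus: 0.1 Hz oscillation (e.g., VR translation or volume envelope)
stimulus = np.sin(2 * np.pi * 0.1 * t)

# Simulated COP trace: weak entrainment to the stimulus plus broadband sway noise
rng = np.random.default_rng(0)
cop = 0.3 * np.sin(2 * np.pi * 0.1 * t + 0.5) + rng.normal(0, 1, t.size)

# Magnitude-squared coherence between sway and stimulus
f, Cxy = coherence(cop, stimulus, fs=fs, nperseg=4096)

# Coherence near the stimulus frequency indicates how strongly sway tracks it
idx = np.argmin(np.abs(f - 0.1))
print(f"Coherence at {f[idx]:.3f} Hz: {Cxy[idx]:.3f}")
```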
Electro Contra: Innovation for Tradition (2016)
Smith, Benjamin D.; Department of Music and Arts Technology, School of Engineering and Technology

Technological interventions in American traditional fiddle and dance music are presented, and specific design and development problems are considered. As folk dance communities and events explore incorporating modern electronic dance music into the experience, certain inherent problems are exposed. Maintaining the strict musical forms required by the traditional choreography, preserving the fluidity and control of live bands, and interacting with the other performers all require new software tools. Initial solutions developed in Ableton Live are described and demonstrate a successful method of solving these challenges.
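The core constraint named in this abstract, keeping live electronic playback locked to the fixed phrase structure the choreography depends on, can be illustrated with a small scheduling sketch. The following is a hypothetical Python illustration, not the authors' Ableton Live implementation; the 64-beat AABB phrase length is an assumption about typical contra dance forms.

```python
# Hypothetical sketch: quantizing live loop changes to dance-form boundaries.
# The choreography repeats over a fixed phrase (assumed here: 64 beats, AABB);
# a performer may request a new loop at any time, but the switch only takes
# effect at the next phrase boundary so the form is never broken.

PHRASE_BEATS = 64  # assumed phrase length in beats

class FormLockedScheduler:
    def __init__(self):
        self.current_loop = "intro"
        self.pending_loop = None

    def request(self, loop_name: str) -> None:
        """Performer requests a new loop; it is queued, not switched."""
        self.pending_loop = loop_name

    def on_beat(self, beat: int) -> str:
        """Called once per beat; switches loops only on phrase boundaries."""
        if beat % PHRASE_BEATS == 0 and self.pending_loop is not None:
            self.current_loop = self.pending_loop
            self.pending_loop = None
        return self.current_loop

# Usage: a mid-phrase request takes effect only at the next phrase boundary.
sched = FormLockedScheduler()
for beat in range(128):
    if beat == 10:
        sched.request("breakdown")   # requested mid-phrase
    playing = sched.on_beat(beat)
    if beat in (10, 63, 64):
        print(beat, playing)         # switch lands exactly on beat 64
```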
Enabling Live Presence: Dynamic Video Compression for the Telematic Arts (Michigan Publishing, 2012-09)
Smith, Benjamin D.

Telematic performance, connecting performing artists in different physical locations into a single unified ensemble, places extreme demands on the supporting media. High audio and video quality plays a fundamental role in enabling inter-artist communication and collaboration. However, currently available video solutions are either inadequate to the task or pose extreme technical requirements. A new solution is presented, vipr (videoimage protocol), which exposes a number of popular, robust video compression methods for real-time use in Jitter and Max. This new software has successfully enabled several inter-continental performances and presents exciting possibilities for creative telematic artists, musicians, and dancers.

Interactive High Performance Computing for Music (Michigan Publishing, 2011-07-31)
Smith, Benjamin D.; Garnett, Guy E.

The origins of computer music are closely tied to the development of the first high-performance computers associated with major academic and research institutions. These institutions have continued to build extremely powerful computers, now containing thousands of CPUs with incredible processing power. Their precursors were typically designed to operate in non-real-time, "batch" mode, and that tradition has remained a dominant paradigm in high-performance computing. We describe experimental research in developing the interactive use of a modern high-performance machine, the Abe supercomputer at the National Center for Supercomputing Applications on the University of Illinois at Urbana-Champaign campus, for real-time musical and artistic purposes. We describe the requirements, development, problems, and observations from this project.

ml.* Machine Learning Library as a Musical Partner in the Computer-Acoustic Composition Flight (Michigan Publishing, 2014-09)
Smith, Benjamin D.; Deal, W. Scott

This paper presents an application and extension of the ml.* library, implementing machine learning (ML) models to facilitate "creative" interactions between musician and machine. The objective behind the work is to realize a musical "virtual partner" capable of creation in a range of musical scenarios encompassing composition, improvisation, studio work, and live concert performance. An overview of the piece, Flights, used to test the musical range of the application is given, followed by a description of the development rationale for the project. Its contribution to the aesthetic quality of the human musical process is discussed.

Music Recombination Using a Genetic Algorithm (ICMA, 2018)
Majumder, Sanjay; Smith, Benjamin D.; Music and Arts Technology, School of Engineering and Technology

This paper presents a new system, based on genetic algorithms, for composing music automatically from analysis of exemplar MIDI files. The aim of the project is to create a new piece of music based on the information in the source pieces. The system extracts musical features from two MIDI files and automatically generates a new piece using a genetic algorithm. The user specifies the length of the piece to create and the weighting of musical features from each of the MIDI files to guide the generation. The system thus provides the composer with a new piece of music based on the two selected pieces.

Musical Deep Learning: Stylistic Melodic Generation with Complexity Based Similarity (2017)
Smith, Benjamin D.; Music and Arts Technology, School of Engineering and Technology

The wide-ranging impact of deep learning models implies significant applications in music analysis, retrieval, and generation. Initial findings from a musical application of a conditional restricted Boltzmann machine (CRBM) show promise for informing creative computation. Taking advantage of the CRBM's ability to model temporal dependencies, full reconstructions of pieces are achievable given a few starting seed notes. Generating new material using figuration from the training corpus requires restricting the size and memory space of the CRBM, forcing associative rather than perfect recall. Musical analysis and information complexity measures show the musical encoding to be the primary determinant of the nature of the generated results.
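To make the seeded-generation scheme concrete, the following sketch shows the skeleton of CRBM-style sampling: each new frame is Gibbs-sampled from a model whose biases are conditioned on a sliding window of previous frames, so a few seed frames suffice to start generation. The weights here are random stand-ins (the sketch is structural, not a trained model), and all dimensions are illustrative assumptions rather than values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# Illustrative dimensions (assumptions): binary piano-roll frames
n_visible, n_hidden, n_past = 88, 64, 4   # pitches, hidden units, context frames

# Untrained stand-in parameters; a real CRBM would learn these from a corpus
W = rng.normal(0, 0.1, (n_visible, n_hidden))            # visible-hidden weights
A = rng.normal(0, 0.1, (n_past * n_visible, n_visible))  # past -> visible (autoregressive)
B = rng.normal(0, 0.1, (n_past * n_visible, n_hidden))   # past -> hidden
b_v = np.zeros(n_visible)
b_h = np.zeros(n_hidden)

def generate_frame(history, gibbs_steps=20):
    """Gibbs-sample one new frame conditioned on the last n_past frames."""
    ctx = np.concatenate(history)            # flattened conditioning window
    dyn_bv = b_v + ctx @ A                   # history-dependent visible bias
    dyn_bh = b_h + ctx @ B                   # history-dependent hidden bias
    v = rng.random(n_visible) < 0.5          # random initial frame
    for _ in range(gibbs_steps):
        h = rng.random(n_hidden) < sigmoid(v @ W + dyn_bh)
        v = rng.random(n_visible) < sigmoid(h @ W.T + dyn_bv)
    return v.astype(float)

# Seed with a few frames, then generate by sliding the conditioning window
history = [rng.integers(0, 2, n_visible).astype(float) for _ in range(n_past)]
piece = list(history)
for _ in range(16):
    piece.append(generate_frame(piece[-n_past:]))
print(np.array(piece).shape)  # (20, 88): seed frames plus generated frames
```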