UNIVERSITY OF CALIFORNIA
SANTA BARBARA





 

/
THE MAT SEMINAR SERIES 2015-2016: FACULTY PANEL (CLOSING EVENT)

/

MAT Faculty Panel, moderated by Prof. Yon Visell. Members of the panel: Prof. Matthew Turk, Prof. Marcos Novak, Prof. JoAnn Kuchera-Morin, Prof. George Legrady and Prof. Marko Peljhan, with curator Zhang Ga as a special guest.

 

/
THE MAT SEMINAR SERIES 2015-2016: FINAL LECTURE

FRIDAY. MAY. 27th · 4 PM
California NanoSystems Institute (room 1982)

/
v2.nl/archive/people/zhang-ga

/
ZHANG Ga is an internationally recognized media art curator and Distinguished Professor at the China Central Academy of Fine Arts. He has also held appointments as Consulting Curator of Media Art at the National Art Museum of China, Senior Researcher at the Media + Design Lab of EPFL | Swiss Federal Institute of Technology Lausanne, and Associate Professor of Media Art at Parsons The New School for Design, and has been a Visiting Scientist at the MIT Media Lab, a Visiting Scholar in the Art and Art History Department of Stanford University, and, currently, a Senior Fellow at the University of California, Santa Barbara. Among his numerous curatorial works, widely acclaimed projects include, as Artistic Director and Curator, Synthetic Times: Media Art China 2008 (a Beijing Olympics Cultural Project), Translife: International Triennial of New Media Art 2011, and thingworld: International Triennial of New Media Art 2014, all organized by the National Art Museum of China. These large-scale exhibitions critically investigated global media art trends, generated intellectual discourse about art, technology and culture, and received wide acclaim in The New York Times, China Daily, Artforum, ArtAsiaPacific, and The Art Newspaper, among others.

/
He has served on many jury and consultation committees, including those of the World Trade Center Artist Residency Program, Prix Ars Electronica, VIDA, and Franklin Furnace's Future of the Present Award, among others. In collaboration with Rhizome he also initiated and established the Prix Net Art award, recognizing the future promise of artists making outstanding work on the internet. From 2004 to 2006, together with Prof. Lu Xiaobo, he organized and curated the First, Second and Third Beijing International New Media Art Exhibitions and Symposiums, extending the global new media art discourse into mainland China. He has spoken widely on media art and culture around the world in venues such as Relive: The Third International Conference on the Histories of Media Art, Science and Technology (keynote address, Melbourne), Modern Mondays (MoMA), the National Art Center (Tokyo), Kunsthalle Zürich, São Paulo State University (keynote address, São Paulo), the South Australian Art Museum (Adelaide), and UC Berkeley. He has edited several books and authored essays published by The MIT Press (Synthetic Times, 2008; "The Posthuman as a Condition of Art," OCTOBER #155, forthcoming), Liverpool University Press (translife, 2011; thingworld, 2014), and Tsinghua University Press (In the Line of Flight, 2006). He also serves on the editorial board of Leonardo Books, published by The MIT Press. Since 2015 he has been Artistic Director of the Chronus Art Center, a Shanghai-based nonprofit art organization dedicated to media art.

 

PERFORMING SYSTEMS

MONDAY. MAY. 23rd · 4 PM
California NanoSystems Institute (room 2001)

/
www.sojamo.de

/
In recent years computational and screen-based works have flourished within the media arts and have had a significant influence across multiple disciplines. With the shift in computing platforms towards mobile and embedded systems and micro-sized devices, and with the advent of bio-electrical systems, the potential for new tools and artistic expressions, transdisciplinary investigations and interdisciplinary collaborations is emerging.
Technology and networks have become a ubiquitous part of our everyday lives. With easy access to hardware and software, the availability of technology for daily activities has become the norm and defines the conditions in which we live, work, learn, and interact.
Given these shifts in our relationship with technology, I would like to ask how the conditions and systems we operate in inform and shape future art practices and interdisciplinary projects. In my talk I will present a series of works from my artistic practice, education, performance, and urban space that explore such systems.

/
Andreas Schlegel works across disciplines and creates artifacts, tools and interfaces where technology meets everyday life situations in a curious way. Many of his works are collaborative and concerned with emerging and open source technologies where audio, visual and physical output is created by humans and machines through interaction, computation and process.
Currently Andreas lives and works in Singapore and leads the Media Lab at Lasalle College of the Arts where he also lectures across faculties. His work at the lab is practice-based, collaborative and interdisciplinary in nature and aims to blur the boundaries between art and technology.
In 2007 he co-founded Syntfarm, an art collective interested in the intersection of art, nature and technology, where many of his collaborative projects are based. He stays active within the international media arts environment and has been contributing to the open source project Processing.org since 2004. In his current research he aims to find artistic expressions for real-time data that resides within generative and networked systems.

 

 

/
THE FUTURE OF SPATIAL COMPUTER MUSIC

MONDAY. MAY. 16th
California NanoSystems Institute (room 2001)

 

/
Composing computer music for large numbers of speakers is a daunting process, but it is becoming increasingly practicable. This paper argues for increased attention to the possibilities for this mode of computer music on the part of both creative artists and institutions that support advanced aesthetic research. We first consider the large role that timbre composition has played in computer music, and posit that this research direction may be showing signs of diminishing returns. We next propose spatial computer music for large numbers of speakers as a relatively unexplored area with significant potential, considering reasons for the relative preponderance of timbre composition over spatial composition. We present a case study of a computer music composition that focuses on the orchestration of spatial effects. Finally we propose some steps to be taken in order to promote exploration of the full potential of spatial computer music.
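The abstract advocates spatial orchestration without prescribing a particular technique. As a purely illustrative sketch (not Lyon's method), the Python below pans a single source around a ring of evenly spaced loudspeakers using pairwise equal-power panning; the speaker count and angles are assumptions for demonstration.

```python
import math

def ring_gains(azimuth_deg, num_speakers):
    """Equal-power pairwise panning of one source onto a ring of speakers.

    azimuth_deg  -- desired source direction in degrees (0 = first speaker)
    num_speakers -- loudspeakers assumed evenly spaced on a circle
    Returns a list of per-speaker amplitude gains (all but two are zero).
    """
    spacing = 360.0 / num_speakers
    azimuth = azimuth_deg % 360.0
    left = int(azimuth // spacing)            # nearest speaker counter-clockwise
    right = (left + 1) % num_speakers         # its clockwise neighbour
    frac = (azimuth - left * spacing) / spacing
    gains = [0.0] * num_speakers
    gains[left] = math.cos(frac * math.pi / 2)    # equal-power crossfade
    gains[right] = math.sin(frac * math.pi / 2)
    return gains

# Example: sweep a source around a hypothetical 16-speaker ring in 45-degree steps.
for az in range(0, 360, 45):
    active = {i: round(g, 2) for i, g in enumerate(ring_gains(az, 16)) if g > 0}
    print(f"{az:3d} deg -> {active}")
```

For real many-channel work one would typically reach for VBAP, Ambisonics, or wave field synthesis; the point here is only how little arithmetic a basic spatial gesture requires, compared with the effort of composing with it.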

/
Eric Lyon is a composer and computer music researcher whose work focuses on articulated noise, spatial orchestration, and computer chamber music. His software includes FFTease and LyonPotpourri. He is the author of "Designing Audio Objects for Max/MSP and Pd," which explicates the process of designing and implementing audio DSP externals. His music has been selected for the Giga-Hertz prize, MUSLAB, and the ISCM World Music Days. Lyon has taught computer music at Keio University, IAMAS, Dartmouth, Manchester University, and Queen's University Belfast. Currently, he teaches at Virginia Tech, where he is a faculty fellow at the Institute for Creativity, Arts, and Technology.

 


 

/
THE REALITY OF VIRTUAL REALITY

MONDAY. MAY. 9th · 4 PM
California NanoSystems Institute (room 2001)

/
blacklog.mitplw.com

/
Luis Blackaller is a writer, director, designer and media artist from Mexico City. He has a multidisciplinary background that covers aspects of entertainment, science, design, art and visual storytelling. He earned a BS with honors in mathematics from UNAM and graduated as a character animator from the Vancouver Film School. With more than twelve years of experience in the Mexican film and television industry, Luis has worked as a designer, art director, motion graphics artist and storyboard artist alongside Academy Award winners on films such as Amores Perros, 21 Grams and Babel. In 2008, Luis earned an MS from the MIT Media Lab under the mentorship of John Maeda, where he designed and developed interactive software systems and tools to study the spread of user-generated content online, alternate revenue models, and online creative social systems and their relationship with artistic expression and communication. As an MIT student, he attended writing workshops with Junot Diaz and Joe Haldeman, received character design and world-building training from Frank Espinosa, and learned transmedia methods with Henry Jenkins. After graduating, Luis worked as a producer and creative director for several research initiatives at MIT, where he designed and supervised alternative strategies for communicating and promoting research with the help of documentary videographers and web developers. More recently, Luis can be found in Venice, California, where he is Creative Director at WEVR. During his tenure at WEVR, he has developed, shot and post-produced more cinematic virtual reality than almost anyone in the world, helping to define the new sensibilities, visual language and production techniques that are pushing forward the emerging medium of VR.

/
In this talk, the speaker explores his own experience working as a creative director in a virtual reality studio.

 


 

/
LATEST ADVANCEMENTS IN HAPTIC NEUROSCIENCE AND ENGINEERING

FRIDAY. MAY. 6th · 4 PM
California NanoSystems Institute (room 2003)

/
merkel.jp

/
Research Institute for Electronic Science, Hokkaido University - After receiving a Ph.D. in engineering, Masashi Nakatani has been pursuing how the sense of touch can serve our emotional reality. He is the founder of TECHTILE (TECHnology-based tacTILE design). Currently he works on basic research in touch neuroscience and on disseminating haptic content in education, toward understanding humans' ability to perceive the world.

/
Our research group has shown that light touch is mediated and amplified by Merkel cells, skin mechanoreceptors that transduce mechanical stimuli into electrical signals which activate somatosensory neurons. This function is achieved by the force-gated ion channel Piezo2, and a recent study shows that this channel also plays an important role in proprioception. This talk will overview the latest research topics in haptic neuroscience and introduce a possible bridge between sensory biology and behavioral science. The speaker will also offer several ideas for how behavioral neuroscience experiments can be implemented with haptic engineering, based on the haptic toolkits our group has developed.

 

 

/
PANDORA TO PHOTOSHOP:
INSIDE THE TEAMS, TECH, AND CAREERS
OF THE MUSIC AND VIDEO TECHNOLOGY INDUSTRY

MONDAY. MAY. 2nd. 2016 · 4PM
California NanoSystems Institute (room 2001)

/
How do leading audio, music, and video tech companies bring products from ideation through commercialization? Join us as we go behind the scenes at leading companies and explore key roles in the media tech industry, including marketing, product management, software/hardware development, user experience, R&D, and more. We'll explore case studies of your favorite products.

/
Jay LeBoeuf is the Executive Director of Real Industry, an education nonprofit that prepares its students to enter the technology industry by exposing them to leading tech companies’ practices. He is a lecturer in music technology and music business at Stanford University's Center for Computer Research in Music and Acoustics (CCRMA), and he advises digital media startups such as Chromatik, LANDR, and Kadenze. As an entrepreneur, Jay founded Imagine Research, an intelligent audio technology startup that built a search engine for sound. In 2012, iZotope acquired Imagine Research and Jay joined the iZotope executive team leading research & development, technology strategy, and intellectual property. Prior to creating Imagine Research, LeBoeuf was an engineer and researcher in the Advanced Technology Group at Avid Technology. There he led research that expanded the power of Pro Tools, the industry standard for digital audio workstations. He has been recognized as a Bloomberg Businessweek Innovator, awarded $1.1M in Small Business Innovation Research grants by the U.S. National Science Foundation, and interviewed on BBC World.

 

 

/
RHYTHMIC PROCESSES IN ELECTRONIC MUSIC

MONDAY. APR. 25th. 2016 · 6PM
California NanoSystems Institute (room 2001)

/
mat.ucsb.edu/~clang/home.html

/
Curtis Roads creates, teaches, and pursues research in the interdisciplinary territory spanning music and sound technology. His specialties include electronic music composition, microsound synthesis, graphical synthesis, sound analysis and transformation, sound spatialization and the history of electronic music. He is keenly interested in the integration of electronic music with visual and spatial media, as well as the visualization and sonification of data. Certain of his compositions feature granular and pulsar synthesis, methods he developed for generating sound from acoustical particles. These are featured in his CD+DVD set POINT LINE CLOUD (Asphodel, 2005). His writings include over a hundred monographs, research articles, reports, and reviews.

His books include the textbook The Computer Music Tutorial (1996, The MIT Press), Musical Signal Processing (co-editor, 1997, Routledge, London), L'audionumérique (2007, Dunod, Paris), and the Japanese edition of The Computer Music Tutorial (January 2001, Denki Daigaku Shuppan, Tokyo). Microsound (2001, The MIT Press) explores the aesthetics and techniques of composition with sound particles. The Chinese edition of The Computer Music Tutorial was published in 2011 (People's Music Publishing, Beijing). His newest book is Composing Electronic Music: A New Aesthetic (2015, Oxford). A revised edition of The Computer Music Tutorial is in the works.

His new set of music is FLICKER TONE PULSE.

/
Electronic technology has liberated musical time and changed musical aesthetics. In the past, musical time was considered a linear medium subdivided according to ratios and intervals of a more-or-less steady meter. The possibilities of envelope control and the creation of liquid or cloud-like sound morphologies, however, suggest a view of rhythm not as a fixed set of intervals on a time grid, but rather as a continuously flowing, undulating, and malleable temporal substrate upon which events can be scattered, sprinkled, sprayed, scrambled, or stirred (scrubbed) at will. In this view, composition is not a matter of filling or dividing time, but rather of generating time. The core of this lecture introduces aspects of rhythmic discourse that appear in my electronic music. These include: the design of phrases and figures, exploring a particle-based rhythmic discourse, deploying polyrhythmic processes, the shaping of streams and clouds, using fields of attraction and repulsion, creating pulsation and pitched tones by particle replication, using reverberant space as a cadence, contrasting ostinato and intermittency, using echoes as rhythmic elements, and composing with tape echo feedback. The lecture is accompanied by sound examples.
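Roads describes rhythm as events scattered onto a malleable temporal substrate rather than placed on a grid. As a toy illustration of that idea (not his actual compositional software), this Python sketch scatters grain onsets according to a time-varying density envelope, producing a "cloud" that swells and thins; the envelope shape and grain rates are arbitrary assumptions.

```python
import random

def scatter_grain_onsets(duration, density_envelope, seed=1):
    """Scatter grain onset times (in seconds) with a time-varying density.

    density_envelope(t) -- expected grains per second at time t
    Uses thinning of a Poisson process: candidate onsets are drawn at the
    peak density, then kept with probability density(t) / peak.
    """
    rng = random.Random(seed)
    peak = max(density_envelope(i / 100 * duration) for i in range(101))
    onsets, t = [], 0.0
    while t < duration:
        t += rng.expovariate(peak)                     # next candidate onset
        if t < duration and rng.random() < density_envelope(t) / peak:
            onsets.append(round(t, 3))
    return onsets

# A cloud that swells toward the middle of a 10-second gesture, then thins.
envelope = lambda t: 2 + 28 * (1 - abs(t - 5) / 5)     # 2..30 grains per second
cloud = scatter_grain_onsets(10.0, envelope)
print(len(cloud), "grains; first few onsets:", cloud[:8])
```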

 

/
ATTEMPTS AT TOTALITY

MONDAY. APR. 18th. 2016 · 4PM
California NanoSystems Institute (room 2001)

/
ingogunther.com



/
Ingo will share his experience as an advisor to the New York Hall of Science and the Museum of Emerging Science and Innovation in Tokyo, Japan, as well as the creative process behind notable artworks such as "World Processor x Geo-Cosmos," "Exosphere / Globefield," and "Ceterum censeo: video ex nihilo," among others.

/
Ingo Günther grew up in Germany and traveled to North Africa, North and Central America, and Asia before studying Ethnology and Cultural Anthropology at Frankfurt University (1977) and sculpture and media at Kunstakademie Düsseldorf (with Schwegler, Uecker and Paik; MA 1982, postgraduate year 1983). His awards include a German National Academic Foundation Scholarship, a P.S.1 residency in New York, a DAAD scholarship, and a Kunstfonds grant.


 


 

/
LESSONS FROM CREATIVE ADVENTURES
IN THE KINGDOM OF LIBRARIES, ECONOMISTS, COMPUTER SCIENTISTS AND ROBOTICISTS

MONDAY. MAR. 7th. 2016 · 4PM
Engineering Science Building (room 1001)

/
alimomeni.net
c-uir.org
artfab.art.cmu.edu

/
Momeni will discuss several current projects at the intersection of performance art, design and engineering. These projects include a human-robot collaborative theatrical performance based on Ferdowsi's epic poem Shahnameh, a participatory public projection work about disease and healthcare, an investigation of the sounds of the gut as indicators of activity in the mind, and a mobile augmented reality application for creative engagement with works of art in museums and galleries.

/
Momeni was born in Isfahan, Iran and immigrated to the United States at the age of twelve. He studied physics and music at Swarthmore College and completed his doctoral degree in music composition, improvisation and performance with computers at the Center for New Music and Audio Technologies at UC Berkeley. He spent three years in Paris, where he collaborated with performers and researchers from La Kitchen, IRCAM, Sony CSL and CIRM. Between 2007 and 2011, Momeni was an assistant professor in the Department of Art at the University of Minnesota in Minneapolis, where he directed the Spark Festival of Electronic Music and Art and founded the urban projection collective MAW. Momeni currently teaches in the School of Art at Carnegie Mellon University, where he directs CMU ArtFab, co-directs CodeLab and teaches in CMU's IDEATE and Emerging Media programs.

 

 

/
THE COMPUTER
AS A PROSTHETIC ORGAN OF PHILOSOPHY.

WEDNESDAY. MAR. 2nd. 2016 · 12pm
Engineering Science Building (room 1001)


/
davidrokeby.com/

   

/
David Rokeby will explore the idea that the computer can function as a sort of prosthetic organ of philosophy. Computers can serve as a tool with which to reinvigorate well-worn but unresolved philosophical questions in tangible and visceral ways. Through his computer-based art installations, Rokeby has probed questions of embodiment, consciousness, perception, language, and time, as well as more political questions such as surveillance. He will illustrate these interests and explorations with examples from his 35 years of art practice.


 

 

/
SUPPORTING CO-LOCATED COLLABORATION
IN LARGE-SCALE HYBRID IMMERSIVE ENVIRONMENTS

MONDAY. FEB. 29th. 2016 · 4PM
Engineering Science Building
Room 2001

/
febretpository.net

/
In the domain of large-scale visualization instruments, Hybrid Reality Environments (HREs) are a recent innovation that combines the best-in-class capabilities of immersive environments with the best-in-class capabilities of ultra-high-resolution display walls. Co-located research groups in HREs tend to work on a variety of tasks during a research session (sometimes in parallel), and these tasks require 2D data views, 3D views, linking between them, and the ability to bring in (or hide) data quickly as needed. Addressing these needs requires a matching software infrastructure that fully leverages the technological affordances offered by these new instruments. I detail the requirements of such an infrastructure and outline the model of an operating system for Hybrid Reality Environments. I then present Omegalib, an implementation of the core components of this model. Omegalib is designed to support dynamic reconfigurability of the display environment: areas of the display can be interactively allocated to 2D or 3D workspaces as needed.
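Omegalib's actual API is not shown in the abstract; as a minimal sketch of the idea of dynamically allocating regions of a tiled wall to 2D or 3D workspaces, here is a hypothetical Python allocator (the tile grid size, workspace names, and the Workspace/DisplayWall classes are all invented for illustration).

```python
from dataclasses import dataclass

@dataclass
class Workspace:
    name: str
    kind: str        # "2D" or "3D"
    x: int           # tile coordinates on the display wall
    y: int
    w: int
    h: int

    def overlaps(self, other):
        return not (self.x + self.w <= other.x or other.x + other.w <= self.x or
                    self.y + self.h <= other.y or other.y + other.h <= self.y)

class DisplayWall:
    """Toy allocator for rectangular workspace regions on a tiled wall."""
    def __init__(self, cols, rows):
        self.cols, self.rows = cols, rows
        self.workspaces = []

    def allocate(self, ws):
        inside = (0 <= ws.x and 0 <= ws.y and
                  ws.x + ws.w <= self.cols and ws.y + ws.h <= self.rows)
        if inside and all(not ws.overlaps(w) for w in self.workspaces):
            self.workspaces.append(ws)
            return True
        return False

    def release(self, name):
        self.workspaces = [w for w in self.workspaces if w.name != name]

# Hypothetical session: a 3D immersive view beside a 2D document view.
wall = DisplayWall(cols=18, rows=4)                              # an 18x4-tile wall
print(wall.allocate(Workspace("molecule", "3D", 0, 0, 10, 4)))   # True
print(wall.allocate(Workspace("notes", "2D", 10, 0, 8, 4)))      # True
print(wall.allocate(Workspace("extra", "2D", 8, 0, 4, 4)))       # False: overlaps
```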

/
Alessandro Febretti is a Senior Interactive Visualization Specialist at Northwestern University and a PhD candidate at the Electronic Visualization Laboratory (EVL), University of Illinois at Chicago. His research is at the intersection of immersive environments, scientific visualization and collaborative work. At EVL he concentrated on visualization instrument research (hardware and software) and on human-computer interaction in single- and multi-user scenarios. Alessandro contributed to the creation of a small-scale hybrid workspace (the OmegaDesk) and a large-scale Hybrid Reality Environment (the CAVE2). In addition to his work on hybrid systems, he has designed and implemented desktop and web-based visualization tools and user interfaces in a variety of research domains, from astrophysics to art history. Prior to joining EVL, Alessandro worked at Milestone Games and taught videogame design at the Polytechnic University of Milan, Italy.

 

 

/
QUANTUM SENSITIVITY

MONDAY. FEB. 22nd. 2016 · 5pm
California NanoSystems Institute (room 1605)

/
www.portablepalace.com

   

/
Dmitry Gelfand (b.1974, St. Petersburg, Russia) and Evelina Domnitch (b. 1972, Minsk, Belarus) create sensory immersion environments that merge physics, chemistry and computer science with uncanny philosophical practices. Current findings, particularly in the domain of mesoscopics, are employed by the artists to investigate questions of perception and perpetuity. Such investigations are salient because the scientific picture of the world, which serves as the basis for contemporary thought, still cannot encompass the unrecordable workings of consciousness.

Having dismissed the use of recording and fixative media, Domnitch and Gelfand’s installations exist as ever-transforming phenomena offered for observation. Because these rarely seen phenomena take place directly in front of the observer without being intermediated, they often serve to vastly extend one’s sensory threshold. The immediacy of this experience allows the observer to transcend the illusory distinction between scientific discovery and perceptual expansion.

 

In order to engage such ephemeral processes, the duo has collaborated with numerous scientific research facilities, including the Drittes Physikalisches Institut (Goettingen University, Germany), the Institute of Advanced Sciences and Technologies (Nagoya), Ecole Polytechnique (Paris) and the European Space Agency. They are recipients of the Japan Media Arts Excellence Prize (2007), and four Ars Electronica Honorary Mentions (2013, 2011, 2009 and 2007).

/
Despite the overwhelming, century-long success of quantum physics, it is still unknown whether, and where, there lies a horizon separating our reality from the quantum world of light upon which it is based. It is this slippery frontier that Domnitch and Gelfand explore through installations and performances. Formerly considered impossible to imagine, let alone perceive, a variety of quantum phenomena have now been detected on macroscopic scales, from colloidal liquids and biological systems to astrophysical phenomena. These thresholds can be experienced through meticulously orchestrated conditions coupled with an active attunement of the senses.

 

 


/
HACKTERIA
OPEN SOURCE BIOLOGICAL ART, DIY BIOLOGY, GENERIC LAB INFRASTRUCTURE

MONDAY. FEB. 8th. 2016 · 5PM
Engineering Science Building
Room 2001

 


/
hackteria.org

   

 

/
"Das verflixte 7te Jahr"
It has now been 7 years since the foundation of hackteria.org envisioning an idea to build a large knowledge base of instructions for artists, (bio-)hackers, educators and activists to work creatively with living media and contemporary life sciences. Through our acitivities a wide range of playful workshops have been developed, regular gatherings for collaboration and a global network "community of practice" established, with a common enthusiasm on sharing of knowledge, art/science collaboration and an embracing of an "amateur" and do-it-yourself approach to go beyond and disciplinary thinking. How can we apply the open source culture to modern biotechnological practices?
What kind of collaborative methodologies have we been developing to work together (DIWO, Do-It-With-Others) in a radically transdisciplinary way?
How does access to DIY and open source laboratory equipment change the way we will do sciene in the future?

During my talk I will give an overview of various projects that have been developed within the growing international hackteria network, ranging from building temporary DIWO labs in the jungles of Indonesia to open source hardware designed for manufacturing, and from developing new bio-commons governance models in synthetic biology to hunting rabbits in Helsinki.

 

 

/
Dr. Marc R. Dusseiller is a transdisciplinary scholar, lecturer in micro- and nanotechnology, cultural facilitator and artist. He runs DIY (do-it-yourself) workshops in lo-fi electronics and synths, hardware hacking for citizen science, and DIY microscopy. He co-organized Dock18, Room for Mediacultures, the diy* festival (Zürich, Switzerland), KIBLIX 2011 (Maribor, Slovenia), and workshops for artists, schools and children, and is a former president (2008-12) of the Swiss Mechatronic Art Society, SGMK. In collaboration with Kapelica Gallery, he started the BioTehna Lab in Ljubljana (2012-2013), an open platform for interdisciplinary and artistic research on the life sciences. Currently, he is developing means to perform bio- and nanotechnology research and dissemination (Hackteria | Open Source Biological Art) in a DIY / DIWO fashion in kitchens, ateliers and developing countries. He co-organized the different editions of HackteriaLab 2010-2014 in Zürich, Romainmôtier, Bangalore and Yogyakarta.

 

/
PHYSICALLY MODELED MUSICAL INSTRUMENTS ON MOBILE DEVICES

MONDAY. JAN. 25th. 2016 · 4PM
California NanoSystems Institute (room 1601)


/
www.moforte.com

/
The pervasiveness of handheld mobile computing devices has created an opportunity to realize ubiquitous virtual musical instruments. These devices are powerful, connected, and equipped with a variety of sensors, allowing parametrically controlled, physically modeled instruments that previously required an 8-DSP farm with a minimal UI. We will provide a brief history of physically modeled musical instruments and the platforms these models have run on. We will also give an overview of what is currently possible on handheld mobile devices, including modern DSP strategies using a high-level expression language that can be re-targeted across multiple platforms. moForte Guitar and GeoShred will be used as real-world examples.
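As background on what "physically modeled" means here, the sketch below implements the classic Karplus-Strong plucked-string algorithm in Python and writes the result to a WAV file. It is a deliberately minimal string model, not moForte's or GeoShred's DSP, and the tuning, damping and duration values are arbitrary.

```python
import random
import struct
import wave
from collections import deque

def pluck(frequency, duration, sample_rate=44100, damping=0.996):
    """Minimal Karplus-Strong plucked string: a noise burst circulates in a
    delay line whose output is averaged (a low-pass filter) and fed back."""
    n = int(sample_rate / frequency)                 # delay-line length sets the pitch
    line = deque(random.uniform(-1.0, 1.0) for _ in range(n))   # the "pluck" (noise burst)
    out = []
    for _ in range(int(duration * sample_rate)):
        first = line.popleft()
        avg = damping * 0.5 * (first + line[0])      # averaging low-pass + decay
        line.append(avg)
        out.append(first)
    return out

def write_wav(path, samples, sample_rate=44100):
    """Write mono float samples in [-1, 1] as a 16-bit WAV file."""
    with wave.open(path, "w") as f:
        f.setnchannels(1)
        f.setsampwidth(2)
        f.setframerate(sample_rate)
        f.writeframes(b"".join(
            struct.pack("<h", int(max(-1.0, min(1.0, s)) * 32767)) for s in samples))

# A two-second low E string (about 82.4 Hz), written to a mono WAV file.
write_wav("pluck_e2.wav", pluck(82.4, 2.0))
```

Commercial mobile instruments refine this idea with waveguides, nonlinearities and sensor-driven parameter control, but the core remains a short delay line with a lossy feedback loop.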

/
Pat Scandalis, CTO and acting CEO, has worked for a number of Silicon Valley High Tech Companies. He has held lead engineering positions at National Semiconductor, Teradyne, Apple and Sun. He has spent the past 18 years working in Digital Media. He was an Audio DSP researcher at Stanford University’s Center for Computer Research in Music and Acoustics (CCRMA). He co-founded and was the VP of engineering for Staccato Systems, a successful spinout of Stanford/CCRMA that was sold to Analog Devices in 2001. He has held VP positions at TuneTo.com, Jarrah Systems and Liquid Digital Media (formerly Liquid Audio). He most recently ran Liquid Digital Media, which developed and operated all online digital music e-commerce properties for Walmart. He holds a BSc in Physics from Cal Poly San Luis Obispo and is currently a visiting scholar at CCRMA, Stanford University.

 

/
AESTHETICS
AND MACHINE INTELLIGENCE:

THE EMERGENCE OF A NEW TYPE OF HUMAN + MACHINE NARRATIVE

WEDNESDAY. JAN. 13th. 2016 · NOON
California NanoSystems Institute (room 1605)

/
www.icinema.unsw.edu.au

/
This seminar investigates a range of Australian Research Council funded projects recently exhibited at venues such as the Sydney Film Festival, ISEA, and Chronus Shanghai. Realized through interdisciplinary collaborative research involving the domains of media aesthetics, interactive narrative and machine learning, these projects create integrated 360-degree 3D immersive environments in which the user interacts with an intelligent digital world. Through its discussion of the aesthetics and AI architecture underpinning these environments, the paper develops an explanation of what is termed 'parallel' aesthetics, a function of the mutually interdependent relationship between human users and autonomous machine images and characters. Understanding interaction in this way, the paper explores parallel aesthetics and co-evolutionary narrative as a dynamic two-way process between human and machine intelligence.

/
Dennis Del Favero is a research artist, Director of the UNSW iCinema Centre, Sydney, and Executive Director of the Australian Research Council (Humanities and Creative Arts). His work has been widely exhibited in solo exhibitions at leading museums and galleries such as the Sprengel Museum Hannover, Neue Galerie Graz and ZKM Media Museum Karlsruhe, and in major group exhibitions including the Sydney Film Festival, the Biennale of Architecture Rotterdam, the Biennial of Seville and the Battle of the Nations War Memorial, Leipzig (a joint project with Jenny Holzer). His work is represented by Scharmann & Laskowski, Cologne and William Wright Artists Projects, Sydney. He is a Visiting Professorial Fellow at ZKM, Germany, Visiting Professorial Fellow at the Academy of Fine Arts, Vienna, Visiting Professor at City University of Hong Kong and Visiting Professor at IUAV University of Venice.

 

 

/
MAKING SOURCE CODE DANCE:
VISUALIZING ALGORITHMS IN LIVE CODING PERFORMANCE

MONDAY. JAN. 11th. 2016 · 5PM
California NanoSystems Institute (room 1601)


/
charlie-roberts.com

/
In most canonical live coding performances, programmers code music and/or art while projecting source code, as it is written, for audience consumption. Although the live coding community actively debates both the meaning and necessity of this projection, I propose that visual annotations to source code can playfully help communicate algorithmic development to both performers and audiences. In this talk I will briefly outline the history of live coding, describe prior work in the live coding community using visual annotations to illuminate source code, and show my work with the live coding environment Gibber to make source code (and maybe even audiences) dance. I will conclude with a short performance demonstrating these ideas.
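Gibber's own annotations are drawn live in the browser; as a language-agnostic stand-in for the idea, the Python sketch below traces a toy "performance" function and reprints its source with a crude per-line execution count, i.e. a text-only annotation of which branches of the algorithm actually ran (the pattern function and the heat-bar rendering are invented for illustration).

```python
import inspect
import sys
from collections import Counter

line_hits = Counter()

def tracer(frame, event, arg):
    # Count every executed line inside traced function calls.
    if event == "line":
        line_hits[frame.f_lineno] += 1
    return tracer

def pattern():
    # A toy "musical" process: eight beats alternating two events.
    events = []
    for beat in range(8):
        if beat % 2 == 0:
            events.append("kick")      # imagine triggering a kick drum here
        else:
            events.append("hat")       # ...and a hi-hat here
    return events

sys.settrace(tracer)
pattern()
sys.settrace(None)

# Reprint the source of `pattern` annotated with how often each line ran --
# a text-only stand-in for the visual annotations discussed in the talk.
source_lines, start = inspect.getsourcelines(pattern)
for offset, text in enumerate(source_lines):
    hits = line_hits.get(start + offset, 0)
    print(f"{hits:3d} {'#' * min(hits, 16):<16} {text.rstrip()}")
```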

/
Charlie Roberts is an Assistant Professor in the School of Interactive Games and Media at the Rochester Institute of Technology, where his research explores human-centered computing in creative coding environments. He is the primary author of Gibber, a live-coding environment for the browser, and he has given live-coding performances across the US, Europe, and Asia. His recent scholarly work on live coding includes articles in the Computer Music Journal; the Journal of Music, Technology and Education; and an invited chapter in the Oxford Handbook of Algorithmic Music. Charlie earned his PhD from the Media Arts & Technology Program at UC Santa Barbara while working as a member of the AlloSphere Research Group.

 

 

/
FROM ANALOGUE TO DIGITAL...
AND BACK AGAIN

MONDAY. DEC. 7th. 2015 · 4PM
Engineering Science Building
Room 2001

 


/
www.touch33.net

/
Curator & Producer; Lecturer & Publisher; Author & Editor; occasional exhibitions, installations and performances. He has been running the audio-visual label Touch for 30 years and in this period has acquired much experience and information on disseminating cultural sounds to a wider audience. Since 1982: Touch, with Jon Wozencroft [senior lecturer in the Department of Communication & Design at the Royal College of Art, London].

/
How a cultural organization and its artists respond to and develop their relationship with technology from 1982 onwards…

 

 

/
THINKING ABOUT
CHANNEL STRIPS

MONDAY. NOV. 9th. 2015 · 4PM
Engineering Science Building
Room 2001

/
mat.ucsb.edu/~stp

/
Stephen Travis Pope has realized his musical works in North America (Toronto, Stanford, Berkeley, Santa Barbara, Havana) and Europe (Paris, Amsterdam, Stockholm, Salzburg, Vienna, Berlin). His music is available from Centaur Records, Perspectives of New Music, Touch Music, SBC Records, Absinthe Records, and the Electronic Music Foundation. Stephen also has over 100 technical publications on music theory and composition, computer music, and artificial intelligence. From 1995 to 2010 he worked at the University of California, Santa Barbara (UCSB).

/
Recording channel strips are an instrumental component of source processing in music recording, as well as of the processes of mixing and mastering. The stages of a typical channel strip include: Input, Dynamics, Equalization, Effects, and Output Routing. This presentation starts with a brief history of recording channel strips and continues with a survey of current hardware and software implementations. The final section presents the presenter's requirements for the ideal recording channel strip.
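As a concrete picture of the signal flow just listed, here is a toy per-sample channel strip in Python: input gain, a crude peak-sensing compressor, a one-pole low-pass standing in for the EQ section, and an output gain (the effects stage is omitted). All parameter values are placeholder assumptions, not a model of any particular console or plug-in.

```python
import math

def db_to_lin(db):
    return 10 ** (db / 20.0)

def channel_strip(samples, sample_rate=44100,
                  input_gain_db=3.0, comp_threshold_db=-18.0, comp_ratio=4.0,
                  eq_cutoff_hz=8000.0, output_gain_db=-1.0):
    """Toy channel strip: input gain -> compressor -> one-pole low-pass 'EQ'
    -> output gain. Returns the processed samples."""
    in_gain = db_to_lin(input_gain_db)
    out_gain = db_to_lin(output_gain_db)
    threshold = db_to_lin(comp_threshold_db)
    a = math.exp(-2.0 * math.pi * eq_cutoff_hz / sample_rate)   # one-pole coefficient
    env, lp, out = 0.0, 0.0, []
    for x in samples:
        x *= in_gain                               # 1. input stage
        env = max(abs(x), 0.999 * env)             # 2. dynamics: crude peak envelope
        if env > threshold:
            over_db = 20 * math.log10(env / threshold)
            x *= db_to_lin(-over_db * (1 - 1 / comp_ratio))
        lp = (1 - a) * x + a * lp                  # 3. EQ: one-pole low-pass
        x = lp
        out.append(x * out_gain)                   # 4/5. (effects omitted) output gain
    return out

# Example: run a one-second 440 Hz test tone through the strip.
tone = [0.9 * math.sin(2 * math.pi * 440 * n / 44100) for n in range(44100)]
processed = channel_strip(tone)
print("peak in:", round(max(abs(s) for s in tone), 3),
      "peak out:", round(max(abs(s) for s in processed), 3))
```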

 

 


/
ON MY NEW AUDIOVISUAL WORK:
)ertur(

MONDAY. NOV. 2nd. 2015 · 4PM
Engineering Science Building
Room 2001

 


/
www.clarlow.org

 

 



/
Born in 1945, Clarence Barlow obtained a science degree from Calcutta University in 1965 and a piano diploma from Trinity College of Music, London, the same year. He studied acoustic and electronic composition from 1968-73 at Cologne Music University, as well as sonology from 1971-72 at Utrecht University. His use of the computer as an algorithmic music tool dates from 1971. He initiated and in 1986 co-founded GIMIK: Initiative Music and Informatics Cologne, chairing it for thirteen years. He was in charge of computer music from 1982-1994 at the Darmstadt Summer Courses for New Music and from 1984-2005 at Cologne Music University. In 1988 he was Director of Music of the XIVth International Computer Music Conference, held that year in Cologne. From 1990-94 he was Artistic Director of the Institute of Sonology at the Royal Conservatory in The Hague, where from 1994-2006 he was Professor of Composition and Sonology. At UCSB he is a professor in the Music Department (as Corwin Endowed Chair and Head of Composition), in Media Arts and Technology, and in the College of Creative Studies. His interests are the algorithmic composition of instrumental, electronic and computer music, music software development, and interdisciplinary activities, e.g. between music, language and the visual arts.

/
music.ucsb.edu/people/clarence-barlow


 

/
When requested by my Los Angeles-based art-collector friend Raj Dhawan to write music for an exhibition of his Alphonse Mucha (1860-1939) collection, I at once thought of basing the music on that of a Mucha contemporary and compatriot. My choice fell on the composer Leoš Janáček (1854-1928), born, like Mucha, in Moravia (then in the Austrian Empire, today in the Czech Republic). Thirty-seven selected Mucha paintings are matched by an equal number of Janáček pieces, many of them movements of larger works. The bigger the paintings, the longer the music selections. At first the music is constrained to the range of a minor seventh, all notes outside this range being discarded. The notes are also redistributed among five instruments - flute, clarinet, violin, cello and piano - and the range gradually increases to just over four octaves, each instrument being allotted exactly ten notes by then. Analogously, each Mucha painting is first shown only with its most widespread color, the rest rendered in grey. During the run of each Janáček piece, the colors of the Mucha work are expanded in range to finally include all the original ones. This audiovisual composition bears the title )ertur(, which could be expanded to include words like aperture (English, French), "apertura" (Czech, Italian, Polish), "copertura" (Italian), "abertura"/"cobertura" (Spanish, Portuguese) or "obertura" (Spanish). "Ertur" means "peas" in Icelandic.
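As a purely illustrative reconstruction of the constraint Barlow describes (not his own software), the Python sketch below filters a note list against a pitch window that widens linearly from a minor seventh to just over four octaves and deals the surviving notes round-robin to the five instruments. The window centre, the 49-semitone final span, the demo note list, and the round-robin dealing (a simplification of the ten-notes-per-instrument allotment) are all assumptions.

```python
def ertur_filter(notes, center=60, start_span=10, end_span=49):
    """Keep only notes inside a pitch window that widens over the piece.

    notes      -- list of (time_fraction, midi_pitch), time_fraction in [0, 1]
    start_span -- initial window width in semitones (a minor seventh = 10)
    end_span   -- final width, just over four octaves (49 is an assumption)
    Surviving notes are dealt round-robin to five instruments.
    """
    instruments = ["flute", "clarinet", "violin", "cello", "piano"]
    assigned, i = [], 0
    for t, pitch in sorted(notes):
        span = start_span + (end_span - start_span) * t    # widen linearly
        lo, hi = center - span / 2, center + span / 2
        if lo <= pitch <= hi:                              # discard notes outside
            assigned.append((t, pitch, instruments[i % 5]))
            i += 1
    return assigned

# Hypothetical input: pitches far from the centre only survive late in the piece.
demo = [(0.1, 62), (0.1, 80), (0.5, 48), (0.9, 82), (0.9, 40)]
for t, p, inst in ertur_filter(demo):
    print(f"t={t:.1f}  pitch={p}  ->  {inst}")
```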

 

 

/
PROPRIOCEPTIVE
POSITION SENSE

MONDAY. OCT. 26th. 2015 · 4PM
Engineering Science Building
Room 2001

 


/
www.researchgate.net/profile/Irene_Kuling

/
Irene A. Kuling received a B.Sc. degree in Physics from Utrecht University (2008) and an M.Sc. degree in Human-Technology Interaction from Eindhoven University of Technology (2011) in the Netherlands. Currently she is working toward a PhD in the field of visuo-haptic perception at Vrije Universiteit Amsterdam. Her research is part of the H-Haptics project, a large Dutch research program on haptics. Irene studies the integration of, and differences between, proprioceptive and visual information about the hand. Her research interests include human perception, haptics and psychophysics.

/
How do you know where your hand is? Probably you don't think about this question too often. However, knowledge of the position of your hand is essential in your daily interaction with all kinds of objects. Two factors that play an important role in the position sense of the hand are vision and proprioception. In this talk I'll present some of my work on visuo-proprioceptive position sense of the hand. This work reveals systematic (subject-dependent) errors between the visually and the proprioceptively perceived position of the hand. These errors are consistent over time, influenced by skin-stretch manipulations, and resistant to force manipulations. An interesting question is how we can use this knowledge in the design of human-machine interaction.

 

 

/
SYNTHESIZING PERFORMANCE

MONDAY. OCT. 12th. 2015 · 4PM
Engineering Science Building
Room 2001


/
www.kevyb.com

/
Kristin Grace Erickson performs, composes and records under the name Kevin Blechdom and in her band Blectum from Blechdom. She received her Doctorate of Musical Arts degree from the California Institute of the Arts and both her B.A. and M.F.A. degrees in electronic music from Mills College. Kristin is the Technical Coordinator for the Digital Arts and New Media program at the University of California, Santa Cruz.

/
Kristin will describe synthesizing performance from abstract processes, her exploration of the computational potential of performing algorithms, and her development of audio technologies to communicate real-time instructions to performers in works such as M.T.Brain (2012).

 


 

 

 

 

 


 

/

 

/

ABOUT

LOCATION

Since 1998, the Media Arts and Technology graduate program has hosted a periodic seminar series. The transdisciplinary nature of our program is reflected in the diverse range of fields our speakers come from: engineering, electronic music, art and science.

For fall 2015, talks will usually take place on Mondays at 4 PM. They are all free and open to the public.

Most of our seminar talks take place in the Engineering Science Building (room 2001).

> UCSB Interactive Campus Map
> UCSB Parking

   

 



/
CREDITS

JUAN MANUEL ESCALANTE
Documentation, visual identity, original web design, video production, logistics, and contact administration

KRIS LISTOE · MARISA ORTEGA · LISA THWING
Logistics, financial and technical coordination

PROF. MARKO PELJHAN
Faculty Coordinator

PROF. GEORGE LEGRADY
MAT Chair



 

 



THE MAT SEMINAR SERIES
University of California Santa Barbara
M M X V - M M X V I


The 2015/2016 Seminar Series is made possible by the generous support of the Media Arts and Technology Program at the University of California, Santa Barbara; the Chair's Fund at MAT; Prof. Yon Visell; the Systemics Lecture Series; the UCSB College of Engineering; and the UCSB College of Humanities and Fine Arts.