On the occasion of David Huron's retirement, EMR Editor Daniel Shanahan recently interviewed him regarding the founding of Empirical Musicology Review, the growth of empirical musicology, digital corpus studies, the Humdrum Toolkit, and related topics.

BIOGRAPHY

David Huron was born in 1954 in the northern Canadian town of Peace River. He grew up mostly in Ottawa, where he received private training in piano, flute, organ, and music theory. He attended Canterbury High School for the performing arts, and later studied flute with Karin Schindler at the Royal Conservatory of Music in Toronto. Concluding that an undergraduate degree in music would be little more than review, Huron pursued interdisciplinary undergraduate studies at the University of Waterloo. He audited courses in music, computer science, psychology, and philosophy.

Throughout his studies and after graduation, Huron was active as a composer and performer in the new music scene in southern Ontario. In 1980 he began master's studies at York University (Toronto), where again he pursued a degree in interdisciplinary studies—this time mixing intellectual history, the value framework of decision-making, philosophy of science, and music history. His advisor was the music semiotician David Lidov, author of such works as Elements of Semiotics (St Martin's Press, 1999) and Is Language a Music (Indiana University Press, 2004).

Huron graduated with a PhD in musicology from the University of Nottingham (UK) in 1989, where he worked with the noted Brahms scholar Robert Pascall. He completed a dissertation on voice leading in the music of J.S. Bach. As Huron discusses below, much of this work on Bach was assisted by the use of encoded scores of the composer's music, processed with software that would later become the Humdrum Toolkit. At Nottingham, Huron engaged with scientists at the nearby British Institute for Hearing Research and began his first forays into empirical research. After graduating, he returned to Canada, taking up a post in the music department at the University of Waterloo. In subsequent years he also held adjunct appointments in the departments of Psychology and Systems Design Engineering—where he was also Coordinator of the Centre for Society, Technology, and Values.

In 1997 the School of Music and Department of Psychology at the Ohio State University were jointly awarded an academic enrichment grant aimed at bolstering research in music cognition. The goal was to build on existing faculty expertise in the work of Mari Riess Jones, Caroline Palmer, and David Butler. The following year, Huron took up a new appointment in the School of Music and founded the Cognitive and Systematic Musicology Laboratory. Over the next 22 years the laboratory generated more than 150 publications and produced a string of talented PhD graduates and post-doctoral fellows. Huron was awarded the Outstanding Publication Award from the Society for Music Theory in 2001 for his article on voice leading, and was later presented with the Wallace Berry Award for his 2006 book Sweet Anticipation: Music and the Psychology of Expectation. Also in 2006, Huron founded Empirical Musicology Review in conjunction with his retired colleague, David Butler. The journal was one of the first humanities journals to practice open peer review. I sat down with David recently to discuss empirical musicology, corpus studies, open science, music and expectation, and research in general.

BACKGROUND

Daniel Shanahan: Can you tell me a bit about Peace River? What was it like growing up there?

David Huron: It was cold. In the winter I recall my father climbing under the car, carefully wiping the under-side of the engine with a cloth, and then using the cloth and some small twigs to build a fire under the oil pan. You needed to do that in order to get the oil fluid enough to start the car.

We moved a lot. My father was an agronomist who advised farmers, so we lived in a series of small towns. His career finally brought us to Ottawa—which provided a much richer musical environment. Outside of school music, I was deeply involved in choral groups, orchestra, civic bands, and a rich chamber music scene.

DS: Where you ended up seems predestined in retrospect, though. As an undergraduate you found yourself in a classroom with both David Lidov and Anabel Cohen.

DH: We'd like to think that much of the bad stuff that happens in life is due to chance, and that much of the good stuff comes from our own efforts. But in retrospect, it's hard not to see that many of the good things that happen in life are also a result of chance encounters. We do make choices, of course, but those choices are constrained by circumstances and our limited imagination of what's possible. I'm grateful to the many people whose paths crossed mine at various points in my life and who set me in new directions.

DS: You did a lot of really interesting things before music theory and cognition, ranging from organ building to composition. Could you discuss your work a bit before this career? Did it influence your research at all?

DH: I spent most of my high school career at the newly opened Canterbury H.S. for the performing arts in Ottawa. It was wonderful. We had private lessons in a major instrument, keyboard harmony, choral instruction, as well as plenty of theory. However, in my final year of high school our family moved to Guelph, Ontario where I had to attend a regular high school. I didn't last. I dropped out of school and began commuting to Toronto where I studied flute with Karin Schindler at the Royal Conservatory there. There was a period of about eight or nine months where I did nothing else but practice flute and piano all day. I also studied theory locally with John Goobie. Both of my older brothers were high school dropouts, and I think my parents were tolerant of me because at least I was working hard on music.

The following year I was accepted to a music industry arts program at Fanshawe College in London, Ontario. I lasted one semester. I found the subject matter interesting, but I was disappointed by the quality of the teaching. I thought it would be better to enroll in university instead. The following year I went back to high school and completed a diploma. Along the way, I met Bob Moog at a meeting of the Canadian Independent Record Producers Association in Toronto. I was inspired to purchase a (second-hand) Moog Series IIIP synthesizer and a bunch of recording gear. Of course, I had to take out a loan, and that meant I needed a job. So I began working at Guelph Pipe Organ Builders. The company was very small (just four employees) so I was exposed to the full range of tasks involved in assembling and reconditioning instruments. During the time I was there I worked on two organs, one installed in Erie, Pennsylvania, and a large organ we installed in a church in Washington, DC. Soon afterward the company went bankrupt.

I still had loan payments to make, so the following year I worked at Foseco, a factory involved in steel production. At Foseco I worked as an oven tender for nearly a year. It was a true blue-collar job, but one of the richest intellectual experiences I've ever had. My job was to load and unload product from six large furnaces—each about the size of a carwash. There was only about one minute of work every four minutes or so, which left plenty of time to think during an eight-hour shift. I used to keep a small black notebook in my breast pocket and jot down notes from time to time. I spent most of that year just thinking about music. Mostly, I wrote down questions; but I also speculated a lot about music. I also composed. It was a mindless blue-collar job that ironically introduced me to the life of the mind. Many of the questions I wrote in those notebooks as a teenager proved to be the focus of my later research.

DS: You then went on to study at the University of Waterloo. Could you discuss your time at Waterloo a bit? How did being at a university with such established Math and Engineering programs influence your research?

DH: Paradoxically, my strong music background caused problems when I started university. I was deeply interested in music, but I thought an undergraduate degree in music would have been a waste of time since it would have been mostly review. The University of Waterloo offered an independent studies program where you could do whatever you wanted. The program was a legacy of the free-wheeling 60s. There were virtually no requirements; you didn't need to take any classes or even show up on campus.

The program did require that all students submit a year-end report, but there was no specified format. You could simply write on a single piece of paper "I had a good time this year" and that would satisfy the formal requirement. People did things like that. If you wanted to graduate with a degree there was a more formal process. In your final year, you had to assemble a faculty committee and get a program of activities approved. But otherwise you were left to do whatever you wanted.

Most of my classmates were social activists of various flavors, including environmentalists, feminists, and peace and labor activists. One of my classmates was Ann Hansen, a "direct action" advocate who later received a life sentence for setting off a large car bomb at a plant that manufactured components for cruise missiles. Nobody died, but several plant workers were seriously injured.

I was skeptical of the idea of such an unstructured program, but in the end the extreme freedom had two salutary effects. First, within days of starting the program, it dawned on me that I was ultimately responsible for my own education. Of course, that's true for everybody. But in traditional programs, people don't think of that. We just do what's specified in the syllabus and don't question whether we're learning what's most important for us. The second lesson I learned was that there were no disciplinary boundaries. The study of "music" is not some sort of pre-defined subject. I could dabble in history or anthropology, physics or engineering, social science or philosophy, whatever. I attended a few classes in aesthetics, acoustics, psychology, and computer science. I did take a couple of music courses, but I found it was much more valuable simply to read books.

I spent three years reading through the ML and MT sections of the library. I typed up thousands of pages of notes. Reading was the most important thing I did, and you can do an awful lot of reading if you have no other obligations. By the end of the second year I discovered that I was quite possibly better read than my music professors—although there were certainly big gaps in my knowledge.

As you noted, Waterloo is Canada's biggest engineering school. It also has some 5,000 students studying mathematics. So it was a particularly good place to study computer science.

DS: You then found yourself writing a thesis on Adorno?

DH: That was my master's thesis; and even then, Adorno was only a small part. My master's studies followed the same pattern as my undergraduate experience. Once again, I enrolled in an interdisciplinary program—this time at York University in Toronto. My focus was still on music, but I took courses in philosophy of knowledge, the value framework of decision-making, history of physiology, critical theory, and other stuff. I mostly read about music and intellectual history on the side.

Adorno is interesting. So much writing on music is equivocal—on the one hand this, on the other hand that. For a young person looking for concrete answers, Adorno is refreshing. He models passion and anger. There is something compelling about a writer who has such strong opinions: jazz is crap and Stravinsky is unadulterated shit. Not to mention that Adorno is one of the great German stylists.

Of course, his writing is complex, so I found myself doing translations of his work from English to English—shortening the sentences and clarifying the logic behind his essays and books. Adorno gets away with a lot of woolly-headed thinking, hidden in his convoluted rhetoric. In the end, I had a falling-out with Adorno. For me, Adorno ultimately modeled not the best of music scholarship, but the worst. After Adorno, I wanted my own writing to be nothing if not clear. If I'm wrong, I want it to be easy for readers to see how I'm wrong.

DS: How did you end up doing doctoral work in England?

DH: I was hoping to work with Ian Bent—who'd written a splendid "analysis" article in the New Grove. I liked Bent's broad perspective on music theory. He was at the University of Nottingham, so that's where I went. Unfortunately, Bent left for Columbia just as I arrived, so I ended up working with Robert Pascall, a wonderful Brahms scholar. My dissertation work focused on voice leading in Bach. I graduated with a PhD in musicology—my one and only music degree.

HISTORY OF EMPIRICAL MUSICOLOGY REVIEW

DS: Can you discuss how EMR came about?

DH: Sure. EMR arose out of conversations I had with David Butler beginning around 2004. Over the years, both David and I had become frustrated with various aspects of journal publishing. Both of us had had extensive experience as authors, reviewers, and editors. David had been the editor of College Music Symposium, and I had served for many years as an Associate Editor for Music Perception and Psychology of Music.

Journals rely on a huge volunteer effort and that comes with many problems. First, there's a lot of work involved. Most journal submissions are rejected. In music, the rejection rates can range between 50 and 90 percent depending on the journal. Typically, journals commission two or three reviews for each submission, so if a journal accepts only one in four submissions, that means each published article represents maybe 10 behind-the-scenes reviews. A single journal issue containing four articles might involve 35 or 40 behind-the-scenes reviews. That's a lot of work that's not publicly evident. Readers see only the tip of a large iceberg of editorial work.

Editors are constantly looking for reviewers who are conscientious and timely. If a reviewer is late, the Editor has little power to speed up the process. If reviewers were paid, then you could threaten to withhold payment, but journals can't afford to pay reviewers. Moreover, it's hard to find willing reviewers at the best of times, so editors don't want to alienate their most public-spirited reviewers by pestering them, even when they're late.

DS: What about the dreaded "Reviewer #2 problem"?

DH: Yes, that's an even worse problem with reviews. Because most reviews are anonymous, reviewers don't always behave like ladies and gentlemen. As we've seen with the Internet, there's nothing like anonymity to sometimes bring out the worst in people. When reviews are unsigned, it's not uncommon for reviewers to lapse into intemperate language. And especially if the reviewer has misunderstood the paper, harsh criticism only succeeds in angering authors. As action editors, both David and I frequently had the heart-sinking experience of reading disparaging reviews which we knew would simply upset authors and complicate the reviewing process.

At the other extreme, reviewers sometimes craft beautiful critiques that convey a general lesson that should be communicated widely throughout a scholarly community. Sadly, these little gems languish as private correspondence with a single author when they really ought to be broadly read.

Anyone who's ever acted as an Editor or Action Editor knows that one's power is rather limited. Your ability to accept or reject submissions is constrained by the recommendations of reviewers. (That's the whole point of the peer review process.) But reviewers can differ widely in how picky they are. Some will see the silver lining in every cloud. Others will see red flags in every punctuation mark. As an Editor, it's difficult to go against the recommendations of reviewers. You may have two reviewers who both vote thumbs-down on a paper you think is pretty good. Or you may have reviewers who accept a paper you think should clearly be rejected. You can't simply ignore the reviewers' assessments since that's a recipe for future difficulty in recruiting cooperative reviewers. Editors can't afford to burn bridges.

DS: The solution you and David Butler came up with was a more open process.

DH: So let me step back a bit here. In running a journal, there are really four key practical concerns. How do you go about building a readership? How do you attract authors to submit their work? How do you incentivize reviewers to write timely and thoughtful reviews? And how do you make the Editor's job less tedious and more rewarding? In starting EMR, David and I really wanted to take a different approach that would address all of these various concerns. I was the one who came up with the idea of publishing the reviews.

DS: Why did you feel it was necessary to have open peer review?

DH: Publishing reviews has several useful consequences. First, publishing reviews provides a strong incentive for reviewers to volunteer to help. The promise of publication allows you to attract highly experienced scholars who might otherwise not be interested in reviewing manuscripts. Moreover, we can commission commentaries from people with contrasting backgrounds and perspectives.

Most importantly, signed reviews or commentaries ensure that reviewers write careful and thoughtful critiques. Intemperate language or failing to read an article carefully will make the reviewer look cavalier or vindictive in the eyes of the broader research community. Also, if a reviewer is especially tardy, the Editor can simply say "I'm sorry, if I don't receive your review by Friday, I can't publish it." That is, the promise of publishing reviews gives the Editor real leverage that can significantly speed up turn-around time—the time from submission to final publication. That has a knock-on effect for authors.

In EMR, each article is published with a statement giving the date the manuscript was received and date of publication. This holds our feet to the fire, but most importantly, it allows possible future authors to see that EMR has a faster turn-around time than probably any other music journal. The idea was that this would provide a useful incentive for authors to choose to submit to EMR. Unfortunately, hardly any music journals do this, so most authors have no basis for comparison.

Open peer review also makes the Editor's job more rewarding by allowing the Editor to truly make the decision about what to publish. The Editor can informally poll one or more scholars for advice about a submission, but the Editor is in full control of the journal's content. An Editor may choose to publish an item knowing that a dissenting reviewer will be able to voice their criticisms in print. The aim of scholarly publishing is not to avoid publishing articles that someone disagrees with.

DS: The journal has published some nice back-and-forth exchanges. I really enjoy having authors respond to commentaries on the work. The reader is able to understand more of the broader context within which that research project fits, and they come away knowing more about the topic.

DH: Yeah, published commentaries tend to make the journal come alive. As every storyteller knows, the key to grabbing readers' attention is conflict. Of course, we don't need to encourage or manufacture conflict or controversy. All we need to do is stop hiding it. The disagreements that currently stew in private correspondence can simply be brought into the open. Everyone will benefit by public debate and the journal will be a natural magnet for readers. Along with faster turn-around, the journal then has the potential to become something more conversational.

So, the idea was that open peer reviewing might provide several benefits. It should attract authors (by speeding up the turn-around time), it should attract readers (by making disagreements public), it should attract reviewers (by publishing their reviews), and it should make the Editor's work more rewarding (by giving the Editor more power to make real decisions regarding journal content, rather than acting as a glorified secretary).

DS: That first issue is a pretty great one, and many of those articles and commentaries are cited quite frequently. So, David Butler served as an editor and you handled the logistics?

DH: Yes. In 2005, David Butler agreed to come out of retirement in order to act as the journal's first Editor. That same year I purchased the emusicology.org domain name, applied to the Library of Congress for an International Standard Serial Number (ISSN), and paid a designer to create the website. We had no money, so I simply paid for things out of my own pocket.

DS: One thing that really benefitted the journal was the Ohio State Library's willingness to archive it in perpetuity. They've since been doing even more work by assisting with layout, getting DOIs for every article, and providing support for pretty much every part of the backend.

DH: I think EMR is fortunate to be housed as part of the Knowledge Bank project. The impermanence of web documents rightly keeps librarians and archivists awake at night. To their great credit, digital archivists have embraced the enormous challenge of preserving digital information—especially of the scholarly sort. And what they mean by "preservation" is what any normal archivist means: something measured in centuries or millennia, not years or decades.

There are now a number of digital archive initiatives, but the Knowledge Bank project was one of the first, and coincidentally it happened to be located down the street from my office. They had started just a couple of years before we started EMR. We were the second journal to join them. KB is run by professional archivists and in our signed contract with them, they make the extraordinary commitment to preserve the journal's contents "in perpetuity." I love that language. EMR could collapse tomorrow, but the journal contents will remain online as long as digital archivists have jobs. I'm not sure a commercial publisher could guarantee that.

DS: What are some possible issues with EMR that you didn't anticipate (or perhaps any issues that you did)?

DH: I'm not sure I'm the right person to answer that question. I've never been the Editor for EMR. I simply wanted to get the journal going so I'd have another venue for publishing my work! Dan, you've been Editor for a number of years so I know you'll have much more insight regarding the problems EMR faces.

DS: I'm not sure it's necessarily a problem, but the added editorial power you've just described means that we have a duty to be as open as possible. It isn't really open peer review if the editors can just accept or reject articles on a whim, so we really try to make sure that everything is seen by many people.

DH: I expect the journal will continue to be a work in progress.

EMR AND COMMERCIAL PUBLISHERS

DS: EMR has been approached on many occasions by publishers looking to partner with the journal, but we've chosen to stay a library-supported venture. This was largely because the publishers stipulated that articles would either go behind a paywall or be subject to article processing charges (APCs). Do you think the benefits outweighed the possible loss in exposure to readers?

DH: Unlike in the sciences, most active music scholars aren't funded by research grants and I don't think there are many music departments that have a policy of providing APC funding for faculty publications. So that means either putting the articles behind a paywall or avoiding a publisher altogether.

What excites authors is having their work read, so who wants their work to be stuck behind a paywall? Also, I think it's important to be sensitive to the needs of scholars in less well-off countries who really can't afford to pay anything for articles.

DS: This is a crucial point. Transferring the publishing costs from reader to author is still prohibitive in many respects, and although many journals offer programs for waiving the open access fees, it still seems a bit restrictive, and I'm not entirely sure why it needs to be done.

DH: In truth, I don't really see what publishers bring to the table these days. There's no longer the need to maintain mailing lists, estimate print runs, do color separation, maintain a warehouse of back issues, deal with shipping, solicit subscriptions, or lots of other things publishers used to do in the pre-Internet days.

EMR requires authors to submit their manuscripts in camera-ready format. The journal provides detailed Microsoft Word templates for authors to use when submitting. So this greatly simplifies production.

DS: I often think of the creation of Glossa in 2016. The entire editorial board of Lingua resigned and started a new, open-access journal. It required a number of influential and visible scholars standing up to Elsevier, but the move seems to have been quite successful. Librarians often talk about how "converting" journals might be a way forward, with the production work going to library staff, but with the benefit of not paying exorbitant subscription costs.

DH: There was a similar case with the journal Topology where the entire editorial board resigned and restarted the journal under another name in order to escape Elsevier's monopolistic behavior. I think it is very important for research libraries to step forward and take over the role formerly played by publishers—at least in the case of professional periodical literature. If we got rid of journal publishers, the savings in library subscriptions would allow libraries to support journals directly. Moreover, librarians really think hard about assessing the quality of information. I'd much prefer librarians making decisions about what journals to support than publishers, especially with the proliferation of predatory journals these days.

HUMDRUM

DS: Can you talk a bit about how you started working on Humdrum? What were some of the early influences (on either the toolkit or the kern format)?

DH: I took a course in Fortran programming in my first year of university. That was way back in the horse-and-buggy days, 1975. Those were the days of mainframes and punched cards. The following year I was extraordinarily lucky to receive a UNESCO student grant to visit three places involved in computers and music: a week at Barry Vercoe's lab at MIT (that was before the existence of the Media Lab), a week at Colgate University, and a week at Binghamton University.

The experience at Binghamton was the most formative for me. They had organized a week-long workshop on DARMS—Digital Alternate Representation of Musical Scores. The goal was to be able to represent all of the elements of standard musical notation using just alphanumeric characters—a necessary step if you wanted to use computers to process musical scores.

The main force behind DARMS was Stefan Bauer-Mengelberg. He was a larger-than-life, very colorful man. He had worked as a mathematician for many years at IBM, but he was also assistant conductor for the New York Phil, working under Leonard Bernstein. He was also president of Mannes College for a time, and then trained as a lawyer and spent the rest of his life in private practice in New York. Apparently, he'd been working on DARMS since 1966. In 1976, Ray Erickson had just completed the first formal manual for DARMS, so it was an opportune time to learn it.

DARMS was an admirable effort. There was a real attempt to represent nearly all aspects of a traditional score using just letters and numbers on a computer keyboard. In many ways, DARMS was more comprehensive than today's MusicXML—at least with regard to the notational elements. However, by this point, I'd had enough programming experience to realize that DARMS was impractical. It was certainly comprehensive, but it would be a nightmare for programmers. The representation wasn't well structured from the perspective of software development. The DARMS people struggled for years after that to write a DARMS parser, but they never succeeded, so DARMS died a natural death.

So my experience at the Binghamton workshop was sort of mixed. On the one hand, I was really inspired by the prospects of what could be done using computers to study music. But I was sobered by the magnitude of the practical programming problems. The main insight I gained from the failure of DARMS was that musical notation is inherently two-dimensional. Most of the questions we have about the organization of music pertain to vertical sonorities and horizontal musical lines. If you want to make programming straightforward, you need to preserve both the vertical and the horizontal structure in the representation itself. DARMS represented musical scores as a linear string of characters. You had a choice as to whether the encoding was line by line, or sonority by sonority. Unfortunately, there was no way to automatically translate from one form to the other. So simple tasks like extracting a single instrument from a vertical encoding, or identifying a chord in a horizontal encoding, were major programming challenges. The failure of DARMS taught me that a better representation would preserve this two-dimensional format. That was the beginning of Humdrum.
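
To make that concrete, here is a toy sketch of the kind of two-dimensional layout that Humdrum's kern representation later adopted (an invented fragment for illustration only; in real files the columns are tab-separated):

    **kern   **kern
    *M4/4    *M4/4
    =1       =1
    4C       4e
    4D       4f
    4E       4g
    4F       4a
    =2       =2
    *-       *-

Each column (or "spine") is a voice and each row is a moment in time, so extracting a musical line means selecting a column, and examining a sonority means reading across a row.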

I also learned that it's useful to parse the representation problem into smaller pieces. Don't try to represent guitar tablatures or square notation in a single all-embracing representation. Deal with them separately, but ensure that they're easy to coordinate.

DS: Why the name Humdrum?

DH: When it came time to give the toolkit a name, I didn't have any money to register or trademark one. In order to avoid possible conflict with some later business product, I knew it was essential to choose a name that had no commercial value. I chose "humdrum" because I thought no commercial enterprise would ever name a product "humdrum."

DS: I think the simplicity of the kern notation, and its readability, has contributed to its longevity. Many other toolkits (such as music21) include kern parsers, as the data is fairly easy to encode and understand. It hasn't really suffered the same fate as DARMS, possibly because you were cognizant of making a format that was easily parsed from the outset.

DH: With a little experience, you can sight-read a four-part hymn from the kern code.

ORIGINS OF THE HUMDRUM TOOLKIT

DS: What were some of the early reasons for constructing the Humdrum toolkit?

DH: I never set out to develop a software system for music analysis. Instead, I started out with some questions about voice leading. I heard Al Bregman give a talk on auditory scene analysis at a conference in 1981 and was really inspired. It was one of those seminal "ah-ha" moments. I recall leaving the session immediately after his talk and spending an hour or two in a quiet lounge simply thinking about the repercussions—essentially outlining a series of studies that would ultimately take me the next decade to complete. I wanted to test the extent to which traditional part-writing conformed to principles in auditory scene analysis. I wanted to look in detail at more than just a few musical works, so I was convinced that using a computer was the way to go.

DS: This work would eventually lead to your 2001 article "Tone and Voice," which was recently expanded and transformed into Voice Leading: The Science Behind a Musical Art (MIT Press, 2016). It's interesting that the question that sparked so much of your earlier work is still present in your work today.

DH: Actually, some of these were questions I was thinking about when I worked as an oven tender as a teenager. Some things don't change.

DS: Am I right in thinking that you started work on Humdrum as a graduate student?

DH: Yes. I think it was the summer of 1986 when I started working in earnest on what became Humdrum. In retrospect, I benefitted from four pieces of luck. Two of them had happened years earlier. The first, as I mentioned, was attending the DARMS workshop at Binghamton, where I learned the importance of avoiding representing music as a one-dimensional linear string of characters. The second was my experience with the UNIX operating system. In the 1970s, most computer users preferred operating systems like IBM's OS/360 or VM/370. Somehow, I ended up with the UNIX crowd, and that exposed me to the "software tools" approach to applications software. The first rule of thumb in programming is "don't write software to carry out functions for which software already exists." So, for example, a programmer should never have to write a "sort" routine, because that's already been done. The trick is to create applications in which software tools can be linked seamlessly together.
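
As a rough illustration of that philosophy, here is the kind of pipeline this approach leads to (the file name is hypothetical, and the exact option syntax should be checked against the Humdrum documentation):

    # Pull out the first spine of an encoded chorale, convert its pitches
    # to melodic intervals, strip comments and interpretations, drop
    # barlines, then tally the intervals with ordinary UNIX utilities.
    extract -f 1 chorale.krn | mint | rid -GLId | grep -v '^=' | sort | uniq -c | sort -nr

Only the first three commands are Humdrum-specific; the rest of the pipeline is generic UNIX, which is exactly the point.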

Two further pieces of luck were specific to the mid 1980s. The first was the advent of relatively inexpensive "clone" PC desktop computers. It was finally possible for ordinary mortals to purchase their own machines. Before that, personal computers were really the domain of electronic hobbyists.

The second was learning a new programming language. A friend of mine was a computer programmer named Randall Howard. (I'd composed and conducted the music for his wedding.) I knew that if I wanted to manipulate text representing musical scores, it would make sense to use a high-level text-processing language. At the time, the most powerful text-related language was SNOBOL, but that was a language that only ran on big mainframes. In the mid 1980s, there were no text-processing languages available for the DOS operating system used by PCs.

As it turned out, Randall had recently written an AWK interpreter specifically for use with DOS. Just as I was about to head off for PhD studies in Britain, he handed me a 5¼-inch floppy with a copy of his new AWK interpreter on it. It was a godsend. AWK is easy to use and very well-suited to text processing. Unlike SNOBOL, AWK had regular expressions built in and was a structured language. AWK is named after Aho, Weinberger, and Kernighan—the same Kernighan involved in C. AWK has a similar syntax to C but is a lot easier to use. Compared to programming in C, C++, or Pascal, AWK saved me years of work.

DS: Humdrum would have looked completely different had your friend not written an early interpreter for AWK. That's just fortuitous. You then took that to Nottingham and worked on the tools there?

DH: That's right. I wrote the tools that ultimately became the Humdrum Toolkit mostly over a three-year period, from 1986 to 1989. Each tool began as a specialized program intended to address a particular musical question. As I continued to work on different problems, I'd find that I could modify previous programs I'd written and make them more general. Over time, some programs were amalgamated, and each tool became functionally more powerful. Also, I was able to generalize the representation so it wasn't limited to Western musical notation. You could use the same tools to process lute tablatures, white mensural notation, Indonesian number notations, even physiological measures. For example, an orchestral score could code a listener's heart rate as though it were an additional instrumental part. In Humdrum, users can concoct their own representations tailored to their own specific musical interests.
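
As a toy illustration (the **heart spine below is invented for the example, not a standard interpretation; in real files the columns are tab-separated):

    **kern   **heart
    *M4/4    *
    4c       72
    4d       74
    4e       71
    4f       76
    *-       *-

Because the heart-rate spine is aligned row by row with the notes, the same tools that extract, search, and compare musical spines can work on it unchanged.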

DS: This idea of using a series of different command-line tools that play nicely together is really central to the Humdrum toolkit. Even the recent tools that Craig Sapp has developed using C++ (the Humdrum Extras) are very much designed to be command-line tools, and I sometimes think that this way of thinking is very different from someone who learns to program in Python, Ruby, or something like that. Do you think that there is some sort of Sapir-Whorf element to this? Put another way, do you think your musical questions are ever influenced by this tool-based approach?

DH: That's an interesting question. I would think that the representation has a much greater impact on the questions a user can pose than the processing tools. But I might think this because I have more experience using the tools than a typical user.

DS: Are there any tools that you're particularly proud of? When I teach a class on Humdrum, I think deg, mint, and hint excite students the most, but when they learn how to combine those with metpos and timebase they really begin to understand the flexibility of the tools.

DH: I have a soft spot for the context tool, especially when you understand the power of using regular expressions to define the start or end of a context. Also, I'm partial to the humsed stream editor. It's under-utilized, mostly because users have little experience with general stream editors like the UNIX sed command. I recall once reading that sed is Turing complete—and since humsed piggybacks on sed, it is far more powerful than one might suppose.
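
For example (file names are hypothetical, and the exact option syntax should be checked against the Humdrum reference documentation):

    # context: gather the notes of each phrase into a single record, using
    # the kern phrase marks "{" and "}" as begin and end delimiters.
    context -b '{' -e '}' melody.krn

    # humsed: strip everything except durations, rests, and barlines from
    # the data records, yielding a rhythm-only version of the file.
    humsed 's/[^0-9.r=]//g' chorale.krn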

HUMDRUM DATABASES

DS: You encoded many of the datasets that are commonly used, including the Bach Chorales and the Barlow and Morgenstern dataset. Can you discuss why you chose to encode these datasets and how the process worked?

DH: Let me say something about the process first, and then I'll say something about the repertoires I chose to encode. In the 1980s a number of people were interested in the possibilities of processing musical databases. The MIDI standard had just been released, but in the early 1980s MIDI was thought of simply as a way of connecting synthesizers together. Only later did people really think of it as a representation for storing score data.

You have to realize that in the early days of computing, the optimism was way ahead of the reality. People thought we'd soon be talking to computers, that computers would do automatic language translation, we'd all have robots to do housework, and so on. Of course, computers can do a lot of that now, but it took almost half a century for the reality to catch up with the expectations.

In the case of musical databases, almost everyone I knew thought that optical music recognition was just around the corner. The idea was that you'd simply scan a score and the musical data would automatically be converted. Of course, decent optical music recognition didn't really appear until the early 2000s with products like SharpEye. But the idea that scanning would soon be possible discouraged people from bothering to encode any music at all. I didn't know anyone, apart from me, who was encoding musical scores in the mid 1980s. Later I learned of others, like Helmut Schaffrath in Essen, but there was no community and no interchange.

It was my experience as a programmer that made me skeptical that OMR [Optical Music Recognition] would soon make encoding databases easy. I thought OMR was at least a decade away. In any event, I wanted to address research questions right away rather than waiting around for some future that may or may not come, so I started encoding music manually around 1984 or '85. I was simply typing in alphanumeric codes.

DS: I think this is still the case, to some extent. OMR is better, but not perfect, and I've spoken with many people who just don't want to spend the time doing the manual encoding.

DH: I'm often struck by people's reluctance to do some manual labor. And that applies beyond a reluctance to do manual encoding of scores. Let's say you want to know the proportion of 2/4 meters to 4/4 meters. You can answer that question by simply spending half an hour in a music library looking at a large number of scores. You'd be surprised how many students or scholars would never consider doing something like that.

DS: Returning to the question of the choice of repertoires …

DH: Yeah, with regard to the choice of repertoire to encode, remember that my motivation for using computers was my interest in voice leading. I thought a good place to start was with the music of J.S. Bach. So my first encoding project was the Bach two-part inventions. I then coded the three-part sinfonias, followed by all 48 fugues of the Well-Tempered Clavier.

Of course, J. S. Bach is a single composer, so you can't really think of these repertoires as representative of even late Baroque part-writing. I encoded this material at a time before I'd read anything about statistics or inferential hypothesis testing. I had an intuition that I should expand beyond Bach if I wanted to be able to make more general statements about voice leading. But that's not how musicologists are trained. My graduate advisor wanted me to focus on a single composer. So I continued my encoding with a number of Bach organ works, and finally I tackled the 371 chorale harmonizations. Later I did indeed expand my encoding activities to include lots of other composers—Handel, Telemann, Vivaldi, and others. I also expanded the historical range, from the School of Notre Dame to Bartók and Webern.

Over the years I got quite proficient at manual encoding. For several years I encoded music every night for an hour or so. I used to listen to classical radio broadcasts on BBC Radio 3 while encoding music. After a couple of years, I was good enough that I could watch television and encode music at the same time. When MIDI allowed me to hear the files I'd encoded, I discovered that proof-listening was remarkably effective in detecting errors. It's pretty obvious when a pitch is wrong.

My biggest project was encoding the ten thousand themes from the Barlow and Morgenstern Dictionary of Musical Themes. That took about a month.

DS: Can you say something about the relationship between Humdrum and the Center for Computer Assisted Research in the Humanities?

DH: I'm glad you mentioned that. I think the founding of CCARH was a seminal event in musical database development. They began publishing an annual directory of projects related to computational musicology. The directory was useful, since you could see what other folks around the world were up to.

The Center was founded and supported by Walter Hewlett, who's one of the unsung heroes in the world of musical databases. At some point, Walter hired Craig Sapp, and Craig has been instrumental in developing and promoting Humdrum. As you mentioned earlier, he's written a valuable suite of supplementary tools; but in addition, he's been the catalyst for some big projects, like the Josquin Project at Stanford and the massive encoding project at the Chopin Institute in Warsaw. Craig's work connecting Humdrum with Verovio has been outstanding. He's really done a fabulous job of making Humdrum more accessible, more powerful, and easier to use.

Anyway, over several decades, CCARH (which is now part of the Packard Humanities Institute) has been encoding a growing catalogue of music, much of which has been released in the Humdrum format.

DS: There's this book by Barry Kernfeld that talks about jazz lead sheets and The Real Book. He interviews the great jazz bassist Steve Swallow, who argues that the collation of the Real Book in effect canonized certain songs at the expense of many others, and it's been strange to see how those specific tunes came to reflect what jazz is, mainly as a result of being included in the book. I think you're in a similar position. The corpora that you encoded became the datasets that everyone used, and for those doing corpus studies, they were at one point taken to be representative of music as a whole. For example, much of the discussion surrounding tonality has centered on the Bach Chorales, which are actually pretty atypical. How do you feel about this de facto canonization that has occurred?

DH: It's unfortunate but understandable. It's not good that a canon gets defined as what David Huron is interested in. The problem, of course, is that hardly anyone wants to encode music. It's a task that seems more appropriate for monkeys at typewriters. And there's not a lot of recognition for doing the work. After you've done all the encoding work, everyone wants your data, and hardly anyone cites the source of who encoded it.

Over the years I tried to expand beyond Western classical music. First there was the Humdrum translation of the Essen Folksong Collection, then the Native American database—which I started, but you and Eva did the vast majority of the work. Now we have databases representing several hundred styles and cultures from around the world. Unfortunately, many of these materials are from copyrighted sources, so we can't distribute them.

With the exception of the Barlow and Morgenstern dictionary of themes, everything was encoded manually, simply typing on a keyboard. Around 2010 or so, we started encoding using optical music recognition. That's especially effective for string quartets and orchestral scores. Piano music is still cumbersome when using OMR.

DS: You mention copyright, and Stefan Bauer-Mengelberg, whom you mentioned earlier, actually wrote about this in 1980, specifically asking "What copyright problems is a music scholar likely to encounter if, in making an analysis of a copyrighted work, he creates a machine-readable version of it and enters it into a computer?" (Bauer-Mengelberg, 1980) He points to White-Smith v. Apollo, a 1908 Supreme Court verdict that found that the transformation of the musical idea into data for a player piano was not copyright infringement. This, however, seems to have been overruled by a more recent case in 2014 (two decades after Bauer-Mengelberg's death). These decisions about encoding copyrighted works require an incredible amount of foresight. When encoding these databases, how do you deal with copyright issues?

DH: I didn't know about the connection with Bauer-Mengelberg. That's very interesting. We get criticism all the time for encoding old sources and not encoding the best available critical editions. Of course, we can only distribute materials that are in the public domain. This is a topic I wish musicologists would pay attention to. Right now, we have lots of well-intentioned musicologists beavering away on various critical editions unaware of the long-term copyright repercussions. These are important labor-intensive projects that can take decades to complete.

Now suppose that the youngest member of a critical edition team is 40 years old in 2020. And let's imagine that this person lives to the modest age of 80—that would be 2060. Under current copyright law, the resulting critical edition won't enter the public domain until 60 years later, which would make it 2120. That's a century from now. Musicologists are just giving away the copyright to publishers, and in the process, they are hamstringing their current and future colleagues.

In the old days, publishers were essential because everything needed to be printed. Does anyone doubt that the future is entirely digital? In the near future, publishers won't be bringing anything at all to the table. All they'll do is collect royalties for the next century, and they'll effectively own a monopoly on the best editions of works by particular composers. I think the situation is dire. The AMS and IMS need to step in and stop this hemorrhaging of scholarly efforts. Future scholars will be paying and paying and paying in order to access the hard work of current scholars. It just doesn't make sense.

CORPUS STUDIES

DS: You were doing "corpus studies" long before it was called that. Could you discuss how the field has grown over your career, what aspects of its growth you've found surprising, and in what directions you see the field going?

DH: Corpus studies go back a long way. There was the work of Bertrand Bronson, way back in the 1950s, who tried to use an IBM card sorter to better understand British folk ballads. There was the Princeton Josquin project in the early 1960s—which spent a lot of money and sadly failed to achieve much. Then there were people like Stephen Smoliar, who made a concerted effort over many years to create a computer implementation of Schenkerian analysis. Not to mention DARMS, MUSTRAN, and lots of other software initiatives. Unfortunately, none of these projects led to any useful musical insights; they were primarily efforts to explore the technical possibilities.

I suppose if I've made a mark, it was that the Humdrum Toolkit was the first general-purpose music-analytic software that was documented and distributed. And I suppose I've published more studies that actually employ computational methods than most other folks.

If there's one thing I've found surprising, it's that I thought computational musicology would have become much more widespread. There are very few music scholars who do what you and I do. Let's face it: the most important software tool for music scholars today is Microsoft Word, followed perhaps by Google Scholar, IMSLP, and then Grove Online. Even RILM and RISM aren't used as much as one might suppose.

A complementary surprise is how much of music computation has been embraced by engineers and computer scientists. The MIR folks—the Music Information Retrieval community—have really expanded dramatically. The best computational work in music is being done by engineers and computer scientists. Of course, they have different goals than music scholars. First, they're typically interested in processing audio data rather than symbolic data like notation. They're interested in automatic music transcription, music recommendation, style characterization, music summarization, music thumbnails, audio marking—the kinds of applications that have commercial value.

But some of this work really does address questions of musicological interest. For example, I think the work of Matthias Mauch and his colleagues is outstanding (Mauch et al., 2015). They applied Foote novelty methods to identify moments of stylistic discontinuity or change in American popular music. It's fascinating how a bottom-up process using audio recordings can pinpoint the moment of the British Invasion, the advent of New Wave, and the arrival of Hip Hop. At some point, this sort of approach will be applied to the whole of documented music—both Western and non-Western—and we'll end up with a detailed and nuanced history and geography of stylistic influence and change. It'll be the musical equivalent of the revolution going on right now in anthropology because of methods in population genetics.

Unless there is some sort of sea-change in teaching research methods to music scholars, my guess is that many of the most important musical discoveries in the future will be made by computer scientists and engineers.

CORPUS STUDY CONCERNS

DS: Do you have any concerns about the growth of corpus studies and computational methods?

DH: Given their speed and accuracy, computers can be truly wonderful research assistants. But computers also have an unbounded capacity to generate garbage. Moreover, it's not always easy to recognize when the results are deceptive or wrong. The ease of use can lead to all kinds of temptations that need to be resisted. Folks need to be careful.

One of the things I've learned is that it's essential to create test files. Say you're searching for instances of a particular harmonic progression. There are two kinds of mistakes that can happen with searching—so-called "false hits" and "misses." False hits are passages that match your search template but that you want to exclude. Misses are passages that you're interested in but that didn't actually turn up in your search results for one reason or another. Maybe that's because one of the chords in the progression has been prolonged, or there's an intervening short rest, or there is a non-chordal tone in a melody that makes a chord look wrong, and so on. The "misses" are particularly problematic because you have no way of knowing that a bunch of sought instances are actually absent from your search results.

I've found it very helpful to create a set of test files where each file includes variant passages—such as the inclusion of transposing instruments, messing around with rhythms, the presence of intervening events like rests or barlines, and so on. I create one set of test files that the search should definitely find, and another set of test files that are similar, but should be rejected by the search. Then you can refine your search template so that all the target test cases are properly handled.
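
A minimal sketch of that discipline, assuming a hypothetical search script (find_progression) and two hypothetical directories of hand-made test encodings:

    # Every file in should_match/ contains a passage the search must find;
    # every file in should_reject/ contains a near-miss it must exclude.
    for f in should_match/*.krn; do
        ./find_progression "$f" | grep -q . || echo "MISS: $f"
    done
    for f in should_reject/*.krn; do
        ./find_progression "$f" | grep -q . && echo "FALSE HIT: $f"
    done

You then refine the search template until both loops run silently.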

There's actually a chapter in the Humdrum User Guide that offers maybe 40 or 50 tips about how to conduct a careful musical search. Most of those tips are pertinent to any software system, not just Humdrum.

Of course, there are lots of other potential problems that plague corpus studies. I wrote an article a few years back entitled "On the virtuous and the vexatious in an age of big data." That article highlights several major concerns I have about computational musicology and corpus studies in particular. I won't repeat those concerns here, but they deal with such problems as post-hoc theorizing, when to avoid representative sampling, problems with the idea of a random reserved data set, unrecognized multiple tests, and how comprehensive musical databases force us to abandon hypothesis testing and embrace interpretive hermeneutics.

I started conducting corpus studies some 35 years ago, and over that time I've made plenty of mistakes. Whatever people think of the results of my studies, my sincere hope is that other scholars benefit from the many lessons I've learned about how things can go wrong. For anyone who's interested, those lessons are chronicled in four of my publications: a 1988 article on detecting and handling encoding errors, several chapters in my 1999 Humdrum User Guide, a 2002 article on lessons from Humdrum, and my 2013 "virtuous and vexatious" article on big data. The Humdrum User Guide also includes a chapter offering advice about electronic music editing.

DS: I think that 1988 article is too often overlooked and is quite important, especially as more scholars are beginning to encode corpora. There's often this view that "my corpus might have mistakes, but it's a lot of data, so it will all come out in the wash." You pointed to the problems with this argument pretty early on.

DH: At some point I realized that even a one percent pitch error rate could cause havoc. If you're looking at melodic intervals, a single pitch error results in two wrong intervals so a one percent pitch error rate becomes a two percent error rate for intervals. For four-note chords, a one percent pitch error rate translates into a four percent error rate for identifying chords. And for two-chord progressions that ends up becoming an eight percent error rate.
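
To spell out the arithmetic: if each encoded pitch is independently wrong with probability p, then a unit built from n pitches is error-free with probability (1 − p)^n, giving an error rate of 1 − (1 − p)^n, which is approximately n × p when p is small. With p = 0.01, that yields about 2% for intervals (n = 2), about 4% for four-note chords (n = 4), and about 8% for two-chord progressions (n = 8).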

So even modest error rates can wreak havoc depending on the type of processing you do. When I realized that, that was a wake-up call, and so I spent a lot of time testing different ways of detecting and fixing encoding errors. All that work is documented in that 1988 article.

CORPUS STUDY ADVICE

DS: What advice would you give to young scholars wanting to incorporate corpus studies more fully into their research?

DH: I guess I'd offer a few pieces of advice.

First, make a long list of questions that interest you. Good research is motivated by questions. The questions can be big or small, like why are Beethoven's metronome markings so fast? How did jazz spread? Why do triplet rhythms seem to convey a sense of "rotation?" What are the musical features that mark the transition from Ska to Reggae? And so on.

Second, let your questions guide your research activities. Don't be afraid to go wherever a question takes you. For example, I have long been interested in the paradox of "sad" music. What is the allure of music that ostensibly makes people feel sad? If you're serious about answering a question like that then you need to be ready to go where it takes you. In my case, that led to experiments where we had people listen to sad and happy music while we did blood draws and assays of hormone levels. As a graduate student in musicology, I don't think I would have believed it if someone told me that, at some point in the future, I'd be doing studies like that.

Disciplines are defined by the questions they ask, not by the methods they use. I never lost sight of the question: why is it that many people enjoy listening to nominally sad music? Our methodological training should not dictate the questions we ask. Instead, the questions we ask should dictate the methods we learn or develop. So, I guess the third piece of advice is don't be afraid to learn new tools and methods. If you're going to be the first person to use a new technique in music, it follows that you won't find out how to do that by reading the music literature. In order to become aware of the new tools and methods that are available, you'll need to read well beyond music.

A final piece of advice is to beware of the siren call of computers. Keep your eye on the goal. Computers are inherently interesting objects. I don't know how many musicians I've met over the years who started off with interesting musical questions, became involved in computational musicology, and then abandoned music to pursue computers instead. Part of the problem is reinforcement. I love programming. There's a wonderful sense of power and accomplishment. After spending a day programming, you can usually look back and say, I added two new features and fixed an annoying bug. By contrast, when thinking about musicological problems, you can spend several days or weeks thinking about a problem and get nowhere. Musical insights are slow and unpredictable. For many people (and I'd include myself here) programming is more fun than music scholarship. I wrote the underlying search engine for Themefinder in a single afternoon. It was a blast—very satisfying.

In my career, I want to address (and perhaps even solve) musical questions. I'd prefer to be known for my music scholarship rather than my programming accomplishments. Today, who cares about the people who envisioned and wrote the SNOBOL programming language? I'd like to think that my work on voice leading, music and sadness, or expectation might still be of interest to somebody a century from now.

The tools of course are important. If you want to do this sort of work, then learning about computers and statistics is essential. But don't lose sight of your goals. In this regard, I'm inspired by the attitude Harry Partch held regarding inventing and constructing the new instruments that made his distinctive music possible. Somewhere Partch wrote "I am not an instrument maker; I am a philosophico-music-man seduced into carpentry." I suppose my version would be: "I am not a software developer; I am a musicologist inspired by the opportunities afforded by big data."

REFERENCES

  • Bauer-Mengelberg, S. (1980). Computer-implemented music analysis and the copyright law. Computers and the Humanities, 14(1), 1–19. https://doi.org/10.1007/BF02395129
  • Erickson, R. (1976). DARMS: A Reference Manual. Duplicated typescript.
  • Huron, D. (1988). Error categories, detection and reduction in a musical database. Computers and the Humanities, 22(4), 253–264. https://doi.org/10.1007/BF00118601
  • Huron, D. (1999). Music Research Using Humdrum: A User's Guide. Center for Computer Assisted Research in the Humanities.
  • Huron, D. (2002). Music information processing using the Humdrum Toolkit: Concepts, examples, and lessons. Computer Music Journal, 26(2), 11–26. https://doi.org/10.1162/014892602760137158
  • Huron, D. (2013). On the virtuous and the vexatious in an age of big data. Music Perception, 31(1), 4–9. https://doi.org/10.1525/mp.2013.31.1.4
  • Huron, D. (2016). Voice Leading: The Science Behind a Musical Art. MIT Press. https://doi.org/10.7551/mitpress/9780262034852.001.0001
  • Kernfeld, B. D. (2006). The Story of Fake Books: Bootlegging Songs to Musicians. Scarecrow Press.
  • Lidov, D. (1999). Elements of Semiotics. St Martin's Press.
  • Lidov, D. (2004). Is Language a Music. Indiana University Press.
  • Mauch, M., MacCallum, R. M., Levy, M., & Leroi, A. M. (2015). The evolution of popular music: USA 1960–2010. Royal Society Open Science, 2(5), 150081. https://doi.org/10.1098/rsos.150081
  • Mendel, A. (1969). Some preliminary attempts at computer-assisted style analysis in music. Computers and the Humanities, 41–52. https://doi.org/10.1007/BF02393450
  • Partch, H. (1979). Genesis of a Music: An Account of a Creative Work, Its Roots, and Its Fulfillments (2nd ed.). Da Capo Press.
  • Selfridge-Field, E. (1997). Beyond MIDI: The Handbook of Musical Codes. MIT Press.
  • Smoliar, S. (1976). SCHENKER: A computer aid for analysing tonal music. ACM SIGLASH Newsletter, 10(1–2), 30–61. https://doi.org/10.1145/1041351.1041354
  • Wenker, J. (1974). Mustran II: A foundation for computational musicology. In Computers in the Humanities (p. 267). University of Minnesota Press.