The study by Huovinen and Rinne (2023) is situated at an interesting nexus of dynamic and static aspects of music. By that I mean that, on the one hand, heard music proceeds in time, sometimes very rapidly, requiring quick and accurate production and monitoring of temporal and acoustic events. On the other hand, musical notation is by definition a permanent record of those events. On the third hand, the process of reading notation itself takes place in real time, even if, especially during sight-reading, the visual processing pace is slower and involves more regressions than when the piece is more familiar (Goolsby, 1994).


The authors' interest here is to analyze the type of global information experts are able to derive about a piece from a quick glance or two at a page of notation. This was likely an unusual task for the participants, for whom the usual goal is playing the piece accurately when looking at page 1 of a new score. Using both qualitative and quantitative methods, the authors concluded that expert pianists can often surmise the period, and even the composer, of a piece after only 500 ms of exposure. Given the speed profiles, that knowledge appears to reflect automatic retrieval more than a conscious deductive process. This outcome is consistent with other studies showing that generalizing from a quick sampling of rich material can be an effective way to optimize processing resources. I do agree with the authors that using composers who are strongly associated with the three chosen styles may have inflated the accuracy of that measure. They invoke the availability heuristic (Tversky & Kahneman, 1973), but the representativeness heuristic (Kahneman & Tversky, 1972) might be relevant as well: Bach is not only very often associated with the Baroque period, and thus readily available in memory, but is also often used to exemplify the 'essence' of that period; the same holds for Chopin and Romanticism. The early Beethoven compositions may be less representative of the Beethoven 'hallmark' than his later works, and, as was shown, identification of those pieces was not as good as for the other two composers.


Use of verbal protocols meant that the participants' responses were conscious, whereas the latency data argued for a preconscious component to style recognition. Given that notation is so tied to motor execution specific to each instrument, another way to potentially capture preconscious processing would be to place electromyography (EMG) sensors on the hands to test whether motor responses are elicited even by very brief score presentations. Another intriguing approach would be to enter the EMG data into a pattern classification algorithm (Kose et al., 2021) to see whether subtle movements elicited by the score would differentiate styles, composers, and pieces, all presumably below the level of awareness. If subconscious motor activity were elicited, one could then test whether that information was actually useful in the task by adding a condition in which the participant made irrelevant hand motions as the display came up. If otherwise useful information was suppressed, one might predict fewer statements, particularly in the Pitch and Time categories, compared to baseline (as rapid finger movements are needed for both during actual playing).
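To make the pattern-classification suggestion concrete, here is a minimal sketch of the analysis logic. Everything in it is hypothetical: I assume each trial's EMG recording has already been reduced to a small feature vector (e.g., RMS amplitude per sensor), and I use synthetic data and a simple nearest-centroid classifier merely to illustrate the idea of testing whether styles are separable above chance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each trial yields a 3-element feature vector
# summarizing EMG activity; each style has its own (made-up) mean pattern.
def make_trials(center, n=30, noise=0.5):
    return center + noise * rng.standard_normal((n, len(center)))

styles = {"Baroque":   np.array([1.0, 0.0, 0.5]),
          "Classical": np.array([0.0, 1.0, 0.5]),
          "Romantic":  np.array([0.5, 0.5, 1.5])}

X = np.vstack([make_trials(c) for c in styles.values()])
y = np.repeat(list(styles), 30)

# Nearest-centroid classifier: assign each trial to the closest class mean.
centroids = {s: X[y == s].mean(axis=0) for s in styles}

def classify(x):
    return min(centroids, key=lambda s: np.linalg.norm(x - centroids[s]))

preds = np.array([classify(x) for x in X])
accuracy = (preds == y).mean()  # chance level here would be 1/3
```

In a real analysis one would of course cross-validate, and accuracy above chance would suggest that the brief score exposure elicited style-specific motor activity.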


I was also curious how confident the participants were in their style/composer guesses (and their other statements). While a 500 ms viewing time is longer than in some other 'brief slices' experiments cited in the article, it is still a very brief view. The argument here is that recognition was mediated by an intuitive, automatic process, yet the authors also mention that respondents were 'cautious' in their response strategies. Thus, it might be interesting to ask for a confidence rating for the style and composer answers, given that there is a correct answer (and the participants know that). Would people underestimate their accuracy, or be poorly calibrated in this task, given that their answers were not emanating from a consciously assessable chain of logical deduction?


Finally, I appreciated that the authors were able to assemble a group of 25 highly trained pianists. The range of performing experience was considerable, from 4 to 27 years. I wondered whether the quantitative or qualitative responses might differ between more and less experienced individuals. One hypothesis might be that more experienced individuals would rely more on the quick processing route than less experienced ones, i.e., would show a larger difference in latencies for Style-C vs. Style-I answers. Given that more experienced individuals are very likely to be older and thus may answer a bit more slowly in general, the response times could be normalized within each participant.
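The within-participant normalization I have in mind is a standard z-scoring of each participant's latencies before comparing conditions; a small sketch with purely illustrative numbers (the latency values and condition labels are invented, not taken from the study):

```python
import statistics

# Hypothetical latencies (seconds) for one participant's Style-C (correct)
# and Style-I (incorrect) answers; values are illustrative only.
latencies = {"Style-C": [1.2, 1.5, 1.1, 1.4],
             "Style-I": [2.0, 2.4, 1.9, 2.2]}

# Pool all of this participant's latencies and z-score within participant,
# so that generally slower responders (e.g., older players) are placed on
# a common scale before comparing conditions across the group.
pooled = [t for ts in latencies.values() for t in ts]
mu, sd = statistics.mean(pooled), statistics.stdev(pooled)

z = {cond: [(t - mu) / sd for t in ts] for cond, ts in latencies.items()}

# The experience hypothesis would then be tested on the normalized
# Style-I minus Style-C difference per participant.
diff = statistics.mean(z["Style-I"]) - statistics.mean(z["Style-C"])
```

The experience effect could then be assessed by correlating each participant's normalized difference score with years of performing experience.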


In conclusion, I thought this study had a useful mix of quantitative and qualitative approaches to capturing what a notation pattern conveys to an expert, beyond the literal black and white of each note. Although someone might argue that the setup is artificial, as more and more musicians use electronic tablets instead of paper scores, that quick first glance as the toe taps to turn to the next page is not so dissimilar to this experimental setup.


This article was copyedited and layout edited by Jonathan Tang.


  1. Correspondence can be addressed to: Professor Andrea Halpern, Psychology Department, Bucknell University, One Dent Drive, Lewisburg, PA, 17837, USA.

