More on Expressive Performance
Friday February 27, 2004
I've read two more papers on expressive performance today. I was attracted to Widmer's "Machine Discoveries: A Few Simple, Robust Local Expression Principles" because it promises simple rules that work at the level of individual notes. Machine learning techniques are used to discover rules that govern timing, dynamics, and articulation from performances of Mozart sonatas. It produces rules such as
Given two notes of equal duration followed by a longer note, lengthen the second of the two.
Unfortunately, because such rules form only a partial model (they hold for only a fraction of the positive examples), it's not clear how one might apply them to generate expressive performances.
I then also read a summary of Friberg's thesis. This work takes the reverse approach: it proposes rules for generating expressive performances and essentially tests whether they produce nice-sounding results. Perhaps its value lies in its collection of rules from much of the expressive performance literature up to that point.
Perhaps I'll have to devise my own algorithm for generating bass lines (mine is a much simpler problem). I don't think I should vary the timing, since in jazz accompaniment the bass line helps to keep time. It would be interesting to apply some of the rules from these papers to determine note dynamics, though. Since bass lines are generated in a previous composition step, the role of each note in them (e.g., chord tone, tonal passing note, chromatic passing note) is already known to the program. It should be a simple matter to apply a set of rules to them, as in the sketch below.
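To make this concrete, here is one possible shape for such a rule set. The note roles are those mentioned above, but the velocity values and all of the names are placeholders of my own, not anything taken from the papers.

#include <vector>

// Roles assigned to bass-line notes during the composition step.
enum NoteRole { ChordTone, TonalPassingNote, ChromaticPassingNote };

struct BassNote
{
    int      midiNumber;   // e.g., 43 = G2
    NoteRole role;
    int      velocity;     // to be filled in by the rules below
};

// One possible rule: stress chord tones, play passing notes more
// softly. The velocity values are placeholders to be tuned by ear.
int velocityForRole(NoteRole role)
{
    switch (role) {
    case ChordTone:            return 96;
    case TonalPassingNote:     return 80;
    case ChromaticPassingNote: return 72;
    }
    return 80;
}

void applyDynamicsRules(std::vector<BassNote>& line)
{
    for (std::vector<BassNote>::iterator i = line.begin(); i != line.end(); ++i)
        i->velocity = velocityForRole(i->role);
}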
Thursday February 26, 2004
I've been doing some reading on approaches to generating expressive performances, i.e., the machine imitation of the artistic nuances human performers add when playing a score. I'm looking ahead at how a realistic bass line can be generated once we've figured out what notes to play.
My search led me to a paper by Arcos, de Mantaras, and Serra, "SaxEx: a case-based reasoning system for generating expressive musical performances." Their method uses Narmour's implication/realization model and Lerdahl and Jackendoff's GTTM to analyze the score and identify the structure of the piece and the role of each note within that structure. The rest of it is a machine learning system that identifies relevant cases in learned performances and applies the appropriate parameters to the note under consideration.
A paper that provides some more details on SaxEx is "AI and Music: From Composition to Expressive Performance" by de Mantaras and Arcos. For a broader view of expressive performance, see "Modelling the Rational Basis of Musical Expression" by Widmer. Many related papers can be found at OFAI by typing in the keyword "expressive performance," and on the publications page of the Music Performance Group at KTH.
A Design for Temporal Musical Objects
Tuesday February 24, 2004
Here's a design for a set of temporal musical objects that can be used in an accompaniment generation program. We already have the Note and Chord classes, which represent notes and chords without register/octave information. We'll need to add classes like OctaveDependentNote and OctaveDependentChord in MusES. To save typing, let's just call these MIDINote and MIDIChord. These names are appropriate because a MIDINote object will probably be completely specified by a MIDI note number (e.g., 60 = C4). A MIDIChord object is just a vector of MIDINote objects. Notice also that velocity information is not present in these objects.
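A minimal sketch of what these two classes could look like, assuming nothing beyond what the paragraph above states:

#include <vector>

// A note with register information, completely specified by its MIDI
// note number.
class MIDINote
{
public:
    explicit MIDINote(int number) : number_(number) {}
    int number() const { return number_; }   // e.g., 60 = C4
private:
    int number_;
};

// A chord with register information: just a vector of MIDINote objects.
typedef std::vector<MIDINote> MIDIChord;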
The temporal musical object system is designed to have two levels. Level one captures temporal information that may appear in a score, for example. Time intervals in this level are measured in quarter notes, eighth notes, and so on. Musical objects in this level are used in analysis, algorithmic composition, etc. Level two supports temporal information for performances. It measures time intervals in MIDI file divisions. Humanizing a level-one representation of a bass line produces a corresponding level-two representation, which can then be written to a track in a MIDI file. Note that it is also during this step that note velocity information is added. We will sketch the design of level-one objects below.
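Before that, purely as an illustration of the distinction, a level-two note event might carry something like the following; every field here is my own assumption:

// Hypothetical level-two note event: time measured in MIDI file
// divisions, with velocity added by the humanizing step.
struct PerformedNote
{
    long startTick;       // offset from start of track, in divisions
    long durationTicks;
    int  noteNumber;      // e.g., 60 = C4
    int  velocity;        // 1-127
};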
Our design centers on a class template TimeSeq. A class Duration is introduced to represent the length of level-one musical objects. TimeSeq is then defined as:
#include <list>
#include <utility>

// A list of musical objects, each paired with its Duration.
template <class T>
class TimeSeq : public std::list<std::pair<Duration, T> >
{
    ...
};
We can then use TimeSeq<Chord> to represent a set of chord changes, TimeSeq<MIDINote> to represent a bass line, TimeSeq<MIDIChord> to represent a generated piano accompaniment, vector<TimeSeq<MIDINote> > to represent a drum track, and TimeSeq<Scale> to represent the result of a tonality analysis of a set of chord changes.
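As a quick illustration of the bass-line case, here is how one of these sequences might be built up; Duration and MIDINote are reduced to hypothetical stubs, since their real definitions are only outlined above:

#include <list>
#include <utility>

// Hypothetical stand-ins for the classes discussed above.
struct Duration { int num, den; Duration(int n, int d) : num(n), den(d) {} };
struct MIDINote { int number;   explicit MIDINote(int n) : number(n) {} };

template <class T>
class TimeSeq : public std::list<std::pair<Duration, T> > {};

int main()
{
    TimeSeq<MIDINote> bassLine;
    // Two beats of a ii-V in C: D2 and G2, a quarter note each.
    bassLine.push_back(std::make_pair(Duration(1, 4), MIDINote(38)));
    bassLine.push_back(std::make_pair(Duration(1, 4), MIDINote(43)));
    return 0;
}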
It doesn't take a lot of reflection on the problem of duplicating this in any other language to realize how powerful C++/STL really is. In fact, I dare anyone to do this and come up with a cleaner implementation :-).
Monday February 23, 2004
To experiment with algorithms for generating bass lines, I'll need to implement classes for notes and chords with pitch and duration information, much like MusES's PlayableNote and PlayableChord classes. But to do that I'll first need to sketch the implementation of the bass-line algorithms. I'll work on this in the next few days.
I'll also need a way to play the notes being generated, so I studied the MIDI file format. It turns out to be a simple enough format to output, especially since I only need to write files containing a few tracks and simple timing information, much like what BiaB or MiBAC Jazz will export. I then wrote a test program to output a MIDI file with a quarter-note scale in C.
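The test program itself isn't shown here, but a minimal sketch along the same lines could look like this. The division of 480 ticks per quarter note, the velocity of 96, and the output file name are arbitrary choices of mine:

#include <fstream>
#include <vector>

typedef unsigned char byte;

// Append a big-endian integer of the given width in bytes.
void writeBE(std::vector<byte>& v, unsigned long value, int width)
{
    for (int i = width - 1; i >= 0; --i)
        v.push_back((byte)((value >> (8 * i)) & 0xFF));
}

// Append a MIDI variable-length quantity (7 bits per byte, MSB first).
void writeVLQ(std::vector<byte>& v, unsigned long value)
{
    byte buf[4];
    int n = 0;
    do { buf[n++] = (byte)(value & 0x7F); value >>= 7; } while (value);
    while (n > 1) v.push_back((byte)(buf[--n] | 0x80));
    v.push_back(buf[0]);
}

int main()
{
    const int division = 480;                        // ticks per quarter note
    const byte scale[8] = { 60, 62, 64, 65, 67, 69, 71, 72 };  // C major

    std::vector<byte> trk;
    for (int i = 0; i < 8; ++i) {
        writeVLQ(trk, 0);                            // delta 0: note on now
        trk.push_back(0x90); trk.push_back(scale[i]); trk.push_back(96);
        writeVLQ(trk, division);                     // one quarter later: off
        trk.push_back(0x80); trk.push_back(scale[i]); trk.push_back(0);
    }
    writeVLQ(trk, 0);                                // end-of-track meta event
    trk.push_back(0xFF); trk.push_back(0x2F); trk.push_back(0x00);

    std::vector<byte> out;
    const char hdr[4] = { 'M', 'T', 'h', 'd' };
    out.insert(out.end(), hdr, hdr + 4);
    writeBE(out, 6, 4);                              // header chunk length
    writeBE(out, 0, 2);                              // format 0
    writeBE(out, 1, 2);                              // one track
    writeBE(out, division, 2);
    const char mtrk[4] = { 'M', 'T', 'r', 'k' };
    out.insert(out.end(), mtrk, mtrk + 4);
    writeBE(out, trk.size(), 4);                     // track chunk length
    out.insert(out.end(), trk.begin(), trk.end());

    std::ofstream f("scale.mid", std::ios::binary);
    f.write(reinterpret_cast<const char*>(&out[0]), (std::streamsize)out.size());
    return 0;
}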
I've also come across an interesting program called MMA. It's written in Python and defines quite an elaborate language for describing patterns and styles. In theory this is a nicer approach than editing tables using BiaB's StyleMaker utility, but I wonder why the author didn't just extend Python by providing classes; then he'd have the full power of the Python language! Anyway, I'm more interested in systems that are AI/expert-system based rather than pattern and probability based, and ones that can generate more realistic accompaniments.