--- /dev/null
+%-*-LaTeX-*-
+
+\documentclass{article}
+\usepackage{a4}
+\def\postMudelaExample{\setlength{\parindent}{1em}}
+\title{LilyPond, a Music Typesetter}
+\author{HWN}
+\usepackage{musicnotes}
+\usepackage{graphics}
+
+
+\begin{document}
+\maketitle
+
+\section{Introduction}
+
+The Internet has become a popular medium for collaborative work on
+information. Its success is partly due to its use of simple, text-based
+formats. Examples of these formats are HTML and \LaTeX. Anyone can
+produce or modify such files using nothing but a text editor; they are
+easily processed with run-of-the-mill text tools, and they can be
+integrated into other text-based formats.
+
+Software for processing this information and presenting these formats
+in an elegant form is available freely (Netscape, \LaTeX, etc.).
+The ubiquity of the software and the simplicity of the formats have
+revolutionised the way people publish text-based information.
+
+In the field of performed music, where the presentation takes the form
+of sheet music, such a revolution has not started yet. Let us review
+some alternatives that have been available for transmitting sheet
+music until now:
+\begin{itemize}
+\item MIDI\cite{midi}. This format was designed for interchanging performances
+ of music; one should think of it as a glorified tape recorder
+ format. It needs dedicated editors, since it is binary. It does
+ not provide enough information for producing musical scores: some of
+ the abstract musical content of what is performed is thrown away.
+
+\item PostScript\cite{Postscript}. This format is a printer control
+ language. Printed musical scores can be transmitted in PostScript,
+ but once a score is converted to PostScript, it is virtually
+ impossible to modify the score in a meaningful way.
+
+\item Formats for various notation programs. Notation programs either
+ work with binary formats (e.g., NIFF\cite{niff-web}), need specific
+ platforms (e.g., Sibelius\cite{sibelius}, Score\cite{score}), are
+ proprietary or non-portable tools themselves (idem), produce
+ inadequate output (e.g., MUP\cite{mup}), are based on graphical
+ content (e.g., MusixTeX\cite{musixtex1}), or limit themselves to
+ specific subdomains (e.g., ABC\cite{abc2mtex}).
+
+\item SMDL\cite{smdl-web}. This is a very rich ASCII format designed
+  for storing many types of music. Unfortunately, no implementation
+  of a program that prints music from SMDL is available. Moreover,
+  SMDL is so verbose that it is not suitable for human production.
+
+\item TAB\cite{tablature-web}. Tab (short for tablature) is a popular
+  format for interchanging music over e-mail, but it can only be used
+  for guitar music.
+\end{itemize}
+
+In summary, sheet music is not published and edited on a wide scale
+across the Internet because no format for music
+interchange exists that is:
+\begin{itemize}
+\item open, i.e., with publicly available specifications.
+\item based on ASCII, and therefore suitable for human consumption and
+ production.
+\item rich enough for producing publication quality sheet music from
+ it.
+\item based on musical content (unlike, for example, PostScript), and
+ therefore suitable for making modifications.
+\item accompanied by tools for processing it that are freely available
+ across multiple platforms.
+\end{itemize}
+
+
+With LilyPond, we have tried to create both a convenient format for
+storing sheet music, and a portable, high-quality implementation of a
+compiler that translates the input into a printable score. A small
+example of LilyPond input along with its output is shown in
+Figure~\ref{fig:intro-fig}.
+%
+\begin{figure}[htbp]
+ \begin{center}
+\begin{mudela}[verbatim]
+ \score {
+ \notes
+ \type GrandStaff <
+ \transpose c'' { c4 c4 g4 g4 a4 a4 g2 }
+ { \clef "bass"; c4 c'4
+ \type Staff <e'2 {\stemdown c'4 c'4}> f'4 c'4 e'4 c'4 }
+ >
+ \paper {
+ linewidth = -1.0\cm ;
+ }
+ }
+\end{mudela}
+ \caption{A small example of LilyPond input}
+ \label{fig:intro-fig}
+ \end{center}
+\end{figure}
+%
+
+
+The input language encodes musical events (such as notes and rests) on
+the basis of their time-ordering. For example, the grammar includes
+constructs that specify that notes start simultaneously and that notes
+are to be played in sequence. In this encoding, some context that is
+present in sheet music is lost.
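+
+As an illustrative sketch (using the same constructs as the input of
+Figure~\ref{fig:intro-fig}, but not a complete \verb|\score| block):
+braces group music sequentially, while angle brackets make the
+enclosed music start simultaneously.
+%
+\begin{verbatim}
+\notes {
+  c4 d4          % two quarter notes, played one after the other
+  < e2 g2 >      % two half notes, starting at the same time
+}
+\end{verbatim}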
+
+The compiler reconstructs the notation from the encoded music. Its
+operation comprises four different steps (see
+Figure~\ref{fig:intro-steps}).
+
+\begin{description}
+\item[Parsing] During parsing, the input is converted into a syntax tree.
+
+\item[Interpreting] In the \emph{interpreting} step, it is determined
+ which symbols have to be printed. Objects that correspond to
+ notation (\emph{Graphical objects}) are created from the syntax tree
+ in this phase. Generally speaking, for every symbol printed there is
+  one graphical object. These objects are incomplete: their position
+  and their final shape are still unknown.
+
+ The context that was lost by encoding the input in a language is
+ reconstructed during this conversion.
+\item[Formatting] The next step is determining where symbols are to be
+  placed; this is called \emph{formatting}.
+\item[Outputting]
+  Finally, all graphical objects are output as PostScript or \TeX\ code.
+\end{description}
+
+\def\staffsym{\vbox to 16pt{
+ \hbox{\vrule width 1cm depth .2pt height .2pt}\nointerlineskip
+ \vfil
+ \hbox{\vrule width 1cm depth .2pt height .2pt}\nointerlineskip
+ \vfil
+ \hbox{\vrule width 1cm depth .2pt height .2pt}\nointerlineskip
+ \vfil
+ \hbox{\vrule width 1cm depth .2pt height .2pt}\nointerlineskip
+ \vfil
+ \hbox{\vrule width 1cm depth .2pt height .2pt}\nointerlineskip
+}}
+
+\def\vspacer{\vbox to 20pt{\vss}}
+\begin{figure}[h]
+\def\spacedhbox#1{\hbox{\ #1\ }}
+\begin{eqnarray*}
+ {\spacedhbox{Input}\atop \hbox{\texttt{\{c8 c8\}}}} {\spacedhbox{Parsing}\atop\longrightarrow}
+ {\spacedhbox{Syntax tree}\atop\spacedhbox{\textsf{Sequential(Note,Note)}}}
+ {\spacedhbox{Interpreting}\atop\longrightarrow}\\
+ \vspacer\\
+ {\spacedhbox{Graphic objects}\atop\spacedhbox{\texttrebleclef \textquarterhead\texteighthflag\textquarterhead\texteighthflag \staffsym }}
+ {\spacedhbox{Formatting}\atop\longrightarrow}
+ {\spacedhbox{Formatted objects}\atop\hbox{
+ \mudela{c''8 c''8}
+ }}\\
+\vspacer\\
+ {\spacedhbox{Outputting}\atop\longrightarrow}
+ {\spacedhbox{PostScript code}\atop\hbox{\texttt{\%!PS-Adobe}\ldots}}
+\end{eqnarray*}
+ \caption{Parsing, Interpreting, Formatting and Outputting}
+ \label{fig:intro-steps}
+\end{figure}
+
+
+The second step, the interpretation phase of the compiler, can be
+manipulated as a separate entity: the interpretation process is
+composed of many separate modules, and the behaviour of the modules is
+parameterised. By recombining these interpretation modules,
+and changing parameter settings, the same piece of music can be
+printed differently, as is shown in Figure~\ref{fig:intro-interpret}.
+
+This makes it easy to extend the program. Moreover, this enables the
+same music to be printed in different versions, e.g., in a conductor's
+score and in extracted parts.
+
+
+\begin{figure}[h]
+ \begin{center}
+ \begin{mudela}
+ \score {
+ \notes
+ \type GrandStaff <
+ \transpose c'' { c4 c4 g4 g4 a4 a4 g2 }
+ { \clef "bass"; c4 c'4
+ \type Staff <e'2 {\stemdown c'4 c'4}> f'4 c'4 e'4 c'4 }
+ >
+ \paper {
+ linewidth = -1.0\cm ;
+ \translator {
+ \VoiceContext
+ \remove "Stem_engraver";
+ }
+ \translator {
+ \StaffContext
+ numberOfStaffLines = 3;
+ }
+ }
+ }
+ \end{mudela}
+ \caption{The interpretation phase can be manipulated: the same
+ music as in Figure~\ref{fig:intro-fig} is interpreted
+ differently: three staff lines and no stems.}
+ \label{fig:intro-interpret}
+ \end{center}
+\end{figure}
+
+
+
+\section{Preliminaries}
+
+To understand the rest of the article, it is necessary to know
+something about music notation and music typography. Since both exist
+to communicate music, we will explain some characteristics of
+instruments and western music that motivate certain notational
+constructs.
+
+\subsection{Music}
+
+Music notation is meant to be read by human performers. They sing or
+play instruments that can produce sounds of different pitches. These
+sounds are called \emph{notes}. Additionally, the sounds can be
+articulated in different ways, e.g., staccato (short and separated)
+or legato (fluently bound together). The loudness of the notes can
+also be varied. Changes in loudness are called \emph{dynamics}.
+
+Silence is also an element of music. The musical terminology for
+silence within music is \emph{rest}.
+
+The basic unit of pitch is the \emph{octave}. The octave corresponds
+to a frequency ratio of 1:2. For example, the pitch denoted by a'
+(frequency: 440 hertz) is one octave lower than a'' (frequency: 880
+hertz). Various instruments have a limited \emph{pitch range}, for
+example, a trumpet has a range of about 2.5 octaves. Not all
+instruments have ranges in the same register: a tuba also has a range
+of 2.5 octaves, but the range of the tuba is much lower.
+
+Musicology has a confusing mix of relative and absolute measures for
+pitches: the term `octave' refers both to a difference between two
+pitches (the frequency ratio of 1:2), and to a range of pitches. For
+example, the term `one-line octave' refers to the pitch range
+between 261.6 Hz and 523.3 Hz.
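+
+To make the arithmetic explicit (a small worked example of ours, not
+part of the terminology above): going up one octave always doubles the
+frequency, so
+%
+\[
+  440\,\mathrm{Hz} \times 2 = 880\,\mathrm{Hz}
+  \qquad\mbox{(from a' to a''),}
+\]
+and the one-line octave runs from $261.6\,\mathrm{Hz}$ to
+$261.6\,\mathrm{Hz} \times 2 \approx 523.3\,\mathrm{Hz}$.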
+
+
+The octave is divided into smaller pitch steps. In modern western
+music, every octave is divided into twelve approximately equidistant
+pitch steps, and each step is called a \emph{semitone}. Usually, the
+pitches in a musical piece come from a smaller subset of these twelve
+possible pitches. This smaller subset, along with the musical
+functions of the pitches, is called the
+\emph{tonality}\footnote{Tonality also refers to the relations between
+  and functions of certain pitches. Since these do not have any
+  impact on notation, we ignore them here.} of the piece.
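+
+If the twelve steps are taken to be exactly equal (the equal
+temperament in common use today; the text above only requires them to
+be approximately equidistant), each semitone corresponds to a fixed
+frequency ratio:
+%
+\[
+  r = 2^{1/12} \approx 1.0595,
+  \qquad\mbox{so that}\qquad
+  r^{12} = 2,
+\]
+i.e., twelve semitones add up to exactly one octave.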
+
+
+The standard tonality that forms the basis of music notation
+(the key of C major) is a set of seven pitches within every octave.
+Each of these seven is denoted by a name. In English, these names
+are (in rising pitch) c, d, e, f, g, a and b. Pitches that
+are a semitone higher or lower than one of these seven can be
+represented by suffixing the name with `sharp' or `flat'
+respectively (this is called a \emph{chromatic alteration}).
+
+A pitch therefore can be fully specified by a combination of the
+octave number, the note name and a chromatic alteration.
+Figure~\ref{fig:intro-pitches} shows the relation between names and
+frequencies.
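+
+In LilyPond input, such a triple is written as one compact token, as
+can be seen in Figure~\ref{fig:intro-fig}: the note name is a letter,
+and appended apostrophes (or commas) raise (or lower) the octave. The
+sketch below also uses the program's default Dutch-style suffixes for
+alterations (\texttt{is} for sharp, \texttt{es} for flat); these
+suffixes are an assumption on our part, as they do not appear in the
+figures above.
+%
+\begin{verbatim}
+c'     % the c of the one-line octave (261.6 Hz)
+c''    % one octave higher (523.3 Hz)
+cis'   % c-sharp, a semitone above c'   (assumed suffix `is')
+bes    % b-flat, a semitone below b     (assumed suffix `es')
+\end{verbatim}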
+
+
+
+
+\begin{figure}[h]
+ \begin{center}
+    [to be done]
+ \end{center}
+ \caption{Pitches in western music. The octave number is denoted
+ by a superscript.}
+ \label{fig:intro-pitches}
+\end{figure}
+
+
+Many instruments, e.g., pianos and guitars, can produce more than one
+note at the same time. When multiple notes are played simultaneously,
+they form a so-called \emph{chord}.
+
+The unit of duration is the \emph{beat}. When playing, the tempo is
+determined by setting the number of beats per minute. In western
+music, beats are often stressed in a regular pattern: for example,
+waltzes have a stress pattern that is strong-weak-weak, i.e., every
+note that starts on a `strong' beat is louder and has more pronounced
+articulation. This stress pattern is called \emph{meter}.
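+
+As a small worked example (an illustration of ours, not taken from the
+definitions above): at a tempo of 120 beats per minute, each beat
+lasts
+%
+\[
+  \frac{60\,\mathrm{s}}{120} = 0.5\,\mathrm{s},
+\]
+so the strong-weak-weak pattern of a waltz repeats every
+$3 \times 0.5\,\mathrm{s} = 1.5\,\mathrm{s}$.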
+
+\subsection{Music notation}
+
+In music notation, sounds and silences are represented by symbols that
+are called \emph{note} and \emph{rest}, respectively.\footnote{These
+  names serve a double purpose: the same terms are used to denote the
+  musical concepts.} The shape of notes and rests indicates their
+duration relative to the whole note (see Figure~\ref{fig:noteshapes}).
+
+\begin{figure}[h]
+ \begin{center}
+\begin{mudela}
+ \score {
+ \notes \transpose c''{ c\longa*1/4 c\breve*1/2 c1 c2 c4 c8 c16 c32 c64 }
+ \paper {
+ \translator {
+ \StaffContext
+ \remove "Staff_symbol_engraver";
+ \remove "Time_signature_engraver";
+ \remove "Bar_engraver";
+ \remove "Clef_engraver";
+ }
+linewidth = -1.;
+ }
+}
+\end{mudela}
+\begin{mudela}
+ \score {
+ \notes \transpose c''{ r\longa*1/4 r\breve*1/2 r1 r2 r4 r8 r16 r32 r64 }
+ \paper {
+ \translator {
+ \StaffContext
+ \remove "Staff_symbol_engraver";
+ \remove "Time_signature_engraver";
+ \remove "Bar_engraver";
+ \remove "Clef_engraver";
+ }
+ linewidth = -1.;
+ }
+}
+\end{mudela}
+  \caption{Note and rest shapes encode the duration. At the top notes
+    are shown, at the bottom rests. From left to right: a quadruple
+    whole note (\emph{longa}), a double whole note (\emph{breve}),
+    whole, half, quarter, eighth, sixteenth, thirty-second and
+    sixty-fourth. Each note has half of the duration of its
+    predecessor.}
+ \label{fig:noteshapes}
+\end{center}
+\end{figure}
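+
+The halving in Figure~\ref{fig:noteshapes} can be summarised in one
+formula (a small addition of ours): if $d_{\mathrm{whole}}$ is the
+duration of the whole note, then
+%
+\[
+  d_n = \left(\tfrac{1}{2}\right)^{n} d_{\mathrm{whole}},
+  \qquad n = -2, -1, 0, 1, \ldots, 6,
+\]
+where $n = -2$ gives the longa, $n = -1$ the breve, $n = 0$ the whole
+note, $n = 1$ the half note, and $n = 6$ the sixty-fourth note.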
+
+
+Notes are printed on a grid of horizontal lines called a \emph{staff}
+to denote their pitch: each line represents a pitch from the
+
+\subsection{Music typography}
+
+
+
+
+
+
+
+\bibliographystyle{hw-plain}
+\bibliography{engraving,boeken,colorado,computer-notation,other-packages}
+
+
+
+\end{document}
+
+The complexity of music notation was tackled by adopting a modular
+design: both the formatting system (which encodes the esthetic rules
+of notation) and the interpretation system (which encodes the semantic
+rules) are highly modular.
+
+
+The difficulty in creating a format for music notation is rooted in
+the fact that music is multi-dimensional: each sound has its own
+duration, pitch, loudness and articulation. Additionally, multiple
+sounds may be played simultaneously. Because of this, there is no
+obvious way to ``flatten'' music into a context-free language.
+
+The difficulty in creating a printing engine is rooted in the fact
+that music notation is complicated: it is a very large graphical
+``language'' with many arbitrary esthetic and semantic conventions.
+Building a system that formats full-fledged musical notation is a
+challenge in itself, regardless of whether it is part of a compiler or
+an editor.
+
+The fact that music and its notation are of a different nature
+implies that the conversion from input to notation is non-trivial.
+
+In LilyPond we solved the above problem in the following way:
+