From: fred Date: Mon, 4 Oct 1999 09:10:35 +0000 (+0000) Subject: lilypond-1.3.0 X-Git-Tag: release/1.5.59~5772 X-Git-Url: https://git.donarmstrong.com/?a=commitdiff_plain;h=fd1c1b6f9adf0e82c3749670829ae584ad5dfe03;p=lilypond.git lilypond-1.3.0 --- diff --git a/Documentation/programmer/lilypond-overview.doc b/Documentation/programmer/lilypond-overview.doc new file mode 100644 index 0000000000..a408a9e8eb --- /dev/null +++ b/Documentation/programmer/lilypond-overview.doc @@ -0,0 +1,743 @@ +%-*-LaTeX-*- + +\documentclass{article} +\usepackage{a4} +\def\postMudelaExample{\setlength{\parindent}{1em}} +\title{LilyPond, a Music Typesetter} +\author{HWN} +\usepackage{musicnotes} +\usepackage{graphics} + + +\begin{document} +\maketitle + +[THIS IS WORK IN PROGRESS. THIS IS NOT FINISHED] + +% -*-LaTeX-*- +\section{Introduction} + +The Internet has become a popular medium for collaborative work on +information. Its success is partly due to its use of simple, text-based +formats. Examples of these formats are HTML and \LaTeX. Anyone can +produce or modify such files using nothing but a text editor, they are +easily processed with run-of-the-mill text tools, and they can be +integrated into other text-based formats. + +Software for processing this information and presenting these formats +in an elegant form is available freely (Netscape, \LaTeX, etc.). +Ubiquitousness of the software and simplicity of the formats have +revolutionised the way people publish text-based information +nowadays. + +In the field of performed music, where the presentation takes the form +of sheet music, such a revolution has not started yet. Let us review +some alternatives that have been available for transmitting sheet +music until now: +\begin{itemize} +\item MIDI\cite{midi}. This format was designed for interchanging performances + of music; one should think of it as a glorified tape recorder + format. It needs dedicated editors, since it is binary. It does + not provide enough information for producing musical scores: some of + the abstract musical content of what is performed is thrown away. + +\item PostScript\cite{Postscript}. This format is a printer control + language. Printed musical scores can be transmitted in PostScript, + but once a score is converted to PostScript, it is virtually + impossible to modify the score in a meaningful way. + +\item Formats for various notation programs. Notation programs either + work with binary formats (e.g., NIFF\cite{niff-web}), need specific + platforms (e.g., Sibelius\cite{sibelius}), are proprietary or + non-portable tools themselves (idem), produce inadequate output + (e.g., MUP\cite{mup}), are based on graphical content (e.g., + MusixTeX\cite{musixtex1}), limit themselves to specific subdomains + (e.g., ABC\cite{abc2mtex}), or require considerable skill and + knowledge to use (e.g., SCORE\cite{score}) + +\item SMDL\cite{smdl-web}. This is a very rich ASCII format, that is + designed for storing many types of music. Unfortunately, there is + no implementation of a program to print music from SMDL available. + Moreover, SMDL is so verbose, that it is not suitable for human + production. + +\item TAB\cite{tablature-web}. Tab (short for tablature) is a popular + format, for interchanging music over e-mail, but it can only be used + for guitar music. 
+\end{itemize} + +In summary, sheet music is not published and edited on a wide scale +across the internet because no format for music +interchange exists that is: +\begin{itemize} +\item open, i.e., with publically available specifications. +\item based on ASCII, and therefore suitable for human consumption and + production. +\item rich enough for producing publication quality sheet music from + it. +\item based on musical content (unlike, for example, PostScript), and + therefore suitable for making modifications. +\item accompanied by tools for processing it that are freely available + across multiple platforms. +\end{itemize} + + +With the creation of LilyPond, we have tried to create both a +convenient format for storing sheet music, and a portable, +high-quality implementation of a compiler, that compiles the input +into a printable score. You can find a small example of LilyPond +input along with output in Figure~\ref{fig:intro-fig}. +% +\begin{figure}[htbp] + \begin{center} +\begin[verbatim]{mudela} + \score { + \notes + \context GrandStaff < + \transpose c'' { c4 c4 g4 g4 a4 a4 g2 } + { \clef "bass"; c4 c'4 + \context Staff f'4 c'4 e'4 c'4 } + > + \paper { + linewidth = -1.0\cm ; + } + } +\end{mudela} + \caption{A small example of LilyPond input} + \label{fig:intro-fig} + \end{center} +\end{figure} +% + + +The input language encodes musical events (such as notes and rests) on +the basis of their time-ordering. For example, the grammar includes +constructs that specify that notes start simultaneous and that notes +are to be played in sequence. In this encoding some context that is +present in sheet music is lost. + +The compiler reconstructs the notation from the encoded music. Its +operation comprises four different steps (see +Figure~\ref{fig:intro-steps}). + +\begin{description} +\item[Parsing] During parsing, the input is converted in a syntax tree. + +\item[Interpreting] In the \emph{interpreting} step, it is determined + which symbols have to be printed. Objects that correspond to + notation (\emph{Graphical objects}) are created from the syntax tree + in this phase. Generally speaking, for every symbol printed there is + one graphical object. These objects are incomplete: their position + and their final shape is unknown. + + The context that was lost by encoding the input in a language is + reconstructed during this conversion. +\item[Formatting] The next step is determing where symbols are to be + placed, this is called \emph{formatting}. +\item[Outputting] + Finally, all Graphical objects are outputted as PostScript or \TeX\ code. 
+\end{description} + +\def\staffsym{\vbox to 16pt{ + \hbox{\vrule width 1cm depth .2pt height .2pt}\nointerlineskip + \vfil + \hbox{\vrule width 1cm depth .2pt height .2pt}\nointerlineskip + \vfil + \hbox{\vrule width 1cm depth .2pt height .2pt}\nointerlineskip + \vfil + \hbox{\vrule width 1cm depth .2pt height .2pt}\nointerlineskip + \vfil + \hbox{\vrule width 1cm depth .2pt height .2pt}\nointerlineskip +}} + +\def\vspacer{\vbox to 20pt{\vss}} +\begin{figure}[h] +\def\spacedhbox#1{\hbox{\ #1\ }} +\begin{eqnarray*} + {\spacedhbox{Input}\atop \hbox{\texttt{\{c8 c8\}}}} {\spacedhbox{Parsing}\atop\longrightarrow} + {\spacedhbox{Syntax tree}\atop\spacedhbox{\textsf{Sequential(Note,Note)}}} + {\spacedhbox{Interpreting}\atop\longrightarrow}\\ + \vspacer\\ + {\spacedhbox{Graphic objects}\atop\spacedhbox{\texttrebleclef \textquarterhead\texteighthflag\textquarterhead\texteighthflag \staffsym }} + {\spacedhbox{Formatting}\atop\longrightarrow} + {\spacedhbox{Formatted objects}\atop\hbox{ + \mudela{c''8 c''8} + }}\\ +\vspacer\\ + {\spacedhbox{Outputting}\atop\longrightarrow} + {\spacedhbox{PostScript code}\atop\hbox{\texttt{\%!PS-Adobe}\ldots}} +\end{eqnarray*} + \caption{Parsing, Interpreting, Formatting and Outputting} + \label{fig:intro-steps} +\end{figure} + + +The second step, the interpretation phase of the compiler, can be +manipulated as a separate entity: the interpretation process is +composed of many separate modules, and the behaviour of the modules is +parameterised. By recombining these interpretation modules, +and changing parameter settings, the same piece of music can be +printed differently, as is shown in Figure~\ref{fig:intro-interpret}. + +This makes it easy to extend the program. Moreover, this enables the +same music to be printed in different versions, e.g., in a conductors +score and in extracted parts. + + +\begin{figure}[h] + \begin{center} + \begin{mudela} + \score { + \notes + \context GrandStaff < + \transpose c'' { c4 c4 g4 g4 a4 a4 g2 } + { \clef "bass"; c4 c'4 + \context Staff f'4 c'4 e'4 c'4 } + > + \paper { + linewidth = -1.0\cm ; + \translator { + \VoiceContext + \remove "Stem_engraver"; + } + \translator { + \StaffContext + numberOfStaffLines = 3; + } + } + } + \end{mudela} + \caption{The interpretation phase can be manipulated: the same + music as in Figure~\ref{fig:intro-fig} is interpreted + differently: three staff lines and no stems.} + \label{fig:intro-interpret} + \end{center} +\end{figure} + + + +\section{Preliminaries} + +To understand the rest of the article, it is necessary to know +something about music notation and music typography. Since both +communicate music, we will explain some characteristics of instruments +and western music that motivate some notational constructs. + +\subsection{Music} + +Music notation is meant to be read by human performers. They sing or +play instruments that can produce sounds of different pitches. These +sounds are called \emph{notes}. Additionally, the sounds can be +articulated in differents ways, e.g., staccato (short and separated) +or legato (fluently bound together). The loudness of the notes can +also be varied. Changes in loudness are called \emph{dynamics}. + +Silence is also an element of music. The musical terminology for +silence within music is \emph{rest}. + +The basic unit of pitch is the \emph{octave}. The octave corresponds +to a frequency ratio of 1:2. For example the pitch denoted by a' +(frequency: 440 hertz) is one octave lower than a'' (frequency: 880 +hertz). 
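To make the 1:2 octave ratio concrete, here is a minimal sketch (plain
Python, purely illustrative and unrelated to LilyPond's own code) that
computes the frequency of the pitch a in any octave by doubling or
halving the 440 Hz reference:

\begin{verbatim}
# Illustration only: the frequency of a pitch doubles with every octave.
A_PRIME_HZ = 440.0   # reference pitch a'

def a_frequency(octaves_above_a_prime):
    """Frequency (Hz) of the pitch a, the given number of octaves
    above (positive) or below (negative) a'."""
    return A_PRIME_HZ * 2.0 ** octaves_above_a_prime

print(a_frequency(1))    # a''  -> 880.0 Hz, one octave higher
print(a_frequency(0))    # a'   -> 440.0 Hz
print(a_frequency(-1))   # a    -> 220.0 Hz, one octave lower
\end{verbatim}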
Various instruments have a limited \emph{pitch range}; for example, a
trumpet has a range of about 2.5 octaves.  Not all instruments have
ranges in the same register: a tuba also has a range of 2.5 octaves,
but the range of the tuba is much lower.

Musicology has a confusing mix of relative and absolute measures for
pitches: the term `octave' refers both to a difference between two
pitches (the frequency ratio of 1:2) and to a range of pitches.  For
example, the term `one-lined octave' refers to the pitch range
between 261.6 Hz and 523.3 Hz.


The octave is divided into smaller pitch steps.  In modern western
music, every octave is divided into twelve approximately equidistant
pitch steps, and each step is called a \emph{semitone}.  Usually, the
pitches in a musical piece come from a smaller subset of these twelve
possible pitches.  This smaller subset, along with the musical
functions of the pitches, is called the
\emph{tonality}\footnote{Tonality also refers to the relations between
  and functions of certain pitches.  Since these do not have any
  impact on notation, we ignore this here.} of the piece.


The standard tonality that forms the basis of music notation
(the key of C major) is a set of seven pitches within every octave.
Each of these seven is denoted by a name.  In English, these names
are (in rising pitch) c, d, e, f, g, a and b.  Pitches that
are a semitone higher or lower than one of these seven can be
represented by suffixing the name with `sharp' or `flat'
respectively (this is called a \emph{chromatic alteration}).

A pitch can therefore be fully specified by a combination of the
octave number, the note name and a chromatic alteration.
Figure~\ref{fig:intro-pitches} shows the relation between names and
frequencies.


\begin{figure}[h]
  \begin{center}
    [to do]
  \end{center}
  \caption{Pitches in western music.  The octave number is denoted
    by a superscript.}
  \label{fig:intro-pitches}
\end{figure}


Many instruments can produce more than one note at the same time,
e.g., pianos and guitars.  When several notes are played
simultaneously, they form a so-called \emph{chord}.

The unit of duration is the \emph{beat}.  When playing, the tempo is
determined by setting the number of beats per minute.  In western
music, beats are often stressed in a regular pattern: for example,
waltzes have a stress pattern that is strong-weak-weak, i.e., every
note that starts on a `strong' beat is louder and has more pronounced
articulation.  This stress pattern is called \emph{meter}.

\subsection{Music notation}

Music notation is a system that tries to represent musical ideas
through printed symbols.  Music notation has no precise definition,
but most conventions have been described in reference manuals on music
notation\cite{read-notation}.

In music notation, sounds and silences are represented by symbols
called note and rest respectively.\footnote{These names serve a
  double purpose: the same terms are used to denote the musical
  concepts.}  The shape of notes and rests indicates their duration
relative to the whole note (see Figure~\ref{fig:noteshapes}).
+ + +\begin{figure}[h] + \begin{center} +\begin{mudela} + \score { + \notes \transpose c''{ c\longa*1/4 c\breve*1/2 c1 c2 c4 c8 c16 c32 c64 } + \paper { + \translator { + \StaffContext + \remove "Staff_symbol_engraver"; + \remove "Time_signature_engraver"; +% \remove "Bar_engraver"; + \remove "Clef_engraver"; + } +linewidth = -1.; + } +} +\end{mudela} +\begin{mudela} + \score { + \notes \transpose c''\context Staff { r\longa*1/4 r\breve*1/2 r1 r2 r4 r8 r16 r32 r64 } + \paper { + \translator { + \StaffContext + \remove "Staff_symbol_engraver"; + \remove "Time_signature_engraver"; +% \remove "Bar_engraver"; + \remove "Clef_engraver"; + } + linewidth = -1.; + } +} +\end{mudela} + \caption{Note and rest shapes encode the length. At the top notes + are shown, at the bottom rests. From left to right a quadruple + note (\emph{longa}), double (\emph{breve}), whole, half, + quarter, eigth, sixteenth, thirtysecond and sixtyfourth. Each + note has half of the duration of its predecessor.} + \label{fig:noteshapes} +\end{center} +\end{figure} + + +Notes are printed in a grid of horizontal lines called \emph{staff} to +denote their pitch: each line represents the pitch of from the +standard scale (c, d, e, f, g, a, b). The reference point is the +\emph{clef}, eg., the treble clef marks the location of the $g^1$ +pitch. The notes are printed in their time order, from left to right. + + +\begin{figure}[h] + \begin{center} + \begin{mudela} + \score { \notes { + a4 b c d e f g a \clef bass; + a4 b c d e f g a \clef alto; + a4 b c d e f g a \clef treble; + } + \paper { linewidth = 15.\cm; } + } + \end{mudela} + \caption{Pitches ranging from $a, b, c',\ldots a'$, in different + clefs. From left right the bass, alto and treble clef are + featured.} + \label{fig:pitches} + \end{center} +\end{figure} + +The chromatic alterations are indicated by printing a flat sign or a +sharp sign in front of the note head. If these chromatic alterations +occur systematically (if they are part of the tonality of the piece), +then this indicated with a \emph{key signature}. This is a list of +sharp/flat signs which is printed next to the clef. + +Articulation is notated by marking the note shapes wedges, hats and +dots all indicate specific articulations. If the notes are to be +bound fluently (legato), the note shapes are encompassed by a smooth +curve called \emph{slur}, + +\begin{figure}[h] + \begin{center} + \begin{mudela} + c'4-> c'4-. g'4 ( b'4 ) g''4 + \end{mudela} + \caption{Some articulations. From left to right: extra stress + (\emph{marcato}), short (staccato), slurred notes (legato).} + \label{fig:articulation} + \end{center} +\end{figure} + + + +Dynamics are notated in two ways: absolute dynamics are indicated by +letters: \textbf{f} (from Italian ``forte'') stands for loud, +\textbf{p} (from Italian ``piano'') means soft. Gradual changes in +loudness are notated by (de)crescendos. These are hairpin like shapes +below the staff. + +\begin{figure}[h] + \begin{center} + \begin{mudela} + g'4\pp \< g'4 \! g'4 \ff \> g'4 g' \! g'\ppp + \end{mudela} + \caption{Dynamics: start very soft (pp), grow to loud (ff) and + decrease to extremely soft (ppp)} + \label{fig:dynamics} + \end{center} +\end{figure} + + +The meter is indicated by barlines: every start of the stress pattern +is preceded by a vertical line, the \emph{bar line}. The space +between two bar lines is called measure. It is therefore the unit of +the rhythmic pattern. + +The time signature also indicates what kind of rhythmic pattern is +desired. 
The time signature takes the form of two numbers stacked +vertically. The top number is the number of beats in one measure, the +bottom number is the duration (relative to the whole note) of the note +that takes one beat. Example: 2/4 time signature means ``two beats +per measure, and a quarter note takes one beat'' + +Chords are written by attaching multiple note heads to one stem. When +the composer wants to emphasize the horizontal relationships between +notes, the simultaneous notes can be written as voices (where every +note head has its own stem). A small example is given in +Figure~\ref{fig:simultaneous}. + +\begin{figure}[h] + \begin{center} + \begin{mudela} + \relative c'' {\time 2/4; + \context Staff < \context Voice = VA{ + \stemdown + c4 d + b16 b b b b b b b } + \context Voice = VB { + \stemup e4 f g8 g4 g8 } > + } + \end{mudela} + \caption{Notes sounding together. Chord notation (left, before + the bar line) emphasizes vertical relations, voice notation + emphasizes horizontal relations. Separate voices needn't have + synchronous rhythms (third measure). + } + \label{fig:simultaneous} + \end{center} +\end{figure} + +Separate voices do not have to share one rhythmic pattern---this is +also demonstrated in Figure~\ref{fig:simultaneous}--- they are in a sense%vaag +independent. A different way to express this in notation, is by +printing each voice on a different staff. This is customary when +writing for piano (both left and right hand have a staff of their own) +and for ensemble (every instrument has a staff of its own). + + + +\subsection{Music typography} + +Music typography is the art of placing symbols in esthetically +pleasing configuration. Little is explicitly known about music +typography. There are only a few reference works +available\cite{ross,wanske}. Most of the knowledge of this art has +been transmitted verbally, and was subsequently lost. + +The motivation behind choices in typography is to represent the idea +as clearly as possible. Among others, this results in the following +guidelines: +\begin{itemize} +\item The printed score should use visual hints to accentuate the + musical content +\item The printed score should not contain distracting elements, such + as large empty regions or blotted regions. +\end{itemize} + +An example of the first guideline in action is the horizontal spacing. +The amount of space following a note should reflect the duration of +that note: short notes get a small amount of space, long notes larger +amounts. Such spacing constraints can be subtle, for the +``amount of space'' is only the impression that should be conveyed; there +has to be some correction for optical illusions. See +Figure~\ref{fig:spacing}. + +\begin{figure}[h] + \begin{center} + \begin{mudela} + \relative c'' { \time 3/4; c16 c c c c8 c8 | f4 f, f' } + \end{mudela} + \caption{Spacing conveys information about duration. Sixteenth + notes at the left get less space than quarter notes in the + middle. Spacing is ``visual'', there should be more space + after the first note of the last measure, and less after second. } + \label{fig:spacing} + \end{center} +\end{figure} + +Another example of music typography is clearly visible in collisions. +When chords or separate voices are printed, the notes that start at +the same time should be printed aligned (ie., with the same $x$ +position). If the pitches are close to each other, the note heads +would collide. To prevent this, some notes (or note heads) have to be +shifted horizontally. 
An example of this is given in
Figure~\ref{fig:collision}.
\begin{figure}[h]
  \begin{center}
    [todo]
  \caption{Collisions}
  \label{fig:collision}
  \end{center}
\end{figure}

\bibliographystyle{hw-plain}
\bibliography{engraving,boeken,colorado,computer-notation,other-packages}

\section{Requirements}


\section{Approach}

\subsection{Input}

The input format combines a symbolic representation of music with a
style sheet that describes how the symbolic representation is
converted to notation.  The symbolic representation is based on a
context-free language called \textsf{music}.  Music is a recursively
defined construction in the input language.  It can be constructed by
combining lists of \textsf{music} sequentially or in parallel, or from
terminals such as notes or lyrics.

The grammar for \textsf{music} is listed below.  It has been edited to
leave out the syntactic and ergonomic details.

\begin{center}
  \begin{tabular}{ll}
Music: & SimpleMusic\\
 & $|$ REPEATED int Music ALTERNATIVE MusicList\\
 & $|$ SIMULTANEOUS MusicList\\
 & $|$ SEQUENTIAL MusicList\\
 & $|$ CONTEXT STRING '=' STRING Music\\
 & $|$ TIMES int int Music \\
 & $|$ TRANSPOSE PITCH Music \\
SimpleMusic: & Note\\
 & $|$ Lyric\\
 & $|$ Rest\\
 & $|$ Chord\\
 & $|$ Command\\
Command: & METERCHANGE\\
 & $|$ CLEFCHANGE\\
 & $|$ PROPERTY STRING '=' STRING\\
Chord: & PitchList DURATION\\
Rest: & REST DURATION\\
Lyric: & STRING DURATION\\
Note: & PITCH DURATION\\
\end{tabular}
\end{center}

The terminals are either purely musical concepts that have a duration
and take a non-zero amount of musical time, such as notes and lyrics,
or commands that behave as if they have no duration.\footnote{The
  PROPERTY command is a generic mechanism for controlling the
  interpretation, i.e., the musical style sheets.  See [forward ref].}

The nonterminal productions fall into three groups:
\begin{itemize}
\item Some productions combine multiple elements: one can specify that
  elements are to be played in sequence, simultaneously or repetitively.
\item There are productions for transposing music and for dilating
  durations of music: the TIMES production can be used to encode a
  triplet.\footnote{A triplet is a group of three notes marked by a
  bracket, that are played 3/2 times faster.}
\item There are productions that give directions to the interpretation
  engine (the CONTEXT production).
\end{itemize}


\section{Context in notation}

Music notation relies heavily on context.  Notational symbols do not
have meaning if they are not surrounded by other context elements.  In
this section we give some examples of how the reader uses this context
to derive the meaning of a piece of notation.  We will focus on the
prime example of context: the staff.

A staff is a grid of five horizontal lines, but it contains more
components:
\begin{itemize}
\item A staff can have a key signature (printed at the left).
\item A staff can have a time signature (printed at the left).
\item A staff has bar lines.
\item A staff has a clef (printed at the left).
\end{itemize}

It is still possible to print notes without these components, but one
cannot determine the meaning of the notes.
\begin{mudela}
\score{
\notes \relative c' { \time 2/4; g'4 c,4 a'4 f4 e c d2 }
\paper {
  linewidth = -1.;
  \translator {
    \StaffContext
    \remove "Time_signature_engraver";
%    \remove "Bar_engraver";
    \remove "Staff_symbol_engraver";
    \remove "Clef_engraver";
    \remove "Key_engraver";
  }
 }
}
\end{mudela}

As you can see, you can still make out the general form of the melody
and the rhythm that is to be played, but the notation is difficult to
read and the musical information is not complete.  The stress pattern
of the notes cannot be deduced from this output.  For this, we need a
time signature.  Adding bar lines helps with finding the strong and
weak beats.
\begin{mudela}
\score {
  \notes \relative c' { \time 2/4; g'4 c,4 a'4 f4 e c d2 }
  \paper{
    linewidth = -1.;
    \translator{
      \StaffContext
      \remove "Staff_symbol_engraver";
      \remove "Clef_engraver";
      \remove "Key_engraver";}
  }
}
\end{mudela}

It is still impossible to deduce the exact pitch of the notes.  One
needs a clef to do so.  Staff lines help the eye in determining the
vertical position of a note with respect to the clef.
\begin{mudela}
\score {
  \notes \relative c' {\clef alto; \time 2/4; g'4 c,4 a'4 f4 e c d2 }
  \paper {
    linewidth = -1.;
  }
}
\end{mudela}

Now you know the pitch of the notes: you look at the start of the line
and see a clef, and with this clef you can determine the notated
pitches.  You have found the \emph{context} in which the notation is
to be interpreted!


\section{Interpretation context}

Context (clef, time signature, etc.) determines the relationship
between music and its notation.  Because LilyPond writes notation,
context works the other way around for LilyPond: with context, a piece
of music can be converted to notation.

A reader remembers this context while reading the notation from left
to right.  By analogy, LilyPond constructs this context while
constructing the notation from left to right.  This is what happens in
the ``Interpreting'' step of Figure~\ref{fig:intro-steps}.  In
LilyPond, the state of this context is a set of variables with their
values; a staff context contains variables like

\begin{itemize}
\item current clef
\item current time signature
\item current key
\end{itemize}

These variables determine when and how clefs, time signatures, bar
lines and accidentals are printed.


The staff is not the only form of context in notation.  In polyphonic
music, the stem direction shows which notes form a voice: all notes of
the same voice have stems pointing in the same direction.  The value
of this variable determines the appearance of the printed stems.

In LilyPond, ``notation context'' is abstracted to a data structure
that is used, constructed and modified during the interpretation
phase.  It contains context properties, and is responsible for
creating notational elements: the Staff context creates symbols for
clefs, time signatures and key signatures; the Voice context creates
stems and note heads.

For the fragment of polyphonic music below,
\begin{mudela}
  \context Staff { c'4 < { \stemup c'4 } \context Voice = VB { \stemdown a4 } > }
\end{mudela}
a Staff context is created.  Within this Staff context (which printed
the clef), a Voice context is created, which prints the first note.
Then a second Voice context is created; the stem direction of one
voice is set to up, and the direction of the other is set to down.
Both Voice contexts are still part of the same Staff context.
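To make this nesting of contexts more tangible, here is a small sketch
of the idea in Python.  It is an illustration only, not LilyPond's
actual implementation, and all names in it are hypothetical: a context
holds properties, creates child contexts, and produces notational
symbols when its properties change.

\begin{verbatim}
# Illustration only -- a toy model of nested interpretation contexts,
# not LilyPond's actual implementation.  All names are hypothetical.
class Context:
    def __init__(self, name, parent=None):
        self.name = name           # e.g. "Staff" or "Voice"
        self.parent = parent
        self.properties = {}       # e.g. clef, time signature, key
        self.children = []
        self.created_symbols = []  # record of notational elements made

    def create_child(self, name):
        child = Context(name, parent=self)
        self.children.append(child)
        return child

    def set_property(self, key, value):
        # Record the change; for properties such as 'clef' this is the
        # point where the corresponding symbol would be created.
        self.properties[key] = value
        self.created_symbols.append((key, value))

# A staff containing two voices with opposite stem directions:
staff = Context("Staff")
staff.set_property("clef", "treble")
voice_a = staff.create_child("Voice")
voice_a.set_property("stem_direction", "up")
voice_b = staff.create_child("Voice")
voice_b.set_property("stem_direction", "down")
\end{verbatim}

As in the fragment above, the Staff context owns the clef, while each
Voice context carries its own stem direction.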
In the same way, scores with multiple staves are created: within the
score context, multiple staff contexts are created.  Every staff
context creates the notation associated with a staff.

\section{Discussion}

The complexity of music notation was tackled by adopting a modular
design: both the formatting system (which encodes the esthetic rules
of notation) and the interpretation system (which encodes the semantic
rules) are highly modular.

The difficulty in creating a format for music notation is rooted in
the fact that music is multi-dimensional: each sound has its own
duration, pitch, loudness and articulation.  Additionally, multiple
sounds may be played simultaneously.  Because of this, there is no
obvious way to ``flatten'' music into a context-free language.

The difficulty in creating a printing engine is rooted in the fact
that music notation is complicated: it is a very large graphical
``language'' with many arbitrary esthetic and semantic conventions.
Building a system that formats full-fledged musical notation is a
challenge in itself, regardless of whether it is part of a compiler or
an editor.

The fact that music and its notation are of a different nature implies
that the conversion between input and notation is non-trivial.

\end{document}

In LilyPond we solved the above problem in the following way: