question(How does PS output work?)
-Make Type-1 fonts: issue
+itemize(
+ it()
+Generate the PostScript Type-3 fonts. In the file(mf/)
+subdirectory, issue:
verb(
make pfa
-) in the mf/ subdirectory
-Set GS_FONTPATH to the directory containing the pfas. In the source
-tree, this is file(mf/out/). Run lilypond with option tt(-f ps).
+). This will also make file(mfplain) for MetaPost.
+it()
+Run lilypond with option tt(-f ps):
+verb(
+ lilypond -fps foo.ly
+)
+it() To view the file(.ps) output with GhostView, set GS_FONTPATH to the
+directory containing the file(pfa)s, and set GS_LIB to the directory
+containing the file(.ps) library files of LilyPond. In the source tree,
+these are file(mf/out/) and file(ps/).
+
+That is, do something like:
+verb(
+ export GS_FONTPATH=$HOME/usr/src/lilypond/mf/out
+ export GS_LIB=$HOME/usr/src/lilypond/ps
+ gv foo.ps &
+)
+)
+
+Direct PS output is still experimental. To create nice-looking PostScript
+output, use TeX() and code(dvips).
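+
+For example, the TeX()/code(dvips) route might look roughly like this
+(assuming the default TeX() output and a properly installed LilyPond;
+the file names are just placeholders):
+verb(
+  lilypond foo.ly
+  tex foo.tex
+  dvips -o foo.ps foo.dvi
+)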
+
question(The beams and slurs are gone if use the XDvi magnifying glass!?)
The beams and slurs are done in PostScript. XDvi doesn't show PostScript
specials in the magnifying glass.
Grep for TODO and ugh/ugr/urg.
.* BUGS
+. * header for PS enteredby = "bla <bla@bar.com>"
+. * ps/lily.ps
. * AFM for BlueSky AFM files.
. * staff size for post/prebreaks
. * .ly files
. * P.P.S. It could be nice if mudela-book distinguished in
pre/postMudelaExample whether the MudelaExample is rendered as EPS or not
(1 if this fragment is a floating EPS, otherwise 2), say
preMudelaExample[eps]{}, which could then be redefined in the document body.
-
-. * tetex: mfplain.mem
. * fix singleStaffBracket
. * declare performers in \midi
. * fix MIDI
specify the third. Should there be?
.* TODO before 1.2
+. * \selectmusic to cut pieces from music.
. * break priority setting from SCM.
. * Gade score
. * remove [] in favour of auto-beamer
. * glibc 2.0:
    FILE *f = fopen ("/dev/null", "r");
    assert (feof (f));
+
+. * tetex: mfplain.mem
.* 3RD PARTY PROJECTS:
. * make GCC warn about ctor that leaves member vars uninitialised.
. * GNU patch
So the best way of handling this is (a rough sketch follows the list):
-* supporting dynamic settings in Audio_note
+ 1 supporting dynamic settings in Audio_note
-* mimicking the broadcast/acknowledge mechanism of the Engravers in
+ 2 mimicking the broadcast/acknowledge mechanism of the Engravers in
the Performers
-* using that mechanism to write a Dynamics_performer that will modify
+ 3 using that mechanism to write a Dynamics_performer that will modify
any notes it finds to set appropriate strengths.
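
A toy sketch of this scheme (items 1-3 above). None of these declarations
are the real LilyPond classes: Audio_note, Performer and Dynamics_performer
are just stand-ins here, and every member and signature is an assumption
made for illustration.

// Purely illustrative -- stand-ins, not the actual LilyPond interfaces.
#include <cassert>

typedef double Real;

// (1) an Audio_note that supports a dynamic setting.
struct Audio_note
{
  int pitch_;
  Real dynamic_;                  // assumed field: MIDI strength, 0.0 .. 1.0
  Audio_note (int p) : pitch_ (p), dynamic_ (0.7) {}
};

// (2) a Performer base class with an acknowledge hook, mimicking the
// broadcast/acknowledge mechanism of the Engravers.
struct Performer
{
  virtual ~Performer () {}
  virtual void acknowledge_audio_element (Audio_note *) {}
};

// (3) a Dynamics_performer that modifies any notes it is handed,
// setting the appropriate strength.
struct Dynamics_performer : public Performer
{
  Real current_dynamic_;
  Dynamics_performer () : current_dynamic_ (0.7) {}
  void set_dynamic (Real d) { current_dynamic_ = d; }
  virtual void acknowledge_audio_element (Audio_note *n)
  {
    n->dynamic_ = current_dynamic_;
  }
};

int main ()
{
  Audio_note n (60);              // middle C, say
  Dynamics_performer p;
  p.set_dynamic (0.9);            // e.g. a \ff was broadcast
  p.acknowledge_audio_element (&n);
  assert (n.dynamic_ == 0.9);
  return 0;
}
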
You could also kludge this by deriving from Performer_group_performer