# Including DOT Graphs as PostScript or PDF files in LaTeX documents

Graphviz is great open-source graph visualization software. In this post I’ll demonstrate how to include DOT graphs in PDF documents using LaTeX.

The most convenient way to insert graphics into PDF documents is to use vector graphics. In contrast to raster graphics (which are pixel-based) vector graphics can be scaled arbitrarily without getting “pixelated”.

From DOT, graphs can be exported as vector graphics in Portable Document Format (PDF) or Encapsulated PostScript (EPS). The file type you need depends on the LaTeX compiler you are using: the standard latex compiler can handle EPS files, whereas pdflatex supports PDF graphics.

# Option 1: Using latex and EPS graphics

Assuming you have persisted your graph in a dot file, say myGraph.dot, you can convert it to an Encapsulated PostScript (EPS) file by using the dot tool from the command line:

```shell
dot -Tps2 myGraph.dot -o myGraph.eps
```

Here is a handy shell script that converts all .dot files in the current directory to .eps files:

```shell
#!/bin/bash
for f in *.dot; do
  basename=$(basename "$f")
  filename=${basename%.*}
  command="dot -Tps2 $f -o $filename.eps"
  echo $command
  $command
done
```

To use it, simply save it as an .sh file, execute chmod 755 on it and run it in the terminal (works for Unix-based systems only). Having done that, we try to insert that image in our PDF file:

```latex
\begin{figure}[htp]
\begin{center}
\includegraphics[width=0.9\textwidth]{myGraph}
\caption{My Caption}
\label{fig:myGraph}
\end{center}
\end{figure}
```

Note that the file extension is omitted. Unfortunately, the pdflatex compiler does not support EPS images. But don’t worry, there are two options for solving that.

# Option 2: Using pdflatex and PDF graphics

If you are using pdflatex, PDF graphics have to be produced instead of EPS graphics. This is done as described in the previous section, except that another target file type is specified:

```shell
dot -Tpdf myGraph.dot -o myGraph.pdf
```

# Option 3: Using pdflatex and converting EPS to PDF on the fly

The epstopdf LaTeX package converts your EPS files to PDF files on the fly, which can then be included in your document. This is how you must configure your graphics packages in the preamble of your LaTeX document (code from this blog post):

```latex
\newif\ifpdf
\ifx\pdfoutput\undefined
\pdffalse
\else
\pdfoutput=1
\pdftrue
\fi
\ifpdf
\usepackage{graphicx}
\usepackage{epstopdf}
\DeclareGraphicsRule{.eps}{pdf}{.pdf}{epstopdf #1}
\pdfcompresslevel=9
\else
\usepackage{graphicx}
\fi
```

The epstopdf package will automatically create a PDF file of your figure and include it in your document. However, the following requirement must be fulfilled: the -shell-escape option must be set when invoking the compiler. The command line setting for this invocation can be set in your LaTeX IDE.
Here I will illustrate how to configure this setting using the TeXlipse IDE for Eclipse: open Preferences -> Texlipse -> Builder Settings, select the PdfLatex program and click Edit. In the dialog that appears, you can add the -shell-escape option to the command line.

Another requirement (which is specific to the TeXlipse environment) is that the PATH variable has to be set. Otherwise, the shell commands called from LaTeX cannot be executed correctly. Check your system path by opening a terminal and typing:

```shell
echo $PATH
```

Copy the output to the clipboard, and in TeXlipse go to Preferences -> TeXlipse -> Builder Settings -> Environment. Click Add and create a new environment variable named PATH with the value of your system path, which you just copied from the terminal:

After that, the EPS-to-PDF conversion should work on the fly. If it does not work for some reason, you can manually invoke the conversion from the command line:

```shell
epstopdf myGraph.eps
```

You can now publish PDF documents with infinitely scalable graphs 🙂

# Dividing Git Repositories

I’ve been working in a single git repository with many projects (subfolders) and needed to divide it by moving some folders (including their history) to a new repository, thereby removing these files (with history) from the original repository. There are a number of posts on this topic; however, none worked exactly as described for my case. I had to combine instructions from various sources, which is why I decided to write this summarizing blog post.

# Short Instructions

Basically, you have to perform these steps:

1. Make a backup
2. Clone the source repository
3. Execute git subtree commands on all folders to be moved
4. Create a new repository
5. Pull the isolated folders into the new repository and move them to subdirectories
6. Restore the original repository
7. Delete all moved folders (including history) from the original repository
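Under a few assumptions (illustrative folder names A and C, a throwaway directory standing in for real server paths), the steps above can be sketched end-to-end in the shell. This is a self-contained demonstration, not a script to run on your real repository:

```shell
#!/bin/bash
# End-to-end sketch of the division workflow on a throwaway repository.
# Folder names (A, C) and all paths are illustrative.
set -e
work=$(mktemp -d)
cd "$work"

# A stand-in for the original repository with two top-level folders
git init -q original
cd original
git config user.email "demo@example.com"
git config user.name  "demo"
mkdir A C
echo "a" > A/file1
echo "c" > C/file1
git add . && git commit -qm "initial commit"

# Step 3: isolate the history of folder A on its own branch
git subtree split -P A -b onlyA

# Steps 4-5: create the new repository and pull the isolated branch into it
cd "$work"
git init -q newrepo
cd newrepo
git config user.email "demo@example.com"
git config user.name  "demo"
git pull "$work/original" onlyA

# Step 7: remove A (including its history) from the original repository
cd "$work/original"
export FILTER_BRANCH_SQUELCH_WARNING=1   # skip the deprecation pause in newer git
git filter-branch -f --tree-filter 'rm -rf A' HEAD
```

After this, newrepo contains only A’s history (at its root), and the original repository no longer contains A.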

# Detailed Instructions

First of all: Make a backup. Seriously, these operations will rewrite your git history and can be destructive, so make sure everything is stored elsewhere.

The first step is to create a new working copy of the existing repository.

```shell
git clone git://my-server.tld/my-repo.git
```

Imagine the following git repository with the following folders on top level:

A
B
C
D

Let’s say we want to move A and B to another, new git repository and leave C and D in the original repository. Therefore we invoke git subtree split – a magic command that will create a branch containing only the history of a given subdirectory. The syntax is:

```shell
git subtree split -P <directory> -b <branch>
```

In our example, we isolate folder A on branch onlyA and folder B on branch onlyB:

```shell
git subtree split -P A -b onlyA
git subtree split -P B -b onlyB
```

The source repository is now prepared. The next step is to create the new target repository:

```shell
cd ..
mkdir newrepo
cd newrepo
git init
```

Now we pull a branch from the original repository:

```shell
git pull /path/to/original/repository onlyA
```

The path to the original repository must be an absolute path; at least, relative paths did not work for me. When listing the directory contents you will notice that everything was imported into the root of your repository. If you want the merged content to be in a subfolder again, you need to create it and move the files manually using git mv commands. Don’t forget to commit your moved files, and don’t forget to move hidden files as well! These can be discovered using ls -la.

```shell
mkdir A
git mv file1 A
git mv file2 A
...
git commit -m "Merged A and moved to subfolder"
```
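With many files, the git mv calls can be scripted. Here is a hedged sketch, self-contained on a throwaway repository with illustrative file names; the second glob pattern catches the hidden files that a plain * would miss:

```shell
#!/bin/bash
# Move everything that was pulled in (hidden files too) into subfolder A.
# The repository and file names below are stand-ins for demonstration.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name  "demo"
echo "1" > file1
echo "2" > .hiddenrc
git add -A && git commit -qm "pulled from onlyA"

mkdir A
# '* .[!.]*' also matches dotfiles (but not '.' or '..')
for f in * .[!.]*; do
  [ "$f" = "A" ]    && continue   # don't move the target folder into itself
  [ "$f" = ".git" ] && continue   # never touch the repository metadata
  [ -e "$f" ]       || continue   # skip glob patterns that matched nothing
  git mv "$f" A/
done
git commit -qm "Merged A and moved to subfolder"
```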

Note: Apparently it is not possible to move the files in the original repository first and then use the split command. When I did this, my history got lost. For me it only worked the other way round (first split, then move the files in the target repository). Feel free to comment if you can explain this behaviour.

Repeat this process for all other folders to be moved:

```shell
git pull /path/to/original/repository onlyB
mkdir B
git mv file1 B
git mv file2 B
...
git commit -m "Merged B and moved to subfolder"
```

After that we restore the original repository completely (either using git clone or from a backup). Then we remove all moved folders (including history) from that repository. This is achieved using git filter-branch commands:

```shell
git filter-branch -f --tree-filter 'rm -rf A' HEAD
git filter-branch -f --tree-filter 'rm -rf B' HEAD
```

That’s it! We have now moved a part of the original repository (including history) to a new repository and removed that part (including history) from the original repository. Effectively this divided the original repository into two independent repositories.
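One caveat worth knowing: git filter-branch only unreferences the removed folders’ history; the original repository does not actually shrink until the backup refs filter-branch leaves behind are deleted and garbage collection runs. A self-contained sketch of that cleanup (throwaway repository, illustrative names):

```shell
#!/bin/bash
# Remove folder A with filter-branch, then prune the leftover objects.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name  "demo"
mkdir A C
echo "big"  > A/file
echo "keep" > C/file
git add . && git commit -qm "initial commit"

export FILTER_BRANCH_SQUELCH_WARNING=1   # skip the deprecation pause in newer git
git filter-branch -f --tree-filter 'rm -rf A' HEAD

# filter-branch keeps a backup under refs/original -- drop it, expire the
# reflog and garbage-collect so the removed objects are really gone
git for-each-ref --format='delete %(refname)' refs/original | git update-ref --stdin
git reflog expire --expire=now --all
git gc --quiet --prune=now --aggressive
```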


# Defining Custom Language Templates for LaTeX Listings

The listings package enables LaTeX users to include source code snippets in LaTeX documents. The source code can be formatted using predefined language templates. To select a predefined language, use the following code (Python, for example):

```latex
\usepackage{listings}
\lstset{language=Python}
\lstinputlisting{path/to/myFile.py}
```

However, it is also possible to define your own formatting templates for custom languages, which can be useful for developers of Domain-Specific Languages (DSLs), for example. This is accomplished using the command \lstdefinelanguage. The following example defines a language named myLang with various keywords, single-line comments prefixed with //, multiline comments delimited by /* and */, and strings delimited with double quotes:

```latex
% Define Language
\lstdefinelanguage{myLang}
{
  % list of keywords
  morekeywords={
    import,
    if,
    while,
    for
  },
  sensitive=false, % keywords are not case-sensitive
  morecomment=[l]{//}, % l is for line comment
  morecomment=[s]{/*}{*/}, % s is for start and end delimiter
  morestring=[b]" % defines that strings are enclosed in double quotes
}
```

Subsequently you can use the \lstset command to activate your language template for all following listings. All parameters are documented inline. This results in nice listings with line numbers, borders with rounded corners and syntax highlighting for comments, keywords and strings. The colors are based on the Java highlighting style in Eclipse:

```latex
% Define Colors
\usepackage{color}
\definecolor{eclipseBlue}{RGB}{42,0,255}
\definecolor{eclipseGreen}{RGB}{63,127,95}
\definecolor{eclipsePurple}{RGB}{127,0,85}

% Set Language
\lstset{
  language={myLang},
  basicstyle=\small\ttfamily, % Global Code Style
  captionpos=b, % Position of the Caption (t for top, b for bottom)
  extendedchars=true, % Allows 256 instead of 128 ASCII characters
  tabsize=2, % number of spaces indented when discovering a tab
  columns=fixed, % make all characters equal width
  keepspaces=true, % does not ignore spaces to fit width, convert tabs to spaces
  showstringspaces=false, % lets spaces in strings appear as real spaces
  breaklines=true, % wrap lines if they don't fit
  frame=trbl, % draw a frame at the top, right, left and bottom of the listing
  frameround=tttt, % make the frame round at all four corners
  framesep=4pt, % quarter circle size of the round corners
  numbers=left, % show line numbers at the left
  numberstyle=\tiny\ttfamily, % style of the line numbers
  commentstyle=\color{eclipseGreen}, % style of comments
  keywordstyle=\color{eclipsePurple}, % style of keywords
  stringstyle=\color{eclipseBlue}, % style of strings
}
```

That’s it. Now you can include your listings in your custom language using

```latex
\lstinputlisting{path/to/myFile.ext}
```

Now have fun with your language and LaTeX 🙂

# How to Install Custom LaTeX Extensions with MacTeX

When writing articles for conferences, authors are often supplied with custom layout templates, classes, bibliography styles etc. for LaTeX. The most common file extensions of these files are:

• .cls – Class files to add new document classes
• .bst – BibTeX Style files to format your bibliography
• .sty – Style files containing LaTeX macros

Using these files, documents are created according to the style guidelines of the conference proceedings or journals the article will be published in. When using MacTeX on Mac OS X, the proper location for custom extensions is:

```
~/Library/texmf/tex/latex/
```

where ~ is your user home folder. I particularly needed this to install the LaTeX template for Springer Lecture Notes in Computer Science (LNCS), which is available here. After copying the whole llncs folder into ~/Library/texmf/tex/latex/, I could use the class in my LaTeX environment like this:

```latex
\documentclass{llncs}
```
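The copy step can be scripted. The following sketch is self-contained: it uses a throwaway $HOME and a dummy llncs.cls so it can run anywhere; for a real installation, drop the export line and copy the actual Springer files instead:

```shell
#!/bin/bash
# Install a class folder into the personal texmf tree that MacTeX searches.
set -e
export HOME=$(mktemp -d)   # throwaway home for demonstration; remove for real use

# Stand-in for the downloaded Springer template folder
mkdir -p "$HOME/Downloads/llncs"
echo '% dummy class file' > "$HOME/Downloads/llncs/llncs.cls"

# The actual installation: copy the whole folder into ~/Library/texmf/tex/latex/
mkdir -p "$HOME/Library/texmf/tex/latex/"
cp -R "$HOME/Downloads/llncs" "$HOME/Library/texmf/tex/latex/"
```

On a real system you can then verify the installation with kpsewhich llncs.cls, which prints the path under which TeX resolves the class.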

Happy publishing 🙂

# Creating Audio Unit Plugins with SuperCollider

In this post I’ll show you how to build Audio Unit Plugins programmed in SuperCollider using the AudioUnitBuilder. First of all, I want to refer to this excellent Tutorial, in which Abel Domingues explains in great detail how the whole Audio Unit architecture works and how the SuperCollider AU works. I would advise you to read this first, especially if you want detailed background knowledge. However, some things were not working for me out of the box and therefore I decided to share some problem solutions here.

My instructions were created with the following Setup:

• Mac OS X Mountain Lion (10.8.4)
• SuperCollider 3.5.1
• Xcode 4.6.3 (4H1503)

# Install Dependencies

First of all, the following software prerequisites need to be installed:

2. Download Xcode 4. This requires an Apple Developer Account. Unfortunately, the file is very large (1.7 GB). Install it and open it (make sure you don’t confuse it with your old Xcode, if you had one before)
3. In Xcode, click on the Main Application Menu and go to Open Developer Tool -> More Developer Tools. Alternatively, you can go directly to this page. Download the Command Line Tools for Xcode and install them.
4. Download the SuperCollider AU from here. It has to be extracted to ~/Library/Audio/Plug-Ins/Components.
5. Open your SuperCollider and execute Quarks.gui. Install the AudioUnitBuilder Quark (you need to have an internet connection and SVN installed for that).

# Adjust Paths in the AudioUnitBuilder Code

The current AudioUnitBuilder refers to some old paths which have changed. In particular, the Rez executable is now in another location, as are the server plugins. Both are now packaged into an .app container. Open the AudioUnitBuilder source code (mark the class name and press Cmd+Y). Inside the file you will see the classvar

```
classvar <>rez="/Developer/Tools/Rez"
```

which has to be changed to

```
classvar <>rez="/Applications/Xcode.app/Contents/Developer/Tools/Rez"
```

Next, look for the method copyPlugins (in the lower third of the class). The line

```
cmd = "grep -e _"++ugens.asSequenceableCollection.join("_ -e _")
      + "_ plugins/*.scx";
```

must be changed to

```
cmd = "grep -e _"++ugens.asSequenceableCollection.join("_ -e _")
      + "_ SuperCollider.app/Contents/Resources/plugins/*.scx";
```

Don’t forget to recompile your library (Cmd+K) before you proceed.

# Open An Example AU Specification

The AudioUnitBuilder comes with an example file called fedDelay.rtf. You find it in the Quark Installation Directory, which is ~/Application Support/SuperCollider/quarks/AudioUnitBuilder/examples. Open the file in SuperCollider.

To use the AudioUnitBuilder, you’ll need to specify the following:

• Name of your Audio Unit
• Component Type (\aumu, \aumx, \augn or \aufx, which stands for Instruments, Mixers, Generators or Effects, respectively)
• Component Subtype (A four letter code identifying your plugin)
• specs: A two-dimensional array specifying the parameters for your plugin
• func: the function implementing the actual signal processing

# Building the Audio Unit

At the end of the example file, you see the simple instructions needed to build your AU using the AudioUnitBuilder:

```
builder = AudioUnitBuilder.new(name, componentSubtype, func, specs, componentType);
builder.makeInstall;
```

Basically, the things we have specified are passed to the constructor, and then the method makeInstall does the magic. But what does it actually do? It is a good idea to look into the source code to understand it. I will try to summarize it briefly:

1. The contents of SuperColliderAU.component are copied recursively to <yourPluginName>.component
2. All plugins (Ugens) are deleted
3. XML Property list files are created to configure the server running inside the AU and your plugin
4. A Resource file is created based on a template (which is SuperColliderAU.rsrc in the quark directory). Some placeholders in that template are replaced using the sed Unix command. The resource is processed with the Rez tool delivered with Xcode.
5. The DSP function you specified is compiled as a SynthDef.
6. The compiled SynthDef is copied to the synthdefs folder inside the AU.
7. Plugins (UGens compiled as .scx files) needed for signal processing are copied from your SuperCollider installation to the AU.

To let the AudioUnitBuilder do all those things for you, simply select the whole file and execute it. If you are lucky, you will get output like

```
Created ~/Library/Audio/Plug-Ins/Components/fedDelay.component
an AudioUnitBuilder
```

But if something goes wrong, here are some of my solutions and debugging tips:

• I got “Error running Rez” on the console, which I traced down to an include problem in the resource file. If you open the SuperColliderAU.rsrc file in a text editor, you’ll see the include #include <CoreServices/CoreServices.r>, which could not be resolved. The solution is to install the Xcode Command Line Tools, as described above.
• You can debug the commands executed in the Builder by changing .systemCmd calls to .postln.systemCmd.
• You can inspect the files created during the building process by changing mv commands to cp commands. This way all files can be reviewed in the quarks/AudioUnitBuilder directory.
• Check the contents of the .component target file by browsing it in the Finder (right click -> Show Package Contents).
• My working Audio Unit has a size of about 2 MB. If yours is significantly smaller, you might have forgotten to put the SuperColliderAU.component as a skeleton AU into your Plug-Ins/Components folder.

The working AU should contain

• Contents/MacOS/SuperColliderAU
• Resources/plugins/(some .scx files)
• Resources/pluginSpec.plist
• Resources/serverConfig.plist
• Resources/SuperColliderAU.rsrc
• Resources/synthdefs/fedDelay.scsyndef

Alright? Then let’s validate our AudioUnit.

# Audio Unit Validation

Apple developed a tool called auval to validate Audio Units. It is very useful for debugging, as you can see error messages from the SuperCollider server during the validation process.

Open a terminal window and execute

```shell
auval -v aufx FEDL SCAU
```

This will start the validation process for the plugin identified by a triple. The three parameters are component type, component subtype and manufacturer. Read the output and look for error messages like “SynthDef not found”. If you see this, you most likely used the wrong SuperCollider version. To be compatible with the server shipped in the AU, the AudioUnitBuilder must be executed with SuperCollider 3.5.1.

# Try Out Your Audio Unit

If validation passed successfully, you can use your AU in any AU-compatible audio environment. For example, you can download the Audio Tools for Xcode from the Apple Developer page. They contain an app called AU Lab, which you can use to test your plugin: insert the AU as an effect and control it via its generated UI.

So this is it. I hope you got your AU running and have lots of fun coding your own AUs with SuperCollider 🙂

# Standalone Parsing with Xtext

Xtext is an awesome framework for creating your own domain-specific languages (DSLs). Given a grammar, Xtext will create your data model, lexer, parser and even a powerful editor integrated into the Eclipse IDE, including syntax highlighting and auto-completion 🙂

In this blog post I want to summarize some of my experiences with the behaviour of Xtext using different parsing approaches. These are useful if you want to parse input with Xtext in standalone applications or in the Eclipse context in order to get a model representation of your DSL code.

# Approach 1: Injecting an IParser instance

The first approach uses the IParser interface. In a standalone application (that means if your code is not running in an Eclipse/Equinox environment), a parser instance can be retrieved using the injector returned by an instance of <MyDSL>StandaloneSetup:

```java
public class XtextParser {

  @Inject
  private IParser parser;

  public XtextParser() {
    setupParser();
  }

  private void setupParser() {
    Injector injector = new MyDSLStandaloneSetup().createInjectorAndDoEMFRegistration();
    injector.injectMembers(this);
  }

  /**
   * Parses data provided by an input reader using Xtext and returns the root node of the resulting object tree.
   * @param reader Input reader
   * @return root object node
   * @throws IOException when errors occur during the parsing process
   */
  public EObject parse(Reader reader) throws IOException {
    IParseResult result = parser.parse(reader);
    if (result.hasSyntaxErrors()) {
      throw new ParseException("Provided input contains syntax errors.");
    }
    return result.getRootASTElement();
  }
}
```

Using this approach, your parse result can be retrieved with only a few lines of code. However, it only works in standalone applications. If you execute this code in the Eclipse context, the following error is logged:

```
java.lang.IllegalStateException: Passed org.eclipse.xtext.builder.clustering.CurrentDescriptions not of type org.eclipse.xtext.resource.impl.ResourceSetBasedResourceDescriptions
at org.eclipse.xtext.resource.containers.ResourceSetBasedAllContainersStateProvider.get(ResourceSetBasedAllContainersStateProvider.java:35)
```

To resolve this, the injector has to be created differently, using Guice.createInjector and the Module of your language:

```java
Injector injector = Guice.createInjector(new MyDSLRuntimeModule());
```

Now the parser works fine, even in Eclipse. But if you use references to other resources or import mechanisms, you will find that these references cannot be resolved. That’s why you need a resource to parse Xtext input properly.

# Approach 2: Using an XtextResourceSet

To parse input using resources, you inject an XtextResourceSet and create a resource inside the ResourceSet. There are two ways to specify the input:

1. an InputStream
2. a URI specifying the location of a resource

In my implementation there are two methods, one for each of these alternatives:

```java
public class XtextParser {

  @Inject
  private XtextResourceSet resourceSet;

  public XtextParser() {
    setupParser();
  }

  private void setupParser() {
    new org.eclipse.emf.mwe.utils.StandaloneSetup().setPlatformUri("../");
    Injector injector = Guice.createInjector(new MyDSLRuntimeModule());
    injector.injectMembers(this);
    resourceSet.addLoadOption(XtextResource.OPTION_RESOLVE_ALL, Boolean.TRUE);
  }

  /**
   * Parses an input stream and returns the resulting object tree root element.
   * @param in Input Stream
   * @return Root model object
   * @throws IOException When an I/O related parser error occurs
   */
  public EObject parse(InputStream in) throws IOException {
    Resource resource = resourceSet.createResource(URI.createURI("dummy:/inmemory.ext"));
    resource.load(in, resourceSet.getLoadOptions());
    return resource.getContents().get(0);
  }

  /**
   * Parses a resource specified by a URI and returns the resulting object tree root element.
   * @param uri URI of resource to be parsed
   * @return Root model object
   */
  public EObject parse(URI uri) {
    Resource resource = resourceSet.getResource(uri, true);
    return resource.getContents().get(0);
  }
}
```

In both cases, the resource set is injected using Guice. Also, the Eclipse platform path is initialized using new org.eclipse.emf.mwe.utils.StandaloneSetup().setPlatformUri("../"), and the load option RESOLVE_ALL is added to the resource set.

If an InputStream is provided, the underlying resource is a dummy resource created in the ResourceSet. Make sure that the file extension matches that of your DSL.

In case of a given resource URI, your resource can be parsed directly using resourceSet.getResource(). Using this approach, all references (even to imported / other referenced resources) will be resolved.

# The Dependency Injection Issue

Still, there is another problem, because we use dependency injection here in a way which is not exactly elegant. The point is that a class should not need to care about how its members are injected. In a well-designed DI-based application, there is only one injection call and all members are instantiated recursively from “outside the class”. To learn more about this issue, please read this excellent blog post by Jan Köhnlein.

I hope this post was useful to you. Please feel free to share your thoughts / ideas for improvements.

# Installing LaTeX, Eclipse and TeXlipse on Mac OS X

LaTeX is an essential typesetting language for scientific papers and presentations. In this blog post I will explain how to install a complete LaTeX environment on your Mac based on MacTeX, Eclipse and TeXlipse. These components form a very powerful IDE for writing documents with LaTeX.

# Installing a LaTeX distribution

The recommended LaTeX distribution to install on OS X is MacTeX. Just follow the link, download the (very large!) installation bundle and execute it.

# Installing Eclipse

Eclipse is a very powerful software platform that allows you to install plugins in order to assemble a flexible and powerful IDE. Download Eclipse (the version for Java developers) here for your processor architecture. Unpack the file and execute Eclipse.app inside the unpacked folder.

If you are running Mountain Lion and get a system message that “Eclipse can not be executed because the developer is not verified”, unlock the App by starting a Terminal window and typing the following command:

```shell
xattr -d com.apple.quarantine /Applications/eclipse/Eclipse.app
```

Of course, you have to change the path to your custom location.

When Eclipse is started for the first time, it will ask you for a “workspace location”. This is a folder on your hard disk where your projects and files that are edited in Eclipse are stored. Choose an appropriate location of your choice and select “use this as default and do not ask again“.

On the first start, a welcome screen will appear, which you can simply close.

# Installing TeXlipse (Option A – via Eclipse Marketplace)

To install the LaTeX plugin for Eclipse named TeXlipse, you simply go to Help -> Eclipse Marketplace.

Wait until the list is loaded and type texlipse into the search field. TeXlipse will be listed as first option in the list. Click the Install button.

In the following dialog, click Next, read and accept the license and then click Finish. If you get a security warning saying that “you are trying to install software with unsigned content”, click OK. After the installation you’ll be asked to restart Eclipse, which you confirm with Yes.

# Installing TeXlipse (Option B – Update Site)

If for some reason the menu item Eclipse Marketplace is missing, you can also install TeXlipse via Help -> Install new Software. In that case, you need to provide an Update Site, which is http://texlipse.sourceforge.net/.

# Configuring TeXlipse

After the successful installation, TeXlipse needs some configuration. Therefore, go to Eclipse -> Preferences (Shortcut: Cmd + ,).

In the preferences, go to TeXlipse -> Builder Settings. Here the paths to the MacTex binaries have to be set.

To set all paths simultaneously, click on the Browse… button and choose your LaTeX executable folder. On my system (64 bit), the path is

```
/usr/local/texlive/2011/bin/x86_64-darwin
```

On 32 bit systems, the path should be similar to

```
/usr/local/texlive/2011/bin/universal-darwin
```

After the path is selected, hit the Apply button. Now almost all executable paths should be configured.

Next, you have to configure a PDF viewer. Go to TeXlipse -> Viewer settings -> New… and add the following values: Name: open, Command: /usr/bin/open, Arguments: %file.

Move your configuration up in the list.

# Creating a LaTeX project

To create your first LaTeX project, right click inside the Package Explorer (on the left) and choose New -> Project… and in the dialog select TeXlipse -> LaTeX Project. On the next screen, enter a project name and the language code and choose the output format pdf. The build commands should automatically switch to pdflatex. The article template is a good starting point.

When you click Finish, Eclipse will ask you if the LaTeX perspective should be opened, which you confirm with Yes.

# Building the PDF

To build the PDF, simply make changes to a .tex file and save it. Eclipse will automatically trigger a PDF build on every document save action. You can view the output in the console (located at the bottom of the IDE by default). To view the PDF, use the shortcut Cmd + 4. If everything went fine, you should see a simple PDF file.

Now you are ready to write great papers with LaTeX. Enjoy 🙂

# Creating your own SuperCollider-based Standalone Application

Let’s build our own standalone application based on the ingenious sound synthesis engine and programming language SuperCollider! These instructions were created with SuperCollider 3.5.5 and are suitable to create a standalone app for OS X.

Here is what we need to do:

2. Extract the source files to a directory and rename SuperCollider-Source to the name of your app (e.g. MyApp)
3. In platform/mac, create a copy of the folder Standalone Resources and rename it to MyApp Resources (replace with your app name).
4. Inside your MyApp Resources, delete the file SCClassLibrary/SCSA_Demo.sc.
5. Copy any custom classes that your app requires inside the SCClassLibrary folder.
6. Edit the modifyStartup.sc file to define what your app does when it starts. Here you can remove the post window, for example. Make sure to comment or delete the lines where SCSA_Demo is initialized.
7. (optional) Change the icon (SCcube.icns). Download the application img2icns, which converts any images into the OS X .icns format. Overwrite the old SCcube.icns file with your icon.
8. (optional) Edit the file English.lproj/MainMenu.nib. Here you can adjust the menu located at the top of the screen of your application. A better alternative for adding new menu items to your SuperCollider application (on Mac OS X) is a custom class that adds the following startup instructions:
```
MyClass {

  *initClass {
    if(Platform.ideName == "scapp") {
      StartUp.add {
        this.createMenu;
      }
    }
  }

  *createMenu {
    var menuGroup;
    menuGroup = SCMenuGroup(nil, "My Menu", 13);
    SCMenuItem(menuGroup, "My Item")
      .action = { "Item selected!".postln };
  }
}
```

Ok, now it’s time to compile and package your standalone application. Open a Terminal window and execute the following commands:

```shell
cd /path/to/MyApp
mkdir build
cd build
```

This will create a directory build in your application’s source directory. The reason we do this is that the build artifacts are cleanly separated from the source files. Now we configure the build process using cmake:

```shell
cmake -DCMAKE_OSX_ARCHITECTURES='i386;x86_64' -D standalone="MyApp" ..
```

These commands make sure your app is built with support for 32-bit and 64-bit architectures (-DCMAKE_OSX_ARCHITECTURES='i386;x86_64') and that the resources for your custom app are used (-D standalone="MyApp"). The two dots at the end indicate that the source files are located in the parent folder, not in the build folder we just created.

If you don’t need Qt in your app, you can omit it, which will result in an application bundle consuming much less space on the hard disk:

```shell
-DSC_QT=OFF
```

If everything goes fine, cmake will check for some components and at the end print a message similar to:

```
-- Configuring done
-- Generating done
-- Build files have been written to: /Users/name/Development/SuperCollider/MyApp/build
```

Now the build configuration is saved, so we won’t have to run the cmake command again for future builds. Now we’re ready to compile our app:

```shell
make install
```

After the build process, which takes some time when executed for the first time, your bundled app is waiting for you in build/install/MyApp/MyApp.app. That’s it 🙂

For future builds, you might want to program a little build script which updates the resource files and executes the build automatically. A very simple example could be:

```shell
mkdir ../platform/mac/MyApp\ Resources/SCClassLibrary/SomeFolder/
cp ~/Library/Application\ Support/SuperCollider/Extensions/SomeFolder/* ../platform/mac/MyApp\ Resources/SCClassLibrary/SomeFolder/
make install
```

This script creates a target folder and copies a bunch of classes from your Extensions directory into it. Save it as an .sh file in your build directory and simply execute it whenever you want a new version of your app. For security reasons, the file might not be executable from the start. In that case, you have to modify its permissions:

chmod 755 build.sh

Now your script should be executable. And now: have fun with your app!
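The copy step in such a build script can be made a bit more robust. The sketch below wraps it in a small helper that creates the target folder if it is missing and aborts on the first error; the `sync_classes` name and all paths are my own illustration, not part of SuperCollider’s tooling.

```shell
#!/bin/bash
# Abort on the first failing command, so a broken copy step
# never triggers a build with stale resource files.
set -e

# Hypothetical helper: copy all class files from a source folder
# into the standalone's resource tree, creating the target first.
sync_classes() {
  local src="$1" dest="$2"
  mkdir -p "$dest"        # create the target folder if it is missing
  cp "$src"/* "$dest"/    # copy the class files across
}

# Illustrative usage -- adjust the paths to your project layout:
# sync_classes ~/Library/Application\ Support/SuperCollider/Extensions/SomeFolder \
#              "../platform/mac/MyApp Resources/SCClassLibrary/SomeFolder"
# make install
```

Thanks to `mkdir -p`, the script also works on a fresh checkout where the resource subfolder does not exist yet.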

# LaTeX Citations and Bibliography with Author-Year Pattern

For many days I messed around with LaTeX Bibliographies and Citations and finally got it working. What I wanted to implement were citations with the following pattern:

Java is a turing-complete programming language (Author, Year, p. 42).

Furthermore, I needed the fields URL, DOI and the last date of visit for online sources to be listed in my bibliography, as well as Journal details for articles. After a lot of hours I got it working, and I hope I can save you guys some time with this blog post.

In my first attempts, I used the natbib package in combination with various bibliography styles. One that didn’t produce errors and came quite close to what I wanted was the achicago style. However, the details in the bibliography were missing. There is also a tool with which you can create your own bibliography styles, but that took a lot of time to configure (by answering a lot of questions and typing in choices) and resulted in weird citations, where the year and some random digits preceded the author… Next, I tried the amsrefs package, but also without success.

Finally, I found the right package: biblatex 🙂 It has a lot of options to configure your bibliography and citation behaviour. I will describe the options I used. More details can be found in the reference.

• citestyle=apa: Defines the style of citations in the document. APA is an author-year based style with a comma between author and year. Unfortunately, when more than n authors are given, the apa style does not support the et al. abbreviation. At least the package options minnames and maxnames to configure this did not work for me. If you want et al. abbreviations, you must specify authoryear here (but this will also remove the comma between author and year).
• bibstyle=authoryear: Basic author-year based configuration for the bibliography listing at the end of your document.
• url: indicates that URLs are listed in the bibliography
• doi: causes biblatex to print DOIs of articles
• dashed=false: When two references of the same author(s) are listed, in the second one the author(s) is replaced with a dash by default. This flag deactivates that.

For biblatex, your .bib file has to be referenced with a different command than with natbib or amsrefs: \addbibresource, which must appear in the preamble. Note that the .bib extension has to be included. The whole header code looks like this:

\usepackage[citestyle=authoryear,bibstyle=authoryear,doi,url,dashed=false]{biblatex}
\addbibresource{thesis.bib}

Now, the right citation command has to be used in your document. For the kind of citations I like, it is not \cite but \parencite (cite with parentheses). To add specific pages to the citation, pass them as an optional argument before the curly braces:

\parencite[p.~42]{key}

Note that I used a non-breaking space (~), which keeps the “p.” together with the page number. Otherwise, ugly line breaks can occur in your document.
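Besides \parencite, biblatex also provides a \textcite command, which is handy when the author’s name should be part of the running sentence rather than sit inside the parentheses:

```latex
% (Author, Year, p. 42) -- the whole citation in parentheses:
\parencite[p.~42]{key}
% Author (Year, p. 42) -- the author as part of the sentence:
As \textcite[p.~42]{key} points out, ...
```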

To print your bibliography, simply use the command

\printbibliography

Ok, that’s how it works if no problems occur. If it’s already working for you, be happy 😉 Here are some of the problems I had to deal with:

1. Lots of error messages complaining about citations that cannot be resolved (solution: clear the temp folder; maybe there is some old stuff in there confusing the compilers)
2. Instead of Author and Year, the title of the referenced sources is displayed (solution: this happens when bibtex is used to create the bibliography files, but biber is configured as backend for biblatex (which is the default). I solved this by executing biber thesis manually at the command line)
3. Errors because of non-UTF-8 characters in the input (solution: check your .bib file for non-standard characters such as German umlauts or French accents. They need to be escaped, for example: ä -> {\"a}, é -> {\'e})
4. Last visit date for online sources was not displayed (solution: I used the field lastchecked in the .bib file, but the right one is urldate, which is not displayed on the JabRef GUI. The date must be entered in the format yyyy-mm-dd!)
5. Long book titles in the bibliography were underlined and overflowed the page, no line break was inserted (solution: Solved by adding \usepackage[normalem]{ulem} in the preamble)
6. The abbreviation et al. was not printed in the document for sources with more than n authors (specified with the package option maxnames or maxcitenames). When I changed the package option citestyle from apa to authoryear it finally worked.
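To illustrate points 3 and 4, here is what a .bib entry with escaped accented characters and a correctly formatted urldate could look like (the entry data is made up):

```latex
@online{mueller2010,
  author  = {M{\"u}ller, J{\"o}rg},
  title   = {Einf{\"u}hrung in die Programmierung},
  year    = {2010},
  url     = {http://example.org/guide},
  urldate = {2010-05-01}  % yyyy-mm-dd, printed as the last date of visit
}
```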

Puh, what a ride…I hope you guys will benefit from this!

# Acronyms in LaTeX Glossaries

Implementing a list of acronyms in a LaTeX document can be very complicated. As I had some problems with this task, I will explain how I managed to do it.

First we need to reference the glossaries package, including the following options:

• xindy indicates that we want to use xindy as indexing processor rather than makeindex
• acronym indicates that a separate acronym glossary should be created
• nomain suppresses the creation of the main glossary, since we only need acronyms
• toc adds the glossary to the table of contents

\usepackage[xindy,toc,acronym,nomain]{glossaries}

Next, we need to get LaTeX to create various files needed by xindy. This is achieved through the simple instruction

\makeglossaries

Normally, xindy should be invoked automatically if you have Perl installed, but this did not work for me. Instead, I had to execute the following command manually whenever new acronyms were added (in the directory where your .tex, .xdy and .acn files are located, assuming your document is named thesis.tex):

makeglossaries thesis

When using TeXlipse, the LaTeX plugin for Eclipse, the build process is mostly executed in the tmp folder, and therefore you need to execute makeglossaries in there. Before and after the process, input and output files have to be moved to their proper location. I solved this by writing the following script:

cd tmp
cp ../thesis.xdy .
cp ../thesis.acn .
makeglossaries thesis
cp thesis.acr ..
cp thesis.alg ..

Finally, another thing I stumbled across is that acronyms are only listed if they are referenced somewhere in the document. That means defining an acronym, e.g.

\newacronym{API}{API}{Application Programming Interface}

is not enough. It has to be referenced with \gls{API} or similar instructions in order to be taken into account for glossary creation.
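Putting the pieces together, a minimal document skeleton could look like this (the API acronym is just an example):

```latex
\documentclass{article}
\usepackage[xindy,toc,acronym,nomain]{glossaries}
\makeglossaries

\newacronym{API}{API}{Application Programming Interface}

\begin{document}
% The first use prints "Application Programming Interface (API)",
% later uses print just "API".
The \gls{API} is documented online.

\printglossary[type=\acronymtype]
\end{document}
```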

Ah and if you don’t like the dots after your terms in the acronym list, you can remove them by issuing

\renewcommand*{\glspostdescription}{}

That’s it. I hope that this is useful for someone else 🙂