Monday, May 06, 2013

Quick Tip: Naming Eclipse Workspaces

I often have multiple Eclipse workspaces open.  Windows 7 stacks them nicely for me, but this presents a problem: it is difficult to distinguish between the instances when switching between applications.  The path of the currently open file in each instance is visible, but that is not always enough information.

I've always used the -showLocation command line option, which appends the location of the workspace to the end of the window title.  Unfortunately, most of the time the title is too long for the workspace location to show up when switching between applications.

Today I decided to go hunting through the Eclipse preferences and I found exactly what I was looking for:

Naming Eclipse Workspaces

After setting names for all my workspaces, their titles are now useful:

Wednesday, February 02, 2011

Embedding the Orion Editor

[Edit Jan 3, 2012: Orion has evolved significantly since this post was first written a year ago. Please see this post written by Felipe for an update.]

Yesterday I wrote a blog post which contains a few snippets of Ant code. If you have JavaScript enabled, those snippets should be nicely formatted with line numbers and some basic syntax highlighting.

Those code snippets are actually being shown using the Orion Editor embedded into my blog post. Granted, using the full editor here is a little overkill since we aren't doing very much with it, but this was an interesting exercise.

Here is the JavaScript I wrote to do this:

/*
 * Copyright (c) 2011 IBM Corporation and others.
 * All rights reserved. This program and the accompanying materials
 * are made available under the terms of the Eclipse Public License v1.0
 * which accompanies this distribution, and is available at
 */
function findOrionElements(tagName) {
   var elements = [];
   var tags = document.getElementsByTagName(tagName);
   for (var i = 0; i < tags.length; i++) {
      if (tags[i].getAttribute('name') === 'orion') {
         elements.push(tags[i]);
      }
   }
   return elements;
}

function createEditors() {
   //find all pre elements named 'orion'
   var elements = findOrionElements('pre');

   for (var i = 0; i < elements.length; i++) {
      var element = elements[i];

      //extract the text from inside the pre
      var text = "";
      for (var j = 0; j < element.childNodes.length; j++) {
         var nodeName = element.childNodes[j].nodeName;
         if (nodeName === "#text")
            text += element.childNodes[j].nodeValue;
         else if (nodeName === "BR")
            text += '\n';
      }

      /* Create the editor:
          - parent is the containing div element
          - readonly by default, but can specify class="writable"
          - use the given stylesheet  */
      var parentNode = element.parentNode;
      var elementClass = element.getAttribute('class');
      var editor = new eclipse.Editor({
         parent: parentNode,
         readonly: !(elementClass && elementClass.indexOf('writable') > -1),
         stylesheet: ""
      });

      //replace the original pre with the editor showing the extracted text
      //(reconstructed lines: these calls follow the 2011-era Orion editor API)
      parentNode.removeChild(element);
      editor.setText(text);

      // use javascript styler for now, there is no html/xml syntax highlighting yet
      var styler = new eclipse.TextStyler(editor, "js");
      // add a ruler with line numbers to the left side
      var lines = new eclipse.LineNumberRuler("left", {styleClass: "ruler_lines"}, {styleClass: "ruler_lines_odd"}, {styleClass: "ruler_lines_even"});
      editor.addRuler(lines);

      //fix the height of the containing div = (editor.getLineHeight() * (editor.getModel().getLineCount() + 1)) + 2 + 'px';
   }
}

This code is looking for all <pre name='orion'> elements and creating editors to replace them. These elements should generally be contained inside a <div> element. HTML for a replaceable section would look like this:

<div><pre name="orion" class="writable">
   var hello = "Hello World!";
</pre></div>

The example above was made writable by adding class="writable" to the <pre> element. Inside the <pre> element, all '<' characters should be escaped as '&lt;' and all '>' characters as '&gt;'.
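For instance, a made-up snippet containing angle brackets would be authored like this:

```html
<div><pre name="orion">
   if (i &lt; list.length) { count++; }
</pre></div>
```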

As a final step, to get this working, I combined my JavaScript together with the Orion editor source files and minified it all using the Google Closure Compiler. I hosted the resulting .js file and added the following lines to the bottom of my blog post:

<script type="text/javascript" src=""></script>
<script type="text/javascript">window.onload = createEditors;</script>

If someone visits the post without JavaScript enabled, then they don't get the nice formatting on the code snippets, but at least the snippets are still inside <pre> elements, which gives them some basic formatting.

Tuesday, February 01, 2011

Overriding PDE/Build with the Ant Import task

PDE/Build has a number of places where you can run custom ant scripts during the build.

The general pattern is that you would copy one of the customization templates into your builder directory and then modify the file as required. For simple builds it is often unnecessary to copy these files at all, and PDE/Build will just use its original copy.

If you only have a small change to make to the customization scripts, then it can be cleaner to not copy the template file and instead use Ant's Import task.

Using Import to make minor changes

The Ant Import task allows the importing file to override targets from the imported file. As an example, the Eclipse SDK includes the build id in its about box. The about box contents come from "about.mappings" files inside plug-ins.

What we want to do is, after getting all our source from CVS, do a quick replace in all the about.mappings files to update them with the build id.

Instead of copying the customTargets.xml file into our builder, we create our own that contains just the following:
<project name="customTargets overrides" >
   <import file="${eclipse.pdebuild.templates}/headless-build/customTargets.xml"/>

   <target name="postFetch">
      <replace dir="${buildDirectory}/plugins" value="${buildLabel}" token="@build@">
         <include name="**/about.mappings" />
      </replace>
   </target>
</project>

${eclipse.pdebuild.templates} is a property that is automatically set by PDE/Build; it points to the folder containing the template files. This small snippet is much cleaner than copying the entire customTargets.xml just to add a few lines.

This pattern can make for smaller and neater build scripts, but it turns out that this can also be a very powerful tool for modifying PDE/Build's behaviour.

Here is the Magic

During M5 milestone week, the Orion builds began failing about 90% of the time. The failure was "Unable to delete directory" in the middle of packaging ant scripts that are generated by PDE/Build. This seems to have been caused by an overloaded/lagged NFS server (or disk array).

When building a product for multiple platforms with p2, PDE/Build installs the product into a temporary directory, zips it up, deletes that directory, and then repeats for the next platform using the same temporary directory. If there is a problem deleting the directory then we are in trouble because even if we could ignore the problem, the next platform will be contaminated with contents from the previous one.

In order to work around the problem, we need to modify a target named cleanup.assembly which simply performs an Ant <delete/> on the temporary directory. The problem is that this target is in the middle of the PDE/Build generated, configuration specific packaging scripts. A deeper understanding of how these scripts work is required.

Package Script Overview

When running a product build, the generated package scripts are organized per platform that we are building for. As an example, if we are building for windows, mac and linux, then we would have the following scripts:
The "" portion of the file name comes from this being a product build. In a feature build, this would be the name of the top level feature being built. The first (*.all.xml) script is the main entry point for the packaging process. Each of the other scripts does the packaging for one platform. Every one of those platform specific scripts contains a cleanup.assembly target that needs to be modified.

Script Delegation

The top level packaging script does not call all the others directly; rather, it uses a kind of delegation through the allElements.xml file. This file can be copied into your builder and modified to change the archive name or perform pre or post processing on the archive.

For each platform, the top level packaging script will call allElements.xml/defaultAssemble (or a platform specific assemble.*.[config] target if one is defined) passing it the name of the platform specific packaging script to invoke.

This is where we can insert our change in order to override the platform specific packaging scripts.

The Modified allElements.xml file

We copy the allElements.xml file into our builder and change the "defaultAssemble" target to look like this:
<target name="defaultAssemble">
   <ant antfile="${builder}/packageOverride.xml" dir="${buildDirectory}">
      <property name="assembleScriptName" value="${assembleScriptName}" />
      <property name="archiveName" value="${archiveNamePrefix}-${config}.zip"/>
   </ant>
</target>

The name of the platform specific packaging script is specified by the ${assembleScriptName} property. Instead of calling this directly, we call a script of our own, "packageOverride.xml", and pass it the script name. Product builds normally use their own allElements.xml provided by PDE/Build, which also sets the archive name based on the configuration being built. Since we will be using our own allElements.xml file, we also set the archive name here.

Product builds (using productBuild.xml) are hardcoded to use their own copy of the allElements.xml file. In order to change this we must set a property, allElementsFile, which points to our copy. This property must be set before invoking productBuild.xml, which means setting it on the command line or in a wrapping Ant script. This is not necessary when doing a feature build.

The new packageOverride.xml script

The allElements.xml delegation script has now been modified to invoke our own packageOverride.xml script. Our script looks something like this:
<project name="package.override" default="main" >
   <import file="${buildDirectory}/${assembleScriptName}" />

   <target name="cleanup.assembly">
      <condition property="doAssemblyCleanup" >
         <or>
            <not><isset property="runPackager" /></not>
            <contains string="${assembleScriptName}" substring="package." />
         </or>
      </condition>
      <antcall target="perform.cleanup.assembly" />
   </target>

   <target name="perform.cleanup.assembly" if="doAssemblyCleanup" >
      <exec executable="mv" dir="${buildDirectory}" >
         <arg value="${assemblyTempDir}" />
         <arg value="${buildDirectory}/tmp.${os}.${ws}.${arch}" />
      </exec>
      <exec executable="rm" dir="${buildDirectory}" >
         <arg line="-rf ${buildDirectory}/tmp.${os}.${ws}.${arch}" />
      </exec>
   </target>
</project>

Here, we import the package script that was passed to us from allElements.xml; each time the packaging script calls us, we will be importing a different script. We inherit all the generated targets and override the cleanup.assembly target. Our modified version moves the temporary folder to a different location before trying to delete it. If the delete fails, that is ok because the folder is no longer in the way of the next platform. I used the native 'mv' and 'rm' hoping that they would behave better with a slow NFS server.

The packageOverride.xml script must specify default="main" as that setting is not inherited.

It is important to note that this override also affects the generated assemble.* scripts, which are very similar to the package scripts. The assemble scripts also have a cleanup.assembly target which is getting overridden here. However, that target is only supposed to run during assembly if we are not going to be doing packaging. This is why we need a condition here to make sure the temporary folder is only deleted when it should be. The condition I used here would be wrong for feature builds where the top level feature name contains "package.", because the generated scripts in a feature build contain the top level feature id.

Final Notes

The change I have outlined here was made to fix a specific problem with the Orion build. Care must be taken when applying these techniques to other problems and builders.

The exact details here have been modified from the changes I actually made, so I have not actually tested the scripts as they are written above. Specifically, the condition on the overridden target has been added.

This specific problem can also be fixed in PDE/Build itself; this is tracked by bug 336020.

Thursday, January 27, 2011

Building from Git

Git was introduced at Eclipse about a year ago. Projects are slowly migrating over to use Git as the SCM system instead of CVS or SVN. When IBM made its initial contribution for the new Orion project, we migrated from internal CVS servers to Git. This post will give an overview of the changes I had to make to our releng setup to start building from Git.

There have been a few PDE/Build bug fixes in 3.7 to support building from Git. I recommend using a recent 3.7 build as your base builder. 3.7M5 is due out this week.

General Setup

The Orion releng build is a relatively standard p2 enabled PDE/Build product build.

There are a few things that need to be done to get the build working with git:
  1. Bootstrapping the builder
  2. Getting map files
  3. Fetching source from Git
The e4 builds consume source code from git repositories, but the releng project and map files are still in CVS. Only the third step here was required for e4. The entire Orion project, including the releng project and map files, is in Git, so we need all three.

Bootstrapping the Build

The Orion releng builds run via a cron job on the build machine. We need a small shell script that can get the Orion releng project from Git and start everything off. We do this using the git archive command:
git archive --format=tar --remote=/gitroot/e4/org.eclipse.orion.server.git \
    master releng/org.eclipse.orion.releng | tar -xf -
The build machine has local access to the git repository; if we were running from somewhere else, this would change to something like --remote=ssh://

This will get the releng project into the current working directory, at which point we can invoke ant on it.

Getting map files from Git

PDE/Build uses map files to fetch our code from source repositories. The first step to this is getting the map files themselves.

PDE/Build comes with default support to fetch map files from CVS which is controlled by a few properties (see Fetch phase Control). This obviously doesn't apply here. However, this step is fully customizable using the customTargets.xml file.

All we need to do is copy the customTargets.xml file into our builder and modify the getMapFiles target. We can then use the git archive command to get our map files. It would look something like this:
<target name="getMapFiles" unless="skipMaps">
   <mkdir dir="${buildDirectory}/maps" />
   <exec executable="git" dir="${buildDirectory}/maps" output="${buildDirectory}/maps/maps.tar">
      <arg line="archive --format=tar" />
      <arg line="--remote=/gitroot/e4/org.eclipse.orion.server.git" />
      <arg line="master releng/org.eclipse.orion.releng/maps" />
   </exec>
   <untar src="${buildDirectory}/maps/maps.tar" dest="${buildDirectory}/maps" />
</target>
Because the "| tar -xf -" we used earlier is a redirection performed by the shell, it is not available when we invoke git from Ant. Instead, we specify a file to hold the output of the archive command; this ends up being a tar file which we can simply untar.

Fetching source from Git

PDE/Build has an extension point where fetch script generators for different repositories can be plugged in. The EGit project provides an implementation of this extension point.
The org.eclipse.egit.fetchfactory bundle is available from the EGit p2 repository. Install that bundle into the Eclipse install that is used to run your build.

Git Map Files

Once we have the EGit fetch factory, all we need to do is update our map files with entries for GIT. The interesting attributes of a Git map file entry are:
  • tag is the tag to use when fetching the bundle from git
  • repo is the path to the git repository. In order to omit the user from the repository path, the build needs to run as a user who has ssh access to the git repo at
  • path is the path within the git repository to the project we are interested in.
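Put together, a Git map file entry has roughly the following shape. The bundle name, tag, repository, and path below are hypothetical placeholders, not Orion's actual values:

```
! hypothetical example entry
[email protected]=GIT,tag=v20110126,repo=ssh://,path=bundles/org.eclipse.example.bundle
```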

Final Details

  • The EGit fetch factory works by cloning the git repository to a local directory, checking out the tag, and then copying the project over to the build directory. Builders can set the fetchCacheLocation property to specify a local directory where the git clones can be kept. This location may be reused from build to build to avoid having to re-download the entire repository each build.
  • "Nightly" builds are set up to build the latest code from HEAD. For CVS, this is usually accomplished by setting "fetchTag=HEAD" to override the map file entries. For Git you would use "fetchTag=origin/master". If you are using both CVS and GIT you can set both with "fetchTag=CVS=HEAD,GIT=origin/master".
  • The Eclipse Platform team uses the releng tool plugin to manage their map files in CVS; there is not yet an equivalent tool for Git. See the Orion Releng and E4/Git wiki pages for instructions on how to manage map files.

Thursday, January 20, 2011

Releng tricks from e4 and Orion

In the last couple of months I've found myself in charge of two releng builds: e4 and Orion. The e4 build is actually two pieces: building the Eclipse 4.1 SDK, and building additional e4 bundles which are not part of the SDK by default.

Being the PDE/Build project lead gives me a unique perspective on this entire process, so I thought I would share some tips and tricks for specific problems I encountered.

The first covers how we do signing when building the Eclipse 4.1 SDK.

Signing the Eclipse 4.1 SDK

We produce signed bundles in our builds. The specifics of how to do this have already been worked out by Kim. Essentially, we provide a zip file that gets sent off to be signed.

For the 4.1 SDK there is a slight twist to the problem. The 4.1 SDK is mostly composed of binary bundles we reconsume from 3.7 together with some new e4 bundles that we compile ourselves. We really only want to sign the bundles that we compiled ourselves and avoid resigning the binary bundles.

The trick for creating an archive containing only the bundles we compiled works best for p2 enabled builds (using p2.gathering=true).

Custom Assembly Targets

PDE/Build supports customization of your build using provided template files. In particular we are interested in the customAssembly.xml script. This provides targets that will be invoked by PDE/Build during the packaging and assembly phases of the build.

Specifically, there is a target which is invoked for every bundle that we are building, immediately after the contents of that bundle are published into the p2 repository. There is another target which is called after we are finished with all the bundles.

The idea is that we use the first target to record which bundles we compiled, and the second target to sign those bundles and update the p2 repository. By the time the second target is called, the p2 repository will contain binary bundles as well as the compiled ones, which is why we need a record of which ones to sign.

The script looks something like this:

<project name="CustomAssemble.overrides" default="noDefault">
   <import file="${eclipse.pdebuild.templates}/headless-build/customAssembly.xml" />

   <!-- every time is called, we will record the project being built -->
   <target name="" >
      <echo append="true" file="${buildDirectory}/built.list"
            message="**/${projectName}.jar${line.separator}" />
   </target>

   <target name="" >
      <property name="signingArchive" value="${buildDirectory}/${buildLabel}/sign-${buildId}.zip" />
      <zip zipfile="${signingArchive}" basedir="${}"
           includesFile="${buildDirectory}/built.list" />

      <!-- sign! -->
      <ant antfile="${builder}/sign.xml" dir="${basedir}" target="signMasterFeature" >
         <property name="signingArchive" value="${signingArchive}" />
      </ant>

      <!-- unzip signed archive over top of the repository -->
      <unzip dest="${}" src="${signingArchive}" />

      <!-- update repository with new checksums for signed bundles -->
      <p2.process.artifacts repositoryPath="file://${}" />
   </target>
</project>
Some notes:
  • ${projectName} is a property set by PDE/Build; it contains the bundle symbolic name and the version of the bundle being built (e.g. org.eclipse.foo_1.0.0.v2011).
  • The bundles are recorded in built.list in the form of an ant include pattern.
  • The signing archive is created from the p2 repository using the generated built.list as an includes file.
  • The sign.xml script being used is the one from the e4 build and is available here.
  • The p2 artifact repository contains checksums for each artifact, so after extracting the signed archive over top of the repository, we need to update the repository to recalculate these checksums.
  • I have not actually tested the above ant snippet, it may require some tweaks. The general strategy is based on what we do in the e4 build but some of the details have changed.

Wednesday, July 14, 2010

Permgen problems and Running Eclipse on Java 1.6 update 21

People running Eclipse on Windows using the latest Java 1.6 update 21 JVM from Oracle/Sun are noticing frequent VM crashes or freezes:
Unhandled event loop exception
PermGen space
As indicated in the Eclipse FAQ, there is a simple workaround for this problem: edit your eclipse.ini file and add -XX:MaxPermSize=256m below the -vmargs line.
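The tail end of eclipse.ini would then look something like this (any other -vmargs lines you already have stay as they are):

```ini
-vmargs
-XX:MaxPermSize=256m
```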

History and Explanations

Starting as far back as Eclipse 3.1, it was noticed that Eclipse uses a lot of PermGen memory under Sun VMs. PermGen memory is where .class file information is stored. The way to avoid this memory problem is to increase the PermGen size by using -XX:MaxPermSize.

The problem is that this is a non-standard VM argument and can cause VMs from other vendors to fail to start at all. We eventually fixed this by introducing a new argument in the eclipse.ini file: --launcher.XXMaxPermSize.

When this argument is specified, the eclipse executable launcher tries to identify whether the VM is from Sun or not. If the VM is from Sun, then the launcher adds the -XX:MaxPermSize VM argument. On Windows, we identify Sun VMs using the GetFileVersionInfo API: we read the version information from the java executable (or jvm.dll) and check whether the company name is "Sun Microsystems".

This worked great and everyone was happy. Fast forward a few years: Oracle acquired Sun, and starting in Java 6 update 21, the company name in the JRE is Oracle. This means the launcher no longer recognizes the VM as being from Sun, so the -XX:MaxPermSize VM argument no longer gets applied, which results once more in PermGen memory problems.

The fix for this change is being tracked in bug 319514.

Friday, May 28, 2010

Opening files in Eclipse from the command line

I ran a query to see all the bugs fixed in the Eclipse Platform in 3.6; it is a long list (4309 and counting). Felipe gets credit for the oldest bug fixed (raised in 2001), but in a close second is bug 4922 (raised only a day later).

This bug is about opening files in eclipse from the command line. Fixing it required a coordinated effort between Platform UI, SWT, and the Equinox launcher. A lot of the credit for what was done goes to Kevin Barnes.

This post is an effort to explain some of the technical details of what is going on here.

On the Mac...: On the Mac, all we do is handle the Apple event "kAEOpenDocuments"; most of the rest of this post doesn't apply to the Mac.

Windows and GTK... Everything below applies to Windows and GTK, though there are some differences in the implementation details.

On Motif... Sorry, this doesn't work on Motif.

The Launcher

Everything starts in the eclipse launcher. We added a few new command line options:
  • --launcher.openFile : obvious enough, specifies the file we want to open.
  • --launcher.defaultAction : less obvious, specifies the action to take when the launcher is started without any '-' arguments on the command line. Currently the only supported value is "openFile".
  • --launcher.timeout : a timeout value for how long we should spend trying to communicate with an already running eclipse before we give up and just open a new eclipse instance. Default is 60 (seconds).
The first argument is obvious enough, open the specified file in eclipse.
eclipse --launcher.openFile myFile.txt
This is great, but it is a bit much to type on the command line and is not quite enough to make everyone happy. So we introduced the "default action" argument. This goes in the eclipse.ini file; the value should be "openFile":
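As with the other launcher arguments in eclipse.ini, the argument and its value each go on their own line:

```ini
--launcher.defaultAction
```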
This tells the launcher that if it is called with a command line that only contains arguments that don't start with "-", then those arguments should be treated as if they followed "--launcher.openFile".
eclipse myFile.txt
This is the kind of command line the launcher will receive on windows when you double click a file that is associated with eclipse, or you select files and choose "Open With" or "Send To" Eclipse.

Relative paths will be resolved first against the current working directory, and second against the eclipse program directory.

Talking to SWT

The launcher talks to SWT through the use of a hidden window. The launcher and SWT both need to agree on the name of this window. This allows the launcher to find an already running eclipse and tell it to open the file. Any RCP application will need to ensure they get this right for things to work.

The launcher bases this on its "official name". The official name can be set with the -name argument. If -name is not set, then the official name is derived from the launcher executable, the extension is removed and the first letter is capitalized: rcp.exe becomes Rcp.

SWT bases this on the value set with the Display.setAppName() function. Normally, this is set by the Workbench when it creates the display and the value is the "appName" taken from the product extension point.

Listening to SWT

To take advantage of this, an RCP Application will need to register a listener for the SWT.OpenDocument event. It should register this listener before calling PlatformUI.createAndRunWorkbench so that the listener is in place before the workbench starts running the event loop.

The event loop will start running while the splash screen is still up, so events may arrive before the workbench is ready to actually open an editor for the file. This means that the listener should save the file paths it gets from the OpenDocument events so they can be opened at some later time. WorkbenchAdvisor#eventLoopIdle can be a good place to check for saved open file events.

Implementation details

Here is an overview of the flow of events in the launcher when processing --launcher.openFile on windows.
  1. Get the Official Name. As mentioned above, this is the "-name" argument, or derived from the executable name. For this explanation, we will be using "OfficialName".

  2. Create and lock a mutex named "SWT_Mutex_OfficialName".
    • If multiple files are selected and opened on Windows, then a separate eclipse process will be created for each one. The mutex allows us to ensure only one eclipse instance is actually started.
    • One process will win the race to acquire the mutex, at this point, there will be no eclipse instance running that has the SWT window available. This process will start normally and eventually create the SWT window at which point it will release the mutex.
    • All the other processes wait trying to acquire the mutex, once the original process releases it, they will be able to find the SWT window and post their open file message there.
    • Each process only waits for --launcher.timeout seconds (default 60 seconds) before giving up and just starting its own full eclipse instance.

  3. Find the window named "SWT_Window_OfficialName"
    • If no such window exists, we are the first eclipse instance. In this case, we set a timer to look again later and then proceed with starting eclipse.
    • The timer fires every second for --launcher.timeout seconds.
    • If we don't find the SWT window before the timeout (perhaps it took too long for the workbench to create the display), then we will be unable to open the file.

  4. Send a message to the SWT window
    • Once we've found the SWT window, we create a custom message named "SWT_OPENDOC". We send this message with wParam & lParam specifying a shared memory id.
    • We write the name of the file to open into shared memory, and when SWT receives the SWT_OPENDOC message, it uses that id to read the shared memory.
    • The launcher has long used shared memory on all platforms for the splash screen, restarts and exit messages.
    • Once SWT reads the file name from shared memory, it posts its own SWT.OpenDocument event.
On GTK, things happen in a similar manner with a few differences:
  1. We use semaphores.
    • Semaphores are not cleaned up automatically if the process exits unexpectedly. So we try to hold the semaphore for as short a time as possible and we install SIGINT and SIGQUIT signal handlers for the time we hold the semaphore.
    • The launcher creates a hidden GTK window named SWT_Window_LauncherOfficialName which is used in the same way as the mutex on Windows. This lets us avoid holding the semaphore for an extended time while the first eclipse process starts up.

  2. We use a property instead of a message.
    • The property is named org.eclipse.swt.filePath.message.
    • The value is a colon-separated list of file paths to open. Shared memory is not used as it is on Windows.