Thursday, January 20, 2011

Releng tricks from e4 and Orion

In the last couple of months I've found myself in charge of two releng builds: e4 and Orion. The e4 build is actually two pieces: building the Eclipse 4.1 SDK, and building additional e4 bundles that are not part of the SDK by default.

Being the PDE/Build project lead gives me a unique perspective on this entire process, so I thought I would share some tips and tricks for specific problems I encountered.

The first covers how we do signing when building the Eclipse 4.1 SDK.

Signing the Eclipse 4.1 SDK

We produce signed bundles in our builds. The specifics of how to do this have already been worked out by Kim. Essentially we provide a zip file that gets sent off to eclipse.org to be signed.

For the 4.1 SDK there is a slight twist. The 4.1 SDK is composed mostly of binary bundles reconsumed from 3.7, together with some new e4 bundles that we compile ourselves. We only want to sign the bundles we compiled ourselves and avoid re-signing the binary bundles.

The trick for creating an archive containing only the bundles we compiled works best for p2 enabled builds (using p2.gathering=true).
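
For reference, enabling this in the headless build configuration is just a property in build.properties (a minimal sketch; everything else about the build configuration is omitted here):

# build.properties for the headless PDE/Build
# Publish bundles into the build-time p2 repository as they are built;
# PDE/Build exposes that repository location as ${p2.build.repo}.
p2.gathering=true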

Custom Assembly Targets

PDE/Build supports customization of your build using provided template files. In particular, we are interested in the customAssembly.xml script, which provides targets that PDE/Build invokes during the packaging and assembly phases of the build.

Specifically, there is a gather.bin.parts target, which is invoked for every bundle we are building, immediately after that bundle's contents are published into the p2 repository. There is another target, post.gather.bin.parts, which is called after all the bundles have been processed.

The idea is that we use the gather.bin.parts target to record which bundles we compiled, and the post.gather.bin.parts target to sign those bundles and update the p2 repository. By the time post.gather.bin.parts is called, the p2 repository contains the binary bundles as well as the compiled ones, which is why we need a record of which ones to sign.

The script looks something like this:

<project name="CustomAssemble.overrides" default="noDefault">
  <import file="${eclipse.pdebuild.templates}/headless-build/customAssembly.xml" />

  <!-- every time gather.bin.parts is called, we will record the project being built -->
  <target name="gather.bin.parts">
    <echo append="true" file="${buildDirectory}/built.list"
          message="**/${projectName}.jar${line.separator}" />
  </target>

  <target name="post.gather.bin.parts">
    <property name="signingArchive" value="${buildDirectory}/${buildLabel}/sign-${buildId}.zip" />
    <zip zipfile="${signingArchive}" basedir="${p2.build.repo}"
         includesFile="${buildDirectory}/built.list" />

    <!-- sign! -->
    <ant antfile="${builder}/sign.xml" dir="${basedir}" target="signMasterFeature">
      <property name="signingArchive" value="${signingArchive}" />
    </ant>

    <!-- unzip the signed archive over top of the repository -->
    <unzip dest="${p2.build.repo}" src="${signingArchive}" />

    <!-- update the repository with new checksums for the signed bundles -->
    <p2.process.artifacts repositoryPath="file://${p2.build.repo}" />
  </target>
</project>
Some notes:
  • ${projectName} is a property set by PDE/Build; it contains the bundle symbolic name and version of the bundle being built (e.g. org.eclipse.foo_1.0.0.v2011).
  • The bundles are recorded in built.list in the form of Ant include patterns, one per line (e.g. **/org.eclipse.foo_1.0.0.v2011.jar).
  • The signing archive is created from the p2 repository using the generated built.list as an includes file.
  • The sign.xml script being used is the one from the e4 build and is available here.
  • The p2 artifact repository contains checksums for each artifact, so after extracting the signed archive over top of the repository, we need to update the repository to recalculate these checksums.
  • I have not actually tested the above ant snippet, it may require some tweaks. The general strategy is based on what we do in the e4 build but some of the details have changed.

4 comments:

Gunnar said...

Thanks a lot for sharing this. Do you also have a recommendation for building products after a "master" build?

For our project I'm building an SDK feature. This produces a single repo zip with all the great stuff (source bundles, etc.). Now I'd like to build products from this repo. It would be great if the product IUs could be published into the repo. But I think I also need to run the director somehow after signing in order to produce the product zips, correct?

Could this be done as part of the running PDE Build, or do I have to integrate it into the wrapper script? The Eclipse build does something like that, but it looks heavily customized.

Unknown said...

Take a look at the 4.1 SDK builder: org.eclipse.e4.sdk/builder. In its customTargets/postBuild target it uses <p2.publish.product> to publish a .product file and then later calls the director for each platform and archives the results.
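
Roughly, that part of the builder looks like the sketch below. This is not the actual e4 builder code; ${p2.repo.url}, ${productFile}, ${equinoxLauncherJar}, the IU name and the platform values are all placeholders, and the p2.* tasks need to run inside the Eclipse antRunner.

<target name="publishAndRunDirector">
  <!-- publish the .product file into the repository produced by the master build -->
  <p2.publish.product flavor="tooling" repository="${p2.repo.url}" productFile="${productFile}" />

  <!-- run the p2 director application for one platform; repeat per os/ws/arch -->
  <java jar="${equinoxLauncherJar}" fork="true" failonerror="true">
    <arg line="-application org.eclipse.equinox.p2.director" />
    <arg line="-repository ${p2.repo.url}" />
    <arg line="-installIU org.example.product" />
    <arg line="-destination ${buildDirectory}/tmp/eclipse" />
    <arg line="-profile SDKProfile" />
    <arg line="-p2.os linux -p2.ws gtk -p2.arch x86" />
    <arg line="-roaming" />
  </java>

  <!-- archive the installed product -->
  <zip destfile="${buildDirectory}/${buildLabel}/org.example.product-linux.gtk.x86.zip"
       basedir="${buildDirectory}/tmp" />
</target>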

The tricky part is that product builds automatically generate some product configuration metadata for you (setting start levels, among other things). This metadata is missing if you do a master feature build and publish the product file directly, like we do here.

The Eclipse SDK build has done a little magic dance to create an org.eclipse.rcp.configuration.feature.group which contains this metadata for the Eclipse SDK. This IU is branded for Eclipse and not generally reusable for RCP apps or other products. We are just reusing it by including it in our .product file as if it were a feature. (We are building another version of the same product so that works out, but EPP packages, for example, alter the branding a little and might not be able to use this directly.)
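
To make that last part concrete, the feature-based .product file ends up looking roughly like this (an illustrative sketch, not the real e4 SDK product definition; the name, uid and version are made up):

<?xml version="1.0" encoding="UTF-8"?>
<?pde version="3.5"?>
<product name="Example SDK" uid="org.example.sdk.product" version="4.1.0.qualifier"
         useFeatures="true" includeLaunchers="true">
  <features>
    <feature id="org.eclipse.sdk" />
    <!-- pulls in org.eclipse.rcp.configuration.feature.group, which carries the
         start levels and other product configuration metadata mentioned above -->
    <feature id="org.eclipse.rcp.configuration" />
  </features>
</product>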

Gunnar said...

Thanks a lot Andrew! I got signing working and am now trying to get my head around the product configuration thing. It seems that I have to mimic some PDE Build implementation details. I was wondering if it would be a better approach not to do everything in one step.

What about producing the signed "master" repo in one step? The next step would be a vanilla product build which does not check out any source but uses the repo produced in the first step. Do you see any issues with that approach? The thing that comes to mind is duplicating and synchronizing build properties (e.g. the configs to build). But the approach seems cleaner/easier. What do you think?

Unknown said...

Doing the second step as a product build is likely the most straightforward solution. There is some duplication in the builders, and perhaps technically some extra work being done, but this is worth it for simplicity.

You do need to get at least the features transformed into folder shape. The simplest way to do this is to use the repoBaseLocation support or to call p2.repo2runnable yourself.
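
For the second option, the call looks something like this (a sketch only; the two locations are placeholders, and the task runs inside the Eclipse antRunner):

<target name="transformRepo">
  <!-- transform the jar-shaped master repo into a runnable, folder-shaped copy
       that the product build can then consume -->
  <p2.repo2runnable destination="file:${transformedRepoLocation}">
    <source>
      <repository location="file:${masterRepoLocation}" />
    </source>
  </p2.repo2runnable>
</target>

With the repoBaseLocation approach you instead point repoBaseLocation and transformedRepoLocation at the right directories in the build configuration and let PDE/Build do the transform for you.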